Developer experience and codebase control are essential elements of any digital project. Salsa projects are built on standardised and proven development processes and tools. This delivers rapid deployments, greater predictability, a reduced risk of regression errors, and greater developer confidence and happiness.
We recently put together a tender response that included a look at best-in-class developer experience, with a deep dive into the subject by Salsa's Stuart Rowlands. The content is too good not to blog about...
Best-in-class developer experience
A best-in-class developer experience is important both during the initial project setup and into the future. It’s imperative the foundations for a good development workflow are set up early and adhered to long-term to achieve rapid development cycles and a lower total cost of ownership of a project.
These standardised and proven processes and tools are present from local development all the way through to deployment into environments running in the cloud.
These processes and tools allow for more rapid development and deployment cycles, while minimising the risk of introducing regressions into a project.
All developed code is maintained in Git, and modifications (features, bug fixes and release management) are controlled through a workflow called Gitflow. This workflow helps to ensure that all changes are tracked in a centralised location, and code can’t be added or deployed without code review and quality assurance checks.
Git also maintains revision history and authorship for a full historic view into a project’s development lifecycle.
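As a minimal sketch of the Gitflow-style cycle described above, the commands below create a feature branch, commit work to it, and merge it back into the integration branch with a non-fast-forward merge (so the feature's history stays visible). The repository, branch and file names are illustrative only, and in a real project the merge would only happen after code review and QA:

```shell
set -e
repo=$(mktemp -d)              # throwaway repository for the demo
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=Dev commit -q --allow-empty -m "Initial commit"
git checkout -q -b develop     # Gitflow's long-lived integration branch

# All new work happens on a dedicated feature branch.
git checkout -q -b feature/search-widget
echo "widget" > widget.txt
git add widget.txt
git -c user.email=dev@example.com -c user.name=Dev commit -q -m "Add search widget"

# After review and QA, the feature is merged back into develop.
git checkout -q develop
git -c user.email=dev@example.com -c user.name=Dev merge -q --no-ff -m "Merge feature/search-widget" feature/search-widget
```

Because every change flows through a branch and a merge, the history records who changed what, when, and why.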
Development environments and cloud environments often differ, which can cause compatibility issues that are hard to debug. Salsa uses standardised Docker images across development and cloud environments, ensuring a project runs the same in a local development environment or continuous integration (CI) and testing environment as it does in the cloud.
Onboarding and development environments
Because development environments are standardised with Docker images, the onboarding process for new developers is simplified. Getting started on the project is covered in the codebase's README file, and generally involves one or two commands to get a functional local environment for development.
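As an illustration, a project's local stack can be described in a single Docker Compose file. The service names and images below are assumptions for the sake of example, not a specific Salsa project:

```yaml
# Illustrative docker-compose.yml; images and service names are assumptions.
services:
  app:
    image: example/project-app:latest   # the same image used in CI and the cloud
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mariadb:10.11
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: local-only   # local development credentials only
```

With a file like this in the repository, onboarding typically reduces to cloning the repository and running docker compose up -d.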
Automation reduces human toil, and ensures tasks can run again and again in a standardised and consistent way. This reduces human error and the manual effort associated with common tasks.
Our continuous integration and deployment strategies are made up of three automation steps in a pipeline: coding standards, automated testing and automated deployment.
1. Coding standards
Coding standards are automatically checked before code is merged into the codebase. This ensures any issues in code that can be picked up automatically (standards in commenting/documentation, security issues, cyclomatic complexity) are resolved before being deployed to any cloud environment.
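As a sketch, a coding-standards job in a CI pipeline might look like the fragment below. The tool names assume a PHP/Drupal codebase (an assumption based on the Composer reference later in this post), and the job and stage names are illustrative:

```yaml
# Illustrative CI job; stage, job and tool choices are assumptions.
lint:
  stage: validate
  script:
    - composer validate                              # manifest sanity check
    - phpcs --standard=Drupal web/modules/custom     # coding standards, docs/comments
    - phpstan analyse web/modules/custom             # static analysis and complexity
```

If any of these checks fail, the pipeline stops and the change cannot be merged or deployed.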
2. Automated testing
Automated testing is critical to give confidence in deployments. These tests run on every change to ensure that no functional or visual regressions are introduced. Testing can produce screenshots and other artifacts throughout the process, providing visual output of what an end user would see at points in time.
Tests are written in a way that makes them easy to understand, and align with the acceptance criteria of functional requirements.
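One common way to keep tests readable and tied to acceptance criteria is a Behat/Gherkin-style scenario. The feature, page and content names below are purely illustrative:

```gherkin
# Illustrative acceptance test; scenario and content names are assumptions.
Feature: Site search
  Scenario: Visitor searches for a publication
    Given I am on the homepage
    When I fill in "Search" with "annual report"
    And I press "Search"
    Then I should see "Annual Report"
```

Because the scenario reads as plain English, a product owner can verify it matches the acceptance criteria without reading any implementation code.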
3. Automated deployment
Deployments to cloud environments occur automatically once the previous checks are complete, both automated (as above) and manual (code review, QA). These deployments include running processes on the cloud environments such as installing additional software packages (e.g. Composer or node-based software packages) and performing follow-up tasks (clearing caches, importing configuration, etc.).
The end result is a one-click workflow from approved to deployed.
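A deploy job in the same pipeline could be sketched as below. The environment name and scripts are assumptions (the drush commands assume a Drupal project, matching the cache-clearing and configuration-import tasks mentioned above):

```yaml
# Illustrative deploy job; environment and commands are assumptions.
deploy_production:
  stage: deploy
  when: manual                        # the "one click" from approved to deployed
  script:
    - composer install --no-dev       # install production dependencies
    - drush config:import -y          # import configuration
    - drush cache:rebuild             # clear caches
  environment: production
```

Marking the job manual means deployment is a deliberate, single action, while everything before it has already been verified automatically.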
API definitions and mocking
Applications that have heavy API requirements should have the schema and structure of the API output defined upfront. This can be done through the RAML specification to quickly and efficiently define data structures required on both the backend and frontend.
These definitions can then be used to generate functional mock API endpoints, effectively providing a living blueprint of schemas that a frontend developer can build against while backend work continues.
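A small RAML fragment gives a feel for this approach. The resource, fields and example values below are illustrative, not a real project API:

```raml
#%RAML 1.0
title: Content API          # illustrative API, not a real project
version: v1
/articles/{id}:
  get:
    responses:
      200:
        body:
          application/json:
            type: object
            properties:
              id: integer
              title: string
              published: boolean
            example:
              id: 42
              title: Annual report
              published: true
```

Mocking tools can serve the example payload directly from this definition, so frontend work can start against a stable contract before the backend endpoint exists.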