Software development processes in a team

In this brief article, we focus on one of the skills that enhances the role of a Tech Leader (TL) within Develer, namely the ability to organize the software development process. In other words, the TL is responsible for structuring the development process and the tools used, ensuring that the team understands the flows involved in making changes, testing them, and checking for regressions, all while meeting the client’s needs.

Team Composition

First, let’s see how a team is typically composed, from different perspectives. One perspective is to evaluate the roles involved around the project, which is the central element.

Composition of the development team

Without diving into other roles, we can understand that there are multiple needs at play. For example, the client’s need will be different from those of a developer. In the first case, it is probably more important to identify the latest release and all installation files. On the other hand, for a developer, the focus should be on implementation details, such as executing code tests. For each of these roles, it should be clear how to operate on the project without having to consult or ask for help from others.

Another perspective on the project involves the development tools. To work efficiently and meet the needs of the individuals mentioned earlier, it’s essential to decide which tools to use and configure for the project.

Development tools

Assuming we manage the source code with the git version control system, several tools have established themselves over time to enrich its functionality. We are talking about GitHub and GitLab, but also third-party applications such as Codecov, Trello, and so on. However, these tools are not set in stone: the key point is that the choice of tools depends on the goals we want to achieve, and is therefore completely flexible.

Project goals

The tools serve to ensure the achievement of certain goals deemed important for the project. Indeed, the project should be consistent in its architectural choices, even across different code components: this consistency simplifies understanding. The project should also be as readable as possible, partly thanks to a uniform programming style. These factors, together with test coverage, raise the overall quality of the code. Consequently, the code becomes more maintainable, and development time decreases in the long run. The people working on the project can learn more effectively and quickly, which also allows for rotation among team members.

Therefore, the role of the TL primarily serves to analyze all these needs in terms of roles and project requirements, choosing the right procedures and tools to facilitate development. Furthermore, it is not a position of “command” or imposition; rather, it embraces the principle of servant leadership. This means that the TL serves the team, aiming to make it more productive and autonomous. The TL will have fulfilled their role when the team is capable of self-organizing in both decision-making and management.

Development Workflow

A fundamental decision for all projects is determining the steps needed to complete a specific feature. In this article, we will assume development on branches that diverge from the main codebase, regardless of the branching methodology used. Additionally, the steps and tools should always be tailored to the type of project and the goals we want to achieve.

A possible workflow is shown in the figure below. It starts with a card/task describing the activity to be done. This initiates a development process that diverges from the main codeline and continues independently on a parallel branch. Later on, we will look at the strategies for managing these branches and integrating them back into the main codebase. Once the feature has been implemented, the developer can open a pull request (PR), a procedure to request approval and subsequent integration into the main branch. This approval process is guided by code review principles, which we will not cover here.

Beyond how the code is reviewed, however, it is crucial to use tools that perform additional automated checks on the pull request code. The first automated check involves running unit and integration tests, which must always be included as part of the new feature. These tests can be run manually, but it is more advantageous to execute them automatically and display the results. A well-known tool for running tests is CircleCI, but it is not the only one: we could also use GitHub Actions, Bitbucket Pipelines, or others.

Running these tests exercises parts of the code, which are then considered “covered” by the tests. To visualize this coverage, other tools or applications provide a percentage value. Coverage is fundamental to verify that the newly added code has been properly tested; alternatively, it can highlight areas that cannot be tested because they are unnecessary. Typically, a good target is an overall project coverage of 80% (not everything is testable). At the pull request level, however, it is useful to check that all the newly added code is covered and then evaluate with the team whether there are any exceptions.

Managing the pull request as described almost certainly requires setting up a continuous integration (CI) tool on a platform such as GitHub, GitLab, Bitbucket, or others. We therefore refer to continuous integration as a tool to execute actions automatically on the pull request. Later, we will see that the same term is also associated with a branch-based development strategy using git; it is fitting that these two aspects share the same name, as they are closely linked.
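To make this concrete, here is a minimal sketch of such an automated check, assuming a Go project and GitHub Actions (one of the tools mentioned above); the file path and job names are conventions, not requirements:

```yaml
# .github/workflows/test.yml
name: tests

on: pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: 'stable'
      # run unit and integration tests, printing per-package coverage
      - run: go test -cover ./...
```

A coverage application such as Codecov can then be attached to the same job to publish the percentage value discussed above.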

Development workflow

In addition to automated checks on the source code, manual activities may also be identified, depending on the type of PR. For example, a feature may introduce changes to a GUI, which means the user interface will need to be tested in some way (automated tests are difficult to implement in this case). Additionally, specific hardware may be involved, requiring dedicated testing. If the project must also meet performance requirements, we should ensure there are no regressions by setting up a metrics system with tools like Grafana, Prometheus, or others. The CI system can even create artifacts to facilitate potential project deployment, making testing easier.
When all these tests have passed successfully, it is time to integrate the new feature into the main codeline. Depending on the branching strategy used, this merge phase may involve resolving conflicts in the source code caused by new changes introduced into the main code. Finally, every time the main branch receives new changes, a maintenance phase may be necessary: any other branches still alive and awaiting integration may need to be updated with the latest changes.
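For this maintenance phase, a typical sequence is the following (the branch name is hypothetical):

```
$> git fetch origin
$> git checkout my-feature
$> git rebase origin/main    # or: git merge origin/main
```

Whether to rebase or merge is itself a team convention worth agreeing on up front.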

Feature Tracking

Throughout these development steps, there are countless details and nuances that must be taken into account to facilitate project management. However, one aspect that almost always proves useful is feature tracking. This process depends on the tools being used. For instance, sometimes a PR can be linked to an issue/task, ensuring that each PR can be traced back to its specifications. However, if Trello or other equally flexible tools are used, this linkage is not automatic. Therefore, we can reinforce traceability with some best practices, for example:

- include the card or issue identifier in the branch name (e.g., feature/123-new-login);
- reference the card link in the pull request description, and link the PR back on the card;
- mention the identifier in commit messages, so that the history remains searchable.

Development Strategies and Git Flow

The primary tool for code development that I have referred to so far is git, a source code management (SCM) tool for version control. It is not the only one: there are many others, such as SVN, Mercurial, etc. The concept of branch-based development, however, is more abstract and independent of the SCM implementation, and not every SCM handles it equally well. With git we do not have this issue, as branches are cheap and efficiently supported, which is likely one of the key factors behind its success.

Parallel Development

When we talk about parallel development, we often imagine branches as parallel lines moving forward in unison. In reality, a more accurate representation would be curves that progressively diverge from each other. Indeed, this is exactly what happens, and it leads to a certain “fear of merging” due to the many conflicts in the source code that developers have to resolve every time they need to integrate different branches. These conflicts can be textual (e.g., the same line of code is modified in different ways) or semantic (e.g., one branch changes a function’s parameters while another branch still calls the old signature). In the first case, git helps by clearly highlighting the conflict. In the second case, however, the conflict will only surface when attempting to compile or execute the code, making it more insidious. For this reason, it is important to have development strategies that simplify teamwork.
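To make the semantic case concrete, here is a hypothetical Go sketch: branch A changes a function signature, while branch B adds a call site written against the old one. git merges both changes without any textual conflict, yet the result no longer compiles:

```go
package main

import "fmt"

// Branch A changed the signature from Greet(name string)
// to Greet(name string, polite bool).
func Greet(name string, polite bool) string {
	if polite {
		return "Good morning, " + name
	}
	return "Hi, " + name
}

func main() {
	// Branch B added this call against the old signature.
	// The merge is clean, but the build now fails:
	fmt.Println(Greet("team")) // compile error: not enough arguments
}
```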

Parallel development

Git Flow Model

Git Flow is a branching model that was extensively described several years ago in Vincent Driessen’s well-known article “A successful Git branching model”. The following image provides an overview of its structure:

Git flow model

The key points of this strategy are:

- two long-lived branches: main (historically master), which contains only released versions, and develop, which collects the features under development;
- each feature is developed on a dedicated feature branch, created from develop and merged back into it once complete;
- release branches are created from develop to stabilize a version before it is merged into main and tagged;
- hotfix branches are created directly from main to patch production issues quickly.

This model laid the foundation for feature development principles. However, it is clear that its numerous rules can sometimes be complex, and managing multiple branches can become burdensome. The model fundamentally relies on feature branches and several primary branches.
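In terms of git commands, the feature cycle of this model boils down to a short sequence (the --no-ff flag, recommended by the original article, preserves the feature as an explicit merge commit):

```
$> git checkout -b myfeature develop   # start a feature from develop

   ... commits on the feature branch ...

$> git checkout develop
$> git merge --no-ff myfeature         # integrate it back into develop
$> git branch -d myfeature
```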

Continuous Integration

With the advent of CI applications and services, the development strategy often considered more flexible is the one that avoids long-lived feature branches, known as continuous integration or trunk-based development.

Continuous integration

This strategy can be summarized as follows:

- all developers integrate their changes into the main branch (the “trunk”) frequently, ideally at least once a day;
- branches, when used at all, are short-lived and merged as soon as possible;
- every integration triggers the automated builds and tests of the CI system;
- the trunk is kept in a releasable state at all times.

Working with this methodology is advantageous because it limits management complexity as the project and team scale in size. Since branches are short-lived, conflict resolution becomes much easier.
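In practice, the day-to-day cycle looks something like this (the branch name is hypothetical):

```
$> git checkout -b small-fix main   # a short-lived branch
   ... one or two focused commits ...
$> git push origin small-fix        # open the PR and let CI run
$> git checkout main && git pull    # after the merge, start again
```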

Of course, there are also aspects to keep in mind:

- it requires a solid suite of automated tests, since the trunk must remain stable at all times;
- features that take longer than a single integration cycle end up in the main branch while still incomplete;
- the team needs discipline in keeping changes small and self-contained.

Thus, there are also variants of pure continuous integration or best practices, primarily concerning how new feature releases are managed.

Feature Flags

New features are added in isolated or inactive code paths, with their activation deferred until the decision is made to release them. For example, a web application might contain the code for a new page, but if users cannot access it via a link, it remains hidden and unusable. Alternatively, activation can occur at compile time or run time through configuration files.
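As a minimal run-time sketch, a flag can be as simple as an environment variable check; this hypothetical Go example keeps the new code path hidden until the flag is switched on:

```go
package main

import (
	"fmt"
	"os"
)

// isEnabled reports whether a feature flag has been activated
// at run time via an environment variable.
func isEnabled(name string) bool {
	return os.Getenv("FEATURE_"+name) == "1"
}

func main() {
	if isEnabled("NEW_PAGE") {
		fmt.Println("serving the new page")
		return
	}
	// default path: the new feature stays hidden and unused
	fmt.Println("serving the current page")
}
```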

Release Train

This strategy involves releasing new features at regular, scheduled intervals. For example, a new branch is created monthly to bundle and release all the planned features for that month.

Release train

It is determined in advance which “train” each feature will board. At the end of this time slot, no new changes can be added; latecomers must wait for the next train. Similarly, at the point of “feature freeze”, a release branch can be created to stabilize all the planned features. This release branch will then serve as the starting point for the next release train. Applying this model effectively requires a structured feature planning process.
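In git terms, the feature freeze can be as simple as cutting a branch on the scheduled date (the branch name is hypothetical):

```
$> git checkout -b release/2025-06 main   # the June train departs
   ... only stabilization fixes land here ...
```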

Automation

In addition to choosing the most suitable workflow for the project, it is essential to include a series of automated processes to enhance reproducibility. This characteristic is important when onboarding new developers, when training juniors, and when quickly resolving bugs, performing tests, or interacting with the client. During the initial approach to the project, the goal is to install as little as possible manually, and then use properly configured scripts or tools to perform the most common operations. A widely adopted convention is to include a README file. This file is not the project’s architectural documentation; rather, it is aimed at developers and provides simple steps for setting up the project.

But what are the most common operations? Obviously, it depends on the type of project, but we can certainly list a few of them:

- building (or cross-compiling) the application;
- running unit and integration tests;
- running the application and its services locally;
- generating the documentation;
- preparing a release or deployment artifact.

Some commonly used tools are Makefiles, shell scripts, Docker, Foreman, Doxygen, etc. Editors are also essential tools for managing software; however, each developer should have the freedom to choose their preferred editor. What matters is the editor configuration, such as automatically formatting the source code every time a file is saved.
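As an example of such configuration, assuming Visual Studio Code, format-on-save is a single entry in settings.json:

```json
{
    // reformat every file automatically when it is saved
    "editor.formatOnSave": true
}
```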

Let’s now briefly examine some of these tools.

Makefile

A Makefile can be a valuable tool for managing scripts. This is not about manually writing a Makefile to compile a project: there are dedicated build systems that handle that task more effectively. Instead, a Makefile is useful for providing high-level commands (like `make test`, `make all`, etc.), for cross-compiling the application, for reusing the same targets in the CI system, and so on. Similar to a README, using a Makefile is a well-established convention and also serves as a form of documentation for developers.
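A minimal sketch of such a Makefile, assuming a Go project (target and path names are conventions, not requirements; recipes must be indented with a tab):

```make
.PHONY: all test run clean

all:    ## build the application
	go build -o bin/app ./cmd/app

test:   ## run unit and integration tests with coverage
	go test -cover ./...

run:    ## start the application locally
	go run ./cmd/app

clean:
	rm -rf bin/
```

The same targets can then be invoked unchanged by the CI system (e.g., make test), keeping local and automated checks aligned.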

Foreman

Foreman allows multiple components to run concurrently, providing integrated output in a single console. For example, consider a project organized into microservices: a single command starts them all, and all processes stop if at least one of them fails. Configuration is done through a text-based Procfile, which does not require deep knowledge and can easily be used and modified by anyone.
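A hypothetical Procfile for such a project is just a list of named commands, one process per line; `foreman start` then launches all of them with interleaved, labeled output:

```
web:    ./bin/api --port 5000
worker: ./bin/queue-consumer
docs:   ./bin/docs-server
```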

Docker

The functionalities provided by Docker are extensive, and this software is significantly more complex than the other tools mentioned. Introducing Docker makes it possible to support multiple development platforms, effectively enabling the vendorization of the entire development environment. It is configured through a Dockerfile, but in this case we also need to provide “Docker-aware” scripts, i.e. scripts designed to run within a Docker container.
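A minimal multi-stage Dockerfile for a hypothetical Go service, just to give an idea of the shape this configuration takes:

```dockerfile
# build stage: compile inside a pinned toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./cmd/app

# runtime stage: ship only the compiled binary
FROM debian:bookworm-slim
COPY --from=build /app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```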

Vendorizing the Project

In industry, it is crucial to be able to restore a project even after a considerable amount of time has passed. Tools evolve, external dependencies advance in version, and our scripts or software may no longer compile. The concept of vendoring therefore involves integrating everything needed to compile the project within the software repository. With Docker, it is even possible to vendorize the entire development environment. However, we can also choose to vendorize only the source code. Modern languages like Go and Rust are already built around this principle, providing appropriate commands for it.

Alternatively, vendoring can be achieved using git itself. The `git subtree` strategy allows us to integrate a copy of an external module directly into our project. This contrasts with the `git submodule` strategy, which keeps only a reference to the external module; of course, that reference could become invalid in the future. Using `git subtree` also simplifies working with other git commands, such as `git bisect`: during a binary search for a bug, when checking out a specific commit from the past, we can be confident that the code will compile and the tests will run without requiring further alignment of external modules.
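As a sketch, vendoring an external module with `git subtree` takes a couple of commands (the URL and prefix are hypothetical; --squash collapses the module’s history into a single commit):

```
$> git subtree add --prefix vendor/libfoo https://example.com/libfoo.git main --squash

   ... later, to pick up upstream changes ...

$> git subtree pull --prefix vendor/libfoo https://example.com/libfoo.git main --squash
```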

Conclusions

As we have seen, organizing a software project involves many facets. An important aspect is supporting and facilitating day-to-day development while also considering the other roles involved in the project. From time to time, the client will likely request a new software release or a changelog between versions. By setting up a structured process, manual operations such as creating a changelog can be automated with a single command. For example:

$> git log --merges v10.0..v11.0

This is possible because each merged branch is linked to a specific card or issue, and all features are integrated into the same shared codeline. As a result, it becomes easy to trace each change made to the code back to its high-level description.

For these reasons, structuring the project enables greater speed and flexibility in problem-solving, adapting the development team, and addressing new requests. The tools and platforms to achieve this are varied, and while their use is not mandatory, following good practices is strongly recommended.