At Propel, we ship improvements to our service continuously, often multiple times per work day. Our continuous integration (CI) and continuous delivery (CD) processes ensure we can do this consistently and confidently, and our deployment notes help communicate these changes internally and externally.
But if we zoom in, how do we actually approach the different types of improvements we ship? How do we implement bug fixes, feature additions, and refactors? In fact, what are these different types of improvements, and why is it worth distinguishing them?
Types of improvements
The list below isn’t exhaustive (for example, there are improvements we make to tests, documentation, and performance, as well as dependency upgrades); however, bug fixes, feature additions, and refactors are the most common improvements we make, and deserve special consideration.
“Bugs” are software behaviors that differ from the documented or intended behavior. For example, maybe a form should be restricting the length of a text input; or maybe an operation should not be allowed from a particular state. If the software doesn’t implement these behaviors and allows an overly long string or an invalid operation, those are bugs.
A “bug fix” fixes the bug by correcting (or avoiding) the buggy behavior. Usually, bugs negatively impact the customer experience, and bug fixes improve the customer experience; however, in some cases, customers may actually come to depend on the buggy behavior, and “fixing” it could actually worsen or break the customer experience.
For these reasons, it’s important to consider the impact of each bug fix and to communicate the change to customers. Even if we think a bug fix will go unnoticed, we put it in our deployment notes.
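As a concrete sketch of the length-validation bug described above (the `validate_username` helper and the 32-character limit are hypothetical, for illustration only), the fix adds the missing check:

```python
MAX_USERNAME_LENGTH = 32  # hypothetical documented limit


def validate_username(name: str) -> bool:
    """Return True if the username is acceptable.

    The buggy version only checked for emptiness; the documented
    behavior also requires a maximum length, so the fix adds that check.
    """
    if not name:
        return False
    # The fix: reject inputs longer than the documented limit.
    if len(name) > MAX_USERNAME_LENGTH:
        return False
    return True
```

Before the fix, the function silently accepted overly long strings, which is exactly the kind of divergence from documented behavior that counts as a bug.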
“Feature additions” introduce new features to the software for customers to use. Ship the features that customers want, and usually they’ll be happy! 🙂
Features are typically additive. In other words, shipping a new feature Y usually doesn’t limit or restrict a previously shipped feature X, since this could be a breaking change for any customers who depended on feature X. Instead, we add to the available feature set.
Features are customer-facing, so we usually announce them with blog posts, updated documentation, and a changelog entry (of course, we also include these in our deployment notes).
“Refactors” are changes we make to improve the organization or maintainability of our code. When implemented correctly, refactors do not result in an observable behavior change, unless the refactor improves performance. In other words, customers rarely notice when we ship a refactor. That doesn’t diminish the importance of refactoring, but it does place the responsibility on engineers to communicate why a particular refactor is necessary.
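A toy illustration of a behavior-preserving refactor (the discount logic here is hypothetical): nested conditionals are restructured into a lookup table, and the observable behavior is unchanged for every input.

```python
# Before: nested conditionals (harder to read and extend).
def discount_before(tier: str) -> float:
    if tier == "gold":
        return 0.20
    else:
        if tier == "silver":
            return 0.10
        else:
            return 0.0


# After: a lookup table (same behavior, easier to maintain).
_DISCOUNTS = {"gold": 0.20, "silver": 0.10}


def discount_after(tier: str) -> float:
    return _DISCOUNTS.get(tier, 0.0)


# A refactor must not change observable behavior, so both
# versions agree on every input we care about.
for tier in ["gold", "silver", "bronze"]:
    assert discount_before(tier) == discount_after(tier)
```

Because the outputs are identical, the existing test suite should pass unchanged before and after the refactor, which is how we gain confidence that nothing customer-facing moved.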
Why distinguish types of improvements?
The different types of improvements deserve different consideration when it comes to prioritization, implementation, testing, review, deployment, and communication. Let’s dive into each of these topics in the following sections.
Prioritization

There are no set-in-stone rules for prioritizing bug fixes, feature additions, and refactors, because prioritization always depends on the impact of a particular change; however, we typically prioritize improvements in this order:
- Bug fixes: If unaddressed, bugs pile up, making your software harder to reason about and difficult to trust, both for your engineering team and your customers. It’s difficult to ship new features when the software is buggy, so for these reasons, bug fixes come first.
- Feature additions: Shipping features that customers want is how we grow our business and improve our product, so this is second highest on the list.
- Refactors: Refactors rarely affect the customer experience, so they come last; however, a careful refactor can make features easier to implement or improve your engineering team’s velocity. For this reason, we sometimes prioritize refactors ahead of new features.
At Propel, we track work items in Linear, where we have specific labels for bug fixes and feature additions (we prefix them “T” for “type”). This helps boost these work items’ visibility in our backlog, ensuring we prioritize correctly.
Implementation and testing
This section is called “Implementation and testing” because the two go hand-in-hand. After all, how do you know a change is correct unless you test it? We take different implementation and testing strategies depending on the type of change we’re making:
- Bug fixes: Whether or not you’re “all-in” on test-driven development (TDD), TDD is one of the best ways to implement a bug fix. First, we write a failing test that demonstrates the bug. Then, we implement the fix. Finally, we show the test passing. Doing it this way proves we understand the nature of the bug and are fixing the right thing.
- Feature additions: When implementing a new feature, we always write tests which exercise the “happy path” and the failure scenarios. New features which interact with existing features may also require extending existing tests, and performance-sensitive features require extra performance testing. Our exact testing strategies — be they unit, integration, performance, or end-to-end tests — will be covered in another post, but any time we write a new feature, we’re adding tests.
- Refactors: A refactor should not change observable behavior, so the existing test suite is our safety net: tests that passed before the refactor should still pass, unchanged, after it. If coverage is thin in the area being refactored, we add tests first, then refactor with confidence.
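The bug-fix workflow above (failing test first, then the fix) can be sketched as follows. The `transfer` function and the negative-amount bug are hypothetical, chosen only to make the TDD steps concrete.

```python
# Hypothetical bug: transfer() used to allow negative amounts,
# which effectively deposited money instead of withdrawing it.


def transfer(balance: float, amount: float) -> float:
    """Return the new balance after withdrawing `amount`."""
    if amount < 0:
        # The fix: without this guard, the test below fails,
        # demonstrating the bug.
        raise ValueError("amount must be non-negative")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# Step 1: a test that demonstrates the bug (it failed before the fix).
def test_rejects_negative_amounts():
    try:
        transfer(100.0, -50.0)
    except ValueError:
        pass  # expected: the fix rejects negative amounts
    else:
        raise AssertionError("negative transfer should be rejected")


# Steps 2 and 3: with the fix in place, the test passes.
test_rejects_negative_amounts()
```

Writing the test first forces us to reproduce the bug before touching the implementation, which is exactly the proof of understanding described above.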
Each change should be documented with a descriptive commit message. We follow the Conventional Commits specification, and so we land each type of improvement in its own git commit, with an appropriate type. For example, bug fix commit messages are prefixed “fix:”, feature addition commit messages are prefixed “feat:”, and refactors are prefixed “refactor:”. Taking the time to structure our improvements this way helps us communicate exactly what’s changing and why.
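For example, commit messages for the three improvement types might look like this (the messages are illustrative, not real commits):

```text
fix: reject text inputs longer than the documented limit
feat: add CSV export to the reporting page
refactor: extract discount rules into a lookup table
```

Following the Conventional Commits format also makes it easy to scan a git log and see at a glance which deployments carried fixes versus features.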
Review and deployment
So we’ve implemented some improvements. Great! But we don’t just YOLO them into production. Any good engineering team implements a review process, wherein we as engineers ask our teammates to review our work. Depending on the nature of the change, reviews can be short and simple, or they can be long and involved, requiring careful scrutiny.
There are a few things we can do, as engineers, to make reviews easier for our teammates, to be respectful of their time, and to ensure high priority changes — like bug fixes — ship sooner rather than later.
- Avoid mixing high priority bug fixes with feature additions and/or refactors on the same pull request. If there’s a high priority bug fix that needs to go out, it shouldn’t require reviewing and deploying unrelated feature additions and refactors. These extra changes require more of our reviewers’ time to review properly and may introduce unintended changes, requiring a rollback, further delaying the bug fix.
- Separate different types of improvements at either the commit-level or the pull request-level. For example, avoid mixing bug fixes, feature additions, and refactors in a single commit, unless they’re exceedingly simple. Consider merging complicated sets of changes incrementally, over a few pull requests, instead of in one big one. Our goal should be to make our changes clear and easy to review, so that our reviewers clearly understand what is changing and why.
- Prefer more, smaller changes over fewer, larger changes. Smaller changes are easier and quicker to review. Additionally, if you need to roll back a change, reverting a single commit or pull request containing only that change can be easier than reverting a monster pull request with many unrelated changes contained within.
- Keep the main branch in a releasable state. Ensuring that main is always in a releasable state means that you can always create a new branch, implement a bug fix, and ship it to production.
We’ve found that working this way requires a fast, automated CI/CD pipeline so that, as soon as a change is reviewed, it can be merged to main, passed through automated testing, and shipped out to production.
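One way such a pipeline might be wired up, assuming GitHub Actions (the workflow, job names, and script paths below are illustrative, not our actual configuration):

```yaml
# Illustrative pipeline: on every merge to main, run tests, then deploy.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-tests.sh   # hypothetical test entry point
  deploy:
    needs: test                       # ship only if automated testing passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh      # hypothetical deploy script
```

The key property is the `needs: test` dependency: nothing reaches production without first passing the automated test suite.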
Once an improvement has shipped to production, our work isn’t over. We typically want to communicate bug fixes and feature additions to customers, and that may require updating changelogs and documentation. For example, if we shipped a new API parameter, we should go update sample apps and API documentation to demonstrate the new behavior. We’re confident our work is complete once a customer is benefiting from our changes in production.
At Propel, we’re continuously improving our software and making changes to production. Different types of improvements require different approaches, especially in testing, and we try to divide up our changes in order to facilitate fast reviews and frequent deployments. Consider this post a snapshot of our development processes that we’re always aiming to improve! If you like something you read, feel free to borrow it.