Monday, 11 April 2016

Why Wait to Test or Deploy?

As a result of certain management decisions, the development of my company's software product has been forced into "maintenance mode".  With no further development of the product's features, this has meant the elimination of my development team.

"Maintenance mode" does not mean that absolutely no development work is required to be done and that I'm now sat twiddling my thumbs all day with only one other colleague to keep me company.  For the software product to remain useful to its current set of users (yes, there are still some users) it needs to be actively maintained - fixing bugs, making minor enhancements, etc. This drastic shift has forced me into a very lean way developing, testing and deploying - one which we should probably have been following previously.

I am now the only one doing any of the maintenance development work, while my sole remaining colleague handles the testing and support activities. With nobody else doing any other development, there is nobody else's code to wait for.  So the question naturally arises:
Why wait for another piece of development from me before testing and deploying?
The only reason I could think of for holding back on testing and/or deploying a piece of code was the overhead of testing and/or deploying it.
If testing is related to just a single bug fix or feature enhancement then the scope of testing is limited, which reduces the overhead.  If a few related bugs can be fixed in the maintenance development then they can all be tested at the same time, further reducing testing overhead.
We felt that deployment might be a significant overhead for the IT support staff performing it, but checking with them revealed that there was no significant overhead, though there is scope for further automation.
The other deployment activities involve build creation (mostly automated, but with scope for further automation), release notes generation (working from a template and mostly copying changeset comments - see the sketch below) and demonstration/feedback meetings with users.  If the release is just bug fixes or minor behaviour enhancements then we don't see the need to hold a demonstration/feedback meeting to show people that an error no longer occurs or that a selection is automatically populated in response to a click elsewhere.  Demonstration/feedback meetings with users can wait until there is a more tangible difference to demonstrate.
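To give a feel for how small the release notes overhead is, here is a minimal sketch of the kind of script that could assemble notes from changeset comments.  It is hypothetical rather than our actual tooling, and it assumes a Mercurial repository and a "release-*" tag naming scheme:

    import subprocess
    from datetime import date

    # Hypothetical sketch: gather the first line of each changeset comment
    # made since the last release tag and drop them into a simple list.
    # Assumes a Mercurial repository and "release-*" tags.

    def changeset_comments(since_tag):
        """Return the first-line comment of each changeset after since_tag."""
        out = subprocess.run(
            ["hg", "log", "-r", "{0}::. - {0}".format(since_tag),
             "--template", "{desc|firstline}\n"],
            capture_output=True, text=True, check=True)
        return [line for line in out.stdout.splitlines() if line]

    def release_notes(version, since_tag):
        notes = ["Release {0} - {1:%d %B %Y}".format(version, date.today()), ""]
        notes += ["* " + comment for comment in changeset_comments(since_tag)]
        return "\n".join(notes)

    print(release_notes("1.2.3", "release-1.2.2"))

A sketch like this covers the "mostly copying changeset comments" part; the remaining template headings are filled in by hand.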
So, we're down to the development, testing and deployment of a single feature at a time (plus a handful of related bug fixes and enhancements that are undertaken in the course of the feature development through the normal refactoring and fix-it-when-you-find-it approach).
The result: faster deployment, almost continuous deployment.
As each release now consists of a single feature another question naturally arises:
Why keep a backlog of other feature requests?
Each demonstration/feedback meeting with users finishes by asking them to vote for the feature that will be of most value to them.  Not the top 5 features, but just the single feature that will give them the most value.  With only one feature being worked on at any time there is no point in maintaining a list of any other features.  At each demonstration/feedback meeting new ideas may be discovered for something that will give the most value to the users, so keeping a record of the second most valuable feature request from some point in history is just administrative overhead.
So, there is now no backlog.
The result: no backlog maintenance is required.
If we had more staff, why couldn't we continue working this way?  Couldn't we have worked like this when there was a development team?  As each feature became ready, couldn't we have deployed it?
