Let’s assume that you are part of a project team creating a new product – The Whizbanger – and you’ve just completed your major release. It’s in. It’s complete. What’s next?
Well, in some organizations, the product is passed off to the maintenance team. But you’re in a progressive organization where the product team does everything: all major releases, minor releases and bug fixes. So the next step is obviously to fix bugs and work on the next release.
Oh, you’re not in a progressive organization? That would mean you’re handing off the application to a maintenance team that will do bug fixes and perhaps minor releases while your team … works on the next major release?
So who gets all of the environments you created? The project team working on the next major release, or the maintenance team working on the next minor release? Since we know you’re not a progressive company, it probably means that the maintenance team needs some servers created. If you’re lucky, it’s a matter of cloning some virtual machines and you’re done.
After a few years of this you’ve got the request to work on multiple releases at the same time: two minor releases and one major release. Do you need another set of environments? And where do bug fixes go?
This is the situation in which many organizations find themselves. There is a solution, but, unfortunately, because your company is not progressive you’re not going there. Instead, let’s get you some more servers. And more servers. And more servers. After all, hardware is cheap, isn’t it?
That’s the usual excuse from someone who hasn’t designed something properly.
Time to move away from reality into … the rest of the world? There are a lot of organizations that have solved this problem, but it requires an application architecture designed around continuous integration and continuous deployment. If your changes can be small enough, tight enough, then everyone, maintenance people, project team members, can share the same set of environments as the code is only going to be there for a very short period of time.
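As a sketch of what that looks like in practice (the pipeline below is hypothetical — the project name, scripts, and stages are illustrative, not taken from any real product), a trunk-based delivery pipeline pushes every small change through one shared staging environment and on to production, so nobody needs a private long-lived environment:

```yaml
# Hypothetical CI/CD pipeline for "The Whizbanger" (GitHub Actions syntax).
# Every merge to main is built, tested, and promoted through ONE shared
# staging environment. Because each change is small and short-lived,
# project and maintenance teams can share the same environments.
name: whizbanger-pipeline
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests                 # testing is built into the process
        run: make test
      - name: Deploy to shared staging  # deploy.sh is a placeholder script
        run: ./deploy.sh staging
      - name: Smoke-test staging
        run: ./smoke.sh staging
      - name: Deploy to production      # small changes promote quickly
        run: ./deploy.sh production
```

The key design choice is that there is exactly one path to production and no per-release environments: a bug fix and a feature ride the same pipeline, just hours apart.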
Lyft, an Uber competitor, does an average of 240+ deployments per weekday. They even have a staging environment before production. I don’t think that they have dozens of environments waiting around to be used. No, they use microservices in combination with a shrinking monolith. Testing is built into the process. Monitoring is built into the application. The system is robust, stable, and doesn’t require a new environment for every microservice.
The cure for requiring multiple environments is to change the architecture. I can personally name a “program” that has 21 different environments spread over 90 machines, with only 1 in 5 being production. This means that 80% of the machines are in developer hands. This application is a monolith worthy of Stanley Kubrick and needs to be changed. But it’s not; it is spreading, growing, making it even more difficult to coordinate and implement. Yes, the environments are automated, but automating something bad still means that it’s bad.
Just because the hardware is cheap doesn’t mean that you should use it. Making effective use of resources usually means a better product and that, ultimately, is what the business needs.