Automated Builds, Continuous Integration, DevOps, oh my! There are a ton of buzzwords surrounding the different methods and ideas for managing hosting environments and deployments. One strongly implied concept that isn’t discussed very much is the importance of keeping the different hosting environments for the same application (Dev, Test, UAT, Staging, Production, etc.) separate and isolated. During development and maintenance these environments can easily become mixed together. This article explains why they should be kept separated, and how to keep them that way.
How Environments Die
Let’s start with a real-world example of a new application’s development cycle, from app creation to production rollout:
Step 1: The development team creates a new web application and needs to deploy it for testing, so they create a UAT environment.
Step 2: Some new features need more detailed testing by the developers, so they create a pseudo “DEV” environment for the web application, but use the same database as the UAT environment. “Hey, why create a new database if we don’t need to?”
Step 3: Development is going really well and a release the customer can use needs to be made, so the team creates a new “PROD” (Production) environment with its own database.
Step 4: The application hasn’t been rolled out to Production yet, but the PROD environment has been created. The client needs a site for User Acceptance Testing (UAT) that is separate from the UAT environment the team’s QA Tester is using, so the team creates a UAT2 environment for the client that points to the PROD database.
Step 5: All is golden and everything is ready for a Production rollout, so the entire web application is rolled out to the PROD environment.
Step 6: The development team hands off the project to the maintenance development team.
Step 7: The client tests a new bug fix in UAT2 and changes some data, then reports in confusion that the change somehow showed up in Production (PROD). At the same time, a maintenance developer changes code and data in the DEV environment, and the QA Tester reports in confusion that the data they relied on for testing has disappeared.
Step 8: After much confusion and many discussions, since the original development team has since left the project, the maintenance team spends a few weeks sorting out the hosting environments so everything is documented.
Result: The result of the above development process is that the environments are thoroughly commingled. Here’s a breakout of which environments are mixed with which:
- DEV Site with UAT Database
- UAT Site with UAT Database
- UAT2 Site with PROD Database
- PROD Site with PROD Database
Resurrection via Correct Isolation
Implementing correct environment isolation is very important, and the best time to implement it is right from the start. However, you don’t often have the luxury of starting over with an existing system, so correct isolation is something that needs to be worked towards. No matter where you start, the desired end result is the same. Now, let’s take the above “bad” example and outline what a straightened-up, “good” version of it looks like.
It may seem simple, but each environment should be completely isolated from the rest, complete with its own database and other resources. Here’s a breakout of what the above “bad” example should look like:
- DEV Site with DEV Database
- UAT Site with UAT Database
- UAT2 Site with UAT2 Database
- PROD Site with PROD Database
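In code, that isolation usually comes down to each environment resolving only its own connection string. Here’s a minimal sketch (Python; the server names, database names, and the ENVIRONMENT variable are all hypothetical) of a loader that refuses to run if an environment is pointed at another environment’s database:

```python
import os

# Hypothetical connection strings, one per environment; in practice these live
# in each environment's own configuration (app settings, key vault, etc.).
CONNECTION_STRINGS = {
    "DEV":  "Server=myapp-dev-sql.database.windows.net;Database=myapp-dev-db;",
    "UAT":  "Server=myapp-uat-sql.database.windows.net;Database=myapp-uat-db;",
    "UAT2": "Server=myapp-uat2-sql.database.windows.net;Database=myapp-uat2-db;",
    "PROD": "Server=myapp-prod-sql.database.windows.net;Database=myapp-prod-db;",
}

def get_connection_string() -> str:
    """Resolve the connection string for the current environment only."""
    # The environment name comes from a variable set on the host, so code
    # deployed to DEV can never quietly pick up the PROD database.
    env = os.environ.get("ENVIRONMENT", "DEV").upper()
    conn = CONNECTION_STRINGS[env]
    # Guard rail: the database name should match the environment it serves.
    if f"-{env.lower()}-" not in conn:
        raise RuntimeError(f"{env} is configured with another environment's database")
    return conn

if __name__ == "__main__":
    print(get_connection_string())
```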
Transforming a jumbled mess of “bad” environment isolation isn’t necessarily easy, depending on how tangled it is; especially if it’s as bad as the previous example. However, it can definitely be done.
Transforming Bad to Good
Transforming from “bad” isolation to “good”, correct isolation is done by taking a few simple steps, one environment at a time. While correct isolation is implemented from the bottom up in a brand new system, the transformation of an existing system needs to start from the top down.
The reason the transformation of an existing system starts at the top is that Production is the highest priority, so that’s where you start. That means disconnecting every environment connected to Production (PROD) resources except the Production environment itself. In the above “bad” example, that means disconnecting UAT2 from PROD. This could be done by taking UAT2 down, or by pointing it to the UAT database in the interim.
Point UAT2 to the UAT database in the interim? Isn’t that bad too? Yes, but it’s far better than pointing it to PROD. Additionally, keeping the UAT2 environment up allows the development, testing, and release cycle to continue while the environments are straightened out.
Once UAT2 is disconnected from PROD, the next step is to give UAT2 a proper database of its own and set up its isolation correctly. After that, just move on to UAT, then DEV, and so on.
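A simple way to see where to start is to map each site to the database it actually points at, then flag anything sharing a database that belongs to another environment, working from the top down. A minimal sketch (Python), using the “bad” example’s layout with illustrative names:

```python
# Snapshot of the "bad" example above: each site and the database its
# connection string actually points at (names are illustrative).
environments = {
    "DEV":  "UAT-DB",
    "UAT":  "UAT-DB",
    "UAT2": "PROD-DB",
    "PROD": "PROD-DB",
}

# Work from the top down: anything touching Production gets fixed first.
priority = ["PROD", "UAT", "UAT2", "DEV"]

for owner in priority:
    owned_db = f"{owner}-DB"
    squatters = [env for env, db in environments.items()
                 if db == owned_db and env != owner]
    for env in squatters:
        print(f"{env} points at {owned_db}; give {env} its own database")
```

Run against the “bad” layout, this flags UAT2 (pointing at PROD-DB) first, then DEV (pointing at UAT-DB), which matches the fix order described above.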
Sometimes, the easiest way to transform an existing system to correct isolation is to move the environments. This could be done by taking down the UAT2 environment that is hosted on-premises and moving it into the cloud; for example, into Microsoft Azure.
How Azure Helps
While Microsoft Azure certainly isn’t the only option for implementing correct environment isolation, it can definitely be leveraged to help ensure this isolation. Using features like Azure Web Apps, Azure SQL, Azure Storage, and Resource Groups, you can logically separate the different environments very easily.
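For example, a consistent naming and tagging convention makes the per-environment grouping obvious at a glance. A minimal sketch (Python; the app name and naming scheme are hypothetical):

```python
# A hypothetical naming and tagging convention: one resource group per
# environment, so each environment's resources can be secured (and torn down)
# as a single unit.
APP = "myapp"
ENVIRONMENTS = ["dev", "uat", "uat2", "prod"]

def resource_names(env: str) -> dict:
    return {
        "resource_group": f"{APP}-{env}-rg",
        "web_app":        f"{APP}-{env}-web",
        "sql_server":     f"{APP}-{env}-sql",
        "storage":        f"{APP}{env}store",  # storage account names allow only lowercase letters and numbers
        "tags":           {"application": APP, "environment": env},
    }

for env in ENVIRONMENTS:
    print(resource_names(env))
```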
In addition to resource groups, the shared access keys for each Azure resource, along with possibly using multiple Azure Subscriptions, let you tightly control the permissions surrounding the various environments, making it difficult for a “rogue” developer to “just” point one environment to another.
A specific feature of Azure Web Apps that helps with this is the ability to override application settings and connection strings from within the Azure Portal. This way you can allow your developers or infrastructure admins to deploy new code to the Azure Web App without ever letting them see the app settings and connection strings that particular environment uses.
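From the application’s side, the code only ever reads whatever the environment hands it at runtime. Here’s a minimal sketch (Python), assuming an Azure SQL connection string named “DefaultConnection”; App Service surfaces connection strings to the running app as environment variables, with Azure SQL entries prefixed SQLAZURECONNSTR_:

```python
import os

def get_sql_connection_string(name: str = "DefaultConnection") -> str:
    """Read an Azure SQL connection string configured in the portal."""
    # App Service exposes "Connection strings" to the app as environment
    # variables; SQL Azure entries get the SQLAZURECONNSTR_ prefix.
    # The name "DefaultConnection" is hypothetical.
    value = os.environ.get(f"SQLAZURECONNSTR_{name}")
    if value is None:
        # Fall back to a plain environment variable for local development.
        value = os.environ.get(name, "")
    return value

print(get_sql_connection_string())
```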
With Azure SQL, you can grant each developer specific access to each environment’s database, making sure they don’t have an “owner” or “admin” account and connection string they could use to point one environment to another.
There are many other Microsoft Azure features that can be used to help implement a “good”, correct environment isolation architecture. In the coming weeks, new articles on this site will outline many of these features.
Could you please let me know how we would isolate Active Directory. For example, if we have DEV, QA, and PROD domains on-premises, can we have the same kind of isolation in Azure per subscription? Is there a blog or tech note that talks about this?
Yes, you could set up a separate Azure Subscription with Azure AD synced for each of the on-premises domains (DEV, QA, PROD). That would keep them all completely isolated. I would be interested to hear your reasons for having separate environments that include separate domains too.
We have one subscription under one account. Under that subscription, we have one virtual network with only one subnet. In that virtual network, we divide our departments’ production resources into separate resource groups under the one subnet. Recently, a department requested a development server, so we decided to separate the development environment of each department into separate subnets and resource groups. Is this a good idea? Would it affect the production subnet if developers play around in their specific resource group and subnet?
If you isolate the subnets appropriately then you shouldn’t have much interference. However, I would recommend creating a different Virtual Network for each of your different environments (Prod, Dev, Test, etc). You could also have multiple Virtual Networks connected using either a VPN or with VNet Peering. This would enable you to create a Virtual Datacenter in the cloud that would be much easier to manage than using a single Virtual Network for all Subnets and everything.
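One thing to watch when planning that layout: peered VNets can’t have overlapping address spaces, so it’s worth validating the address plan up front. A minimal sketch (Python; the address ranges are hypothetical):

```python
import ipaddress

# Hypothetical address plan: one VNet per environment. Peered VNets must not
# have overlapping address spaces, so check the plan before deploying anything.
vnets = {
    "prod": "10.1.0.0/16",
    "dev":  "10.2.0.0/16",
    "test": "10.3.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in vnets.items()}

names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if networks[a].overlaps(networks[b]):
            raise ValueError(f"{a} ({networks[a]}) overlaps {b} ({networks[b]}); peering will fail")

print("Address spaces are disjoint; these VNets can be peered.")
```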
Thanks Chris. We eventually split the Prod, Dev, and Test environments into separate subscriptions and separate VNets and then implemented VNet peering between them if they need to communicate. About the subnet isolation, each department will have their own subnets in their respective Prod, Dev, Test subscriptions separated using NSGs. How stringent should these NSG rules be? At the moment we have (in order of priority):
130) Allow anything from the same subnet to the same subnet.
139) Allow anything from the subnet Load Balancer to the same subnet. This makes sure any traffic from the subnet load balancer is only heading to the same subnet.
140) Deny anything from the current VNet to the current VNet. This overrides rule 65000.
141) Deny anything from the subnet load balancer to anything. This overrides rule 65001.
Default rules:
65000) Allow anything from the same VNet to the same VNet.
65001) Allow anything from the subnet Load Balancer to anything.
65500) Deny anything from anything to anything.
Is this too stringent or redundant?
Strict NSGs are always good added security to have. I don’t know how many applications you have, but it doesn’t hurt to have multiple VNets for each environment to further separate things out and make them more manageable on a per-workload basis. It’s good to hear that you already separated things out; that should prove to be much more manageable for you. 🙂
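As a rough illustration of why your custom deny rules take effect before the defaults, here’s a toy model (Python) of priority-ordered evaluation. It treats sources and destinations as plain labels rather than doing real CIDR or service-tag matching, and “10.0.1.0/24” just stands in for one of your subnets:

```python
# Toy model of NSG evaluation: rules are checked in ascending priority order
# and the first match wins, which is why the custom 140/141 deny rules above
# take effect before the 65000/65001 default allow rules.
rules = [
    (130,   "Allow", "10.0.1.0/24",       "10.0.1.0/24"),
    (139,   "Allow", "AzureLoadBalancer", "10.0.1.0/24"),
    (140,   "Deny",  "VirtualNetwork",    "VirtualNetwork"),
    (141,   "Deny",  "AzureLoadBalancer", "*"),
    (65000, "Allow", "VirtualNetwork",    "VirtualNetwork"),  # default rule
    (65001, "Allow", "AzureLoadBalancer", "*"),               # default rule
    (65500, "Deny",  "*",                 "*"),               # default rule
]

def evaluate(source: str, destination: str) -> str:
    for priority, action, src, dst in sorted(rules):
        if src in (source, "*") and dst in (destination, "*"):
            return f"{action} (rule {priority})"
    return "Deny (implicit)"

print(evaluate("10.0.1.0/24", "10.0.1.0/24"))        # Allow (rule 130): traffic within the subnet
print(evaluate("VirtualNetwork", "VirtualNetwork"))  # Deny (rule 140): other VNet traffic never reaches 65000
```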