From the course: VPC Networking on AWS: Configuration, Security, and Connectivity

Plan the creation of VPCs

- [Instructor] In this lesson, we're going to look at the planning and the creation of VPCs. Or another way of looking at it is, how many VPCs do I really need? You might think one network is good enough. Let's see. If I go with a one-VPC solution, at the start, it's simpler because I have a collection of subnets in a single VPC. Everything's there. Nice. Or is it? We do have a unified configuration in that we have a single CIDR block of IP addresses, one set of route tables, a single internet gateway to get to the internet. So it's simple, but is there growth? Should everything be there? Should development, test, and production all live in the same network? But for now, we'll say okay, unified configuration, something small, maybe a small startup, single web server. This is what I'm thinking of doing. It's also simpler to monitor. Everything's within the one network. If I run network traces, I can do it on the one network, and it's easier to build. It's easier to automate. I can use tools such as Terraform or CloudFormation or even the built-in automation within the VPC console to quickly create a network. So, so far, so good, but let's look at the other side of this discussion. One VPC is going to potentially give you some problems. If you are a company that has multiple applications, you probably don't want a single VPC because there's no isolation between the two or three different applications that you're building. Amazon likes to use the term blast radius, and it's a good term. If I have a problem on a particular subnet, and that subnet hosts multiple databases for multiple applications, that could be a problem. If somebody gains access to my network and, on that single network, they can access all my data, maybe that's a problem. Maybe there are compliance and data sovereignty issues. Maybe you're a bank, and you're told, "You can't have all of your resources on a single subnet. Our compliance rules and regulations forbid that." And that's very common. 
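The one-VPC starting point described above, a single CIDR block carved into subnets, can be sketched with Python's standard `ipaddress` module. The `10.0.0.0/16` range and the tier names are illustrative assumptions, not values from the course.

```python
import ipaddress

# One VPC, one unified CIDR block (illustrative range).
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets; a /16 yields 256 of them.
subnets = list(vpc_cidr.subnets(new_prefix=24))

# A small startup's plan: one subnet per tier (hypothetical tiers).
plan = {
    "web": subnets[0],       # 10.0.0.0/24
    "app": subnets[1],       # 10.0.1.0/24
    "database": subnets[2],  # 10.0.2.0/24
}

for tier, subnet in plan.items():
    print(f"{tier:>8}: {subnet} ({subnet.num_addresses} addresses)")
```

Everything lives inside the one CIDR block, which is exactly what makes this simple to build and simple to monitor.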
Maybe you're dealing with HIPAA or PCI DSS or GDPR over in the EU. They have very strict rules and regulations, so you'll probably find a single VPC is not going to work in these sorts of scenarios. Ultimately, it can be very complex with one VPC if you have production, test, and maybe quality control all inside a single network. So we want to split this apart. But before we get there, let's look at a couple of other problems. You might have an issue with a single network when integrating with an on-premises network. Let me give you an example. Let's say you set up a network, and you decided, "For the IP address range, I'm going to pick 172.16.0.0/16," and you're thinking, "That's a good number. I wonder why it sounds so familiar. Oh, wait a minute. That's the address scheme of my on-premises network." How are two networks with the same network addressing scheme going to talk? Well, they're not. Okay, so maybe separate VPCs might make more sense. There also could be a cost management issue. If you're trying to separate costs into different departments, different ways of looking at the costs, it's a little harder if it's all running on the same network. It also gives you issues with trying to experiment. If you can imagine playing with a production network and saying, "Well, I'm going to change this subnet. I'm going to add a subnet. I'm going to add resources here," you can run into some real issues. Compliance and data sovereignty are probably the most important considerations for networks in that you are probably going to be regulated once you start talking about customer data. If you have networks at AWS, you have a choice of where those networks are going to be created. VPCs in separate regions are subject to different rules and regulations because those regions are in different countries. If you're a financial institution, you're probably going to be forced to have separate VPCs to hold onto the customer transactions separately, and the auditing and analytics. 
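The address collision just described, a VPC CIDR that matches the on-premises range, is easy to check for in code before anything is built. A minimal sketch using Python's standard `ipaddress` module; the ranges come from the example in the lesson, while the `10.0.0.0/16` alternative is an illustrative assumption:

```python
import ipaddress

# The on-premises network and the proposed VPC range from the example.
on_prem = ipaddress.ip_network("172.16.0.0/16")
proposed_vpc = ipaddress.ip_network("172.16.0.0/16")

# Identical (or merely overlapping) ranges can't route to each other.
print(proposed_vpc.overlaps(on_prem))  # True -> the two networks can't talk

# A non-overlapping range (illustrative choice) avoids the conflict.
alternative = ipaddress.ip_network("10.0.0.0/16")
print(alternative.overlaps(on_prem))  # False -> safe to connect
```

Running a check like this against every existing network, on-premises and cloud, is cheap insurance before committing to a CIDR block.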
The auditing and analytics are going to be somewhere separate as well. The auditors generally will want a lot of granular access and granular control as to what's happening on your particular network. One feature for VPCs is a flow log. You can have a flow log per VPC for isolated network traffic logs, so the separation ends up being a good thing. Another way to think of this is separating the environments: separate VPCs for development, testing, and production. The developers can experiment in the Dev VPC without risking anything in production. So logical isolation is the way you want to think about it. Developers, yeah, full access to their network. Production, not full access. We want to separate out what the jobs are and where those jobs run. So developers work in development, and production hosts the production application after it's been okayed. If I have separation, i.e., separate VPCs, I can then have different security rules, and I can address the different compliance requirements. For example, in production, I need better auditing. I need less access from developers. I have different rules and regulations based on the application. I also can have different performance needs met on the different networks. Developers can work in a smaller environment using the same web application and database servers, but they can be smaller. They don't have thousands of customers. Production, with thousands of customers, has to be bigger, so the scaling can be different, and the management of the resources can be different as well. Here's an example for you. We have a company that has created an accounting application for their accounting department, but it was developed on another network. The accounting department is using the production application. The development environment is working on updating that application, but they don't want to be updating the production application. We also have a shared services VPC. This is where everybody authenticates when they come to work. 
This is where DNS is running. This is where potentially Active Directory could be running. This is where security services can be running, forcing all of the communication through the shared services VPC, and then being able to connect to either the accounting or the development department. Separation is a little more complicated to set up, but a little easier to troubleshoot, because we have this separation. For testing and innovation, a sandbox environment is absolutely necessary. So Explore California needs a dev environment to actually test out this new application that they're building and not get into any trouble by working in the production environment. They're also going to get into, over time, different versions of their application, and they're going to want to isolate those versions in the test environment away from the production environment, and look at how to test the production setup at a small scale before making it larger. We can only do things like this when we have multiple VPCs at the backend hosting our resources. So Explore California has a test environment. It's a certain size. The production environment will be even bigger. The test environment is for testing and development. The production environment is for customers. Now, the test environment could be expanded into testing after development. So they could have three, four, or five VPCs. It just depends on the design. So there are always trade-offs to consider. There's management overhead. More VPCs, more configuration. If I have more VPCs, it gets more complex. Maybe I need advanced routing knowledge. Maybe I need additional tools. But you're probably going to find that having multiple VPCs is well worth it. We'll end with a couple of examples. We have a healthcare provider, and they're isolating their patient data in one VPC and the administration in another VPC. We're ensuring HIPAA compliance with the boundaries that are enforced. 
A streaming service uses a primary VPC in one region, us-west-2, and the backup VPC is in ap-southeast-1. If we have an issue in one region, we can fail over using DNS, the Route 53 service, to the other region. And finally, we have a financial institution that separates out their VPC for customer transactions into one region, in the US. For analytics, they're over in Europe, and they have to adhere to local laws while maintaining their audit requirements. So for Explore California, they're going to create three VPCs for each application stack: a development, a test, and a production VPC for each application.
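Explore California's plan, dev, test, and production VPCs per application stack, can be sketched as a simple CIDR allocation. The application names, the `10.0.0.0/8` super-range, and the /16-per-VPC sizing are illustrative assumptions, not values from the course:

```python
import ipaddress

# Hypothetical application stacks and the three environments per stack.
apps = ["website", "booking"]
environments = ["dev", "test", "prod"]

# Hand each VPC its own /16 out of a private super-range so that
# none of the VPC CIDRs overlap (needed later for peering/routing).
available = ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=16)
vpcs = {(app, env): next(available) for app in apps for env in environments}

for (app, env), cidr in vpcs.items():
    print(f"{app}-{env}: {cidr}")
print(f"total VPCs: {len(vpcs)}")  # 2 apps x 3 environments = 6
```

Allocating every VPC from one planned super-range up front is what keeps the door open for connecting the VPCs to each other, or to an on-premises network, later on.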
