
AWS Transit Gateway and Multi-VPC Design Options for Hybrid Cloud Architecture

First of all, from a solutions or cloud architect's point of view, why do we really need multiple VPCs, and when would it make sense?

Considering a multi-VPC architecture fundamentally means segmentation: there is a business need to segment workloads.

However, this segmentation can take different forms depending on the company structure, security policy, business functions and model, and so on. For example: a segment per environment (production, testing, development), a segment per security zone (DMZ, management, internal), a segment per business function or department (IT, HR, marketing), or a combination of these options. The driver of the segmentation can also vary: it could be security and regulatory driven, cost driven, or technology driven, or it might be based on a certain business model and offering. In addition, from an architecture point of view, breaking a single complex domain into smaller, manageable chunks almost always makes it more modular, scalable, and flexible.

As a result, there is no single best design option that fits all the different requirements, even when it is based on a multi-VPC architecture, because the actual needs, scale, and drivers can vary. That being said, there are always proven design patterns that can be considered the foundation, and then, on top of them, the specifics can be added/integrated based on the current and future requirements.

Let’s start with the basic design shown below. This is a simple multi-VPC design that requires direct communication between each VPC and the on-prem DC. With direct VPN (assuming ~1.25 Gbps per VPN tunnel is enough), this is a simple and easy design option.
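
For instance, a hedged boto3 sketch of this per-VPC VPN option might look like the following (the on-prem router IP, BGP ASN, and VPC IDs are hypothetical placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# On-prem side: one customer gateway (placeholder public IP and ASN).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="198.51.100.10", BgpAsn=65000
)["CustomerGateway"]

# AWS side: one virtual private gateway and VPN connection per VPC.
for vpc_id in ["vpc-aaaa1111", "vpc-bbbb2222"]:  # placeholder VPC IDs
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc_id)
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Options={"StaticRoutesOnly": False},  # use BGP over the tunnels
    )
```

Note that the loop itself hints at why this does not scale: every additional VPC means another gateway and another tunnel pair to build and operate.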

What if there is a need to provide VPC-to-VPC communication as well (full-mesh connectivity)? Here we have two options: add VPC peering, as illustrated in the figure below.

Or we could consider VPN among the VPCs. The advantage of using VPC peering is the higher bandwidth and the assurance that the traffic is transported over the AWS backbone. However, VPC peering is not transitive, which means centralized services, such as centralized internet access or reaching another VPC through a middle VPC over the peering, cannot be achieved. This is where VPN can be used as an alternative, taking into consideration the bandwidth capacity limit as well as the possibility that the traffic might be sent over the public internet at some point along the path.
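
As a concrete illustration of the peering option, here is a minimal boto3 sketch (the VPC, route table, and CIDR values are hypothetical placeholders) that peers two VPCs and adds the return routes on both sides:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request and accept a peering connection between two VPCs
# (same account and region in this sketch).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222"
)["VpcPeeringConnection"]
pcx_id = peering["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Point each VPC's route table at the peer's CIDR via the peering.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",
    DestinationCidrBlock="10.2.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId="rtb-bbbb2222",
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
```

For a full mesh, this pair of routes has to be repeated for every VPC pair, which is where the operational pain described next comes from.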

So far so good. What if the number of VPCs increases, along with multiple connectivity options to the on-prem DC? From an operations point of view, building and managing full-mesh connectivity with tens of tunnels/peerings among the VPCs becomes complicated.

Also, connecting the on-premises links/tunnels requires each AWS VPN connection to be attached to each individual Amazon VPC. This connectivity option is time consuming to build and hard to manage, because it does not scale once the number of VPCs grows into the tens. This is where we can start looking at the Transit-VPC architecture. As illustrated in the figure below, the Transit VPC creates a hub-and-spoke type of topology for the VPCs within AWS. The Transit VPC (the hub VPC) provides connectivity aggregation (typically aggregation of the VPN tunnels) as well as centralized access to the on-prem network and any additional network and application services (NGFW, SD-WAN, etc.). This is a proven architecture and is used by many organizations today.

As the organization's network grows to support a larger number of users in different parts of the world, the AWS services typically need to scale as well within the organization's network. As highlighted earlier, connecting and managing tens or even hundreds of VPCs via peering requires enormous route tables, which are difficult to deploy and manage and can be error prone.

Although the Transit VPC architecture overcomes most of these issues, scale in the number of VPCs, manageability, and bandwidth capacity are still limitations an organization may face when the scale becomes very high, especially when you need to quickly and easily add more Amazon VPCs from multiple AWS accounts to support increased demands on your workloads. Technically, the maximum is 125 peering connections per VPC, and managing a large number of VPN tunnels and peering connections is neither a scalable nor a manageable solution at scale.

With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway into each Amazon VPC, on-premises data center, or remote office across your network. The Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub-and-spoke model significantly simplifies management and reduces operational costs, because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to it.

Because AWS Transit Gateway (TGW) acts as a connectivity aggregation point/hub, you can easily share AWS services, such as DNS, Active Directory, and IPS/IDS, across all of your Amazon VPCs.
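
To make the hub-and-spoke model concrete, below is a minimal boto3 sketch (the ASN, VPC IDs, and subnet IDs are hypothetical placeholders) that creates a TGW and attaches two spoke VPCs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Transit Gateway (the hub).
tgw = ec2.create_transit_gateway(
    Description="hub for multi-VPC design",
    Options={
        "AmazonSideAsn": 64512,  # placeholder private ASN
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]

# Attach each spoke VPC, using one subnet per AZ for the attachment.
spokes = {
    "vpc-aaaa1111": ["subnet-a1", "subnet-a2"],
    "vpc-bbbb2222": ["subnet-b1", "subnet-b2"],
}
for vpc_id, subnet_ids in spokes.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw["TransitGatewayId"],
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```

With the default association/propagation enabled as above, every new attachment automatically lands in one shared route table, which gives the flat any-to-any model.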

Let’s review the basic operation of the AWS TGW. Technically, it acts like a big, elastic router connecting VPCs as well as on-prem networks (today VPN is supported; according to AWS, Direct Connect will be supported by early 2019). As illustrated below, each VPC or VPN is treated as an attachment, and this attachment is then associated with a TGW route table or routing domain (think of the routing domain as a VRF routing table in classical routing terms). This means a TGW can hold multiple routing domains/tables; a VPC can be associated with only one route table, but it can propagate its routes to more than one route table. This is useful when complex segmentation routing is required. Also, the cloud admin can create static route entries as well as static blackhole entries for explicit routing and traffic-engineering control. These static/blackhole routes take precedence over the propagated routes.
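
A hedged boto3 sketch of this association/propagation model (all gateway and attachment IDs are placeholders) is shown below: a dedicated TGW route table is created, one attachment is associated with it, another propagates routes into it, and static plus blackhole entries are layered on top:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A dedicated routing domain (think VRF) on the TGW.
rt_id = ec2.create_transit_gateway_route_table(
    TransitGatewayId="tgw-0123456789abcdef0"
)["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# An attachment is associated with exactly one TGW route table...
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId="tgw-attach-vpc-a",
)
# ...but any attachment can propagate its routes into several tables.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId="tgw-attach-vpn",
)

# Static and blackhole entries take precedence over propagated routes.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.9.0.0/16",
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId="tgw-attach-vpn",
)
ec2.create_transit_gateway_route(
    DestinationCidrBlock="192.168.0.0/16",
    TransitGatewayRouteTableId=rt_id,
    Blackhole=True,  # explicitly drop this prefix
)
```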

From a VPC architecture point of view, this flat, open communication model looks like the one illustrated below.

Isolated or segmented routing can also be designed with the AWS TGW. For instance, the routing-domain design below has three different route tables; VPC A and VPC B propagate their routes only to the route table with which the VPN is associated. This means VPC A and VPC B can communicate with the VPN/on-prem network, but there is no direct communication between VPC A and VPC B.

From a VPC architecture point of view, this isolated/segmented communication model looks like the one illustrated below (think of it as having a shared-services VPC (VPC C), while communication among the other VPCs is not permitted).
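
A hedged boto3 sketch of this isolation (placeholder IDs throughout, and assuming the TGW was created with default route-table association/propagation disabled) could look like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TGW_ID = "tgw-0123456789abcdef0"  # placeholder

def new_route_table():
    return ec2.create_transit_gateway_route_table(
        TransitGatewayId=TGW_ID
    )["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

rt_vpc_a, rt_vpc_b, rt_vpn = (new_route_table(), new_route_table(),
                              new_route_table())

# Each attachment lives in its own routing domain.
for rt, attach in [(rt_vpc_a, "tgw-attach-a"),
                   (rt_vpc_b, "tgw-attach-b"),
                   (rt_vpn, "tgw-attach-vpn")]:
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=rt, TransitGatewayAttachmentId=attach
    )

# VPC A and B propagate only into the VPN route table, and the VPN
# propagates into each VPC table: A<->VPN and B<->VPN can talk, but
# A and B never learn each other's routes.
for attach in ("tgw-attach-a", "tgw-attach-b"):
    ec2.enable_transit_gateway_route_table_propagation(
        TransitGatewayRouteTableId=rt_vpn, TransitGatewayAttachmentId=attach
    )
for rt in (rt_vpc_a, rt_vpc_b):
    ec2.enable_transit_gateway_route_table_propagation(
        TransitGatewayRouteTableId=rt,
        TransitGatewayAttachmentId="tgw-attach-vpn",
    )
```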

As shown in the examples above, the TGW route table allows you to define the next hop as an attachment (VPC/VPN) to forward packets to (a transit gateway attachment is both a source and a destination of packets).

The isolated/segmented routing architecture can be extended into the VPC design. For example, in the design shown below, a private subnet in each AZ is associated with the TGW to provide internal backend connectivity to the other AWS VPCs and on-prem networks; a network interface is created automatically per AZ for the attached VPC subnet, while the VPC route tables control how traffic is routed within the VPC.
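
On the VPC side, this simply means the subnet route tables point the relevant prefixes at the TGW; a short hedged example with boto3 (placeholder route table, CIDR, and TGW IDs):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Send other-VPC/on-prem destinations from this private subnet to the TGW.
ec2.create_route(
    RouteTableId="rtb-private-az1",
    DestinationCidrBlock="10.0.0.0/8",  # placeholder internal aggregate
    TransitGatewayId="tgw-0123456789abcdef0",
)
```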

Similarly, the connectivity can be terminated into different routing domains/tables of the TGW to provide isolated routing among different VPCs. This can be used for security-inspection types of designs, where virtual appliances handle traffic passing between different security zones/VPCs.

Since the AWS TGW can act as the connectivity aggregation point, the services VPC(s) that provide specialized functions, such as security inspection, SD-WAN, etc., can be moved from the Transit VPC architecture and connected as a services VPC to the AWS Transit Gateway, as shown below, considering that multiple VPN tunnels offer high aggregate bandwidth and that the TGW supports multiple tunnels from multiple virtual appliances (horizontally scalable).

This requires the virtual appliance to support VPN, BGP, and source NAT (SNAT). SNAT helps with stateful instances by ensuring return traffic uses the same path/appliance, while BGP dynamic routing with VPN dead peer detection helps maintain HA and handle failover among the different tunnels/virtual instances.

If a VPN tunnel or BGP is not an option for the virtual appliance, an ENI can be used for the TGW VPC attachment; however, this loses the route/peer failure detection that BGP over the VPN tunnel provided, as the attachment has no built-in health-check mechanism. Also, from a performance point of view, this means there will almost always be one TGW attachment per AZ, and traffic will not be distributed evenly across instances, because ECMP requires multiple equal-cost routes learned over multiple paths via BGP to work.

Note: the on-prem connections can be terminated at the TGW, as the aggregation point, for simplicity, and by using multiple route tables, traffic routing can be controlled to pass through the inline services VPC, as shown below.
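
One hedged way to express that steering (placeholder IDs) is a static default route in the route table associated with the on-prem attachment, pointing at the services VPC attachment:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Traffic arriving from on-prem is forced through the inline
# services/inspection VPC before reaching the spoke VPCs.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",  # placeholder
    TransitGatewayAttachmentId="tgw-attach-services",        # placeholder
)
```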

According to AWS, it is partnering with Cisco and other vendors for TGW edge services.

Let’s look at these different VPC design options and think about them like an architect to decide when to use what (the information in the table below is generic to a certain extent; you will need to dive deeper into each aspect when making the design decision for a real solution).

| | Direct full/partial mesh | Transit VPC | Transit Gateway |
|---|---|---|---|
| Scale | Very low | Low to medium | Very high |
| Performance | VPN, ~1.25 Gbps per tunnel | VPN, ~1.25 Gbps per tunnel | VPN, ~1.25 Gbps per tunnel; with ECMP may go beyond 50 Gbps |
| Security | Encryption with IPsec; limited segmentation options | Encryption with IPsec; segmentation using virtual appliances in the hub VPC | Encryption with IPsec; flexible segmentation options (route tables, virtual appliances, shared services/security VPC, etc.) |
| Manageability | The larger the scale, the more complex to manage | The larger the scale, the more complex to manage (increased number of tunnels, etc.) | Single management plane for the different routing domains and route propagations |
| Flexibility | Limited | Intermediate to high, depending on the scale and design requirements | High |
| Integration | Limited at scale (difficult to integrate and control routing and secure segmentation at scale) | High; a centralized VPC can provide centralized integration and shared services | Very high; a single aggregation point for routing and segmentation, with integration with specialized VPCs (security, SD-WAN, shared services, etc.) |
| Potential use case | Very limited number of VPCs | Medium number of VPCs that need centralized connectivity and segmentation with hybrid cloud, such as SD-WAN with on-prem | Medium to very large number of VPCs that need centralized connectivity and complex segmentation with hybrid cloud, such as SD-WAN with on-prem; VPC routing across multiple AWS accounts |

VPC Sharing

A shared VPC, or VPC sharing, is a great approach to achieving a flexible, efficient, and cost-effective VPC design when dealing with a multi-VPC architecture across different accounts/logical groups, as it allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances and Amazon Relational Database Service (RDS) databases, in a centrally managed, shared Amazon Virtual Private Cloud (VPC). With this approach, the account that owns the VPC (the owner) shares one or more subnets with other accounts (participants) that belong to the same organization in AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or to the VPC owner.

In other words, VPC owners are responsible for creating, managing, and deleting all VPC-level resources, including subnets, route tables, network ACLs, peering connections, VPC endpoints, PrivateLink endpoints, internet gateways, NAT gateways, virtual private gateways, and transit gateway attachments. However, VPC owners cannot modify or delete participant resources, including security groups that participants created (the 'separation of concerns' principle).

One of the key benefits of this approach, apart from separation of concerns, is that data transfer within the same Availability Zone (uniquely identified using the AZ ID) is free, irrespective of which account owns the communicating resources.
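
Mechanically, the sharing is done through AWS Resource Access Manager (RAM); a minimal hedged sketch follows (the subnet ARN, account IDs, and share name are hypothetical placeholders):

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# The VPC owner shares one subnet with a participant account that
# belongs to the same AWS Organization.
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-aaaa1111"
    ],
    principals=["222222222222"],    # participant account ID
    allowExternalPrincipals=False,  # keep the share inside the org
)
```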

Note: there are other proven ways to provide shared services, such as AWS PrivateLink: "You can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS principals can create a connection from their VPC to your endpoint service using an interface VPC endpoint. You are the service provider, and the AWS principals that create connections to your service are service consumers."
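
A minimal hedged sketch of both sides of PrivateLink with boto3 (the NLB ARN, VPC ID, and subnet ID are hypothetical placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provider side: expose a service fronted by a Network Load Balancer
# as an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/shared-svc/0123456789abcdef"
    ],
    AcceptanceRequired=True,  # provider approves each consumer
)["ServiceConfiguration"]

# Consumer side: reach the service via an interface VPC endpoint
# from another VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-cccc3333",
    ServiceName=svc["ServiceName"],
    SubnetIds=["subnet-c1"],
)
```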

The following are the limits of the TGW at the time of writing:

- Supports VPN to on-prem; according to AWS, Direct Connect support is expected in early 2019
- Supported within a region; inter-region TGW support is on the roadmap
- Supports 5,000 TGW attachments
- 20 route tables per TGW
- 10,000 routes per TGW

 

Further reading:

https://aws.amazon.com/transit-gateway/

https://aws.amazon.com/blogs/aws/new-use-an-aws-transit-gateway-to-simplify-your-network-architecture/

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html

Marwan Al-shawi – CCDE No. 20130066, Google Cloud Certified Architect, AWS Certified Solutions Architect, and Cisco Press author (author of the top Cisco certification design books, the CCDE Study Guide and the upcoming CCDP Arch 4th Edition). He is an experienced technical architect who has been in the networking industry for more than 12 years and has been involved in architecting, designing, and implementing various large-scale networks, some of which are global service provider-grade networks. Marwan holds a Master of Science degree in internetworking from the University of Technology, Sydney. Marwan enjoys helping and assessing others; he was therefore selected as a Cisco Designated VIP by the Cisco Support Community (CSC) (the official Cisco Systems forums) in 2012, and by the Solutions and Architectures subcommunity in 2014. In addition, Marwan was selected as a member of the Cisco Champions program in 2015 and 2016.
