
Blog 6 – AWS Hybrid Connectivity Design Considerations

Disclaimer: the content of this blog is solely based on my personal views/experience, and it does not represent the view of any company or anyone else. The content is intended for educational purposes only, and it is not an official whitepaper or best-practices document. Therefore, you must always refer to the latest official AWS documentation before applying anything discussed in this blog series to any AWS environment.

The previous blogs, “Hybrid Connectivity Design Options – part-1” and “Hybrid Connectivity Design Options – part-2”, covered the connectivity options (VPN & DX) of the hybrid model (on-premises DC site(s) to AWS). This blog analyzes and discusses some of the key design considerations for these connectivity options.

Note: this blog won’t dive deep into the design considerations with multiple VPCs, as this topic will be covered separately in a future blog.

There are many aspects to take into consideration when designing a hybrid connectivity model, and each aspect and its level of criticality can vary based on the targeted architecture. This blog will cover some of the most common aspects, listed below:

  • Connectivity Option Selection Criteria
  • Resiliency & Traffic Engineering Considerations
  • Transitive Routing Considerations
  • Hybrid DNS

Note: Although cost is another important aspect to consider in such a design, as highlighted previously, this blog series focuses on the technical aspects only. Therefore, you as a designer/architect need to weigh the cost aspect when two design options both meet the business and technical requirements. In some scenarios cost may even be the most critical aspect, in which case the business may accept the tradeoffs of a lower-cost design option that does not meet all the requirements optimally. That’s why there is no single best answer here.

Connectivity Option Selection Criteria

As you may have noticed from the previous blogs, AWS offers multiple connectivity options, and each can be designed in different ways that offer different capabilities. From a design point of view this is great; however, the decision to pick one connectivity option over another must be evaluated carefully, to avoid limitations or complexity in the future that may require major design changes. To keep it simple, refer to the following table, which compares the options based on different design attributes.

| Design attribute | VPN-VGW | VPN-TGW | DX | DX-DXGW | DX-TGW |
| Scalability (with many VPCs) | Limited | High | Limited | High | Very high |
| Bandwidth | Limited | High with ECMP | High | High | High |
| Inter-VPC | N/A | Yes, via TGW | N/A | No | Yes |
| Inter-Region | No | Yes, with TGW peering | No | Yes | Yes, with DXGW or TGW peering |
| Manageability at scale (many VPCs) | High | Low | High | Medium | Low |
| Encryption in transit | Yes | Yes | No, requires IPsec VPN overlay | No, requires IPsec VPN overlay | No, requires IPsec VPN overlay |

 

Resiliency & Traffic Engineering Considerations

When it comes to resiliency with a hybrid connectivity model, it is important to look at the following aspects.

Redundancy

This term can be interpreted differently based on the scale and criticality of the service and the impact of its downtime. Redundancy is often understood as redundant connections, but that is only one aspect of it. It also refers to redundant devices, redundant regions/locations, redundant physical network paths, power sources to the devices, and so on. Therefore, it is very difficult to recommend a single combination or all of them together; as mentioned earlier, “it depends” on the scale, the service criticality, the impact magnitude of a failure on the organization’s business, and whether the design has to comply with certain predefined rules. To keep it simple, in this blog we refer to redundant paths as having a secondary connection to your on-premises site(s), which could be either over DX or VPN.

Dual DX links to a single DX location in the same region vs. dual links across two DX locations in the same region

If the architecture is global with multiple on-premises DC sites, consider two or more DX links across different DX locations, if this model is feasible for the global solution in terms of traffic load, traffic patterns, application architecture, etc.

There are different ways to look at this; the best approach is to look at the end-to-end big picture and ask, “Where are the possible single points of failure in the architecture?” This gives a good indication of what might need to be improved to avoid a major outage when a single component fails.

Although there are several redundancy design options, it is very difficult to recommend one over another, because it depends on multiple factors such as the criticality level and impact magnitude of a failure on business operations, reputation, acceptable downtime, performance degradation, etc. Therefore, the designer/architect is best placed to determine which option should be considered and why.

Failover Time

I used the figure below in my Cisco Press CCDE book, and I always like to refer to it when discussing network resiliency and failover time. Simply put, you can see that there are ‘at least’ three operations, each with its own time ‘T’.

The main one I want to focus on here is failure detection. This is a very important aspect: you may have a routing design or protocol tuned to fail over very quickly, but that failover will not be triggered quickly if the failure itself is not detected and reported fast enough. There are many techniques to speed up failure detection depending on the connectivity type, physical medium, etc. With AWS hybrid connectivity, if you are using VPN you may need to look into VPN dead peer detection (DPD), and if you are working with an AWS DX connection, you need to look at Bidirectional Forwarding Detection (BFD), which allows for faster routing re-convergence.
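As an illustration of the VPN side, here is a minimal boto3 sketch of tuning dead peer detection on one tunnel of an existing Site-to-Site VPN connection; the connection ID, tunnel outside IP, and timeout values are placeholders, so treat it as an example of the mechanism rather than recommended settings.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tune dead peer detection (DPD) on one tunnel of an existing
# Site-to-Site VPN connection. IDs/addresses below are placeholders.
ec2.modify_vpn_tunnel_options(
    VpnConnectionId="vpn-0123456789abcdef0",
    VpnTunnelOutsideIpAddress="203.0.113.10",
    TunnelOptions={
        "DPDTimeoutSeconds": 30,        # lowest value the API accepts
        "DPDTimeoutAction": "restart",  # restart the IKE session on DPD timeout
    },
)
```

For BFD on DX, no such API call is needed on the AWS side; it is the customer router configuration that typically has to be adjusted.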

Failover & Operational Quality

The term ‘operational quality’ here refers to some scenarios where a system might be technically up, yet is not performing its functions at the minimum required or expected/intended level.

It is important to note that system reliability is one of the primary contributing elements to achieving the ultimate level of system or network availability.

This is where the ‘design for failure’ concept should be applied to evaluate the impact of a link or network path failure on operational quality. For instance, failing over from a 10G link to a 1G or 2G link may introduce major performance degradation.

Traffic Engineering

If you are from a networking background, this should be a fun topic to work with, even though it can be one of the most complex ones. Typically, we need to understand how traffic will flow across the available network paths, and then we can re-engineer it to flow the way we need it to, based on the requirements. Normally, these are the applications’ requirements, which in turn are driven by business requirements. Before jumping into the technical design, we first need to look at the big picture to understand what needs to be achieved: What are the available paths? Is it a regional or global setup? Is there any preference for a certain link based on location or capacity? How does the on-premises side interact with the VPCs? Are there any application dependencies?

Designing traffic engineering in networking in general is not an easy task, because it is not simply “I have two paths; I use my high-speed path for my applications and fail over to the lower-capacity one in case of failure.” For example, in the figure below there are two paths: a short, high-speed path and a longer, slower one. If all the applications take the short, high-speed path, there is a higher possibility of traffic congestion, which can lead to a longer time to reach the destination than over the other path. This is where traffic engineering helps to control when and whether a certain link or path should be used.

In other words, the requirements need to be analyzed to understand how traffic should flow in normal and failure scenarios. Remember to always design for failure: look at how the solution will operate in different failure scenarios and whether that will be acceptable to the business. If you use a VPN tunnel as a backup to a 10G DX link, there will be a significant performance impact in a DX failure scenario. Is this acceptable? If not, how could the design be changed to achieve an acceptable level? Perhaps by bundling multiple VPN tunnels to a TGW with ECMP, by adding a secondary DX link, etc.

Once these requirements are identified, they can be applied to the available connections. Or, if these requirements are gathered during a planning session, the next step is to decide which connectivity option suits the scenario. Because the combinations of scenarios can vary to a large extent, we will look into what needs to be taken into consideration and what might need to be tuned to achieve the desired traffic engineering model, based on a selected use case.

As we know from the previous blogs, in the hybrid model we simply have the AWS/VPC side and the on-premises side, and between them you can have either VPN or DX; however, the way these connections are deployed can vary.

Based on that, let’s divide the path/route selection into three parts: the VPC, the AWS connection model (VGW, DX, DXGW, TGW), and the customer side/router.

The decision tree depicted in the figure below summarizes the route selection logic from the AWS side. In each list, route preference is listed ‘in order’, from the most preferred to the least preferred.

Still, you need to make sure that the customer/on-premises side is aligned with the traffic engineering logic configured on the AWS side, in order to avoid asymmetrical routing.

Traffic engineering design scenario examples

 Scenario 1: Dual CGW with VPN to multiple VPCs using VGW

The scenario illustrated in the figure below shows how to influence path selection using BGP attributes: local preference toward the on-premises iBGP peers and AS_PATH toward the VGWs, for the same advertised IP prefix.

Scenario 2: Dual CGW with VPN to multiple VPCs using TGW

The scenario illustrated in the figure below shows how to influence path selection using BGP with TGW and longest prefix match. On the on-premises side, HSRP is used as the first-hop redundancy protocol with multiple groups to align the inbound and outbound paths and avoid asymmetrical routing. Also, with TGW we can take advantage of ECMP to maximize the bandwidth.
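As a hedged sketch of the TGW-side settings this scenario relies on, the boto3 snippet below creates a transit gateway with ECMP enabled for VPN and terminates one BGP-based VPN connection per customer gateway on it, so their tunnels can be load-balanced; all IDs and the ASN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Transit gateway with ECMP enabled across VPN attachments.
tgw = ec2.create_transit_gateway(
    Options={"AmazonSideAsn": 64512, "VpnEcmpSupport": "enable"}
)["TransitGateway"]

# One VPN connection per on-premises customer gateway (CGW IDs are placeholders).
for cgw_id in ["cgw-0aaa1111bbbb2222c", "cgw-0ddd3333eeee4444f"]:
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw_id,
        TransitGatewayId=tgw["TransitGatewayId"],
        Options={"StaticRoutesOnly": False},  # dynamic BGP, needed for ECMP
    )
```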

Scenario 3: Dual DC Sites over Dual DX Connections to DXGW 

The scenario illustrated in the figure below shows two on-premises data centers connected to each other over a layer 2 data center interconnect (for example over dark fiber with 802.1Q, provided by an SP as a virtual L2 link, or deployed over VXLAN). The key point is that the same IP prefix is advertised from both DC sites, which are located in different geographical locations. Whether it is good practice to extend L2 between different geographical locations is outside the scope of this blog; the goal here is to focus on the routing and traffic engineering over AWS DX in such a scenario. As shown below, although the secondary DC site is advertising the IP prefix with a longer BGP AS_PATH attribute, traffic from a VPC located in region A still goes to the secondary DC! The reason, if you remember the routing decision tree covered earlier in this blog, is that path ‘cost’ is evaluated before AS_PATH, and because the secondary DC is connected to a DX location in the same region as region A, VPCs in region A will prefer the secondary DC for the same prefixes advertised by the primary DC.

To overcome this issue, we can raise the local preference associated with the IP prefix advertised from the primary DC toward the DXGW. This can be achieved by simply tagging the prefix with the appropriate BGP community value, as shown in the figure below (Direct Connect recognizes local preference communities such as 7224:7100 for low, 7224:7200 for medium, and 7224:7300 for high preference).

For more info about routing policies and BGP communities, refer to the link below:

https://docs.aws.amazon.com/directconnect/latest/UserGuide/routing-and-bgp.html

Scenario 4: Bring Your Own IP (BYOIP) to the Cloud

The figure below depicts a simple scenario using an active/hot-standby design, in which the solution deployed in the AWS VPC can be activated at any point to advertise the same public IP prefixes used by the on-premises DC site, for example when there is a need to invoke a DR situation. That being said, this is only one use case; BYOIP has multiple benefits/use cases:

Why should I use BYOIP?

You may want to bring your own IP addresses to AWS for the following reasons:

  • IP Reputation
  • Customer whitelisting
  • Hardcoded dependencies: Several customers have IPs hardcoded in devices or have taken architectural dependencies on their IPs.
  • Regulation and compliance: some customers may need to use certain IPs because of regulation and compliance reasons.

For more info: https://aws.amazon.com/vpc/faqs/#Bring_Your_Own_IP

In the design example above, the IP range can be provisioned but not advertised until DR is invoked, at which point the range can be advertised from the region where it is provisioned.
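A minimal boto3 sketch of that provision-now/advertise-later flow is shown below; the CIDR and region are placeholders, and the authorization (ROA) details required before provisioning are omitted (see the link that follows).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision the BYOIP range ahead of time (authorization/ROA steps omitted).
ec2.provision_byoip_cidr(
    Cidr="203.0.113.0/24",
    Description="DR range, provisioned but not advertised",
)

# Later, when DR is invoked, start advertising the range from this region.
ec2.advertise_byoip_cidr(Cidr="203.0.113.0/24")

# When failing back to on-premises, stop advertising it from AWS.
ec2.withdraw_byoip_cidr(Cidr="203.0.113.0/24")
```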

Note: there are some requirements and procedures that need to be considered before the IP range can be provisioned, as specified in the link below:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html

Scenario 5: Dual CGW with DX Connection and VPN to TGW

The scenario illustrated below shows an active/standby use case where the active/primary path between the on-premises DC site and AWS is over DX to the TGW (via DXGW and a transit VIF), while the redundant/backup path is over a VPN tunnel. As shown in the figure below, the TGW route table has two routes for the on-premises prefix with the same prefix length, propagated dynamically over BGP. Following the TGW route priority logic, the path over DX will be the preferred path in this case. Also, in this scenario the TGW facilitates communication among the VPCs within the same region (if connectivity to a remote region with TGW is needed, TGW peering can be used).
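As a quick way to check which path the TGW currently prefers for the on-premises prefix, the hedged boto3 sketch below queries the TGW route table; the route table ID and prefix are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the route(s) for the on-premises prefix (IDs/prefix are placeholders).
resp = ec2.search_transit_gateway_routes(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    Filters=[{"Name": "route-search.exact-match", "Values": ["10.10.0.0/16"]}],
)

for route in resp["Routes"]:
    for att in route.get("TransitGatewayAttachments", []):
        # Expect ResourceType 'direct-connect-gateway' while the DX path is healthy,
        # and 'vpn' only after the DX-learned route is withdrawn.
        print(route["DestinationCidrBlock"], att["ResourceType"], route["State"])
```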

Let’s zoom in to one of the VPCs and consider the VPC-to-TGW connectivity setup illustrated below. With this setup, if traffic sourced from a subnet in AZ2 is destined to the on-premises prefix, can it reach the remote on-premises network?

The answer is no, as you remember the previous blog highlighted that: “When you attach a VPC to a transit gateway, resources in Availability Zones where there is no transit gateway attachment cannot reach the transit gateway. If there is a route to the transit gateway in a subnet route table, traffic is only forwarded to the transit gateway when the transit gateway has an attachment in a subnet in the same Availability Zone.”

To avoid such connectivity issues, attach a subnet from every AZ that needs to communicate over the TGW.
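For illustration, a hedged boto3 sketch of attaching the VPC with one subnet per AZ (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach the VPC to the TGW with one subnet per AZ that must use the TGW,
# so resources in both AZ1 and AZ2 can reach it.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=[
        "subnet-0aaa1111bbbb2222c",  # subnet in AZ1
        "subnet-0ddd3333eeee4444f",  # subnet in AZ2
    ],
)
```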

As you may remember from blog-1, the AWS global infrastructure has many regions, POPs, and edge locations distributed globally. Is there any way we can take advantage of this high-speed, globally distributed infrastructure and steer traffic destined to an AWS VPN endpoint public IP so that it enters the AWS global backbone at the closest POP/edge location?

The answer is: yes! This can be achieved with accelerated Site-to-Site VPN, which uses AWS Global Accelerator to “route traffic from your on-premises network to an AWS edge location that is closest to your customer gateway device. AWS Global Accelerator optimizes the network path, using the congestion-free AWS global network to route traffic to the endpoint that provides the best application performance.” AWS Global Accelerator provides you with static IP addresses that serve as a fixed entry point for your applications hosted in one or more AWS Regions. These IP addresses are anycast from AWS edge locations, so they are announced from multiple AWS edge locations at the same time. This enables traffic to ingress onto the AWS global network as close to your users as possible.
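A minimal sketch of requesting an accelerated Site-to-Site VPN with boto3 is shown below; note that acceleration is set at creation time and the VPN must terminate on a transit gateway (IDs are placeholders).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Accelerated Site-to-Site VPN: created with acceleration enabled and
# terminating on a transit gateway (IDs below are placeholders).
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0aaa1111bbbb2222c",
    TransitGatewayId="tgw-0123456789abcdef0",
    Options={"EnableAcceleration": True},
)
```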

Transitive Routing Considerations

When designing a hybrid connectivity model, especially with multiple VPCs, it is very important to take into consideration the reachability aspect when crossing a VPN or DX link.

Let’s take a simple example to explain what is meant by transitive routing. In the figure below, there are three VPCs, A, B, and C, connected in a hub-and-spoke model using VPC peering. In theory, with this connectivity model, VPC A should be able to reach VPC B via VPC C; in reality, however, this is not a supported connectivity option. This is known as transitive routing, where VPC C would act as a transit VPC between VPC A and VPC B over VPC peering.

Although there are ways to overcome this, such as using a proxy or a Transit Gateway, as an architect or designer you need to be aware of which connectivity models are supported and which are not. For instance, as illustrated in the figure below, the reachability of the VPC endpoint types discussed in blog 2 can vary when traffic originates from on-premises.
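As a hedged sketch of the Transit Gateway approach, the boto3 snippet below attaches all three VPCs to one TGW so that A and B can reach each other without relying on transitive VPC peering; with the default TGW route table association and propagation, all attached VPCs can route to one another (all IDs are placeholders).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw_id = ec2.create_transit_gateway(Description="hub for VPCs A, B, C")[
    "TransitGateway"
]["TransitGatewayId"]

# Attach each VPC to the TGW (VPC and subnet IDs are placeholders).
vpcs = {
    "vpc-0aaaaaaaaaaaaaaaa": ["subnet-0aaa1111bbbb2222c"],  # VPC A
    "vpc-0bbbbbbbbbbbbbbbb": ["subnet-0bbb1111cccc2222d"],  # VPC B
    "vpc-0cccccccccccccccc": ["subnet-0ccc1111dddd2222e"],  # VPC C
}
for vpc_id, subnet_ids in vpcs.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids
    )
```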

 

Private Hybrid DNS

Practically, communication over a hybrid connectivity model is not always based on IP targets directly; almost always there is a need to use DNS for all or certain types of communications and applications. To enable on-premises site(s) to reach AWS VPC resources by using DNS names, and vice versa, Route 53 Resolver needs to be used.
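For illustration only, the hedged boto3 sketch below shows the calls typically involved: an inbound resolver endpoint that on-premises DNS servers can forward to, plus an outbound endpoint and forwarding rule that send queries for an on-premises domain back to on-premises DNS servers (all IDs, IPs, and the domain name are placeholders).

```python
import boto3
import uuid

r53r = boto3.client("route53resolver", region_name="us-east-1")
subnets = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]

# Inbound endpoint: on-premises DNS servers forward queries for VPC names here.
r53r.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": s} for s in subnets],
)

# Outbound endpoint + rule: forward queries for the on-premises domain
# (placeholder) to the on-premises DNS servers.
outbound = r53r.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": s} for s in subnets],
)
rule = r53r.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.1.2.3", "Port": 53}],
    ResolverEndpointId=outbound["ResolverEndpoint"]["Id"],
)
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```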

For more information, refer to the following resources:

AWS documentation:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html

Amazon Route 53 Resolver for Hybrid Clouds

https://aws.amazon.com/blogs/aws/new-amazon-route-53-resolver-for-hybrid-clouds/

Simplify DNS management in a multi-account environment with Route 53 Resolver

https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/

Marwan Al-shawi – CCDE No. 20130066, Google Cloud Certified Architect, AWS Certified Solutions Architect, Cisco Press author (author of the top Cisco certification design books “CCDE Study Guide” and the upcoming “CCDP Arch 4th Edition”). He is an experienced technical architect. Marwan has been in the networking industry for more than 12 years and has been involved in architecting, designing, and implementing various large-scale networks, some of which are global service provider-grade networks. Marwan holds a Master of Science degree in internetworking from the University of Technology, Sydney. Marwan enjoys helping and assessing others; therefore, he was selected as a Cisco Designated VIP by the Cisco Support Community (CSC) (official Cisco Systems forums) in 2012, and by the Solutions and Architectures subcommunity in 2014. In addition, Marwan was selected as a member of the Cisco Champions program in 2015 and 2016.