
Blog 5 – Hybrid Connectivity Design Options – part-2

Disclaimer: the content of this blog is solely based on my personal views and experience; it does not represent the views of any company or anyone else. The content is intended for educational purposes only, and it is not an official whitepaper or best-practices document. Therefore, you must always refer to the official and latest AWS documentation before applying anything discussed in this blog series to any AWS environment.

The previous blog discussed and analyzed the VPN connectivity options between an on-premises network and AWS VPC(s). Although VPN is a simple, quickly provisioned, and cost-effective connectivity option, connecting an on-premises data center (DC) to the cloud over VPN may not always provide the required performance or meet the security compliance requirements of large-scale networks. Therefore, AWS, along with its colocation exchange partners, offers the ability to establish connectivity over a dedicated link (a standard Ethernet fiber-optic cable) to one of the AWS Direct Connect (DX) locations, which can provide more predictable performance and a more consistent network experience than Internet-based transport. This connectivity model helps enterprises establish a dedicated network connection from an on-premises DC to AWS.

Note: as highlighted in the previous blog, the focus here is on the connectivity options and the possible ways to utilize them; in a future blog, these connectivity options will be discussed from a multi-VPC global architecture point of view. The topics are intentionally divided in this manner to simplify them and avoid confusion.

AWS Direct Connect Components

  • Physical (link and customer device)
  • Virtual Interface
  • Virtual Private Gateway (covered in previous blog)
  • Routing sessions and policies
  • Direct Connect Gateway & TGW

Physical

AWS DX Network Device Requirements

AWS DX supports 1000BASE-LX or 10GBASE-LR connections over single-mode fiber using Ethernet transport. In addition, the customer device must support 802.1Q VLANs; this is required to create sub-interfaces and tag the traffic so it can be associated with the different virtual interfaces on the DX side, which will be covered in more detail later in this blog. For more information about the DX network connectivity requirements, refer to the link below.

https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html#overview_requirements

AWS DX Physical Connectivity Options

Option-1: The customer site is located in close proximity to a DX location, where a direct fiber link (1 Gbps or 10 Gbps) can be provisioned from the customer site to the DX location.

What if more than 10Gbps is required over a single connectivity path?

If more than 10 Gbps is required over a single connectivity path to AWS, you can consider a link aggregation group (LAG), in which you create a logical interface that uses the Link Aggregation Control Protocol (LACP) to combine multiple physical links at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.
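As a rough illustration, a LAG can also be requested programmatically. The following is a minimal sketch using the boto3 Direct Connect API; the location code, bandwidth, and names are assumptions used for illustration only.

    # Minimal sketch, not a production example: request a LAG of four 10 Gbps
    # connections at a single DX location (location code is a placeholder;
    # real codes can be listed with describe_locations()).
    import boto3

    dx = boto3.client("directconnect", region_name="us-east-1")

    lag = dx.create_lag(
        numberOfConnections=4,
        location="EqDC2",                # placeholder DX location code
        connectionsBandwidth="10Gbps",
        lagName="onprem-dc1-lag",
    )
    print(lag["lagId"], lag["lagState"])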

AWS offers the ability to specify the minimum number of active links required for a LAG to be considered operational. For instance, let's say an organization has a LAG aggregating 4x links in each of two regions (each region has a LAG with 4x links); if this organization wants to avoid over-utilizing a single Direct Connect path when 2x physical links are down, the minimum can be set to 2.
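Continuing the same idea, here is a hedged sketch of setting that minimum with boto3 (the LAG ID is a placeholder):

    # Require at least 2 member links to be up before the LAG is treated as
    # operational; with fewer than 2, the LAG goes down and traffic can shift
    # to the other region's path.
    import boto3

    dx = boto3.client("directconnect", region_name="us-east-1")

    dx.update_lag(
        lagId="dxlag-xxxxxxxx",   # placeholder LAG ID
        minimumLinks=2,
    )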

For more details, refer to the link below.

https://docs.aws.amazon.com/directconnect/latest/UserGuide/lags.html

Option-2: The customer site is located far from a DX location, or the customer does not have any fiber to that location. In such a scenario, a third-party service provider can provision the required fiber link from the DX location to the on-premises site.

Option-3: Hosted DX via the AWS Partner Network (APN)

Although AWS offers a dedicated connection service, which allows customers to establish a high-speed direct circuit between their on-premises data center(s) and AWS, this connectivity model requires proximity to one of AWS's Direct Connect locations or points of presence. On the other hand, DX via an APN partner extends this service to a wider range of enterprises that are not geographically close enough, or that may not require the full capacity of a high-speed dedicated circuit, as hosted connections are available from 50 Mbps up to 10 Gbps (through AWS Direct Connect Partners approved to support this model). Depending on the scenario, this connectivity model can be provisioned as a dedicated L3 link from the partner carrier, or it can be integrated into an existing MPLS L3 VPN provided by the same carrier, in which case it is added as an additional site to the customer's MPLS L3 VPN.

Based on the above, AWS Direct Connect can be categorized into two types of connections:

  • Dedicated Connection: A physical Ethernet connection associated with a single customer. Customers can request a dedicated connection through the AWS Direct Connect console, the CLI, or the API. Options 1 and 2 above apply to this type (a short provisioning sketch follows this list).
  • Hosted Connection: A physical Ethernet connection that an AWS Direct Connect Partner provisions on behalf of a customer. Customers request a hosted connection by contacting a partner in the AWS Direct Connect Partner Program, who provisions the connection.
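To make the difference concrete, here is an illustrative boto3 sketch of the two flows: ordering a dedicated connection directly from AWS versus accepting a hosted connection that a partner has already provisioned. The location code and connection ID are placeholders.

    # Sketch only: the two DX ordering flows.
    import boto3

    dx = boto3.client("directconnect")

    # Dedicated Connection: the customer orders the port directly from AWS.
    dedicated = dx.create_connection(
        location="EqDC2",                     # placeholder DX location code
        bandwidth="10Gbps",
        connectionName="onprem-dc1-dedicated",
    )

    # Hosted Connection: the partner provisions it on the customer's behalf;
    # the customer only accepts (confirms) it using the ID the partner shares.
    dx.confirm_connection(connectionId="dxcon-xxxxxxxx")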

Virtual Interface (VIF)

A virtual interface is established over the direct physical link using 802.1Q tagging to identify which interface/sub-interface on the customer router maps to which virtual interface.

There are three types of VIFs: Private, Public, and Transit.

Private virtual interface: this VIF should be used to access an Amazon VPC using private IP addresses.

The ability to have multiple VIFs, each attached to a different VPC, can be used to extend separate physical or virtual networks on the on-premises side, each communicating with a separate VPC over a single physical Direct Connect (DX) link. The figure below illustrates the control and data planes of this concept. Whether this is a scalable design option or not is not the point here; the main point to understand is the concept behind it, which might be applicable in certain use cases or might be used in combination with other options.
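For illustration, the following hedged boto3 sketch creates a private VIF on an existing DX connection and attaches it to a VGW; the connection ID, VLAN, ASN, and VGW ID are assumptions.

    # Sketch: one private VIF per VPC over the same physical DX connection,
    # distinguished by the 802.1Q VLAN tag configured on the customer router.
    import boto3

    dx = boto3.client("directconnect")

    dx.create_private_virtual_interface(
        connectionId="dxcon-xxxxxxxx",
        newPrivateVirtualInterface={
            "virtualInterfaceName": "vpc-prod-private-vif",
            "vlan": 101,                         # must match the customer sub-interface tag
            "asn": 65001,                        # customer-side BGP ASN
            "addressFamily": "ipv4",
            "virtualGatewayId": "vgw-xxxxxxxx",  # VGW attached to the target VPC
        },
    )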

Note: at the time of this writing, Hosted Connections with up to 500 Mbps capacity support one private or public VIF. Hosted Connections of 1 Gbps or more support one private, public, or transit VIF. Customers who need more than one VIF may obtain multiple Hosted Connections or use a Dedicated Connection, which provides 50 private or public VIFs and 1 transit VIF.

Public virtual interface: A public virtual interface can access all AWS public services using public IP addresses.

After AWS verifies ownership of the public range to be announced to AWS over the DX public VIF, the BGP routes advertised over the public VIF will be accepted by AWS. As a result, all network traffic from AWS destined to the advertised routes (including traffic from other AWS customers, sourced from EC2 instances with public or Elastic IPs, NAT gateways, or other services that can make outbound connections, such as a Lambda function with internet access) will traverse the AWS DX > public VIF > customer DX/edge router path.

Therefore, this needs to be taken into consideration when designing the route advertisement scope, as well as when configuring edge security services such as firewalls, to decide whether to accept or reject this traffic in line with your organization's policies. However, AWS does not re-advertise customer prefixes to other customers for any route advertised over the DX public VIF (it will not act as a transit BGP AS).
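As an illustrative sketch only, a public VIF could be provisioned as follows with boto3; the peering addresses, VLAN, ASN, and the advertised customer-owned prefix are placeholder values.

    # Sketch: public VIF with the customer-owned public prefix that AWS will
    # verify before accepting the BGP advertisement.
    import boto3

    dx = boto3.client("directconnect")

    dx.create_public_virtual_interface(
        connectionId="dxcon-xxxxxxxx",
        newPublicVirtualInterface={
            "virtualInterfaceName": "onprem-public-vif",
            "vlan": 102,
            "asn": 65001,
            "addressFamily": "ipv4",
            "amazonAddress": "198.51.100.1/30",    # example public peering IPs
            "customerAddress": "198.51.100.2/30",
            "routeFilterPrefixes": [{"cidr": "203.0.113.0/24"}],  # customer-owned range
        },
    )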

Note: The prefixes that AWS announces may change; therefore, AWS publishes its current IP address ranges in JSON format. To view the current ranges, download the ip-ranges.json file and check the publication time. The file is maintained and available at https://ip-ranges.amazonaws.com/ip-ranges.json.

https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
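A simple way to inspect the published ranges and their publication time, using only the Python standard library (the region/service filter below is just an example):

    # Download the published AWS IP ranges and print a filtered subset.
    import json
    import urllib.request

    with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as resp:
        data = json.load(resp)

    print("Publication time:", data["createDate"])

    # Example filter: prefixes announced for one region/service.
    for p in data["prefixes"]:
        if p["region"] == "us-east-1" and p["service"] == "AMAZON":
            print(p["ip_prefix"])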

Routing sessions and policies

When peering with AWS using BGP over a public VIF, you need to take into consideration the BGP policies, along with the BGP communities that can be used to control the route propagation scope. For more details, please refer to the following link.

https://docs.aws.amazon.com/directconnect/latest/UserGuide/routing-and-bgp.html

The following figure provides an example of how a BGP community tag can influence the scope of a public IP range advertised by customer A.

BGP attributes over a private VIF can also be used to control path selection. This topic will be covered in more detail in the subsequent blog.

Direct Connect Gateway & TGW

The scope of an AWS Direct Connect connection is within a region, which means that if you have VPCs in different regions, you may end up setting up a separate Direct Connect per region.

What if you need a more cost-effective solution in which VPCs in different regions can take advantage of a single DX link, or multiple DX links, in different regions?

One possible solution is to take advantage of a public VIF and build a VPN on top of it, in which the VPN traffic traverses the Direct Connect over the public VIF to reach the public IP of the VPN VGW endpoint, or even a customer-managed VPN endpoint, as these public IP ranges can be reached over the public VIF (assuming the desired prefixes are not filtered, and that the on-premises routing advertises the VPN endpoint public IPs with a preferred metric to avoid reaching them over an on-premises Internet path).

This is where the Direct Connect Gateway (DXGW) can help. As illustrated in the figure below, with the Direct Connect Gateway you can connect up to 10 VPCs across multiple regions. This gateway is not intended for passing traffic between VPCs; there are multiple options that can be considered for inter-VPC communication, which will be covered in more detail in an upcoming blog.

As a result, an on-premises DC can have global access to the different VPCs located in different regions (within the same or different accounts).
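As a minimal sketch of this model (the IDs and the Amazon-side ASN are illustrative), a single DXGW is created and then associated with the VGW of each VPC, regardless of region:

    # Sketch: one Direct Connect gateway associated with VGWs of VPCs in
    # different regions, giving on-prem a single point of reachability.
    import boto3

    dx = boto3.client("directconnect")

    dxgw = dx.create_direct_connect_gateway(
        directConnectGatewayName="global-dxgw",
        amazonSideAsn=64512,
    )
    dxgw_id = dxgw["directConnectGateway"]["directConnectGatewayId"]

    # Associate a VGW from each VPC/region with the same DXGW.
    for vgw_id in ["vgw-region1-xxxx", "vgw-region2-xxxx"]:
        dx.create_direct_connect_gateway_association(
            directConnectGatewayId=dxgw_id,
            virtualGatewayId=vgw_id,
        )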

We can take the above design a step further and integrate it with AWS Transit Gateway. The drivers for such a consideration can vary, such as aggregating multiple VPCs with complex routing, or having logically centralized hybrid connectivity termination and routing (DX and VPN).

As illustrated below, with this connectivity model we need to use a transit VIF to the DXGW, and we need to take into consideration the maximum number of routes supported toward the DXGW and TGW in this architecture (as of the time of writing).
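A hedged boto3 sketch of this model follows: a transit VIF pointing at the DXGW, plus a DXGW-to-TGW association that limits the prefixes advertised toward on-premises. All IDs and the allowed prefix are assumptions.

    # Sketch: transit VIF to the DXGW, and DXGW-to-TGW association.
    import boto3

    dx = boto3.client("directconnect")

    dx.create_transit_virtual_interface(
        connectionId="dxcon-xxxxxxxx",
        newTransitVirtualInterface={
            "virtualInterfaceName": "onprem-transit-vif",
            "vlan": 103,
            "asn": 65001,
            "directConnectGatewayId": "dxgw-xxxxxxxx",
        },
    )

    # Associate the DXGW with a Transit Gateway and control what is advertised
    # back toward the on-premises side.
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId="dxgw-xxxxxxxx",
        gatewayId="tgw-xxxxxxxx",
        addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],
    )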

At the time of this blog writing, from a connectivity point of view, it is important to note that "When you attach a VPC to a transit gateway, resources in Availability Zones where there is no transit gateway attachment cannot reach the transit gateway. If there is a route to the transit gateway in a subnet route table, traffic is only forwarded to the transit gateway when the transit gateway has an attachment in a subnet in the same Availability Zone."

https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html
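In practice, this means the TGW VPC attachment should include a subnet in every Availability Zone used by the workloads; a minimal sketch with assumed IDs:

    # Sketch: attach a VPC to a TGW with one subnet per Availability Zone so
    # resources in every AZ can reach the transit gateway.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId="tgw-xxxxxxxx",
        VpcId="vpc-xxxxxxxx",
        SubnetIds=[
            "subnet-az1-xxxx",   # one subnet per AZ used by the workload
            "subnet-az2-xxxx",
            "subnet-az3-xxxx",
        ],
    )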

Note: TGW peering is required to inter-connect TGWs in different regions. In a separate blog, we will discuss and analyze the considerations of having multiple VPCs and the related design options in more detail. Keep following this blog series to build up the knowledge in sequence.

The subsequent blog will discuss and analyze some design considerations for these different hybrid connectivity options.

Marwan Al-shawi – CCDE No. 20130066, Google Cloud Certified Architect, AWS Certified Solutions Architect, Cisco Press author (author of the top Cisco certifications' design books, the "CCDE Study Guide" and the upcoming "CCDP Arch 4th Edition"). He is an experienced technical architect. Marwan has been in the networking industry for more than 12 years and has been involved in architecting, designing, and implementing various large-scale networks, some of which are global service provider-grade networks. Marwan holds a Master of Science degree in internetworking from the University of Technology, Sydney. Marwan enjoys helping and assessing others; therefore, he was selected as a Cisco Designated VIP by the Cisco Support Community (CSC) (the official Cisco Systems forums) in 2012, and by the Solutions and Architectures subcommunity in 2014. In addition, Marwan was selected as a member of the Cisco Champions program in 2015 and 2016.