Today, with the increased demand for IoT, ML, Big Data processing and analytics, the hybrid Data Center (also known as the hybrid cloud model) is becoming more popular. From an IT point of view, the infrastructure is only as strong as its weakest link, and with this model the weakest link is the traditional WAN routing approach. Why?
First of all, with the hybrid cloud/DC model, connectivity almost always needs to be provided over multiple transports, which may include the public Internet as well.
Second, provisioning flexible connectivity for globally distributed virtual networks (commonly referred to by cloud providers as virtual private clouds, or VPCs) may require dealing with multiple, region-based MPLS WAN providers.
In addition, for organizations that have multiple cloud VPCs, the connectivity of these VPCs back to the remote branches and the on-prem DC must also be taken into consideration, and this connectivity should cater for:
Taking the above into consideration, when you are designing or optimizing the WAN routing architecture, the classical way of doing WAN routing across multiple providers, transports, etc. can be a big challenge, as well as inflexible, in such a connectivity model.
You might be thinking that these challenges can be avoided by using a tunneling mechanism such as DMVPN to build an overlay that is transport independent and route the traffic across multiple WAN networks.
Although this is technically a valid option, it is not an optimal one from a design point of view.
This is because the approach introduces an additional layer of complexity when you are dealing with different WAN providers, the Internet, etc., along with multiple hub locations that are not always structured enough to build a well-structured multi-tier DMVPN.
In addition, when dealing with different types of traffic flows (regional locations, DCs, etc.), you will need a solution that provides the flexibility to push policies based on application, region, specific site, etc., without having to deal with complex routing policies and filters. This is where SD-WAN can help. To simplify the concept, let's consider the example below.
NetdesignLearning is a global learning provider that has two data centers in Europe and one DC in Sydney, Australia. In addition, they have 100 branches globally in total (45 EU, 20 US/CAN, 25 AU, 10 in other regions).
Because the number of branch sites is increasing in the US and Canada, they need a regional/local DC to serve these sites within that region for a better user experience.
The NetdesignLearning IT team has decided to use AWS to host some of their media-rich learning applications, which use a three-tier application model (web, app and DB), in two US regions (US-West and US-East). Also, some of the public online services will be served from the nearest physical or cloud/VPC data center using DNS source/location-based routing, also known as "geolocation routing".
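To make the geolocation-routing idea concrete, here is a minimal sketch of the decision the DNS layer makes: answer each client with the data center closest to its region, falling back to a default. The region keys and endpoint names are hypothetical, for illustration only.

```python
# Hypothetical mapping of client regions to the DC that should serve them.
GEO_MAP = {
    "EU": "eu-dc.example.com",        # on-prem EU data centers
    "US": "us-east-vpc.example.com",  # nearest AWS VPC-hosted DC
    "AU": "au-dc.example.com",        # Sydney DC
}
DEFAULT_DC = "eu-dc.example.com"      # fallback for unmapped regions


def resolve(client_region: str) -> str:
    """Return the DC endpoint for a client, falling back to a default."""
    return GEO_MAP.get(client_region, DEFAULT_DC)


print(resolve("US"))  # a US client is steered to the nearest US VPC
print(resolve("BR"))  # an unmapped region falls back to the default DC
```

Real geolocation routing (e.g. in a managed DNS service) works on the resolver's source address rather than an explicit region string, but the first-match lookup above captures the routing logic.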
The DB servers were built in a separate VPC to host student info, and restricted access is required; therefore, they are placed in a VPC that has no Internet gateway and communicates with the other VPCs using cross/inter-region VPC peering to ensure the traffic stays within the AWS backbone (east-west). Remember that VPC peering is not transitive; therefore, traffic will flow only one-to-one between the peered VPCs.
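The non-transitive behavior is worth spelling out, since it shapes which flows are possible. The sketch below models peering as a set of explicit one-to-one relationships: A peered with B and B peered with C does not make A reachable from C. The VPC names are hypothetical.

```python
# Explicit one-to-one peerings, modeled as unordered pairs.
PEERINGS = {
    frozenset({"db-vpc", "app-vpc-us-east"}),
    frozenset({"db-vpc", "app-vpc-us-west"}),
}


def can_reach(src: str, dst: str) -> bool:
    """Two VPCs can exchange traffic only over a direct peering."""
    return frozenset({src, dst}) in PEERINGS


# The DB VPC reaches each app VPC directly over its own peering...
print(can_reach("db-vpc", "app-vpc-us-east"))        # True
# ...but the two app VPCs cannot reach each other *through* the DB VPC:
print(can_reach("app-vpc-us-east", "app-vpc-us-west"))  # False
```

If the two app VPCs did need to talk, they would require their own peering (or a transit construct such as the Transit VPC discussed later in this post).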
A Direct Connect link between the VPCs and the on-prem DC in the EU region was added to handle the DB replication traffic requirement, and to serve as a secondary path for remote branches when required.
The existing physical DCs use Cisco ACI, and in Europe the two DCs are built using the Cisco ACI Multi-Pod concept.
For more info and design considerations about Cisco ACI Multi-POD and Multi-Site you can refer to this blog:
The AU and EU regions use the same MPLS provider, while the US uses a different provider. Communication from the US branches to other regions has to go via the US hub/DC site.
Traffic Engineering Requirements:
How can we optimize and redesign the WAN connectivity to achieve these requirements in such architecture?
Technically you may:
Add an AWS Direct Connect gateway to facilitate the connectivity of the Direct Connect link(s) to the different VPCs
Add a virtual router and establish tunneling over the Internet transport, or over both the Internet and the WAN
Although the above is technically doable and valid, as mentioned earlier in this blog, it will add complexity from a scale, flexibility and traffic engineering/policy point of view.
The Direct Connect gateway is a very nice and flexible solution, but as we all know, not every design option fits every requirement/scenario. In this scenario (which is one of the most common) the drivers are scale, flexible connectivity options, and routing policies that are flexible and easy to deploy and manage across different transports, regions and DC sites at scale.
With Cisco SD-WAN you can easily address these requirements, as it is a cloud-ready solution. From vManage (the management plane) you can discover and provision the vEdge virtual nodes, which will act as a hub, or simply as a new remote site from the SD-WAN point of view, without needing to configure any routing protocol over the data-plane IPsec tunnels/overlay virtual network. Not to mention that creating and provisioning region-based policies can be done in minutes, without the traditional complex routing policies and filters that can limit the solution's scale and manageability.
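To illustrate what a centralized, region/application-based policy replaces, here is a sketch of a first-match policy table of the kind a controller could push to its edge nodes. This is not vManage policy syntax; the fields, regions and application names are hypothetical.

```python
# Ordered policy table: most specific rule first, "*" is a wildcard.
POLICIES = [
    # (site_region, application, preferred_transport)
    ("US", "voice", "mpls"),      # keep voice on MPLS within the US
    ("US", "*",     "internet"),  # other US traffic rides the Internet
    ("*",  "*",     "mpls"),      # global default
]


def preferred_transport(region: str, app: str) -> str:
    """First-match lookup: ordering makes the most specific rule win."""
    for pol_region, pol_app, transport in POLICIES:
        if pol_region in ("*", region) and pol_app in ("*", app):
            return transport
    raise LookupError("no matching policy")


print(preferred_transport("US", "voice"))  # -> mpls
print(preferred_transport("US", "web"))    # -> internet
print(preferred_transport("EU", "web"))    # -> mpls (global default)
```

The point is that the intent (per-region, per-application path preference) lives in one centrally managed table, instead of being scattered across per-router route maps and filters.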
The connectivity model in such an architecture is called a Transit VPC, as illustrated in the figure below. The Transit VPC aggregates the connections from the other VPCs over redundant IPsec tunnels that terminate at the vEdges in the Transit VPC; the vEdges in turn build the overlay across the different transports, and you can then apply the desired routing and application policies to achieve the desired traffic engineering.
Also, you can map each VPC to a different SD-WAN VPN, with which you can build end-to-end separation as the application and security policies require. For instance, in this example the DB servers might have their own ACI EPGs/BDs, etc., that map to a dedicated VXLAN VNI/VRF instance in the ACI fabric; this can be mapped to a dedicated interface/sub-interface at the WAN vEdge, and the traffic can then ride over a separate SD-WAN VPN (i.e. VPN 10 in the table below) to provide end-to-end separation and to exclude the Internet transport as a possible path for this VPN.
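The per-VPN transport restriction can be sketched as a simple intersection: each SD-WAN VPN (segment) is allowed on a set of transports, and the usable paths are whatever is both allowed and currently up. VPN 10 comes from the text above; the other VPN number and the transport names are hypothetical.

```python
# Per-VPN allowed transports: the DB segment never touches the Internet.
ALLOWED_TRANSPORTS = {
    10: {"mpls"},               # DB VPN: MPLS only, Internet excluded
    20: {"mpls", "internet"},   # general application VPN (hypothetical)
}


def usable_paths(vpn: int, available: set) -> set:
    """Intersect the transports currently up with what the VPN allows."""
    return ALLOWED_TRANSPORTS.get(vpn, set()) & available


up = {"mpls", "internet"}
print(usable_paths(10, up))  # VPN 10 can only ever use MPLS
print(usable_paths(20, up))  # VPN 20 may use both transports
```

Because the restriction is attached to the VPN rather than to individual tunnels, the DB traffic stays off the Internet transport everywhere in the overlay, with no per-site filter configuration.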
If you’re interested in getting more insight into Cisco SD-WAN designs, register and join us in these upcoming Cisco SD-WAN webinars:
Cisco SD-WAN Design Webinars at the Cisco Learning Network