This is the third part in a multi-part series on the topic of Long-Distance vMotion. Part 1 introduced us to disaster recovery as practiced today and laid the foundation to build upon. Part 2 built out the Long-Distance vMotion architecture with a few different approaches.
There are some limitations and challenges that we must consider when designing LDVM or other workload mobility technologies. If it were easy, everyone would be doing it. The first area we'll address is commonly referred to as the traffic trombone.
Traffic Trombone
Understanding what I mean by a traffic trombone requires a bit of visualization. When we have one site, everything is local: my storage is connected within my datacenter, and my other subnets are routed by a local router. The path to these devices is very short, measured in meters, so latency is very small. As we migrate VMs to another datacenter, the VMs move, but unless we take some extra precautions, the network and storage traffic continues to go back to the original router and storage controllers. When we send a packet or a read/write, it goes back to the original datacenter, gets serviced, then returns to the new datacenter where we're now running. That back-and-forth is what we call tromboning, hence the traffic trombone. (My PowerPoint presentation drives this home.) I'll address this in two parts: network and storage.
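To put rough numbers on that penalty, here's a quick back-of-the-envelope calculation in Python. The 100 km inter-site distance, the ~5 µs/km fiber propagation figure and the 1 ms local service time are assumptions for illustration, not measurements from any particular environment.

```python
# Rough trombone penalty estimate. All numbers are illustrative assumptions:
#   - light in fiber travels at roughly 5 us per km, one way
#   - 100 km between the two datacenters
#   - 1 ms local service time per I/O

FIBER_US_PER_KM = 5          # ~5 us/km one-way propagation in fiber (assumed)
DISTANCE_KM = 100            # assumed inter-site distance
LOCAL_SERVICE_MS = 1.0       # assumed local service time per I/O

def trombone_penalty_ms(distance_km: float) -> float:
    """Extra latency added when a packet or I/O must travel back to the
    original site and return (out and back across the metro link)."""
    one_way_ms = distance_km * FIBER_US_PER_KM / 1000.0
    return 2 * one_way_ms

penalty = trombone_penalty_ms(DISTANCE_KM)
print(f"Added latency per tromboned request: {penalty:.2f} ms")
print(f"Effective service time: {LOCAL_SERVICE_MS + penalty:.2f} ms "
      f"vs {LOCAL_SERVICE_MS:.2f} ms when everything stays local")
```

Small per-request numbers, but they apply to every single packet and I/O, which is why the trombone matters at scale.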
On the IP network side, the first thing I'll say about the network trombone is that it is actually desirable for existing stateful (TCP) connections: we want those connections to stay alive without disconnecting. For all new connections, we'd like to optimize the path through the local site. That optimization breaks down further into traffic coming into my subnet and traffic leaving it. For traffic arriving from remote subnets, Cisco GSS and ACE play a role, with vCenter awareness, in pointing clients to the site where the service is currently running: GSS returns the external ACE VIP at the site where the workload currently lives. For traffic leaving our subnet, we use HSRP default-gateway localization, forwarding traffic to the local Cisco ACE device for processing. This preserves symmetrical routing so our firewalls don't drop our packets thinking something has gone wrong.
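Conceptually, the GSS acts as a site-aware name service: it hands out the ACE VIP of whichever site currently runs the workload. The toy sketch below illustrates that decision only; the site names, VIPs, workload name and lookup table are all hypothetical and are not the actual GSS/ACE interfaces.

```python
# Toy model of site-aware resolution for new connections.
# Site names, VIPs and the vm_location table are hypothetical.

ACE_VIP = {"dc-a": "192.0.2.10", "dc-b": "198.51.100.10"}

# Imagine this table being kept current from vCenter as VMs move.
vm_location = {"web-app-01": "dc-a"}

def resolve(workload: str) -> str:
    """Return the VIP of the site currently running the workload,
    so new sessions land at the local site instead of tromboning."""
    site = vm_location[workload]
    return ACE_VIP[site]

print(resolve("web-app-01"))        # -> dc-a VIP
vm_location["web-app-01"] = "dc-b"  # after a long-distance vMotion
print(resolve("web-app-01"))        # -> dc-b VIP; new sessions go local
```

Existing TCP sessions keep using the old path until they end, which is exactly the "desirable trombone" mentioned above.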
An alternative, emerging technology is Locator/ID Separation Protocol (LISP). This protocol runs both outside our subnet, pointing traffic to the correct site, and within it, pointing to the correct default gateway out. Think of LISP in terms of cell phones. We used to have phone numbers tied to a specific cell-phone provider; when we switched carriers, we needed a new phone number. Number portability untied our phone numbers from a particular provider, so a number can now come with us and point to the new company we switched to. LISP does a very similar thing for IP addresses: it lets us take an address with us to a new site and points traffic to where we now live. LISP is available on certain blades in the Nexus 7000 and is being ported to other products.
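In LISP terms, the VM keeps its endpoint identifier (EID, the "phone number") while the routing locator (RLOC, the "carrier") changes when it moves. Here is a minimal sketch of that EID-to-RLOC mapping idea; the addresses are documentation examples and this is not a real LISP implementation.

```python
# Minimal illustration of LISP's EID-to-RLOC split.
# Addresses are example values, not a real mapping system.

# The VM's identity (EID) never changes...
EID = "10.1.1.25"

# ...only the locator (RLOC) of the site it currently sits behind does.
mapping = {EID: "203.0.113.1"}    # assumed tunnel router at site A

def where_is(eid: str) -> str:
    """Ask the mapping system which locator to send traffic toward."""
    return mapping[eid]

print(where_is(EID))              # traffic is steered to site A
mapping[EID] = "203.0.113.129"    # VM vMotions; mapping updated to site B
print(where_is(EID))              # same EID, new locator, no readdressing
```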
When addressing the storage area network, some products are already tackling this problem. EMC VPLEX and NetApp FlexCache each present their volumes locally, without MPIO paths extending across the datacenters, eliminating any traffic trombone. With the IBM SVC and NetApp MetroCluster varieties, since they are split-controller designs, the MPIO paths will be active to one datacenter and passive to the other. When the VMs move to the other site and back, one of those two sites will trombone traffic back to the primary controller for that volume, adding latency to the I/Os. In the case of SVC, we can only go campus distances today anyway (< 10 km), so the distance and latency are short. In the case of NetApp, we need to stay within synchronous distances, but my MetroCluster customers haven't seen an adverse impact on their I/O. Of course, your mileage will vary depending on how hard you push your storage.
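To make the split-controller behavior concrete, here's a toy model of MPIO path state for a stretched volume. The path names and site labels are made up; the point is simply that the active paths stay pinned to the owning controller's site no matter where the VM runs.

```python
# Toy view of MPIO path state for a split-controller stretched volume.
# Path identifiers and site names are made up for illustration.

# The volume's active paths stay pinned to the controller that owns it,
# regardless of which datacenter the VM currently runs in.
paths = {
    "vmhba1:C0:T0:L5": {"site": "dc-a", "state": "active"},
    "vmhba1:C0:T1:L5": {"site": "dc-b", "state": "standby"},
}

def io_crosses_metro(vm_site: str) -> bool:
    """True when the VM's I/O must trombone to the other datacenter."""
    active_sites = {p["site"] for p in paths.values() if p["state"] == "active"}
    return vm_site not in active_sites

print(io_crosses_metro("dc-a"))  # False: VM sits with the owning controller
print(io_crosses_metro("dc-b"))  # True: after vMotion, every I/O trombones
```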
It is always important to have competent network, storage and virtualization architects take all latency, routing and cluster impacts into account for a successful implementation. Excessive network or storage traffic needs to be understood, and QoS should always be applied. A vSphere architect can help design how the cluster will be laid out and take DRS, fault-tolerant VMs and their associated traffic into account.
Future Directions and My Wish List
While EMC VPLEX and NetApp FlexCache have node redundancy at each site, IBM SVC and NetApp MetroCluster do not. This has scared one or two customers away from these solutions, and it can also make code upgrades require a bit more planning. I'd like to see NetApp and IBM come up with solutions with node redundancy at each site. NetApp's evolving cluster mode for block storage (FC/FCoE, iSCSI) may provide some of this. IBM has other technologies in the DS8000 base, such as Open Systems HyperSwap, that could hold promise here.
When it comes to the storage trombone, both IBM and NetApp need to eliminate all storage trombones, and IBM needs to go beyond 10 km. This could possibly be done with more intelligent multipath drivers that plug into vCenter and automatically swap active/passive paths as the VMs move.
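The behavior I'm wishing for would look roughly like the hypothetical sketch below: a vCenter-aware hook that flips active/standby as the VM changes sites. This is not an existing driver or API, just an illustration of the idea.

```python
# Hypothetical sketch of the multipath behavior described above:
# active/standby follows the VM's current site. Not an existing driver API;
# path names and site labels are made up.

paths = {
    "vmhba1:C0:T0:L5": {"site": "dc-a", "state": "active"},
    "vmhba1:C0:T1:L5": {"site": "dc-b", "state": "standby"},
}

def follow_vm(vm_site: str) -> None:
    """Promote paths at the VM's current site to active, demote the others."""
    for path in paths.values():
        path["state"] = "active" if path["site"] == vm_site else "standby"

follow_vm("dc-b")   # imagine vCenter telling us the VM now runs in dc-b
print({pid: p["state"] for pid, p in paths.items()})
# I/O now stays local to dc-b instead of tromboning back to dc-a.
```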
VMware vSphere itself needs some work to go beyond its current limit of 5 ms RTT, or roughly 400 km under perfect conditions. This is one of the biggest limitations of the technology today. At the recent EMC World, EMC announced VPLEX Geo for regional protection, moving Microsoft Hyper-V VMs about 3,000 km. EMC's VPLEX roadmap goes around the world with VPLEX Global (NetApp FlexCache already does this).
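The roughly 400 km figure falls out of simple fiber math, assuming about 5 µs/km of one-way propagation in fiber and leaving some of the budget for switching and routing gear; the 1 ms equipment allowance below is my own assumption.

```python
# Where the ~400 km figure comes from. Assumptions: ~5 us/km one-way
# propagation in fiber, plus a budget reserved for network equipment.

RTT_BUDGET_MS = 5.0        # vSphere's supported vMotion latency limit
FIBER_US_PER_KM = 5        # assumed one-way propagation in fiber
EQUIPMENT_BUDGET_MS = 1.0  # assumed allowance for switches/routers/optics

usable_ms = RTT_BUDGET_MS - EQUIPMENT_BUDGET_MS
max_km = usable_ms * 1000 / (2 * FIBER_US_PER_KM)   # halved for the round trip
print(f"Roughly {max_km:.0f} km of fiber fits in a {RTT_BUDGET_MS} ms RTT budget")
```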
Conclusions
While I suddenly have a number of customers investigating and deploying Long-Distance vMotion, I do understand it is not an inexpensive solution. First we need a Metro Area Network (MAN) capable of 622 Mb/s or greater with under 5 ms of round-trip latency, and we'll need to transport the storage traffic as well. This doesn't come cheap, but depending on your options in your metro, the prices become more attainable every year.
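For a rough sense of why 622 Mb/s is the floor, here's the time to copy a VM's memory across the MAN at that rate. The 8 GB memory size and the single-pass copy are simplifying assumptions; real vMotion iteratively pre-copies dirtied pages.

```python
# Rough vMotion memory-copy time at OC-12-class bandwidth.
# The 8 GB VM size and single-pass copy are simplifying assumptions;
# real vMotion does iterative pre-copy of dirtied memory pages.

LINK_MBPS = 622            # OC-12-class metro link
VM_MEMORY_GB = 8           # assumed VM memory size

bits_to_move = VM_MEMORY_GB * 8 * 1024**3
seconds = bits_to_move / (LINK_MBPS * 1e6)
print(f"~{seconds:.0f} s to move {VM_MEMORY_GB} GB at {LINK_MBPS} Mb/s")
```

Multiply that by a cluster's worth of VMs and it becomes clear why skinnier links aren't practical.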
I currently favor EMC VPLEX Metro and NetApp MetroCluster because both are validated, tested and referenced in VMware KB articles (see below), and I have customers deploying these solutions. IBM just doesn't go far enough, which is too bad, since I have a large SVC install base wanting this technology, though the EMC solution can front-end an IBM SVC. I've been hard-pressed to find many NetApp FlexCache deployments outside of Hollywood. The technology has promise, but again, it's NFS only.
A lot of these same principles also apply to IBM PowerVM Live Partition Mobility, Microsoft Hyper-V Live Migration and Oracle VM for x86 Live Migration. Each hypervisor comes with its own limitations and peculiarities, so make sure you fully understand them before deploying. Most of these solutions can be deployed within the same infrastructure as LDVM.
This technology seems to have caught some of the storage vendors off guard. When I first presented it in March of this year, I had spent six months preparing, and I already had customers ready to deploy. Since March, I've run into around five more locally, and more nationally, looking at these solutions. As hardware and bandwidth prices fall, it will become as common as storage mirroring has become in the last decade.
References
- Cisco/NetApp Whitepaper: Workload Mobility Across Data Centers
  http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-591960.pdf
- Cisco Overlay Transport Virtualization (OTV)
  http://www.cisco.com/en/US/prod/switches/ps9441/nexus7000_promo.html
- Enabling Long Distance Live Migration with F5 and VMware vMotion
  http://www.f5.com/pdf/white-papers/cloud-vmotion-f5-wp.pdf
- EMC VPLEX Architecture and Deployment: Enabling the Journey to the Private Cloud
  http://www.emc.com/collateral/hardware/technical-documentation/h7113-vplex-architecture-deployment.pdf
- EMC VPLEX 5.0 Architecture Guide
  http://www.emc.com/collateral/hardware/white-papers/h8232-vplex-architecture-wp.pdf
- F5 VMware vSphere Solutions
  http://www.f5.com/solutions/applications/vmware/vsphere/
- IBM Redbook: Implementing the IBM System Storage SAN Volume Controller V6.1 (SG24-7933)
  http://www.redbooks.ibm.com/redpieces/abstracts/sg247933.html?Open
- IBM: Split Cluster Configuration
  http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp?topic=/com.ibm.storage.svc.console.doc/svc_hasplitclusters_4ru96h.html
- Implementation and Planning Best Practices for EMC VPLEX Technical Notes
  http://www.emc.com/collateral/hardware/technical-documentation/h7139-implementation-planning-vplex-tn.pdf
- NetApp: A Continuous-Availability Solution for VMware vSphere and NetApp (TR-3788)
  http://www.netapp.com/us/library/technical-reports/tr-3788.html
- NetApp: MetroCluster Compatibility Matrix (Dec 1, 2010)
  http://now.netapp.com/NOW/products/interoperability/MetroCluster_Compatibility_Matrix.pdf
- VMware KB 1026692 (Mar 7, 2011): Using VPLEX Metro with VMware HA
  http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1026692
- VMware KB 1021215 (Jan 21, 2011): vMotion over Distance support with EMC VPLEX Metro
  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021215
- VMware KB 1001783 (Jan 21, 2011): VMware support with NetApp MetroCluster
  http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1001783
- VMworld 2009 TA3105: Long Distance VMotion
  ftp://ftp.documentum.com/vmwarechampion/Events/VMworld/VMworld_2009/US/VMworld_Presentations_Final/TA3105/TA3105.pdf
Reliance on data management systems has its own advantages and disadvantages. It may make our work faster, but there are also challenges in overcoming natural disasters and maintaining computer security. Even data systems such as Long-Distance vMotion need backup systems in case an emergency strikes.
Mac Pherson
Mac,
I couldn't agree more; that's why we separate disaster recovery from business continuity. BC is Long-Distance vMotion, or workload mobility. We still use traditional SRM-type recovery, going beyond the local metro, to keep disaster recovery, which is specifically what you're talking about.
You can never sacrifice disaster recovery, and LDVM doesn't aim to do that.