VMware EVO:Rail

For the tens of you who read this site, you know I am a believer in hyper-converged architectures. Like anything, there are benefits and drawbacks, but I will say there is a coolness factor and a simplicity that make hyper-converged infrastructure really appealing. Since it is VMworld this week, I am sure many of you have heard the EVO:RAIL announcement, and I am behind the bigger blogs in posting something.

The reason for my waiting is that, as a vSpecialist, I didn't want to get ahead of EMC's announcements. Now that Chad has given his sessions, I feel safe at least talking about the public aspects of EVO:RAIL and what EMC is planning to do.

So let's start with terms:

A Hyper-Converged Infrastructure Appliance (HCIA) is hardware that incorporates storage, compute, and networking into a single appliance.

EVO:RAIL is a name I have heard different takes on. EVO stands for evolution; that much is agreed upon, but the RAIL part is interesting. I've heard it refers to the fact that the appliance slides into a rack on rails. That would make sense, as EVO:RACK, the larger-scale solution in VMware's OEM EVO suite, will be in tech preview soon. But I have also heard that RAIL refers to the way the product lays a foundation you keep extending, one appliance at a time like sections of railroad track, to keep you moving forward.

Regardless of the name, the solution will look very similar across most of the EVO:RAIL offerings from a hardware perspective. Expect to see dual 10GbE ports that can be set up as fiber (SFP+) or copper (10GBase-T). These will be port-channeled and will provide logical separation of storage, management (vCenter), and vMotion traffic. There will also be at least one 1GbE port for out-of-band management and remote support. All will come with solid chipsets and scalable storage and memory.
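To make that logical separation concrete, here is a minimal sketch in Python of how the three traffic types might be laid out as port groups sharing the two 10GbE uplinks. The VLAN IDs, port-group names, and vmnic names are illustrative assumptions on my part, not EVO:RAIL defaults.

```
# Illustrative only: VLAN IDs and names are assumptions, not EVO:RAIL
# defaults. Shows three traffic types sharing two physical 10GbE uplinks
# while staying logically separated by VLAN-tagged port groups.
from dataclasses import dataclass

@dataclass
class PortGroup:
    name: str
    vlan_id: int          # 802.1Q tag carried over the port channel
    uplinks: tuple        # both 10GbE NICs back every port group

UPLINKS = ("vmnic0", "vmnic1")  # the dual 10GbE ports

port_groups = [
    PortGroup("Management", vlan_id=10, uplinks=UPLINKS),  # vCenter/host mgmt
    PortGroup("vMotion",    vlan_id=20, uplinks=UPLINKS),
    PortGroup("VSAN",       vlan_id=30, uplinks=UPLINKS),  # storage traffic
]

for pg in port_groups:
    print(f"{pg.name}: VLAN {pg.vlan_id} over {', '.join(pg.uplinks)}")
```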

From Duncan Epping’s blog:

Each of the four EVO:RAIL nodes have (at a minimum):

  • Two Intel E5-2620 v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD as the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management
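Doing some quick back-of-the-envelope math on that bill of materials shows what one four-node appliance gives you. This sketch assumes Virtual SAN's default FailuresToTolerate=1 policy (every object mirrored once) and ignores metadata and slack-space overhead, so treat the usable figure as an upper bound.

```
# Back-of-the-envelope VSAN capacity for one four-node EVO:RAIL appliance.
# Assumes the default FailuresToTolerate=1 policy (two copies of each
# object) and ignores metadata/slack overhead, so real usable space is lower.
NODES = 4
HDD_PER_NODE, HDD_TB = 3, 1.2
SSD_PER_NODE, SSD_GB = 1, 400

raw_tb = NODES * HDD_PER_NODE * HDD_TB      # 14.4 TB raw
usable_tb = raw_tb / 2                      # FTT=1 mirrors every object
cache_gb = NODES * SSD_PER_NODE * SSD_GB    # 1600 GB of flash cache

print(f"Raw: {raw_tb:.1f} TB, usable (FTT=1): {usable_tb:.1f} TB, "
      f"flash cache: {cache_gb} GB")
```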

So what is the difference between this and Nutanix? First, the product is new, so some maturing will happen as EVO:RAIL moves forward; DR functionality, for example, will come from the OEMs. Most importantly, though, the virtual storage management is embedded in the hypervisor kernel. The hosts still communicate out of band to confirm that data is replicated across the VSAN datastore, but there is no controller VM that can die or lose communication. That doesn't make it perfect, but it changes the layer at which the storage is talking.
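For a conceptual picture of why that kernel placement matters, here is a small sketch contrasting the two I/O paths. This is not actual VMware or Nutanix code, and the hop names are simplified assumptions.

```
# Conceptual sketch (not VMware or Nutanix code): contrasts an in-kernel
# distributed storage layer like VSAN with a controller-VM design. The
# controller VM is an extra hop and an extra failure domain in the data path.
IN_KERNEL_PATH = [
    "guest vSCSI",
    "hypervisor kernel (VSAN module)",   # storage logic lives in the kernel
    "local/remote disk",
]

CONTROLLER_VM_PATH = [
    "guest vSCSI",
    "hypervisor",
    "controller VM (user space)",        # can die or lose communication
    "local/remote disk",
]

for name, path in (("in-kernel", IN_KERNEL_PATH),
                   ("controller VM", CONTROLLER_VM_PATH)):
    print(f"{name}: " + " -> ".join(path))
```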

At EMC, we have had a compute appliance that has been used for the last couple of years in our denser storage stacks. This appliance is code-named Phoenix. That's not to say we are going to start making and selling servers, just that we have a compute appliance that will be the basis of our HCIA offering.

What will separate EMC is the integration with our backup and DR solutions and the enterprise-class support services that customers have come to expect. Read Chad's blog for even more info.

Hopefully this whets your appetite for things to come, but expect more on this as the HCIA OEMs announce their offerings and we come out of our beta process at EMC.

EVO:RAIL Info:

http://www.yellow-bricks.com/2014/08/25/introducing-vmware-evo-rail-new-hyper-converged-offering/

http://www.vmware.com/products/evorail

http://virtualgeek.typepad.com/virtual_geek/2014/08/vmworld-2014-evorail-and-emcs-approach.html