
EVO:RAIL

EVO:RAIL combines compute, networking, and storage resources into a hyper-converged infrastructure appliance, creating a simple, easy-to-deploy, all-in-one solution.

Simplicity transformed.

EVO:RAIL takes you from power-on to VM creation in minutes, with radically easy VM deployment, one-click non-disruptive patches and upgrades, and simplified management software, working synergistically with trusted and reliable hardware supported by Thinkmate, the leader in high-performance computing for over 25 years.

What does EVO:RAIL do?

It simplifies VM creation and management.

EVO:RAIL Configuration walks users through initial parameters, validates settings and then configures ESXi hosts, vCenter Server and Log Insight automatically. Global parameters such as passwords, NTP servers, logging and time zone are all set centrally and applied to all the ESXi hosts in the EVO:RAIL cluster.
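For the technically curious, the kind of step EVO:RAIL automates here can be illustrated with the vSphere Python SDK (pyVmomi). The sketch below pushes one such global parameter (NTP servers) to every ESXi host in the inventory; the hostname, credentials, and NTP addresses are placeholders, and EVO:RAIL does all of this for you, so treat it only as an illustration of the underlying vSphere API:

  # Illustrative pyVmomi sketch: apply one "global parameter" (NTP servers)
  # to every ESXi host, a step EVO:RAIL automates. All names and credentials
  # below are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="secret", sslContext=ctx)
  content = si.RetrieveContent()

  # Collect every ESXi host in the inventory.
  hosts = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.HostSystem], True).view

  cfg = vim.host.DateTimeConfig(
      ntpConfig=vim.host.NtpConfig(server=["0.pool.ntp.org", "1.pool.ntp.org"]))

  for host in hosts:
      # Each host exposes a DateTimeSystem manager for NTP/time settings.
      host.configManager.dateTimeSystem.UpdateDateTimeConfig(config=cfg)
      print("NTP configured on", host.name)

  Disconnect(si)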

Select the guest OS, VM size, VLAN and security options. EVO:RAIL hyper-convergence simplifies virtual machine sizing by offering single-click small, medium and large configurations.

It makes it simple to add capacity as needed.

Increase compute, networking and storage resources by powering up a new appliance to join an existing EVO:RAIL cluster. EVO:RAIL automatically distributes the configuration to seamlessly add new appliances with zero additional configuration.

EVO:RAIL with Horizon lets you start small with as few as 25 users. You can deploy, manage and linearly scale your virtual workspaces as demand grows. EVO:RAIL with Horizon simplifies and streamlines the management of your desktop infrastructure. Best of all, your end users get predictable performance across devices and locations.

Get started for free

Try out the EVO:RAIL experience with your own server hosted at Thinkmate’s Datacenter.
No Installation Required. Manage From Your Browser. No obligation to buy. Fully Featured.

Frequently Asked Questions



Source: Yellow-Bricks

What is VMware EVO:RAIL?
VMware EVO:RAIL™ is the next evolution of infrastructure building blocks for the Software-Defined Data Center (SDDC), and is the fastest way to build out the core virtual infrastructure services required to implement SDDC. It delivers virtualized compute, networking, storage, and management in a 2U/4-node package with an intuitive interface that allows for full configuration within minutes. The EVO:RAIL appliance bundles hardware, software, support, and maintenance to simplify both procurement and support in a true “appliance” fashion. Each appliance comes with 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes).
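Those appliance-level figures are simply the per-node hardware (listed later in this FAQ) multiplied across the four nodes. A quick sanity check, assuming the 2.1GHz clock of the Intel E5-2620 v2 (the exact CPU may vary slightly by vendor):

  # Arithmetic behind the quoted per-appliance totals; the 2.1 GHz clock is
  # an assumption (Intel E5-2620 v2) and may vary slightly by vendor.
  NODES = 4
  compute_ghz = NODES * 2 * 6 * 2100 / 1000  # 2 CPUs x 6 cores x 2.1 GHz -> 100.8 (~100 GHz)
  memory_gb   = NODES * 192                  # -> 768 GB
  raw_tb      = NODES * 3 * 1200 / 1000      # 3 x 1.2 TB disks per node -> 14.4 TB
  flash_tb    = NODES * 400 / 1000           # 1 x 400 GB SSD per node -> 1.6 TB
  print(compute_ghz, memory_gb, raw_tb, flash_tb)  # 100.8 768 14.4 1.6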
How is EVO:RAIL different from VMware Virtual SAN?
EVO:RAIL is the first prebuilt and fully optimized hyperconverged infrastructure appliance that is powered 100 percent by VMware. The Qualified EVO:RAIL Partner (QEP), through their preferred channels, sells EVO:RAIL via a single SKU that includes hardware, software, service, and support. The QEP offers single point-of-contact support. EVO:RAIL leverages VMware vSphere® with Virtual SAN™, but removes the need to check Hardware Compatibility Lists, then manually build and configure the solution.
  • EVO:RAIL incorporates VMware compute, networking, storage, and management software, and delivers time-to-value with the first VM up in 15 minutes once racked, cabled, and powered on.

  • The EVO:RAIL engine is responsible for deployment, configuration, and management of the appliance. EVO:RAIL management provides automated scale-out, automated upgrades, and simplified VM management capabilities.
What does EVO:RAIL replace?
EVO:RAIL automation replaces the manual process of creating a network, adding virtual machines, and creating a Virtual SAN datastore with vSphere® Distributed Resource Scheduler™ (DRS), High Availability, and vSphere vMotion®.

EVO:RAIL also provides an automated patch and upgrade mechanism for vSphere, vCenter and the EVO:RAIL engine itself, integrated within the appliance user interface. Patch and upgrade is often seen as the strongest value to a customer, as it removes the risk of interoperability issues that customers otherwise face.

Basic VM lifecycle management can also be performed with the EVO:RAIL engine.
What is the minimum number of EVO:RAIL hosts?
Each appliance contains four nodes; each node has an independent ESXi™ host running on it. This means that one EVO:RAIL appliance with 4 hosts is the minimum and provides redundancy within a single appliance. By adding a second, third, and fourth appliance, EVO:RAIL scales to automatically combine 8, 12, or 16 ESXi hosts in a cluster with 1 Virtual SAN datastore.
What if I want to add a second set of 4 appliances?
Each cluster of up to 4 EVO:RAIL appliances (16 nodes) will operate independently in the current version.
What is included with an EVO:RAIL appliance?
The appliance includes:
  • 4 independent nodes, each with the following resources:

    – 2 x Intel E5-2620 6-core processors (may vary slightly by vendor)
    – 192GB Memory
    – 3 x 1.2TB 10K RPM Drive for Virtual SAN
    – 1 x 400GB eMLC SSD for Virtual SAN
    – 1 x ESXi boot device (either a drive or an internal SATADOM)
    – 1 x dual 10GbE NIC port (optical or copper)
    – 1 x IPMI port

  • EVO:RAIL Deployment, Configuration, and Management (DCM) engine

  • vSphere Enterprise Plus

  • vCenter Server™

  • Virtual SAN

  • vRealize™ Log Insight™

  • Support and Maintenance for 3 years
What is the total available storage capacity?
After the Virtual SAN datastore is formed and vCenter Server is installed and configured, there is about 13.1TB for virtual machines; however, this will vary based on the redundancy level set for Virtual SAN as it will copy data to tolerate failures.
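As a rough illustration of how redundancy affects that figure, Virtual SAN keeps FTT + 1 copies of every object (FTT being the number of failures to tolerate), so effective VM capacity divides accordingly. This sketch ignores witness components and other small per-object overheads:

  # Approximate effective VM capacity under Virtual SAN redundancy: the
  # datastore stores FTT + 1 copies of each object. Witness/metadata
  # overhead is ignored, so treat the output as a rough guide.
  usable_tb = 13.1  # figure quoted above for one appliance

  for ftt in (0, 1):  # FTT=1 is the common default
      print(f"FTT={ftt}: ~{usable_tb / (ftt + 1):.1f} TB of VM data")
  # With FTT=1, a 100 GB virtual disk consumes roughly 200 GB of raw capacity.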
How many VMs can I run on one appliance?
That depends on the size of the virtual machines and the workload. We have comfortably run 250 desktops on one appliance, and 100 average-size server VMs. However, this will vary with workload, capacity, and so on. Specific sizing information can be found on the EVO:RAIL data sheet.
If licensing, maintenance, and support run for three years, what happens after that?
When 3 years of support and maintenance expires, it can be extended through the QEP. As technology advances and resource demand increases, the appliance can be replaced to ensure it meets these demands.
How is support handled?
All support is handled through a QEP. This ensures that “end-to-end” support is consistently provided for all hardware and software.
How much does an EVO:RAIL appliance cost?
Pricing will be set by QEPs as a single SKU for hardware, software and 3 years of support.
What kind of NIC card is included?
10GbE dual port NIC. Most QEPs will offer the option of SFP+ or RJ45 connectors.
Is there a physical switch included?
A physical switch is not part of the “recipe” VMware provides to QEPs. However, many QEPs may package a switch or switches with EVO:RAIL to simplify green-field deployments. We strongly recommend including a data switch with any trial.
What is MARVIN?
MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the internal codename used by VMware for EVO:RAIL.
Are there networking requirements?
Multicast traffic on L2 is required for Virtual SAN (see the vSphere documentation center for more specific Virtual SAN information). IPv6 is required for auto-discovery on the top-of-rack switch to which EVO:RAIL is connected, but the entire network does not need to support IPv6. IPv4 addresses are used to configure EVO:RAIL.
How is network traffic prioritized?
To ensure vSphere vMotion traffic does not consume all available bandwidth on the 10GbE port, EVO:RAIL limits vMotion traffic to 4Gbps.
Where is the EVO:RAIL engine running?
The EVO:RAIL engine starts on the boot drive to build the appliance. Then it runs on the same VM as vCenter Server™ on ESXi host #1. vCenter Server is powered-on automatically when the appliance is started and the EVO:RAIL engine can then be used to configure the appliance. During compute load-balancing, it may move to other ESXi hosts in the cluster.
Which version of vCenter Server does EVO:RAIL use, the Windows version or the Appliance?
To simplify deployment, EVO:RAIL uses the vCenter Server Appliance.
Are all vCenter operations included in EVO:RAIL?
No, a subset of vCenter operations is available. We will continue to refine the capabilities over time, keeping in mind the simplicity of the EVO:RAIL user experience. We are not trying to duplicate all of the features of other VMware products, such as vCenter, but rather to capture the critical configuration, deployment, and management steps in a simplified appliance model.
Can you create custom VM sizing from within the EVO:RAIL Management interface?
EVO:RAIL was designed for simplicity and to incorporate our recommended best practices for small, medium and large VMs for each Operating System (OS). When you create a VM in the EVO:RAIL user interface, you will upload your ISO and select the OS type. Configuration details for small/medium/large VMs are then displayed based on recommendations for the specific OS type you are creating.

If you create a VM via the vSphere Web Client instead of via the EVO:RAIL management interface, will it show up in EVO:RAIL?
Yes. EVO:RAIL leverages the same database as vCenter; this avoids any issues with the two interfaces being out of sync.
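This is easy to see through the API as well: a plain inventory query returns every VM, regardless of which interface created it. A minimal sketch, reusing the connection (si/content) from the NTP example earlier:

  # Both interfaces operate on the same vCenter inventory, so a plain
  # inventory query sees every VM no matter where it was created.
  # Assumes "content" from the earlier connection sketch.
  from pyVmomi import vim

  vms = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True).view
  for vm in vms:
      print(vm.name, vm.runtime.powerState)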
Can I re-use an existing vCenter Server in my environment to manage EVO:RAIL?
No. EVO:RAIL is delivered as an appliance. It is a standalone, simplified environment to deploy, manage, update and scale. As a hyper-converged infrastructure appliance, it deploys a separate instance of vCenter Server.
How do EVO:RAIL clusters work?
In the current version, you can have up to four EVO:RAIL appliances represented as one cluster and serviced by one Virtual SAN datastore. Each EVO:RAIL appliance comprises four ESXi hosts configured into a single cluster under the management of vCenter Server, and serviced by a single Virtual SAN datastore. If you add a new appliance to an existing EVO:RAIL cluster, the new appliance is managed by the original vCenter and EVO:RAIL engine, the ESXi hosts are automatically integrated into the existing cluster, and the Virtual SAN datastore is automatically expanded by approximately 13TB with each appliance added. Storage varies based on the Virtual SAN fault tolerance setting.
How can I control which ESXi host a VM is placed or runs on?
It is not necessary to choose the ESXi host a VM is placed or runs on; placement is controlled by DRS, which is configured during the initial setup of the EVO:RAIL appliance.
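If you are curious, those DRS settings can be inspected (read-only) through the normal vSphere API. A sketch, again assuming the connection from the earlier examples:

  # Read-only look at the DRS configuration EVO:RAIL sets up at initial
  # deployment. Assumes "content" from the earlier connection sketch.
  from pyVmomi import vim

  clusters = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.ClusterComputeResource], True).view
  for cluster in clusters:
      drs = cluster.configuration.drsConfig
      print(cluster.name, "DRS enabled:", drs.enabled,
            "behavior:", drs.defaultVmBehavior)  # e.g. fullyAutomated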
Can I connect existing vCenter Operations Manager or vRealize Automation environment to EVO:RAIL?
Yes, EVO:RAIL can be connected to vCenter Operations Manager and vRealize™ Automation. Because EVO:RAIL resides on vCenter and vSphere, all APIs and interfaces remain intact.
Does EVO:RAIL support compression, deduplication and replication?
EVO:RAIL leverages Virtual SAN 1.0, which currently does not support compression or deduplication. Replication, on the other hand, can be provided by deploying vSphere Replication™, and recovery automation can be substantially increased with VMware Site Recovery Manager. Also, for virtual desktop environments, linked clones allow massive storage capacity savings, beyond what compression and deduplication offer.
Is EVO:RAIL compatible with the current backup systems in the third-party market?
Yes. EVO:RAIL is based on the recent release of vSphere 5.5 U2. If a backup vendor supports this release of vSphere Enterprise Plus, then there should be no compatibility concerns. Please refer to your backup vendor’s documentation and release notes. EVO:RAIL also includes VMware vSphere Data Protection™ for basic backup protection.
My organization has commitments to existing external storage including iSCSI and NFS, can these be used with EVO:RAIL?
Yes! Remember, EVO:RAIL uses vSphere, which supports these common protocols. NFS exports and iSCSI targets are accessible using the standard tools via the vSphere Web Client. Fibre Channel SAN storage cannot be connected, as there are no spare expansion slots for Fibre Channel Host Bus Adapters in each of the four physical server nodes.
For virtual desktops – does EVO:RAIL support Horizon 6 and acceleration hardware such as NVIDIA and PCoIP APEX cards?
As with other vSphere 5.5-compatible software, EVO:RAIL is compatible with VMware Horizon® 6. EVO:RAIL currently does not support the advanced GPU features leveraged by Horizon View™, nor does it support expansion cards such as the PCoIP APEX PCI cards. The current 2U/4-node physical infrastructure does not have the physical expansion capacity to support these cards. Alternatives such as the Virtual SAN Ready Nodes or custom configurations should be considered over EVO:RAIL.
If a host gets too busy, are VMs moved automatically to balance the load?
Yes, vSphere DRS is fully configured out of the box. vSphere DRS is triggered every 5 minutes or whenever a new VM is created, powered-on, or powered-off.
When adding VMs via EVO:RAIL, how are the VMs allocated to the hosts? Are they load-balanced, round-robin, etc?
vSphere DRS takes care of compute-based loads for all ESXi hosts in the cluster. vSphere DRS is triggered every 5 minutes or whenever a new VM is created, powered-on, or powered-off. Virtual SAN distributes the storage based on its total capacity.
When a new EVO:RAIL appliance is added to a cluster, do VMs move to rebalance? Does data get rebalanced?
Yes, any change in the compute cluster will trigger vSphere DRS to run.

Virtual SAN does not redistribute data because the cost of moving data around is simply too high. Only when disks begin to reach “disk full” (greater than 80 percent) will a rebalance occur.
Where is the Virtual SAN cache for a new EVO:RAIL appliance?
The Virtual SAN cache is always local to where data sits. Example: Given a virtual machine with 1 disk and 2 copies of the disk, if those copies sit on host-1 and host-2, then cache for those disks is also located on host-1 and host-2. This arrangement avoids problems when hosts are isolated.
Does Virtual SAN spread or rebalance data onto a new EVO:RAIL appliance, or does it only put new writes onto the new EVO:RAIL disks?
Only new virtual machines will be stored (from a data point of view) on the new EVO:RAIL disks. Virtual SAN does not redistribute data because the cost of moving data around is simply too high. Only when disks begin to reach “disk full” (greater than 80 percent) will a rebalance occur. However, the compute side (VM memory and CPU) can run on the new nodes immediately.
Will EVO:RAIL leverage vSphere Update Manager for patching and updating?
No, EVO:RAIL does not leverage vSphere Update Manager™ (VUM). Patching and updating are built into the EVO:RAIL DCM engine, designed and developed by the EVO:RAIL team for an optimal experience with minimal manual effort to move from version to version. EVO:RAIL provides a seamless upgrade process by taking one ESXi host at a time out of service, without extra prompts from the user. Unlike VUM, EVO:RAIL does not require a Windows VM. The QEPs will provide updates to their customers, ensuring each update has been tested and released with their hardware.
Can firmware be updated directly from the EVO:RAIL Management interface?
No, but firmware update integration is on the roadmap; we will be working with our QEPs on this integration.
When there is a node failure, are all VMs moved to remaining hosts? Are they spread across all other hosts?
Yes. vSphere HA restarts the affected VMs across the remaining hosts in the cluster. If a host is down for more than 60 minutes, then Virtual SAN recreates the “missing components” (copies of data) of the impacted “objects” (virtual machines).
When a failed node is replaced, are the VMs rebalanced again to the new host?
When you replace a node, all VMs are rebalanced from a compute point of view. Storage is only rebalanced when there is a direct need because of the cost of rebalancing.
How long after a drive is inserted does it take for a Virtual SAN rebuild?
When a drive fails, Virtual SAN immediately starts rebuilding components (copies) of impacted objects (VMs). The time can vary based on usage and other processes.
Do you have specific EVO:RAIL performance data?
EVO:RAIL leverages vSphere and Virtual SAN at its core, so Virtual SAN performance data can be used as a reference point today; we are working with our QEPs on performance testing for their specific platforms. See http://blogs.vmware.com/vsphere/2014/03/supercharge-virtualsan-cluster-2-million-iops.html
How can I migrate VMs from an existing virtualized environment to EVO:RAIL?
EVO:RAIL uses its own vCenter Server instance, and VMs cannot be simply moved with vMotion and vSphere Storage vMotion. There are two options to migrate virtual machines:

Scenario 1:
  • Connect the current NFS or iSCSI datastore to EVO:RAIL using the vSphere Web Client
  • Browse the datastore and look up the VM you want to migrate
  • Make sure it is powered off
  • Register the VM with the EVO:RAIL vCenter Server
  • Migrate (Storage vMotion) the VM from the NFS/iSCSI datastore to the Virtual SAN datastore
  • When done, remove the iSCSI/NFS datastore from EVO:RAIL (see the sketch below)
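The register-and-relocate heart of this scenario looks roughly as follows in pyVmomi. All inventory names and the datastore path are placeholders, and task-error handling is omitted; this is a sketch of the underlying vSphere API, not the EVO:RAIL implementation:

  # Sketch of Scenario 1: register a powered-off VM from an external NFS/iSCSI
  # datastore, then Storage vMotion it onto the Virtual SAN datastore.
  # Assumes "content" from the earlier connection sketch; names are placeholders.
  from pyVim.task import WaitForTask
  from pyVmomi import vim

  def find_by_name(vimtype, name):
      # Return the first inventory object of the given type with this name.
      view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
      return next(obj for obj in view.view if obj.name == name)

  datacenter = find_by_name(vim.Datacenter, "EVO-Datacenter")           # placeholder
  cluster    = find_by_name(vim.ClusterComputeResource, "EVO-Cluster")  # placeholder
  vsan_ds    = find_by_name(vim.Datastore, "vsanDatastore")             # placeholder

  # Register the powered-off VM from the external datastore.
  WaitForTask(datacenter.vmFolder.RegisterVM_Task(
      path="[external-nfs] myvm/myvm.vmx",  # placeholder datastore path
      asTemplate=False,
      pool=cluster.resourcePool))

  # Storage vMotion the VM onto the Virtual SAN datastore.
  vm = find_by_name(vim.VirtualMachine, "myvm")
  WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=vsan_ds)))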


Scenario 2:
  • Open the vSphere Web Client of the EVO:RAIL environment
  • Add the existing host to the EVO:RAIL datacenter (NOT the cluster!)
  • Note that this removes the host from its original vCenter Server instance
  • Move workloads using “migrate” (Storage vMotion®) from the original host to the new EVO:RAIL hosts
  • When done, remove the original host from the datacenter (see the sketch below)
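The key step here, attaching the original host to the EVO:RAIL datacenter rather than its cluster, looks roughly like this in pyVmomi (host name and credentials are placeholders; depending on certificate settings, an sslThumbprint may also be required in the ConnectSpec):

  # Sketch of Scenario 2's key step: add the original host directly to the
  # EVO:RAIL datacenter (its hostFolder, NOT the cluster). Assumes
  # "datacenter" as found in the previous sketch; names are placeholders.
  from pyVim.task import WaitForTask
  from pyVmomi import vim

  spec = vim.host.ConnectSpec(
      hostName="old-esxi.example.com",
      userName="root",
      password="secret",
      force=True)  # take the host over from its original vCenter Server

  WaitForTask(datacenter.hostFolder.AddStandaloneHost_Task(
      spec=spec, addConnected=True))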
Is EVO:RAIL covered by my existing ELA with VMware?
EVO:RAIL is a new hyper-converged infrastructure appliance that offers a brand new delivery and consumption model, and a new way to stack and scale the core virtualized infrastructure services in a simple and easy way. It is not currently included as part of any existing ELA or on the VMware price list.
Will VMware NSX work with EVO:RAIL?
EVO:RAIL uses vSphere 5.5 networking and Virtual SAN. Anything that works with vSphere and Virtual SAN will work with EVO:RAIL. Although VMware NSX™ has not been explicitly integrated and included with EVO:RAIL, there is no technical reason that it would not work.
EVO:RAIL has its own built-in DNS service; can a customer use their existing DNS service?
Yes. The EVO:RAIL built-in DNS service resolves names when other DNS services are not present, such as during initial configuration or in a newly created ROBO environment. Once configured, customers should use their own corporate DNS server for fully qualified domain names, or use static IP addresses.

Need Help?
We're here to help.

Unsure what to get? Have technical questions? Contact us and we'll help you design a custom system which will meet your needs.

Discounts Available
For Students and Institutions

Thinkmate offers discounts to academic institutions and students on purchases of Thinkmate Systems. Contact us for details.

GSA Scheduling Available
For Government Purchases

We offer rapid GSA scheduling for custom configurations. If you have a specific hardware requirement, we can have your configuration posted on the GSA Schedule within 2-4 weeks.

CONTACT US

Call 1-800-371-1212 E-mail us