Virtensys, headquartered in Manchester, England, originally developed its PCIe-sharing technology as a high-speed PCIe switch fabric; the founders then realized the technology could be applied to I/O virtualization. Stephen Spellicy, VP of Marketing at Virtensys, spoke with John Furrier, founder of SiliconANGLE, and Dave Vellante, co-founder of Wikibon, at VMworld 2011 about Virtensys and I/O virtualization.
Spellicy explained that Virtensys takes standard, off-the-shelf adapters in a concentrator-appliance-style solution, plugs them into its chassis, and then shares them using PCI Express-based switching and a virtual proxy controller that handles the entire hardware virtualization layer. The technology is sold for standard server environments. Virtensys has an OEM arrangement with NEC Corporation in Japan, which also sells a blade version of the product.
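The sharing model Spellicy describes can be sketched in miniature. The following is a hypothetical toy model, not Virtensys' actual API: all class and method names here are invented for illustration. It shows the basic idea of a concentrator presenting one shared physical adapter to several servers as independent virtual adapters.

```python
# Hypothetical sketch (invented names, not Virtensys' software): a toy model of
# an I/O concentrator that shares physical adapters across servers.

class PhysicalAdapter:
    """An off-the-shelf adapter installed in the concentrator chassis."""
    def __init__(self, name, protocol):
        self.name = name          # e.g. "fc0"
        self.protocol = protocol  # "iSCSI", "NFS", "FC", ...

class VirtualAdapter:
    """What each server sees: its own apparent local PCIe device."""
    def __init__(self, physical, server):
        self.physical = physical
        self.server = server

class Concentrator:
    """Plays the role of the PCIe switch plus virtual proxy controller."""
    def __init__(self):
        self.adapters = []
        self.assignments = {}  # server name -> list of VirtualAdapter

    def install(self, adapter):
        self.adapters.append(adapter)

    def assign(self, server, protocol):
        # Find a physical adapter speaking the requested protocol and hand
        # the server a virtual view of it; many servers can share one.
        for pa in self.adapters:
            if pa.protocol == protocol:
                va = VirtualAdapter(pa, server)
                self.assignments.setdefault(server, []).append(va)
                return va
        raise LookupError(f"no {protocol} adapter installed")

c = Concentrator()
c.install(PhysicalAdapter("fc0", "FC"))
a1 = c.assign("server1", "FC")
a2 = c.assign("server2", "FC")
# Both servers' virtual adapters are backed by the same physical card.
print(a1.physical is a2.physical)  # True
```

The point of the sketch is the indirection: the proxy controller, not the server, owns the mapping from virtual to physical devices, which is what lets one adapter be shared.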
Vellante asked why I/O virtualization is needed and how server virtualization drives that need. Spellicy acknowledged that I/O is one of the big bottlenecks in a server virtualization environment. He said, "The environments we're in are very diverse. They have needs for multi-protocol storage, for 10-gig iSCSI, for NFS, for Fibre Channel, and they even leverage traditional direct-attached storage architectures and emerging technologies, like PCI Express-based SSDs." With all these technologies in play, there is a need to address connectivity and the management of pairing them with servers, whether blades or standard servers. Virtualized I/O provides greater choice and control over the types of interfaces presented to standard servers and blade systems, as well as greater density. He noted that customers can drive their servers harder by offloading I/O processing into hardware, which lowers CPU utilization on the server host and increases I/O performance.
Spellicy's view of the cloud is an anonymous pool of infrastructure that serves an application accessible anywhere, from any device. He outlined the baseline requirements for the underlying infrastructure: it has to be easy to use, easy to implement, and easy to manage, and it must be supported and able to leverage standard protocols. He stated that if a user can get access to multi-protocol environments such as iSCSI, NFS, and Fibre Channel within a few clicks, the typical two or three days to provision a network tap or Fibre Channel SAN port drops to minutes. That, he believes, is where Virtensys customers are leveraging virtual I/O in the cloud space.
Vellante asked how cloud service providers can guarantee quality of service (QoS). Spellicy responded that the product supports QoS and bandwidth allocation on the 10Gb side. He pointed to the toolsets and feature capabilities in the native GUI, the command-line and PowerShell interfaces, and the recently released vCenter plug-in. With these tools, an admin can fine-tune the percentage of guaranteed bandwidth on a given interface. Spellicy summarized by saying, "That virtualized ten-gig pipe can be assigned to any server in your pod, and then you can literally govern, monitor and watch it being utilized by customer or by environment."
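The percentage-based guarantees Spellicy describes amount to carving a shared 10Gb link into per-server minimums. The snippet below is an illustration of that arithmetic only, not Virtensys' actual interface; the function name and link-speed constant are assumptions made for the example.

```python
# Illustrative only (not Virtensys' tooling): dividing a shared 10 Gb/s
# virtualized pipe according to guaranteed-percentage QoS settings.

LINK_GBPS = 10.0  # assumed shared link speed for this example

def allocate(guarantees):
    """guarantees: dict mapping server name -> guaranteed percentage
    of the link. Returns each server's guaranteed bandwidth in Gb/s."""
    total = sum(guarantees.values())
    if total > 100:
        raise ValueError("guarantees exceed 100% of the link")
    return {srv: LINK_GBPS * pct / 100.0 for srv, pct in guarantees.items()}

print(allocate({"web": 50, "db": 30, "backup": 20}))
# {'web': 5.0, 'db': 3.0, 'backup': 2.0}
```

Guaranteeing a minimum rather than a fixed slice is the usual design choice here: any bandwidth a server isn't using remains available to the others, while its guarantee still holds under contention.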