Does VDI Vendor Lock-In Exist?

Yesterday, I read an article about a customer who had chosen Citrix XenDesktop on VMware vSphere. By choosing Citrix, according to the customer, he wasn’t “vendor locked in”. As many people know, VMware View requires VMware vSphere, while the Citrix frontend can run on multiple hypervisors. I came across one other case where vendor lock-in was used as a reason to combine products from different vendors: a customer explained to me that he had decided on vSphere for the backend and Citrix XenDesktop for the frontend, and for application virtualization he was going with yet another specific product. He wanted to create a flexible, building-block kind of environment: no vendor lock-in, and the ability to swap out a building block when necessary.

Is seeing VMware View as “vendor lock-in” a good driver for your decision? Does vendor lock-in even exist in this case?

VDI environments basically consist of two parts: the hypervisor layer (the hypervisor, its features and its management platform) and the VDI frontend. These two parts make up one solution. In the case of VMware VDI it is difficult to see them as individual products. The VMware backend and frontend integrate, almost unite, into one optimally working solution. Together they offer features that aren’t available with any other combination. Think about the 3D driver, which resides in vSphere and which VMware View uses to support Windows Aero, basic 3D and OpenGL 2.1/DirectX 9. More “exclusive” features like this will come out, making the combination stronger and stronger. Integration happens on other levels as well. On the development level, both development groups (View and vSphere) communicate, make plans, share ideas and build with each other’s product in mind. Last but not least, support: it does benefit you when the whole stack comes from one vendor. It can make troubleshooting a lot easier.

You want to be able to change building blocks? Would you ever do so after implementing your first choice? Well, you might, but probably not until your solution’s economic value is zero, and that might take 3-5 years. Nobody likes to throw away money. After the economic write-off you have to invest again, and at that point you can decide on other vendors for whatever reason. Of course there are exceptions, where an issue occurs and no solution is available, leaving you no choice but to change building blocks or even the complete solution. In all other cases I doubt you will change a building block just because another vendor has a feature you might like or because your license price went up, to name two examples. Think about the technical implications, your internal knowledge and resources and, again, which features you would lose when going for another building block.

The VMware solution will bring you more, and exclusive, features and functionality. Changing building blocks, in whichever combination and from whichever vendors you use them, will most likely not happen before the economic write-off, barring exceptional circumstances. In this case, to me, vendor lock-in doesn’t exist. Comments are most welcome!

Webcams, VoIP and Unified Communications in a VMware View environment

Although a lot of information is available on this topic today, I still get many questions about webcams, chat and VoIP in a VMware View environment. These days, customers are moving (or thinking about moving) to VDI environments. At the same time they also want to adopt other cool new technologies, like webcams for conferencing, VoIP and chat (together: Unified Communications), and heavier multimedia. All good of course, but you have to understand what will and won’t work in your new environment. Your setup changes from local physical PCs to a centrally oriented Virtual Desktop Infrastructure.

So, Skype for example, and I should say “Skype with bidirectional audio and video”, works inside a VMware View environment… technically. But why doesn’t VMware support this?

A couple of reasons for that:

  • First of all, with VDI everything happens in the central datacenter, but I’m not there myself, and neither is the person I’m talking to. If I used a phone, I would call that person directly: straightforward and efficient. With VDI, all data flows from my house to the datacenter, through my VM on a vSphere server, and then on to my contact. Not very efficient. This is called media hairpinning.
  • Because all voice/video data goes through the vDesktop and needs to be rendered on the vSphere servers, your consolidation ratio will drop dramatically. CPU cycles are needed to render all that data.
  • Also think about bandwidth consumption. USB webcams are often used and they consume bandwidth, not to mention the voice and video data itself (the sketch after this list gives a rough feel for these numbers). This KB article talks about USB devices.
  • The last reason I would like to mention is the lack of a decent way to implement QoS on all that traffic.
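To make the hairpinning point a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. The bitrates are purely hypothetical placeholders (they depend entirely on your codec, resolution and remoting protocol); the point is only that every stream traverses the datacenter instead of flowing once between the two endpoints.

```python
# Hypothetical bitrates in Mbit/s -- adjust to your own codec/protocol measurements.
WEBCAM_UPSTREAM = 1.0   # USB webcam redirected from the client to the vDesktop
CALL_MEDIA      = 1.5   # audio + video stream of the call itself
DISPLAY_STREAM  = 2.0   # remoted (PCoIP) display traffic showing the video back to me

def direct_call():
    """Peer-to-peer call: media flows once between the two endpoints."""
    return CALL_MEDIA

def hairpinned_call():
    """VDI call without offload: everything is relayed via the datacenter.

    My webcam goes up to my vDesktop, the call media goes from my vDesktop
    to my contact (and back), and the rendered video comes back down to my
    client over the display protocol.
    """
    datacenter_in  = WEBCAM_UPSTREAM + CALL_MEDIA   # traffic entering the datacenter
    datacenter_out = CALL_MEDIA + DISPLAY_STREAM    # traffic leaving the datacenter
    return datacenter_in + datacenter_out

print(f"Direct call:     {direct_call():.1f} Mbit/s between the endpoints")
print(f"Hairpinned call: {hairpinned_call():.1f} Mbit/s through the datacenter")
```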

There is another reason not to use Unified Communications inside a VDI environment, and it is quite an important one: the UC vendors don’t support it! Take Microsoft OCS/Lync, for example. The support statement is quite clear, and this (great) post describes why it isn’t supported.

Okay, now the good news. In VMware View 5, the “View Media Service for Unified Communications” was introduced. This is an API that vendors can adopt and use. With this API and the vendor’s supporting product, media data is offloaded straight to the client device and isn’t sent through the datacenter. View 5, in combination with a vendor product that supports this API, solves the problems described above, which prevented support from both VMware and the UC vendor. The great news is that four big UC vendors have already signed up to team with VMware and integrate their products with this new API: Avaya, Cisco, Mitel and Siemens. Mitel is already shipping its products and has published a white paper together with VMware. Here you can read about the alignment between Cisco and VMware around Cisco’s VXI product.

So, what about Microsoft Lync? That’s the question I always get after this story. VMware is in touch with Microsoft on this topic. I understand the big demand for it and I’m very sure VMware is aware of it as well. To make VDI in general appealing to customers, we need to fix this ASAP. I agree! And as an addition to this: yes, I’m also aware of the need to support Skype and GoogleTalk!

Hopefully you now have an idea why it isn’t that simple to implement VoIP and video inside a VDI environment. It takes work to make it run efficiently from a technical point of view and to get it supported by the vendors. I do like VMware’s approach here, because we now have four major players on board. It’s up to them how soon, and with which functionality, they release products for use in a View environment. So do check whether their first release supports video, for example, and which requirements they have on the client side. One thing though: the future is looking good.

VMware View Composer and recomposing a Pool (Video included)

One of the best features of VMware View is Linked Clones with VMware View Composer. In this article I will discuss what Composer is and what its benefits are, and I will add a short video in which I recompose a pool, just so everyone can see how easy it is to recompose a pool in a couple of clicks.

View Composer is a tool/mechanism that helps you streamline virtual desktop provisioning. It also helps you introduce single image management and reduce storage capacity costs. Composer uses Linked Clone technology: instead of creating multiple Full Clone VMs for your users, you create just one Parent VM (“Golden Image” is another term I hear a lot) and roll out Linked Clones, which are all unique and point to a master. The master (replica) VM is read-only; the user writes to the Linked Clone’s delta disk. The master plus the Linked Clone together form the complete VM for a user.
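To make the storage-savings argument concrete, here is a minimal back-of-the-envelope sketch in Python. The sizes are hypothetical placeholders (a 30GB parent and a few GB of delta per clone); real delta growth depends on your workload and refresh policy.

```python
# Hypothetical sizes in GB -- adjust to your own golden image and measured delta growth.
PARENT_SIZE_GB = 30    # size of the Parent VM / replica (read-only)
DELTA_SIZE_GB  = 3     # average delta (writable) disk per linked clone
NUM_DESKTOPS   = 100

full_clones   = NUM_DESKTOPS * PARENT_SIZE_GB                  # every user gets a full copy
linked_clones = PARENT_SIZE_GB + NUM_DESKTOPS * DELTA_SIZE_GB  # one replica + per-user deltas

print(f"Full clones:   {full_clones} GB")
print(f"Linked clones: {linked_clones} GB")
print(f"Savings:       {100 * (1 - linked_clones / full_clones):.0f}%")
```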

The steps to create a Linked Clone pool are:

  • Create a VM in vCenter with the View Agent installed (the Parent VM),
  • Turn off that VM and create a snapshot (the sketch after this list shows one way to script this part),
  • In View Manager, create an Automated Pool of type Linked Clone. See the video.
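If you prefer scripting the vCenter side of this (steps 1 and 2), here is a minimal pyVmomi sketch. The vCenter address, credentials and VM name are assumptions you would replace with your own; the pool creation itself (step 3) still happens in View Manager.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Assumed connection details -- replace with your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the Parent VM by name (assumed here to be called "Win7-Parent").
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
parent = next(vm for vm in view.view if vm.name == "Win7-Parent")
view.DestroyView()

# Steps 1/2: power off the Parent VM and create the snapshot Composer will use.
if parent.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(parent.PowerOffVM_Task())
WaitForTask(parent.CreateSnapshot_Task(name="Base-v1",
                                       description="Baseline for linked clone pool",
                                       memory=False, quiesce=False))
Disconnect(si)
```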

What happens next (KB 1021506):

  1. View Manager creates the linked-clone entry in View LDAP and puts the virtual machine into the Provisioning state.
  2. View Manager calls View Composer to create the linked clone.
  3. The View Composer Server creates the machine account entry in Active Directory for the new clone and creates a random binary password for the newly created computer account.
  4. If a replica for the base image and snapshot does not yet exist in the target datastore for the linked clone, View Composer creates the replica in the datastore. If a separate datastore is configured to store all replicas, the replica is created in the replica datastore. (In View 4.5 and later, replicas can be stored in a separate datastore.)
  5. View Composer creates the linked clone using the vCenter Server API.
  6. View Composer creates an internal disk on the linked clone. This small disk contains configuration data for QuickPrep or Sysprep. The disk also stores machine password changes that Windows performs every 30 days, according to the policy setting. This disk data ensures that domain connectivity is maintained when a checkpointed desktop is refreshed.

So, now you have a Linked Clone pool. But what do you do when you need to update this pool? Think about patches for Windows or for other applications installed in the Parent VM. My recommendation: don’t let every user update his or her own VM, and don’t push updates with a deployment tool either. All these updates would end up in the Linked Clones. They will grow, but more importantly, when you do a recompose or a rebalance you will lose all these updates.

Use VMware View Composer and recompose the pool to push updates out to users. In this scenario you start the Parent VM again, apply the updates/changes, turn off the VM and create a second snapshot. From that point, use VMware View Manager to recompose the pool. The video shows which steps need to be taken to recompose the pool.

These steps occur during a recompose operation:

  1. View Manager puts the linked clone into the Maintenance state.
  2. View Manager calls the View Composer resync API for the linked clones being recomposed, directing View Composer to use the new base image and snapshot.
  3. If a replica for the base image and snapshot does not yet exist in the target datastore for the linked clone, View Composer creates the replica in the datastore. If a separate datastore is configured to store all replicas, a replica is created in the replica datastore.
  4. View Composer deletes the current OS disk for the linked clone and creates a new OS disk, linked to the new replica.
  5. The rest of the recompose cycle is identical to the customization phase of the provisioning and customization cycle.

The beauty is that you only update one VM and push it out to multiple users. You also have the option to leave certain pools on Snapshot 1 and recompose other pools to use Snapshot 2. Do realize that all changes inside a Linked Clone will be lost after a recompose. That’s the reason you need to separate the “user” from the VM when you deploy a Linked Clone pool: changes should be saved centrally. In case a user needs to be able to install software, provision a Full Clone VM for that user so a recompose won’t delete all the user’s work. The toy model below illustrates why those in-guest changes disappear.
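Here is a tiny conceptual model (plain Python, not a View API) of what a recompose does to a linked clone: the OS delta disk is thrown away and a fresh one is attached to the new replica, which is exactly why anything written inside the guest is gone afterwards.

```python
from dataclasses import dataclass, field

@dataclass
class LinkedClone:
    name: str
    replica: str                                   # read-only base the clone points to
    os_delta: list = field(default_factory=list)   # in-guest changes land here

    def install(self, change: str):
        # Anything written inside the guest ends up in the delta disk.
        self.os_delta.append(change)

    def recompose(self, new_replica: str):
        # Recompose: delete the current OS disk and relink to the new replica.
        self.replica = new_replica
        self.os_delta = []                         # in-guest changes are gone

vm = LinkedClone("desktop-01", replica="Base-v1")
vm.install("user-installed app")
print(vm.os_delta)        # ['user-installed app']
vm.recompose("Base-v2")   # parent was patched and a second snapshot was taken
print(vm.os_delta)        # [] -- which is why user data must be saved centrally
```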

Two more small things at the end of this article:

  1. Can you keep recomposing a pool? Meaning, can you add snapshot after snapshot? Good question, and I don’t have an official answer. A long snapshot chain can’t be good performance-wise, but the Linked Clones aren’t reading from the Parent plus its snapshots: a new replica is created every time, so it shouldn’t be an issue. But again, I cannot find an official statement.
  2. In vSphere you see three different numbers for how much storage capacity a VM uses. Confusing, I admit. The “Not Shared” number is the one to track: this is the actual size of your Linked Clone VM. You can read more about this here. The sketch after this list shows one way to pull that number per VM.
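If you want to pull that “Not Shared” figure for every VM in one go rather than clicking through the client, here is a minimal pyVmomi sketch (again assuming your own vCenter address and credentials); it reads the unshared and committed values from each VM’s storage summary.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed connection details -- replace with your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
GB = 1024 ** 3
for vm in view.view:
    storage = vm.summary.storage
    # 'unshared' is the space occupied only by this VM and not shared with any other VM;
    # for a linked clone that is effectively its own delta. 'committed' is the total
    # space committed to the VM across its datastores.
    print(f"{vm.name}: not shared {storage.unshared / GB:.1f} GB, "
          f"committed {storage.committed / GB:.1f} GB")
view.DestroyView()
Disconnect(si)
```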

Options/choices when creating a Floating Linked Clone Pool

After discussing creating Floating Linked Clone Pools, it is time to go through the options you get when creating such a pool. If you haven’t created a Floating Linked Clone Pool before, or can’t recall the options, take a look at the video. I will not discuss every option, just the ones I get the most questions about.

The first option you get after choosing “User Assignment - Floating” is the choice between “Full virtual machines” and “View Composer linked clones”. Now, I have to admit I don’t really understand this option. Why would you choose a Floating Full VM pool? I just don’t see the use case. It isn’t for Local Mode, because you need a dedicated pool for that. It also isn’t for local admin/installation rights, because next time you will end up on a different VM, so your apps will be gone. As I see it, you will just use more storage capacity. Oh well, it is there as an option.

The next options you will see are under “Pool Settings”.

“Remote Desktop Power Policy” basically has three options: leave your VMs turned on, suspend them or turn them off. This is the policy applied to VMs that aren’t in use by users and that don’t belong to the “spare (powered on) desktops”. The number of spare VMs can be configured later on under “Provisioning Settings - Pool Sizing”. Most of the time I set the power policy to power off. Why burn CPU cycles when no one is using the VM? If you have enough (“enough” being the magic word, I guess) VMs set as spare, no one needs to wait for a VM to boot completely. It has to be said: when you leave all the VMs on, no one ever has to wait, not even when suddenly everyone logs on.

“Automatically logoff after disconnect”: what do you want to happen when someone disconnects their session? Log off automatically? Straight away, or after a period of time? Straight away means freeing up the VM so others can use it. On the other hand, what about roaming through a building? Disconnect, go to a different floor, connect and carry on with your work right away because your session is still there. I have seen the setting “after a period of 4 hours” so users could go home and continue working there. Do keep in mind that when a session logs off after a disconnect, all open applications (yes, that also means the open Word document that cost hours of work) are closed as well.

“Delete or refresh desktop on logoff”: I do believe you should pick either delete or refresh in this case. Refresh means reverting to the original snapshot, whereas delete means that the Linked Clone gets deleted and built up again. Delete takes longer and costs more IOs.

Just a quick comment on the “Remote Display Protocol” settings: when you pick PCoIP as the default protocol and don’t allow users to choose their own protocol, you will be able to enable Windows 7 3D Rendering and set an amount of video RAM per VM.

“View Composer Disks”: you have the option to redirect the system temp files and page file to a disposable disk, which gets deleted after the VM has been used. Again, I believe you should always use the delete or refresh option with a Floating Linked Clone Pool; in that case you don’t need a disposable disk.

“Pool Sizing”: the maximum number of desktops and the number of spare desktops. Do look back at the “Remote Desktop Power Policy”; all these policies together determine the behavior of the pool.

Example: the maximum is 100, spare is set to 20, the power policy is set to powered off and provisioning is “up-front”. In this example all 100 VMs are created and configured. Once they are all created, 20 VMs remain turned on and 80 are turned off. When someone logs on to a VM, only 19 VMs are spare because one has been taken, so one extra VM is automatically powered on to meet the policy again. This continues until all VMs are turned on. The sketch below simulates this behavior.
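Here is a minimal simulation (plain Python, not a View API) of that spare-desktop behavior under the example settings above, just to show how the number of powered-on VMs tracks logons.

```python
MAX_DESKTOPS = 100   # pool maximum, provisioned up-front
SPARE_TARGET = 20    # number of powered-on spare desktops to maintain

powered_on = SPARE_TARGET   # initially only the spares are powered on
in_use = 0

def log_on():
    """A user takes a spare desktop; another VM is powered on to restore the spare count."""
    global powered_on, in_use
    in_use += 1
    spares = powered_on - in_use
    if spares < SPARE_TARGET and powered_on < MAX_DESKTOPS:
        powered_on += 1     # power on one more VM to meet the policy again

for _ in range(90):
    log_on()

print(f"In use: {in_use}, powered on: {powered_on}, spare: {powered_on - in_use}")
# With 90 users logged on, all 100 VMs end up powered on and only 10 spares remain.
```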

“vCenter Settings - Datastores”: I will come back to this topic later on, and a lot of information has already been released around it. Which datastores you use for replicas and Linked Clones matters, so do get familiar with the storage options you have.

Hopefully you now have an understanding of what these options bring you, and of what to provide to your users.

Creating a Floating Linked Clone Pool (video included)

One of the things I always show during a demo of View Manager is the creation of a pool. I show which choices you have as an admin: dedicated/floating pools, 3D turned on/off, storage tiering, etc. I created a video in which I provision a Floating Linked Clone Pool. My message: it is very easy to create pools in VMware View, and for different user groups you can create different pools that behave differently.

In this post I would like to discuss the creation of a Floating Linked Clone Pool with refresh after first use. Why start with this pool? Because I think this is the pool to aim for: it gives you flexibility, efficiency and the least management.

A Floating Linked Clone Pool is a pool mechanism where there is no permanent relationship between user and VM. One day I could log on to VM1, and the next time log on to VM20. When you know that, on average, 70% of your employees are working on any given day, you only need to provision 70% of the workspaces/VMs; you don’t have to create a VM for every employee. This way you can work with concurrency: with 70 VMs you can provide a workplace for 100 employees. This impacts the size of your VMware View environment, but also the 3rd-party software running on your VMs. So you can save on hardware and software when working with concurrency.
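A minimal sizing sketch in Python, using the 70% concurrency figure from above and otherwise hypothetical per-desktop numbers (adjust them to your own assessment data):

```python
EMPLOYEES   = 100
CONCURRENCY = 0.70   # share of employees working at the same time (from your assessment)
HEADROOM    = 1.10   # hypothetical 10% buffer for peaks, sickness cover, etc.

desktops_needed = int(EMPLOYEES * CONCURRENCY * HEADROOM)   # VMs to provision in the pool
print(f"Desktops to provision: {desktops_needed} instead of {EMPLOYEES}")

# Anything sized or licensed per provisioned VM (memory, agents, some 3rd-party tools)
# scales with desktops_needed rather than with the number of employees.
per_vm_license = 25   # hypothetical placeholder price per VM
print(f"Per-VM licensing: {desktops_needed * per_vm_license} "
      f"instead of {EMPLOYEES * per_vm_license}")
```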

Linked Clone pools mean that you work with a Parent VM, also known as a Golden Image. Instead of giving every concurrent user a VM that is 30GB in size, you create a pool based on a 30GB Parent VM and users start with a small Linked Clone. These Linked Clones will grow over time, but you will save on storage capacity. With a Linked Clone pool you only have to patch and manage the Parent VM, so you have single image management.

You can delete/refresh these Linked Clones after first use, which reduces storage management: you don’t have to monitor the Linked Clones in detail, and most likely they won’t grow that much during a single user’s workday. In my opinion, do use the delete or refresh option when you use Floating Linked Clone Pools. The pool is floating, so delete the changes made by a user before the next user logs on. Everybody starts with a clean VM, with his or her set of applications.

A couple of things you have to keep in mind:

  • Profiles: because you delete/refresh the Linked Clones, profiles and user settings need to be saved centrally. Use View 5’s Persona Management, Microsoft roaming profiles, or a 3rd-party tool like RES or AppSense. This way a user gets his or her settings back when logging on to a clean VM.
  • Local admin/installation rights: when users have local admin rights, giving them a Floating Linked Clone Pool is most likely not the best choice. After a refresh/delete, all user-installed applications are gone and your users have to install them again… and again.
  • Virtualizing your applications makes this mechanism even more flexible and efficient. You can then reduce the number of different Parent VMs with a specific application set installed locally.

Knowing these boundaries, I believe most employees/users can be placed on a Floating Linked Clone Pool. You get storage savings, you can work with concurrency, you get single image management, and users still see a full, complete Windows desktop with their applications.

Next, I will discuss the options you get during the creation of a Floating Linked Clone Pool. To check out the options, see the video.

A video tour through VMware View Manager; Overview

VMware View Manager is an enterprise-class virtual desktop manager. It is the place for desktop administrators to provision pools of VMs, entitle users to pools and deploy virtualized applications (ThinApps).

I bet there are still a lot of people who haven’t seen or touched the View Manager console. With this video I wanted to give you an overview of what it looks like, how navigation via links works and how you can find information about users, pools and your environment.

This video is part 1 of multiple videos. I will cover other topics with videos as well. Stay tuned.

Nutanix; VDI made simple

Today I had a call with Nutanix. I saw them for the first time at VMworld and again last week in Palo Alto, CA. Many other bloggers, like Duncan Epping, have already written articles about them.

The mission of Nutanix is: “To make virtualization simple by eliminating the need for network storage while still delivering the enterprise-class performance, scalability and data management features you need.”

Sounds promising! Solutions that make virtualization (and in my case VDI) simple are most welcome. There are too many articles, blogs, etc. stating that VDI is difficult and expensive. Nutanix might well be able to change that feeling!

So, what is Nutanix all about? Basically, Nutanix’s Complete Cluster is hardware, software and virtualization put together as a building block. A building block consists of nodes (servers) which have CPU, memory, local storage (SSDs and hard disks) and a hypervisor (ESXi).

The beauty is that the local storage from the nodes is virtualized into a pool, called Scale-Out Converged Storage (SOCS). This SOCS is built out of so-called Controller VMs: each node runs one Controller VM, and all Controller VMs communicate with each other to create a distributed storage system. Data tiering and hot caching are some of the features. VMs running on a node can write to any place within the cluster. Need more compute power or storage? Just add the next building block. With this, although you have local storage, all enterprise features like HA, DRS and vMotion just work!

A building block can host a maximum of 4 nodes. Each node is a 2U rack server based on Super Micro and has 2 Intel CPUs with 6 cores each. For storage performance it uses a Fusion-io ioDrive, a SATA SSD and SATA HDDs. Memory ranges from 48GB to 192GB per node. The hypervisor is vSphere 5.x.

So, let’s add it up quickly. Per 4 nodes:

8 CPUs x 6 cores = 48 cores, 1.3TB of Fusion-io ioDrive, 1.2TB of SATA SSD, 20TB of SATA HDD, and 192GB-768GB of RAM.
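A quick sketch of that arithmetic, including how it scales linearly when you add blocks. The per-node figures are simply the block totals above divided by four, which is an assumption on my part.

```python
# Per-node figures, inferred by dividing the block totals above by four (assumption).
NODE = {
    "cores":        2 * 6,      # 2 CPUs x 6 cores
    "fusion_io_tb": 1.3 / 4,
    "ssd_tb":       1.2 / 4,
    "hdd_tb":       20 / 4,
    "ram_gb_min":   48,
    "ram_gb_max":   192,
}

def totals(blocks: int, nodes_per_block: int = 4) -> dict:
    """Linear scale-out: every extra block adds the same compute and storage."""
    n = blocks * nodes_per_block
    return {key: round(value * n, 1) for key, value in NODE.items()}

print(totals(1))   # one block: 48 cores, 1.3TB ioDrive, 1.2TB SSD, 20TB HDD, 192-768GB RAM
print(totals(3))   # three blocks: simply three times those numbers
```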

I think this is pretty amazing. One company, one solution, one support organization for your building block (hardware and software). You can put all use cases on this system: floating pools AND dedicated pools. There are VMware papers out there describing VMware View on local SSD storage, but that is only suitable for floating pools; dedicated pools require HA capabilities, which Nutanix can deliver.

Very important to me: partners/resellers don’t have to worry about VDI IO challenges, because they aren’t there. The SSD and Fusion-io card provide enough IOs for the VMs, according to Nutanix. This way, IO concerns become less important.

What I still need to figure out is CAPEX and price per desktop. Technically it will work for sure, but cost is also a huge factor. I bet it can compete with big SANs, but I believe the biggest competitor of VDI is the status quo: not changing, staying with physical desktops. Price per desktop should be close to that of a physical PC; the OPEX savings you get with VDI are then a bonus. To be continued on this topic.

So, I’m convinced by this solution, mainly because it does make VDI simple. That’s a good thing for partners/resellers and customers: no more headaches about sizing and no horror stories about bad implementations and slow performance because of storage miscalculations. Also, the price is pretty much linear; you scale out.

User Virtualization in the Post-PC Era?

Today I ran into an article which had an interesting quote:

Persona Management isn’t mature enough yet, and VMware knows it, Dunkin’s Brennan said. The company probably added it just to “check the box”, but he speculated that VMware would get profile management up to speed by making an acquisition.

We can have a discussion about the first part in another article, but it was especially the acquisition part that caught my attention.

So, will VMware acquire another company to speed up its profile management? I think that is an interesting question. A different but related question could be: how important will user virtualization be in, let’s say, 5 years? Yet another question: will you still need user virtualization in 5 years at all?

First, let’s take one step back for a minute. Once upon a time there were Windows PCs, and with Windows NT the profile scheme was introduced. Then came roaming profiles, mandatory profiles, default user profiles and Group Policies: all mechanisms to control the user, to control and save settings like printers and wallpaper, permissions to shares and folders, and what users are or aren’t allowed to do (like accessing the Control Panel). Store profiles centrally and users get the same look and feel from any Windows PC. Separate the user from the operating system.

Third-party vendors like RTO, AppSense, RES and Liquidware got into this space as well, to fill gaps and add new features, picking up where standard Microsoft profiles and GPOs stopped.

But all these tools have one thing in common: Windows. That’s not a bad thing, but Windows is no longer the only platform for running applications. iOS/Android phones and tablets and Macs are out there in the enterprise, even privately owned ones. The world is changing and I believe we are already in the post-PC era.

Management will change. It has to change. Applications and data will be delivered to different devices in different ways: one moment you access ThinApp applications via VMware View from your private Android tablet; the next, you access a SaaS app on your corporate iPhone.

Instead of managing most things at the Windows/device level, you have to take that management up a couple of levels. To me, that’s the user level. It will become more important who is allowed to access which application or data, from what device and from what place; the underlying operating system and device will become less important. Horizon App Manager will be that universal broker where you set those user-based rules.

Don’t get me wrong, I believe Windows will be around for a long time as a platform to run specific applications. But will that platform be considered big enough for VMware to invest in a Windows profile management tool? Again, interesting questions.