Is VDI dead?

Just Google the title and you will find many articles arguing that VDI is dying or already dead. Dead is not the case in my opinion, but dying it is, for sure. And to set the scene, I'm not sure DaaS is the answer either. I have been promoting VDI for years and years, but since early last year I have been struggling with the concept. It has become so complex and costly. To be clear, I do believe in the solutions VMware and others make. They are mature and deliver an OS and apps in a decent way. I just think VDI isn't the right concept anymore.

VDI was meant to make the corporate desktop cheaper and easier to manage and, on top of that, to make it easier to access corporate resources from a broad range of devices. Security was also a reason to go VDI: keeping information inside (your) data center.

Again, I believe VDI solutions are very mature and offer a decent experience. I also believe there are use cases where VDI is a great fit (a small group of contractors, for example). But is VDI the way to go for the majority of users in your organization? That I doubt. Let's be honest and objective about VDI: it is an artificial solution. It is unnatural to use the OS and legacy apps by making them accessible over the network, remotely, through a display protocol. This, and all the components you need to set it up, affects both cost and user experience. Maximize a full HD video and it won't be as crisp as it is locally on a laptop or any other device. A common feature like a communications solution, Skype for instance, needs extra attention or isn't fully supported. Multimedia needs extra attention and likely extra hardware like GPUs. My statement: a local experience will always be the best experience, no matter how mature a VDI solution might be.

Now the other side: the solution itself. VDI has become very complex. Take a look at all the components you need to set up to create a VDI environment: central hardware like compute and storage, graphics hardware, connection brokers, DMZ components, databases, additional components to make the VDI solution more manageable and efficient, load balancers, and if you want redundancy, you need to do it all twice. Just check out a couple of reference architectures and look at the components, ports and considerations needed to make it all work. It isn't easy anymore. I'm also truly questioning whether VDI is the cheaper solution, not least because a lot of environments are oversized: IT departments go for a bigger environment than needed, just to be sure.

Is security a good reason to implement VDI? It could be, and I'm sure there are use cases for VDI around that topic. In general, though, when you talk about data security, solve that challenge at the data level instead of putting every desktop in the data center. There are great tools out there that can help you label and protect your data. Malware/antivirus protection needs to be done no matter which way you go, and security around app access is pretty much the same in a virtual or physical world.

In the end, it is about apps, security and data. You need to manage those in VDI and in decentralized/physical environments alike. In some cases, management might be easier in a VDI environment, and sometimes in a decentralized/physical one. But do a couple of wins there justify setting up a complex VDI environment where you will most likely lose on user experience?

In my opinion, going back to the physical/decentralized way is (at least partly) the new way of handling end user computing. Of course, you need to combine that with separation of data from the OS, a new way of managing the OS (lightweight, through Enterprise Mobility Management) and your move to the cloud with apps and data. I believe that gives you a better user experience, is easier to set up and comes at a better price. And you should be able to access corporate resources from more devices as well. In a different way, but with the same result: great user experience and productivity.

A big change: from VMware to Microsoft

After 9+ years at VMware, I decided to change companies and moved over to Microsoft. At VMware, I worked as a Sr. Specialist Systems Engineer End User Computing; I will fulfill a similar role at Microsoft as a Technology Solutions Professional Enterprise Mobility + Security. In this role I will cover Azure AD, Azure Information Protection, Identity, Office Workspace and Mobility (Intune).

I’m truly excited to be working for Microsoft and eager to learn more about all it offers around Enterprise Mobility + Security.

Although I love End User Computing in general (everything VMware, Citrix and Microsoft have to offer), I will change the content of Bright-Streams more towards Microsoft technology…obviously. I will keep on making (Microsoft’s) End User Computing technology simple to understand and explain what it can do for you.

Enjoy!

Office 365, Outlook .OST files and Horizon View. The glue: App Volumes

The first time you create an account in Outlook and connect to an Exchange server, it takes a while for Outlook to get ready for use and for you to see your calendar items and emails. During this preparation, an .OST file is created on your machine in C:\Users\user\AppData\Local\Microsoft\Outlook.

So, why is this file created? The .OST file gives you a local copy of all items stored on the Exchange server: emails, calendar items, reminders and so on. This "Cached Exchange Mode" allows you to keep working in Outlook even when you don't have a connection to the Exchange server (offline). A sync with the server happens once your device is connected to the Exchange server again. By default, this option is turned on, but you can choose to turn it off. Besides the offline scenario, you could also say a cached copy improves performance and user experience. Redirecting the .OST file to a share is supported by Microsoft, but with restrictions. If you want to know the basics around .OST and .PST files, visit here. In this blog, I'm only discussing .OST files; .PST files are very popular as well, and you can use the same solution for .PSTs as I describe for .OST files.
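
By the way, .OST files can grow quite large over time. To get a feel for the size of the cached data on a machine, you can simply list the .OST files in the default location mentioned above. A quick Python sketch (Windows only, and nothing more than an illustration):

import os
from pathlib import Path

# Default Outlook data location for the current user
ost_dir = Path(os.environ["LOCALAPPDATA"]) / "Microsoft" / "Outlook"

for ost in ost_dir.glob("*.ost"):
    size_gb = ost.stat().st_size / (1024 ** 3)
    print(f"{ost.name}: {size_gb:.2f} GB")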

In a VDI world, virtual desktops reside very close to the Exchange server, in the data center. And in a VDI concept there is no "offline" way of working as far as the Exchange server is concerned. So, in this case, there is no real reason to turn on Cached Exchange Mode. Especially when you are working with Linked Clone Floating VMs in VMware Horizon View, where the clones get deleted/refreshed after logoff: one thing you don't want is to create that .OST file every time a user logs on to a clean virtual desktop. Best practice is to disable Cached Mode in Horizon View.
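
Disabling Cached Exchange Mode is typically done with a GPO in the golden image. Purely as an illustration, here is a minimal Python sketch of the equivalent per-user registry change, assuming Outlook 2013 (15.0); adjust the version number for other Office releases:

import winreg

# Outlook Cached Exchange Mode policy; normally set via GPO, shown
# here as a direct registry change purely for illustration
key_path = r"Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    # 0 = Cached Exchange Mode off, 1 = on
    winreg.SetValueEx(key, "Enable", 0, winreg.REG_DWORD, 0)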

So far so good, right?! That is what I thought as well. But a new phenomenon is out there: Office 365.

When customers use Office 365, and more specifically its email part, different rules apply. In that case, the Exchange server isn't sitting in the customer's data center right beside the virtual desktops; it might be in a different country. I have now heard several customers say that without an .OST file, the performance of Exchange in Office 365 isn't what it used to be when Exchange was on premises.

So, how do you deal with this new situation? Best practice is to avoid .OST files in a Horizon View environment, but performance requires them when using Exchange in Office 365. What to do? Well, there are 2 options. Well, 2… I'm not sure you want to use option #1, but for a couple of exceptions it might be a valid solution. Below are the 2 solutions:

  1. Use a Full Clone Dedicated VM for users and enable Cached Mode. The .OST gets created, but the VM won't get deleted and a user will always end up on the same VM. For a handful of users this could be an option.
  2. App Volumes: Use the Writable Disk feature of App Volumes for users to store the .OST file. With a writable volume, user data inside a VM can be redirected to this writable volume. This way, you can use .OST files while using Linked Clone Floating desktops and delete/refresh these desktops after use. Next time a user logs on to a clean VM, the user’s writable volume with the .OST gets mounted and the user can use Outlook with the same performance as before.

With App Volumes, you have 2 options to redirect the .OST file:

  1. Use the App Volumes "Profile" template for your writable volumes. This way, a user's whole profile gets redirected to the writable volume. By default, the .OST file is written inside a user's profile.
  2. Use the "UIA" (User Installed Apps) template for your writable disks. This way you don't have to redirect a user's whole profile, just the .OST file. You can use this approach when you use a profile management/user environment management tool like VMware UEM, where UEM saves user settings to a central place. Make sure you move the .OST file outside the user's profile, for example to C:\Outlookdata. Saving the .OST file to a different location is possible via a GPO/ADMX setting (https://technet.microsoft.com/en-us/library/c6f4cad9-c918-420e-bab3-8b49e1885034#ConfigureDefaultOST); a sketch of the equivalent registry change follows below this list.
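
As referenced in option 2 above, here is a minimal sketch of the registry change that the "default location for OST files" GPO/ADMX setting makes, again assuming Outlook 2013 (15.0); in production you would of course configure this via the GPO itself:

import winreg

# "Default location for OST files" policy: new .OST files are created
# in this folder instead of inside the user's profile
key_path = r"Software\Policies\Microsoft\Office\15.0\Outlook"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, "ForceOSTPath", 0,
                      winreg.REG_EXPAND_SZ, r"C:\Outlookdata")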

If you aren’t familiar with App Volumes, App Stacks and/or Writable Volumes, please read this VMware blog.

Horizon (with) View: 5 phase design framework

In the last couple of weeks I have been presenting about VMware Professional Services and, more specifically, the way they handle a View project from start to finish. VMware PSO follows a standardized framework to deliver a successful Horizon View implementation. This framework, in my honest opinion, isn't rocket science, but surprisingly, I hardly ever see the steps in this framework being taken by customers or partners doing a VDI project.

I would like to share the framework with you so you understand how we approach a larger Horizon View project. And no, this approach isn't a secret: there is a public white paper on the VMware website that describes this framework and how we used it with a customer, a car manufacturer, to come up with a 2,100-seat, twin data center Horizon View design and implementation.

The framework consists of 5 phases:

[Figure: the View framework's 5 phases: Assess, Discover, Plan and Design, Build and Test, Optimize]

Assess:

During this phase, one of the steps is to investigate your current environment: you monitor your physical desktop environment to gather information about RAM, CPU, IO and application usage. Tools that do this in a detailed manner are, for example, Lakeside Systrack and Liquidware Labs Stratusphere; a new and easy option is the VMware cloud-hosted Systrack tool. This gives you a good understanding of your employees' resource usage. These tools also produce reports and even a recommendation on how many ESXi servers of a specific configuration you will need when moving to the virtual world. And yes, I agree, the virtual environment will look different from the physical one and resource usage will differ a bit, but at least you will have a good understanding of resource usage and won't be completely in the dark.
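
Tools like Systrack and Stratusphere do this in great depth. Just to make the idea concrete, below is a toy Python sampler (using the psutil library) showing the kind of raw data such an assessment collects; it is an illustration of the principle only, not a replacement for a real assessment tool:

import time
import psutil

# Take a sample every 60 seconds; a real assessment runs for weeks
# and also breaks usage down per application and per user.
while True:
    cpu = psutil.cpu_percent(interval=1)   # average CPU % over 1 second
    mem = psutil.virtual_memory().percent  # RAM in use, as a percentage
    io = psutil.disk_io_counters()         # cumulative disk reads/writes
    print(f"cpu={cpu}% mem={mem}% "
          f"reads={io.read_count} writes={io.write_count}")
    time.sleep(60)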

Also part of this phase is describing/understanding the business needs. This is a very important step, because it will be the justification for the project and, in the end, for the available budget. You can read the business drivers in the car manufacturer use case.

Discover:

One of the steps to take in this phase is to organize workshops. Get employees involved as well: interview them, ask them what they think of the current environment and what could be done differently or better.

Also, build the business use cases: describe which groups are present in the organization and what these groups need to get from an IT point of view.

Another step is to build a small-scale Proof of Concept/Proof of Technology.

Plan and design:

[Figure: the Plan and Design steps, to be followed clockwise]

No real need to explain this phase, but the figure above shows the steps to take within it, and do follow them clockwise. To give an example: I have seen customers buy endpoint devices first, only to find out in the end that they hadn't bought the right ones.

Start with Use Case Definitions. Know which groups and users are within your organization and what kind of desktop and apps you would like them to get.

With the Use Cases in mind, you can easily make a Pool Design, and from there design View Blocks and Pods, vSphere, storage and networking. Last but not least, because you know what end users need to get and how they will get it, you know what they need to access everything: the end points.

Build and Test:

In this phase you build the designed environment. An important step here is load testing: test the environment and see if your design works under full load. If it isn't behaving as expected, this is the time to make adjustments, like adding more resources.

Optimize:

And the last phase is optimizing your newly implemented solution, checking that it is configured according to best practices, and so on. After that, you can bring it into production.

These are the phases we follow during a project. Again, no rocket science, but a very structured way of handling a project. Hopefully you find most of these steps obvious; I have just noticed that in reality, following this framework is not that common.

App Volumes and RES Workspace Manager: exclusions

Recently, I did a Proof of Concept at a customer site with App Volumes. What I didn't know was that this customer was using RES Workspace Manager. Right after we started the PoC, we found out that App Volumes and RES didn't like each other.

Both App Volumes and RES use filter drivers, and these drivers conflict. The symptoms: start menu settings, which come from RES, don't come through, and after mounting an App Volumes app stack, no desktop information flows back to the RES management console.

To solve this, you have to update your app stacks and your template(s): modify the snapvol.cfg file in the app stack/template and add exclusions for RES.

To update your app stacks, use the update button in the App Volumes management interface, provision the app stacks, log on to the provisioning machine, browse to C:\SnapVolumesTemp\MountPoints\MountPointxyz\snapvol.cfg and edit the file.

To update your template, follow the procedure described in this KB article.

Below you will find the RES exclusions for both 32-bit and 64-bit OSes. After adding these exclusions, App Volumes and RES worked together happily.

#---------RES SOFTWARE EXCLUSIONS BEGIN----------
#
exclude_path=\Program Files (x86)\RES Software\Workspace Manager\Data
exclude_path=\Program Files\RES Software\Workspace Manager\Data
exclude_path=%SystemRoot%\System32\drivers\appGuard_amd64.sys
exclude_path=%SystemRoot%\System32\drivers\ImgGuard_amd64.sys
exclude_path=%SystemRoot%\System32\drivers\netGuard_amd64.sys
exclude_path=%SystemRoot%\System32\drivers\RegGuard_amd64.sys
exclude_path=%LOCALAPPDATA%\Res
exclude_path=%SystemRoot%\System32\spool
exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Wow6432Node\RES\Workspace Manager
exclude_registry=\REGISTRY\MACHINE\SOFTWARE\RES\Workspace Manager
exclude_registry=\REGISTRY\USER\SOFTWARE\RES\Workspace Manager
exclude_process_path=\Program Files (x86)\RES Software\Workspace Manager\Data
exclude_process_path=\Program Files\RES Software\Workspace Manager\Data
exclude_process_path=\Program Files (x86)\RES Software\Workspace Manager\svc
exclude_process_path=\Program Files\RES Software\Workspace Manager\svc
exclude_process_name=pwgrids.exe
exclude_process_name=spoolsv.exe
#
#---------RES SOFTWARE EXCLUSIONS END------------


Project Enzo: The new, fast, scalable and hybrid workspace solution

I have to admit, just the name already made me curious. But after reading about Enzo, watching the videos and talking to colleagues, my curiosity went through the roof, because everything about Enzo is bold!

Just a couple of statements:

  • From scratch, have the first desktop up and running in an hour,
  • From 1 to 2,000 desktops in 20 minutes,
  • Create 100 desktops in under a minute,
  • No more downtime for app, OS and infrastructure/system updates,
  • Desktops can be placed on premises, in the Cloud or both, and move them back and forth,
  • It will cost less than a cup of coffee…..

So, what is Project Enzo?

Enzo is a new way of building, delivering and managing virtual workspaces (apps and desktops) with a unified, single pane of glass management interface. Administrators can manage these workspaces on premises and in the Cloud, and move the apps and desktops between the 2.

Which components make up Enzo?

[Figure: the Enzo architecture layers]

The ground layer is "Enzo Ready Infrastructure". This can be EVO:RAIL, EVO:RACK or another Hyper-Converged Infrastructure appliance from VMware partners that is Enzo-enabled. The intelligence responsible for the setup, orchestration and automation comes from VMware Smart Node technology: a pre-configured virtual appliance sitting on the appliances.

The second layer is the desktop layer. Because of new technologies like instant cloning (I will write a blog about that soon), Enzo will be capable of getting desktops up and running in seconds. Not only will cloning go incredibly fast, you will most likely also save on VMs, because over-provisioning will be reduced. Just-in-Time desktops means VMs are created the moment users demand them, whereas nowadays VMs are mostly provisioned up front. With "JIT" desktops, other solutions like App Volumes and VMware User Environment Manager come into play to deliver apps and personalize the desktop.

The 3rd layer is the management layer, called the Enzo Control Plane. This web-based portal will be delivered to customers as a cloud-based service, hosted on VMware's vCloud Air platform. Via this portal you can set up your Enzo environment, deliver apps and desktops, and monitor all components. And because the portal is hosted, you can connect your private environment to public cloud environments and move apps and desktops from one to the other.

A public beta will come out this summer. Visit http://www.vmwhorizonair.com/enzo, where you can register for Early Access, get more info on Project Enzo and watch a video and webinar about it.

More info to come about this amazing project. Stay tuned!

Atlantis Computing in a VMware View environment

A couple of years ago, I had the pleasure of being introduced to Atlantis Computing. Atlantis would help solve the storage IO issues customers were facing when implementing VDI: basically, their solution cached storage IO in memory so that disks would no longer be the bottleneck. Nowadays, Atlantis does way more than that and calls it "storage optimization".

Atlantis ILIO, the product name, comes as a virtual appliance and runs on the VMware vSphere hypervisor; you need an appliance on every ESX host. Traditionally, that appliance sits between the virtual machines and storage (local and/or shared). The appliance uses physical ESX memory for its operation: it caches storage IO and also does inline deduplication. By doing that, it boosts VDI performance, lets you run more VMs per storage device and removes the need for a high-performance storage device. With ILIO Diskless VDI, you don't even need physical storage anymore: the VMs run in memory.
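
To make the inline deduplication part tangible: the idea is that identical blocks, think of 100 clones of the same Windows image, are stored only once and referenced by their content hash. Below is a toy Python sketch of that principle; it is an illustration only, not how ILIO is actually implemented:

import hashlib

store = {}   # content hash -> block data (the physical "datastore")
volume = []  # ordered list of block hashes (the logical "disk")

def write_block(data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)  # an identical block is stored only once
    volume.append(digest)

# 100 logical writes of the same OS block
for _ in range(100):
    write_block(b"windows-system-block")

print(f"logical blocks: {len(volume)}, physical blocks: {len(store)}")
# -> logical blocks: 100, physical blocks: 1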

The ILIO solution gives you a couple of possibilities:

  1. When you use ESX servers connected to shared storage for VDI, you can lower the specs of your SAN: you need less performance and less disk capacity from your array, so you can pick a more mainstream array instead of an SSD-based one. Eventually this comes down to a lower price per virtual desktop. Also, the "fear" around storage with VDI becomes less of an issue: VDI doesn't need to be difficult anymore. This solution is a good fit for stateful/dedicated full clone desktops.
  2. More and more customers are running stateless desktops on local ESX storage (so no shared storage array is needed for those VMs). For storage they often choose SSDs or FusionIO for performance. To save on capacity, you can use ILIO purely for deduplication, but also consider the ILIO Diskless VDI option: no storage at all. All VMs run from ESX memory; the ILIO appliance takes ESX memory and uses it as a datastore.

This week I also heard the following suggestion: use local storage for stateful/dedicated full clone virtual desktops, use ILIO for performance boosting and deduplication, and use VMware Mirage as a backup tool, in case an ESX host fails and to back up local data and apps. Interesting thought, isn't it?!

Bottom line: there are several solutions that can absolutely help with the VDI storage IO issues. They all have a different price and purpose, and some may even give you additional advantages. Take a good look at these solutions and choose the one that suits you best and gives you the lowest price per desktop.

Book published: VMware ThinApp 4.7 Essentials

The first book on VMware ThinApp 4.7 has been published! Peter Bjork, Specialist Systems Engineer ThinApp at VMware, is the author of "VMware ThinApp 4.7 Essentials".

What you will learn from this book:

  • Concepts behind Application Virtualization
  • ThinApp architecture and vocabulary
  • Application Linking
  • Application packaging process and best practices
  • Various methods to deploy ThinApp packages
  • How to update your ThinApp project
  • ThinApp 4.7 design and implementation best practices
  • ThinApp troubleshooting

For more details on this great book, and to order it, use the following link: http://www.packtpub.com/vmware-thinapp-4-7-essentials/book