Azure Information Protection - part 1: Document+email protection overview

In one of my earlier posts, I wrote about VDI and whether the concept is dead. One of my points was that VDI was, and is, used for content security reasons: place all your desktops virtually in a central data center, and the assumption is that, automatically, the content is protected as well. I have heard this use case many times, but I believe there is a better approach to content protection: truly protect the content itself, your documents and emails. And besides true protection, make your users aware of what kind of content they are dealing with. Make them think twice before they send content to others, for example.

Azure Information Protection is a cloud-based solution that helps you classify, label and protect documents and emails. This can be done automatically (rules set by administrators), manually (by users) or both, where users are given recommendations. Optionally, you can monitor and respond, which means you can track and trace content and revoke access.

By applying labels, you add classifications to files and emails. This is done by adding metadata in clear text to the files themselves and to email headers.
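
Because the metadata is in clear text, you can inspect it yourself. As a minimal sketch in Python (standard library only): an Office file is a ZIP package, and custom document properties live in docProps/custom.xml. The MSIP_Label_ name prefix is an assumption on my side, based on the product documentation:

    import zipfile
    import xml.etree.ElementTree as ET

    # Namespace used by docProps/custom.xml in the Office Open XML format.
    CUSTOM_NS = "{http://schemas.openxmlformats.org/officeDocument/2006/custom-properties}"

    def read_custom_properties(path):
        """Return the custom document properties of a .docx/.xlsx/.pptx file."""
        props = {}
        with zipfile.ZipFile(path) as package:
            with package.open("docProps/custom.xml") as f:
                root = ET.parse(f).getroot()
        for prop in root.findall(f"{CUSTOM_NS}property"):
            # The first child element holds the typed value (vt:lpwstr etc.).
            props[prop.get("name")] = prop[0].text
        return props

    # Label metadata shows up as plain, readable properties; the name prefix
    # "MSIP_Label_" is an assumption, not confirmed by this post.
    for name, value in read_custom_properties("example.docx").items():
        if name.startswith("MSIP_Label_"):
            print(name, "=", value)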

So, there are 3 components to Azure Information Protection:

  1. Classification/labeling: as an organisation, you must first think about your content, your documents and emails. There needs to be an organisation-wide policy on how to classify/label content. Call these sensitivity levels, for example: Personal, General, Confidential, etc. You need to describe which content gets which classification/label, and this policy is then implemented in Azure Information Protection. I sometimes call this the awareness phase: as an organisation, you need to think about your documents/emails, become aware of their sensitivity and translate that into labels. As a user, because of the policy, you become aware of the guidelines the organisation has set for handling specific content, and more aware of its sensitivity. Besides coming up with classifications/labels, as an organisation you also need to think about the consequences of each classification/label. Does a label require protection? That’s component 2,
  2. Protection: if you decide as an organisation that a specific classification/label needs protection, you will need to define what kind of protection: encryption, access control, expiration date, etc. That’s a second policy you need to think about. Do realize that, as far as I see it, not all classifications/labels will get protection in most cases. As an example: documents with the label “General” aren’t protected and can be sent to everyone, opened by everyone, etc. Documents labeled “Confidential” might have a protection policy: only shared internally, only viewed and not edited, etc. When a protection policy is in place, attached to a classification/label, users can track and trace the document and optionally revoke access to it. Component 3,
  3. Monitor and respond: when a document is classified/labeled and protected, a user can monitor the usage of that document when he/she shares it. Via the Azure Information Protection client, a user can see who has opened the document and from where. That user can also revoke access to the document.

The beauty of Azure Information Protection is that it can classify/label and protect data no matter where the documents live: file shares, OneDrive, SharePoint, etc. It is very intuitive and easy for users to work with through simple buttons. I will cover what Azure Information Protection looks like from an admin perspective and from a user perspective, along with use cases, in upcoming posts. Stay tuned. If you want to read more, click here.

Windows 10+Azure AD: register or join? Turn on Auto Enrollment to Intune?

In Windows 10, under Settings - Accounts - Access work or school, you have a couple of actions to pick from: setting up a work or school account, joining the Windows 10 device to Azure Active Directory, or joining it to a local Active Directory. Personally, I know the local AD and I understand Azure AD, but what is setting up a work or school account? How is it different from an Azure AD join? And when would I use one or the other?

Let’s start with setting up a work or school account. With this option you register your Windows 10 device in Azure AD; this is not an Azure AD join. The use case behind it is Bring Your Own Device: personally owned Windows devices that are also used for work. By registering your personal Windows 10 device in AAD (Azure AD), you enjoy the benefits of single sign-on to your company’s cloud apps, seamless multi-factor authentication, and access to on-premises apps via the Web Application Proxy and ADFS Device Registration.

Device registration is possible for Windows, iOS and Android devices. In fact, registration is the only option for iOS and Android devices, since they cannot be joined to AAD.

In Azure (the Azure portal - Active Directory - Applications - Intune), you can turn on “Auto Enrollment” to Intune. This means that when Windows 10 devices are registered by users, those devices are automatically enrolled in Intune. You can set this up for all users, none of them, or by group. After enrollment, IT can manage the device by applying device and application policies. My question is: do you want BYOD/personal devices to be enrolled automatically into Intune after registration? Thinking out loud, I would say no, not automatically. Maybe registering the device, so that the user can benefit from SSO, is more than enough for that user. And thinking about the upcoming Windows 10 Creators Update with its fancy Mobile Application Management features, maybe that is enough and there is no need to manage the device at all. I can imagine that a user doesn’t want IT to manage his/her own personal device. This could change if that user would like to access more company resources, like email. In that case, conditional access could require the user to enroll the device in order to get access to mail; it’s up to the user to do that or not. Do realize that with iOS and Android devices, there is no choice between registering and (automatically) enrolling: it is always both. You enroll those devices and they are then registered in AAD.


So, now the Azure AD join. Here, the use case is corporate-owned devices, and again, only Windows devices can be joined. Besides the SSO and multi-factor authentication benefits you also get with registered devices, a join adds a couple of other features: phone/PIN sign-in and BitLocker key storage in AAD, to name a couple. Would you enroll those devices into Intune automatically? In this case I would say yes. The devices are corporate, users have access to many resources, so you want to manage and protect these devices and users. Automatically enrolling the devices and deploying policies is a great way of doing that.
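
If you want to see how registered versus joined devices show up in your directory, here is a minimal sketch that lists devices through the Microsoft Graph API. It assumes you already obtained an access token with permission to read devices (token acquisition is omitted) and that the trustType property behaves as documented for the Graph device resource:

    import requests

    ACCESS_TOKEN = "<access token for https://graph.microsoft.com>"

    resp = requests.get(
        "https://graph.microsoft.com/v1.0/devices"
        "?$select=displayName,operatingSystem,trustType",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()

    # trustType distinguishes the scenarios discussed above:
    #   "Workplace" = registered (BYOD), "AzureAd" = Azure AD joined,
    #   "ServerAd"  = local AD joined.
    for device in resp.json()["value"]:
        print(device["displayName"], device["operatingSystem"], device["trustType"])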

I won’t discuss the local AD join because it’s been around for ages and is well known. I always like to say it is the “old school” way of Windows device management: AD with SCCM and GPOs. The new way, in my opinion, is AAD join/registration with Enterprise Mobility Management (Intune). However, know that it is possible to register a local AD-joined Windows device in AAD as well.

Some background resources can be found here:

  • Differences between AAD join/registration here
  • Automatic enrollment
  • Managing AAD joined devices with Intune
  • Windows 10 AAD join

Credits to Pieter Wigleven and Bjorn Axell, both Microsoft colleagues, for helping me out!

Windows 10 Creators Update: Office Mobile App Management happiness!

In my previous post, I discussed one of the great possibilities in Intune: managing the mobile Microsoft Office apps on Android and iOS. I truly like this feature, and immediately I was thinking: what if… what if this were possible on Windows 10 as well?! What I totally missed was an official blog post from Microsoft discussing the Windows 10 Creators Update. Among many cool updates, there will be a great new feature: Mobile App Management for the Office apps on Windows 10. All the features I discussed in the previous post for iOS and Android will apply to Windows 10 as well. You won’t need to enroll your personal Windows machine anymore to access corporate resources/data in a secure way. The MAM policies will give you a great experience setting up the apps and accessing emails and data, while providing security for corporate data. Do check out the clip.


Microsoft Intune+mobile Office apps = Greatness!

Microsoft Office: Word, PowerPoint, Outlook, Excel, OneNote, OneDrive, etc. Who doesn’t know these applications? Most of you know the apps from a corporate point of view, and I think it is safe to say the Office suite is the corporate standard. As we know, there is another world besides the laptop/desktop/Windows-based one: the world of mobile devices. And besides desktop/laptop versus mobile, we also have a corporate versus private world. To make it even more exciting, the mixing of all these worlds is happening all around us.

Wouldn’t it be great to use the same productivity apps you are used to across all these different devices? What maybe isn’t known to many people is the fact that Microsoft has developed many apps for iOS and Android: you can use the complete Office suite on your mobile devices. Find the Microsoft apps on iTunes here. So, if you want the same experience on your mobile devices, or even on your Apple Mac, as on your corporate device, you can have it. The Office suite is developed for all platforms.

Great, users can have the same experience on Windows, Mac and mobile devices. But when these mobile devices are used professionally, IT would like to manage at least the productivity apps. It is great that you can access and consume corporate data using the Office apps, but you want to secure that data as well.

To provide this security, other MDM/MAM (Mobile Device Management/Mobile Application Management) vendors have created their own productivity apps: their own email clients and content clients that preview Microsoft Word, Excel and PowerPoint documents. Those apps are not what end users know and like. Also, developing Office/productivity tools isn’t the core business of these MDM/MAM vendors.

With Microsoft Intune, it is possible to let users use the apps they know and like, and to secure the Office apps in multiple ways:

  1. Traditionally, you can enroll your device in Intune and manage the device and the Office apps: MDM + MAM,
  2. It is also possible to use the apps and secure them without enrollment: MAM only,
  3. If you are currently using another MDM tool, you can still use #2 by using Intune for the MAM part.

Bullet 1 is pretty clear: you enroll the device, and device and app policies are pushed by Intune. With #2 and #3, the application policies are pushed after users sign in within the Office apps on iOS and Android with their Microsoft Azure/Intune accounts.

So, what can be configured using MDM-MAM or MAM only?

  1. You can allow/deny copy/paste from the Office apps to other native apps,
  2. You can allow copy/paste from native apps to the Office apps,
  3. You can set a PIN on all apps for another level of security,
  4. You can specify that links need to open in the Managed Browser,
  5. You can prohibit “save as”, to prevent users from saving a corporate document to another, unmanaged location (see the sketch below).

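As an illustration of what such an app protection policy could look like, here is a minimal sketch that creates an iOS app protection policy through the Microsoft Graph API. The endpoint and property names follow the Graph iosManagedAppProtection resource as I understand it; token acquisition is omitted, and the values are assumptions for the sketch, not a recommended baseline.

    import requests

    ACCESS_TOKEN = "<access token for https://graph.microsoft.com>"

    # Each field maps to one of the settings listed above.
    policy = {
        "displayName": "Office apps - MAM only",
        "allowedOutboundClipboardSharingLevel": "managedAppsWithPasteIn",  # 1, 2
        "allowedInboundDataTransferSources": "managedApps",                # 2
        "pinRequired": True,                        # 3: PIN on the managed apps
        "managedBrowserToOpenLinksRequired": True,  # 4: links open in Managed Browser
        "saveAsBlocked": True,                      # 5: prohibit "save as"
    }

    resp = requests.post(
        "https://graph.microsoft.com/v1.0/deviceAppManagement/iosManagedAppProtections",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=policy,
    )
    resp.raise_for_status()
    print("Created policy:", resp.json()["id"])

After creating the policy, you would still assign it to the Office apps and to user groups; I left that out to keep the sketch short.
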
With Intune and the Microsoft productivity apps, users work with familiar apps that are built for productivity, while IT can secure access to and from these apps and protect corporate data. Check out this Microsoft blog for more details and screenshots. Also, check out this website to see more apps that can be managed by Intune.

Is VDI dead?

Just Google the title and you will find many articles with thoughts/opinions that VDI is dying or already dead. Well, dead is not the case in my opinion, but dying it is for sure. And to set the scene, I’m not sure DaaS is the answer either. I have been promoting VDI for years and years, but since early last year I have been struggling with the concept. It has become so complex and costly. To be clear, I do believe in the solutions VMware and others make: they are mature and deliver an OS and apps in a decent way. I just think VDI isn’t the right concept anymore.

VDI was meant to make the corporate desktop cheaper and easier to manage and, on top of that, to make it easier to access corporate resources from a broad range of devices. Security was also a reason to go VDI: to keep information inside (your) data center.

Again, I believe VDI solutions are very mature and offer a decent experience. I also believe there are use cases where VDI is a great fit (maybe for a small group of contractors). But is VDI the way to go for the majority of users in your organization? That I doubt. Let’s be honest and objective about VDI: it is an artificial solution. It is an unnatural way of using the OS and legacy apps, making them accessible over the network and consumed remotely through a display protocol. This, and all the components you need to set it up, affects both cost and user experience. Maximize a full-HD video and it won’t be as crisp as it would be locally on a laptop or any other device. Normal capabilities, like a communications solution such as Skype, need extra attention or aren’t fully supported. Multimedia needs extra attention and likely extra hardware, like GPUs. My statement: a local experience will always be the best experience, no matter how mature a VDI solution might be.

Now the other side: the solution itself. VDI has become very complex. Take a look at all the components you need to set up to create a VDI environment: central hardware like compute and storage, graphics hardware, connection brokers, DMZ components, databases, additional components to make the VDI solution easier to manage and more efficient, load balancers, and if you want redundancy, you need to do it all twice. Just check out a couple of reference architectures and look at the components, ports and considerations required to make it all work. It isn’t easy anymore. I’m also truly questioning whether VDI is the cheaper solution, partly because a lot of environments are oversized: IT departments go for a bigger environment than needed, just to be sure.

Is security a good reason to implement VDI? Well, it could be, and I’m sure there are use cases for VDI around that topic. In general, though, when you talk about data security, solve that challenge at the data level instead of putting every desktop in the data center. There are great tools out there that can help you label and protect your data. Malware/anti-virus protection needs to be done no matter which way you go. Also, security around app access is pretty much the same in a virtual or physical world.

In the end, it is about apps, security and data. You need to manage those in VDI and in decentralized/physical environments alike. In some cases, management might be easier in a VDI environment, and sometimes in a decentralized/physical one. But do a couple of wins there justify setting up a complex VDI environment where, most likely, you will lose on user experience?

In my opinion, going back to the physical/decentralized way is (partly) the new way of handling end user computing. Of course, you need to combine that with separation of data from the OS, a new way of managing the OS (lightweight, through Enterprise Mobility Management) and your move to the cloud with apps/data. I believe that will give you a better user experience, is easier to set up and comes at a better price. And you should be able to access corporate resources from more devices as well. In a different way, but with the same result: a great user experience and productivity.

A big change: from VMware to Microsoft

After 9+ years at VMware, I decided to change companies and moved over to Microsoft. At VMware, I worked as a Sr. Specialist Systems Engineer End User Computing. I will fulfill a similar role at Microsoft as a Technology Solutions Professional Enterprise Mobility + Security. In this role I will cover Azure AD, Azure Information Protection, Identity, Office Workspace and Mobility (Intune).

I’m truly excited to be working for Microsoft and eager to learn more about all it offers around Enterprise Mobility + Security.

Although I love End User Computing in general (everything VMware, Citrix and Microsoft have to offer), I will change the content of Bright-Streams more towards Microsoft technology…obviously. I will keep on making (Microsoft’s) End User Computing technology simple to understand and explain what it can do for you.

Enjoy!

Office 365, Outlook .OST files, Horizon View - the glue: App Volumes

The first time you create an account in Outlook and connect to an Exchange server, it takes a while for Outlook to get ready for use and for you to see your calendar items and emails. During this preparation, an .OST file is created on your machine in C:\Users\user\AppData\Local\Microsoft\Outlook.

So, why is this file created? The .OST file gives you a local copy of all items stored on the Exchange server: emails, calendar items, reminders, etc. This “Cached Exchange Mode” allows you to keep working in Outlook even when you don’t have a connection to the Exchange server (offline). A sync with the server happens when your device reconnects to the Exchange server. By default, this option is turned on, but you can choose to turn it off. Besides the offline scenario, you could also say a cached copy improves performance and user experience. Redirecting the .OST file to a share is supported by Microsoft, but with restrictions. If you want to know the basics around .OST and .PST files, visit here. In this blog, I’m only discussing .OST files; .PST files are very popular as well, and you can use the same solution for .PSTs as I describe for .OST files.

In a VDI world, the virtual desktops reside very close to the Exchange server in the data center, and in a VDI concept there is no “offline” way of working with respect to the Exchange server. So, in this case, there is no real reason to turn on Cached Exchange Mode. Especially when you are working with Linked Clone Floating VMs in VMware Horizon View, where the clones get deleted/refreshed after logoff, one thing you don’t want is to create that .OST file every time a user logs on to a clean virtual desktop. Best practice is to disable Cached Exchange Mode in Horizon View.
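
Disabling Cached Exchange Mode is typically done through the Outlook ADMX/GPO templates, which behind the scenes write a registry value. As a minimal sketch, assuming Outlook 2016 (the 16.0 key) and the “Cached Mode” policy value (both assumptions on my side), this is roughly what that policy does:

    import winreg

    # Sketch: force Cached Exchange Mode off via the per-user policy key the
    # Outlook ADMX template writes. Assumes Outlook 2016 ("16.0"); adjust the
    # version segment for other releases.
    KEY_PATH = r"Software\Policies\Microsoft\Office\16.0\Outlook\Cached Mode"

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        # 0 = Cached Exchange Mode off (online mode), 1 = on.
        winreg.SetValueEx(key, "Enable", 0, winreg.REG_DWORD, 0)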

So far so good, right?! That is what I thought as well. But a new phenomenon is out there: Office 365.

When customers use Office 365, and more specifically also use the email part of it, different rules apply. In that case, the Exchange server isn’t sitting in the customer’s data center, right beside the virtual desktops; it might be in a different country. I have now heard several customers say that without an .OST file, the performance of Exchange in Office 365 isn’t what it used to be when Exchange was on-premises.

So, how do you deal with this new situation? Best practice is to avoid .OST files in a Horizon View environment, but performance requires them when using Exchange in Office 365. What to do? Well, there are 2 options. Well, 2… I’m not sure you want to use option #1, but for a couple of exceptions it might be a valid solution. Below are the 2 solutions:

  1. Use a Full Clone Dedicated VM for users and enable Cached Exchange Mode. The .OST gets created, but the VM won’t get deleted and a user will always end up on the same VM. For a handful of users this could be an option.
  2. App Volumes: use the Writable Volumes feature of App Volumes to store the .OST file. With a writable volume, user data inside a VM can be redirected to that volume. This way, you can use .OST files while using Linked Clone Floating desktops and still delete/refresh those desktops after use. The next time a user logs on to a clean VM, the user’s writable volume with the .OST is mounted and the user can use Outlook with the same performance as before.

With App Volumes you have 2 options to redirect the .OST file:

  1. Use the App Volumes “Profile” template for your writable volumes. This way, a user’s profile gets redirected to the writable volume. By default, the .OST file is written inside the user’s profile.
  2. Use the “UIA” (User Installed Apps) template for your writable volumes. This way you don’t have to redirect a user’s whole profile, just the .OST file. You can use this approach when you use a profile management/user environment management tool like VMware UEM, which saves users’ settings to a central place. Make sure you move the .OST file outside the user’s profile; for example, write it to C:\Outlookdata. Saving the .OST file to a different location is possible via a GPO/ADMX setting (see the sketch below): https://technet.microsoft.com/en-us/library/c6f4cad9-c918-420e-bab3-8b49e1885034#ConfigureDefaultOST
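
That ADMX setting also comes down to a registry value. As an illustration only, here is a minimal sketch assuming Outlook 2016 and the ForceOSTPath value behind the “Default location for OST files” policy; it only affects .OST files created after it is set:

    import winreg

    # Sketch: point newly created .OST files to C:\Outlookdata via the policy
    # value written by the "Default location for OST files" ADMX setting.
    # Assumes Outlook 2016 ("16.0").
    KEY_PATH = r"Software\Policies\Microsoft\Office\16.0\Outlook"

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, "ForceOSTPath", 0, winreg.REG_EXPAND_SZ,
                          r"C:\Outlookdata")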

If you aren’t familiar with App Volumes, AppStacks and/or Writable Volumes, please read this VMware blog.

Horizon (with) View: 5 phase design framework

In the last couple of weeks I have been presenting about VMware Professional Services and, more specifically, the way they handle a View project from start to finish. VMware PSO follows a standardized framework to deliver a successful Horizon View implementation. This framework, in my honest opinion, isn’t rocket science, but surprisingly, I hardly see the steps in this framework being taken by customers or partners when they are doing a VDI project.

I would like to share the framework with you so you understand how we approach a larger Horizon View project. And no, this approach isn’t a secret. In fact, there is a public white paper available on the VMware website which talks about this framework and how we used it with a customer, a car manufacturer, to come up with a 2,100-seat, twin data center Horizon View design and implementation.

The framework consists of 5 phases:

[Image: the View framework]


Assess:

During this phase, one of the steps is to investigate your current environment: you monitor your physical desktop environment to gather information about RAM, CPU, IO and application usage. Tools to do this in a detailed manner are, for example, Lakeside SysTrack and Liquidware Labs Stratusphere. A new and easy option is the VMware cloud-hosted SysTrack tool. This will give you a good understanding of your employees’ resource usage. These tools will also produce reports and even a recommendation on how many ESXi servers of a specific configuration you will need when moving to the virtual world. And yes, I agree, the virtual environment will look different from the physical one and resource usage will be a bit different, but at least you will have a good understanding of resource usage and won’t be completely in the dark.

This phase also includes describing/understanding the business needs. This is a very important step, because it will be the justification for the project and, in the end, for the available budget. You can read the business drivers in the car manufacturer use case.

Discover:

One of the steps to take in this phase is to organize workshops. Get employees involved as well: interview them, ask them what they think about the current environment and what could be done differently, or better.

Also, build the business use cases. Describe which groups are present in the organization and what these groups need to get from an IT point of view.

Another step is to build a small-scale Proof of Concept/Proof of Technology.

Plan and design:


No real need to explain this phase, but do take the steps below in order. To give you an example: I have seen customers buy end point devices first, only to find out in the end that they didn’t buy the right end points.

Start with Use Case Definitions. Know which groups and users are within your organization and what kind of desktop and apps you would like them to get.

With the Use Cases in mind, you can easily make a Pool Design, and with that in mind you can design the View Blocks and Pods, and design vSphere, storage and networking. Last but not least, because you know what end users need to get and how they will get it, you know what they need to access everything: the end points.

Build and Test:

In this phase you build the designed environment. An important step here is load testing: test the environment and see if your design holds up under full load. If it isn’t behaving as expected, this is the time to make adjustments, like adding more resources.

Optimize:

And the last phase is optimizing your newly implemented solution and checking that it follows best practices, etc. After that, you can bring it into production.

These are the phases we follow during a project. Again, no rocket science, but a very structured way of handling a project. Hopefully you think a lot of these steps are obvious; I have just noticed that, in reality, this framework is not that commonly followed.