
Thin clients vs. thick clients for desktop virtualization


Find out when you should recommend thin clients or PCs (aka thick clients) to your desktop virtualization customers.


It’s time to get your client on board with desktop virtualization, but what type of desktop hardware should you recommend? Can your customer continue to use its current conventional PCs, also known as thick clients, or is it time to migrate the company to a thin-client — or even a zero-client — computing platform?


There are countless ways to match each project’s scope and budget with a platform. Reviewing the characteristics of each principal desktop type will help you and your customers weigh the thin client vs. thick client question as it applies to them.

Desktop virtualization computing requirements

With conventional PCs, each individual computer processes applications, which requires significant local processing power, memory and disk storage. Desktop virtualization, specifically virtual desktop infrastructure (VDI) and other server-hosted architectures, instead shifts the bulk of the operating system and application processing to a central server.

Consequently, the desktop hardware no longer needs to be a processing powerhouse sporting the latest CPU and massive quantities of memory.

“People have used desktop virtualization as a way of getting around extensive hardware requirements on the PC end,” said Brien Posey, an independent technology consultant in Rock Hill, S.C. “That way they can get away with low-end or even legacy hardware.”

Instead, the desktop device is typically relegated to the role of what is called a dumb terminal. Dumb terminals simply pass keyboard and mouse data to the server, then display screen renderings of the desktop and applications the server returns. That means each virtual desktop can function with a 1 GHz processor and 1 GB of RAM, or sometimes even less. These thin client platforms also forgo local disk storage, replacing hundreds of GB of disk space with 512 MB to 1 GB of flash memory — less storage than a common thumb drive. Gigabit Ethernet connectivity is usually recommended, especially when using the thin client for graphics-intensive work that requires more network bandwidth.

Using thick clients for desktop virtualization

Given the low computing requirements, any thick client in service today should be able to serve as a desktop virtualization endpoint. No special hardware modifications are required, and the PC can keep running its current operating system.

“I’m just using my PC essentially as a browser,” said Barb Goldworm, president and chief analyst at Focus Consulting, an industry research and analysis firm in Boulder, Colo.

The wildcards here are usually network connectivity and display performance. Even though display-over-IP protocols, like Microsoft’s Remote Desktop Protocol, have improved tremendously, some PCs may need upgrades to support Gigabit Ethernet for streaming media or other graphics applications.

Aside from adequate connectivity, the main requirement is that each thick client supports the display protocol the desktop server uses. Protocols other than RDP, like Citrix’s Independent Computing Architecture, may require additional client software for Windows or Linux.

Redeploying existing thick clients delivers real cost savings for desktop virtualization customers because there’s no upfront investment in desktop hardware. But it’s important to remember that the computing requirements for desktop virtualization are often subjective: they can vary depending on the solutions provider and the customer’s needs and preferences. There is no software utility or other tool that can tell you whether a PC is suited for desktop virtualization or not.

Hot Spot Tutorial: Desktop virtualization

Learn more about desktop virtualization in our Hot Spot Tutorial for solutions providers.

Solution providers may base their evaluations on the relative age of the PC. For example, if a PC is new enough to be covered under warranty, then it’s probably more than suitable for desktop virtualization. Systems that are old enough to be out of warranty may still be suitable, but you should scrutinize their specifications and support costs more closely before making an upgrade decision.

Using thin clients for desktop virtualization

Solutions providers often redeploy conventional desktop PCs for desktop virtualization, but they are increasingly turning to purpose-built thin-client or zero-client endpoints. Thin clients use little local software, often just Windows Embedded CE 6.0 or another stripped-down OS, to manage the initial startup and connection to the desktop server. Examples of thin clients include Neoware devices, Hewlett-Packard’s t5500, t5600 and t5700 families, and the Sun Ray clients from Sun Microsystems.

By comparison, zero clients are scarcely computers at all. The Pano device from Pano Logic Inc., for example, has no CPU, memory, storage or software. The device is merely an appliance that connects a keyboard, mouse, display, audio and USB peripherals over the LAN to an instance of Windows XP or Windows Vista running on a virtual desktop server. Otherwise, thin clients and zero clients share all of the desktop virtualization benefits.

Before choosing to go with either thin clients or thick clients, solutions providers must also consider the need for expansion devices. Older desktop virtualization technologies had trouble supporting expansion ports, but today’s offerings can handle a rich array of ports. For example, HP’s t5630 sports four USB 2.0 ports. And the t5730 has eight USB 2.0 ports, one VGA input, one DVI-D port, two PS/2 (mouse/keyboard) ports, an RJ-45 (modem/telephone) port and a serial port, along with an optional PCI expansion module. Even the Pano device and other zero-client endpoints provide three USB 2.0 ports for expansion.

Migrating to standard virtual desktop platforms

It’s important to note that choosing a standard thick-, thin- or zero-client endpoint has absolutely no effect on desktop virtualization. The servers are doing all the work, so the question of thin clients vs. thick clients is really moot from a technology standpoint.

But desktop virtualization is mainly about reducing risk and simplifying the customer’s environment, and the move to standardize on a single desktop platform does offer related benefits. Purchasing endpoint devices in volume saves money. When there is only one make and model of platform, with only one OS and patch set, desktop support is simpler. And technicians can then focus on the nuances of one product, rather than dealing with possibly dozens of different kinds of endpoints on the network.

The choice to standardize depends on the customer’s needs and budget, but the best tactic is often to migrate in phases — perhaps coinciding with the expiration of expensive service contracts or other end-of-life timing. A phased approach minimizes disruption to the end users and lets the customer distribute the acquisition costs across several fiscal quarters or even years.

Next Steps

Find out what to watch out for when using zero clients






What disk image should I use with VirtualBox: VDI, VMDK, or VHD?


The latest versions of VirtualBox support several formats for virtual disks, but they forgot to provide a comparison between them.

Now, I am interested in a recommendation or comparison that considers the following:

  • be able to use dynamic sizing
  • be able to have snapshots
  • be able to move my virtual machine to another OS or even another free virtualization solution with minimal effort (probably something that would run fine on Ubuntu).
  • performance

asked Nov 23 ’11 at 0:28

VirtualBox has full support for VDI, VMDK, and VHD, plus support for Parallels Version 2 (HDD) images (not newer versions).

Answering Your Considerations

  • be able to use dynamic sizing

VDI, VMDK, and VHD all support dynamically allocated sizing. VMDK has an additional capability of splitting the storage file into files of less than 2 GB each, which is useful if your file system has a small file-size limit.
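For illustration, both variants can be created from the command line with the VBoxManage tool that ships with VirtualBox. Below is a minimal sketch (a thin Python wrapper; it assumes VBoxManage is on your PATH, and the file names and sizes are placeholder values):

```python
import subprocess

def create_disk(path, size_mb, fmt="VDI", variant="Standard"):
    """Create a virtual disk with VBoxManage; 'Standard' means dynamically allocated."""
    subprocess.run(
        ["VBoxManage", "createmedium", "disk",
         "--filename", path,
         "--size", str(size_mb),   # maximum size in MB; space is only allocated as the guest writes
         "--format", fmt,
         "--variant", variant],
        check=True,
    )

# A dynamically allocated VDI that can grow to 20 GB
create_disk("example.vdi", 20480)

# A VMDK split into pieces smaller than 2 GB, for file systems with small file-size limits
create_disk("example.vmdk", 20480, fmt="VMDK", variant="Split2G")
```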

  • be able to have snapshots

All four formats support snapshots on VirtualBox.

  • be able to move my virtual machine to another OS or even another free virtualization solution with minimal effort (probably something that would run fine on Ubuntu).

VDI is the native format of VirtualBox. I didn’t find any other software that supports this format.

VMDK is developed by and for VMware, but Sun xVM, QEMU, VirtualBox, SUSE Studio, and .NET DiscUtils also support it. (This format might be the most apt for you because you want virtualization software that runs fine on Ubuntu.)

VHD is the native format of Microsoft Virtual PC. This is a format that is popular with Microsoft products.

I don’t know much about HDD. Judging from the Parallels site, Parallels is a Mac OS X product and probably isn’t suitable for you, especially considering that VirtualBox only supports an old version of the HDD format.
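If portability ends up being the deciding factor and you already have a disk in one format, VirtualBox can convert between formats. Here is a minimal sketch (again a Python wrapper around VBoxManage; the file names are placeholders), which clones a native VDI into a VMDK that VMware and other tools can read:

```python
import subprocess

def convert_disk(src, dst, dst_format="VMDK"):
    """Clone an existing virtual disk into another format (e.g. VDI -> VMDK)."""
    subprocess.run(
        ["VBoxManage", "clonemedium", "disk", src, dst, "--format", dst_format],
        check=True,
    )

convert_disk("example.vdi", "example.vmdk")
```

The clone receives a new UUID, so you attach it to the VM in place of the original disk afterwards.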

The format should not affect performance, or at least, performance impacts are negligible.

The factors that influence performance are:

  • your physical device limitations (much more noticeable on a hard disk drive than on a solid-state drive)
  • expanding a dynamically allocated virtual disk drive (write operations are slower while the virtual disk expands, but once it’s large enough, expansion happens less often)
  • virtualization technology (hardware vs. software; hardware virtualization helps VirtualBox and improves the speed of virtual operating systems)
  • the fact that you are running a virtual operating system; performance is always slower than running an operating system on the host because of the virtualization overhead.

answered Jun 22 ’12 at 20:33

I always use VDI, as it is the native format of VirtualBox; however, using a VMDK (VMware’s format) will increase compatibility with other virtual machine software.

VirtualBox will run fine on Ubuntu, so if the goal is Windows/Ubuntu interoperability, VDI would be a perfectly valid choice.

Both formats will fulfill your requirements.

As for the other two, VHD is a Microsoft-developed format and HDD is the Parallels format; both have limited cross-platform support, so I wouldn’t recommend them.

Mpack explains a key performance difference between VHD and VDI here:

Having recently studied the VHD format, I would expect there to be at least a small difference in VDI’s favor, most noticeable when you are comparing like with like, i.e. an optimized VDI vs. an optimized VHD. The reason is that the dynamic VHD format has these “bitmap” sectors scattered throughout the disk. Every time you modify a sector inside a block, these bitmap blocks may need to be updated and written as well, involving extra seeks, reads and writes. These bitmap sectors also have to be skipped over when reading consecutive clusters from a drive image, meaning more seeks. The VDI format doesn’t have these overheads, especially if the VDI has been optimized (blocks on the virtual disk sorted into LBA order).

All of my comments apply to the dynamic VHD format vs. dynamic VDI. Performance tests on fixed-size virtual disks are pointless, since both formats are then the same (just a plain image of a disk); they just have different headers.

answered May 8 ’14 at 14:20

I don’t know whether using VMDK would let you transparently run a virtual machine created in VirtualBox in VMware. It might. However, a more universal option might be to use the VirtualBox File/Export function to create an “Open Virtualization Appliance” .ova file that can then be imported into VMware. With that approach, you can port to any virtualization system that supports .ova without caring what disk image format you use in VirtualBox.

If you need to export from the same VM at regular intervals, e.g. every day, that could be a pain. But if you only move to a different technology occasionally, it should be fine.

If you have a .vdi file already, you can test whether this works without having to create a new virtual machine: export it to an .ova, then try importing it with VMware.
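For reference, the export step can also be scripted. A hedged sketch (the VM name and output path are placeholders; importing the resulting .ova into VMware is done with VMware’s own tools, e.g. its GUI importer or ovftool):

```python
import subprocess

def export_ova(vm_name, output_path):
    """Export a VirtualBox VM to an Open Virtualization Appliance (.ova) file."""
    subprocess.run(
        ["VBoxManage", "export", vm_name, "--output", output_path],
        check=True,
    )

export_ova("MyTestVM", "MyTestVM.ova")
# To bring it back into VirtualBox later: VBoxManage import MyTestVM.ova
```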

answered Jul 3 ’12 at 21:22

A good reason for me to use VMDK is that VirtualBox (at least up to v4.1) with the VDI format has a tendency, over time, to fill the entire allocated disk space, even though the guest’s actual disk usage is still much lower. With VirtualBox using VMDK disks, this seems to be less of a problem.

But I’m talking about years of uptime; this might not be a problem many people encounter.

answered Jan 30 ’15 at 15:13

It’s more related to the fragmentation of the guest file system than to the format itself. – Enzo Jun 3 ’16 at 15:19

It depends on how you plan to use virtual disk as well. Not every VM wants a single partition on a single disk.

VDI seems to have more options (when used with VirtualBox), but as soon as you take VirtualBox out of the picture, support for VDI becomes somewhat shaky (as of late 2014).

For instance, my solutions need maximum cross-platform support. Mounting a VDI (e.g. as a loopback device) on Linux or Windows 7 is harder and buggier than you might expect. It’s almost as if VDI has too many features, making it difficult to build fully conforming utilities that can operate on it.

VMDK is just less painful, IMHO, when you want it to work with any VM on any workstation, when you want to clone it three times to other systems on the network at the same time, and when you want to pry it open without launching a VM instance.

Even though I use VirtualBox 90% of the time, those few times when my disks become inaccessible in certain workflows have led me to favor VMDK for pluggable/shared filesystems.

answered Jan 8 ’15 at 4:33

Disk image files reside on the host system and are seen by the guest systems as hard disks of a certain geometry. When a guest operating system reads from or writes to a hard disk, VirtualBox redirects the request to the image file.

Like a physical disk, a virtual disk has a size (capacity), which must be specified when the image file is created. As opposed to a physical disk, however, VirtualBox allows you to expand an image file after creation, even if it already contains data. VirtualBox supports four variants of disk image files:

VDI: Normally, VirtualBox uses its own container format for guest hard disks — Virtual Disk Image (VDI) files. In particular, this format will be used when you create a new virtual machine with a new disk.

VMDK: VirtualBox also fully supports the popular and open VMDK container format that is used by many other virtualization products, in particular by VMware.

VHD: VirtualBox also fully supports the VHD format used by Microsoft.

Image files of Parallels version 2 (HDD format) are also supported. For lack of documentation of the format, newer versions (3 and 4) are not supported. You can, however, convert such image files to version 2 format using tools provided by Parallels.
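Expanding an image after creation, as described above, is a single VBoxManage call. A minimal sketch (file name and size are placeholders; resizing grows only the image, so the partition inside the guest still has to be extended separately, and this works for VDI and VHD but not for every format):

```python
import subprocess

def resize_disk(path, new_size_mb):
    """Grow a dynamically allocated VDI/VHD image to a new maximum size."""
    subprocess.run(
        ["VBoxManage", "modifymedium", "disk", path, "--resize", str(new_size_mb)],
        check=True,
    )

# Grow the image's capacity to 40 GB, then extend the partition inside the guest OS
resize_disk("example.vdi", 40960)
```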

answered Nov 28 ’15 at 18:23

It looks like using VDI makes it possible to trim the disk file down to its actual size; see “VirtualBox and SSDs: TRIM command support”.
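For context, discard/TRIM pass-through is enabled when the VDI is attached to a SATA controller. A hedged sketch of the relevant VBoxManage call (VM name, controller name and disk path are placeholders, and the guest OS must still issue TRIM itself, e.g. with fstrim on Linux):

```python
import subprocess

def attach_with_trim(vm_name, controller, port, medium):
    """Attach a VDI with discard enabled so guest TRIM commands can shrink the image file."""
    subprocess.run(
        ["VBoxManage", "storageattach", vm_name,
         "--storagectl", controller,    # e.g. the VM's "SATA" controller
         "--port", str(port),
         "--device", "0",
         "--type", "hdd",
         "--medium", medium,
         "--discard", "on",             # pass the guest's TRIM/discard through to the VDI file
         "--nonrotational", "on"],      # present the disk to the guest as an SSD
        check=True,
    )

attach_with_trim("MyTestVM", "SATA", 0, "example.vdi")
```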

answered Nov 19 ’16 at 0:23

While accurate, it’s a bit lackluster for a question that asks about the general differences between those formats, don’t you think? – Seth Nov 21 ’16 at 11:02






VDI assessment tool


AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages.

We looked at many editors to see which was the most useful for AutoIt. We found SciTE, saw its potential, wrote a customized lexer for syntax highlighting and folding, and created a special installer called SciTE4AutoIt3.

GImageX is a graphical user interface for the ImageX tool from the Windows Assessment and Deployment Kit (Windows ADK). ImageX is used to capture and apply WIM images for Windows deployments. GImageX uses the supported Microsoft WIMGAPI API for working with WIM files.

While AutoIt can be used to achieve these goals, many IT administrators are reluctant to install and learn a full scripting language when they only require a specific function. To this end, AutoIt Tools consists of small, self-contained executables that can each be used for a specific purpose.

AutoIt v3.3.14.0 has been released. This release fixes a number of reported issues. Thanks to everyone involved in creating this release and everyone who continues to download and support AutoIt. AutoIt v3.3.14.0 – 10th July, 2015. Download it here. Complete list of.

AutoIt v3.3.12.0 has been released. This release fixes a number of reported issues. Thanks to everyone involved in creating this release and everyone who continues to download and support AutoIt. AutoIt v3.3.12.0 – 1st June, 2014. Download it here. Complete list of.

AutoIt v3.3.10.0 has been released. There are too many changes and fixes to list here, but the highlights include: 90+ bugs fixes and additions to the main AutoIt executable. 90+ bug fixes and additions to the user defined functions (UDFs). Much improved Unicode.

GImageX v2.1.0 has been released. The new version has been recompiled and tested for use with the Windows Assessment and Deployment Kit (Windows ADK) for Windows 8.1. Download it.

AutoIt v3.3.8.1 has been released. It contains a small bug fix for users of the built-in editor which prevented the editor from running/compiling scripts correctly on x64 systems and also when installed in a non-default location. Download it here. Changes in this.

After a long (too long!) development cycle, AutoIt v3.3.8.0 has been released. Download it here. Changes in this version: There have also been a number of changes to the COM handling which may break existing scripts. Be sure to check out the Script Breaking.

A new tool has been added to the site: Logoff Screensaver. This is a screensaver that allows you to logoff or shutdown a machine when the screensaver is active. You can specify the idle time before logoff/shutdown happens and also use a passthru screensaver. You can.

AutoIt v3.3.6.1 released (16th April, 2010) (History) Recent changes: Better Unicode Send() support. Better UTF8 file support, 64-bit and Unicode support. Windows Vista UAC support and Regular Expressions. AutoIt executables and setup files are now digitally signed.






SteelCentral AppResponse


“We went from being virtually blind to having a hawk’s-eye view of the network; able to see the whole picture yet capable of zeroing in immediately on details for troubleshooting, capacity analysis, and planning.” – Jean-Francois Bustarret, senior network architect, e-TF1

What is SteelCentral AppResponse?

Riverbed AppResponse joins together advanced application and transaction insight, comprehensive end-user experience monitoring, and deep network intelligence into a single appliance to provide total visibility into your application performance problems. It helps you discover where and when bottlenecks are occurring so that you can troubleshoot problems faster, and it streamlines workflows and reporting to enable cross-team collaboration to put an end to finger pointing.

Using high-speed packet acquisition and multi-stage analytic processing, AppResponse delivers rich network and application insights for traditional and web applications, VoIP and video quality monitoring, database performance analysis, and Citrix-delivered apps.

Features and benefits of SteelCentral AppResponse

“The users are happier now, so the IT staff doesn’t have to crawl under the floorboards when they go for coffee.” – Peter Toft, IT Department Manager, Icopal

Even split seconds of slow page times can have a huge impact on the revenue for an enterprise. When you have thousands of users per hour accessing your applications through web pages, you need to quickly answer:

  • What pages were slow?
  • When were they slow?
  • For whom were they slow?

AppResponse pinpoints mission-critical web application problems quickly, letting you resolve them before end users even notice.

Identify the root-cause of application performance problems faster with SteelCentral AppResponse

With SteelCentral AppResponse you will quickly see whether a performance problem is rooted in the network or the servers. View response time for all users and transactions by application, user business group, and location. Understand end-user experience for both SteelHead-optimized and non-optimized web and SaaS applications.

“As the global network manager, I didn’t want to spend my time analyzing network traffic; [SteelCentral AppResponse] does the work for us and puts our team at ease so we can focus on planning strategic IT projects that impact our business.” – Global Network Manager, Coty, Inc.

“I was skeptical at first, but I have to say that the [SteelCentral AppResponse] box definitely ‘does what it says on the tin’. Its ability to quickly drill down to the cause of performance issues, its amazing ease of use, and the clarity of its presentation of so many useful metrics have made it our primary tool for application performance management.” – UK Network Manager, IT Centre UK, ALSTOM

“The [SteelCentral AppResponse] solution has given us a scientific approach to managing the site, making the hard data we need easily available for analysis and thus enabling us to forecast, act, and react much more appropriately. And now, when my boss comes down and asks me ‘what’s up with the network,’ I can give him a complete answer, in terms of the impact on our business, in ten minutes or less.” – Senior Network Engineer, Hotwire.com

“[SteelCentral AppResponse] saves us more time than any other management tool we use. It’s like having another person on staff.” – Senior Network and Systems Analyst, Colgate University

The AppResponse ROI Calculator shows you how you can reduce the cost of delivering applications to users.

SteelCentral AppResponse integrates application and network performance management to deliver:

  • Deep packet inspection and application recognition
  • End-user experience monitoring for web applications at the user transaction level
  • End-user experience monitoring of SteelHead optimized SaaS and on-premise web applications
  • Response time decomposition
  • Database transaction analysis
  • Per-user Citrix XenApp transaction analysis
  • Continuous VoIP and video monitoring
  • Rich network intelligence



AppResponse monitors end-user experience for SteelHead-optimized enterprise web and SaaS applications and helps you understand the benefit of SteelHead WAN optimization in terms of the effect on end user experience.

Compatible Riverbed solutions to solve application performance issues

  • When SteelCentral AppResponse reports which server is causing the delay, deploy SteelCentral AppInternals to conduct a full code-level application diagnosis.
  • If the network seems to be the issue, SteelCentral NetSensor displays performance data all along the layer 2/3-network path.
  • Seamlessly drill into SteelCentral Transaction Analyzer for in-depth analysis of individual multi-tier transactions.
  • AppResponse leverages Riverbed SteelHead appliances in the branch, data center and cloud to provide visibility into end-user experience for optimized enterprise web and SaaS applications.

Citrix Monitoring for SteelCentral AppResponse

Citrix offers significant benefits to IT organizations. But due to its unique architecture, the lack of a complete view of user transactions can complicate performance troubleshooting. The CX-Tracer module uses industry-leading analytics in SteelCentral AppResponse to pinpoint the root cause of performance problems for Citrix-hosted applications.

CX-Tracer automatically correlates front-end user sessions to their back-end counterparts, enabling end-to-end analysis of individual Citrix user sessions. Additionally, CX-Tracer complements and extends your existing Citrix monitoring solutions such as Citrix EdgeSight.

Database Performance Monitoring for SteelCentral AppResponse

With the Database Performance module you can identify the impact of a database on end-to-end application performance. By monitoring database performance at the transaction level, the module can identify the particular SQL statement or database call responsible for application delay and equip your database team with actionable information.

Because it uses passive monitoring for zero overhead on database operations, no database logging is required. You get a unified view of the end-user experience and database performance, to complement and extend your investment in database-health monitoring tools.

Unified Communications Monitoring for SteelCentral AppResponse

With the Unified Communications module you can monitor and report on live VoIP and video calls. Manage voice, video, and data on the same network, and proactively resolve communication issues by using real-time and historical data on both application performance and call quality.

With passive speech-quality analysis, you can monitor call quality and resolve issues before they affect end-users. Prioritize problem resolution and set meaningful SLAs based on the effect of call quality on various areas of your business, and then easily troubleshoot with real-time, web-based dashboards for speedy resolution.

Shark module for SteelCentral AppResponse

Add Shark module to your SteelCentral™ AppResponse license to create a single appliance that provides rich network intelligence, end user experience monitoring and transaction analysis. Accelerate problem resolution with streamlined workflows and deeper network insight, letting you get to the right level of information needed to solve problems quickly and easily.








VDI 101: Persistent vs. Non-persistent

A conversation about desktop virtualisation will invariably turn to the topic of persistent vs. non-persistent. Anyone new to VDI or Server Based Computing (SBC) may need persistent and non-persistent defined in context. This is a discussion I have on a semi-regular basis, so for easy reference I thought I would put a discussion of this topic into an article.

I’ll avoid talking about any one particular desktop virtualization solution and instead discuss this topic as it applies to all environments.

Persistency

I think it was Harry Labana from whom I originally heard the statement “persistency is a measure of time”. This is absolutely true: what is the time between deploying and re-deploying that desktop? How long before that PC is rebuilt because of an unrecoverable error?

If Windows is re-installed or the PC retired, the user must migrate to a new instance of Windows. If Windows is running on a persistent virtual machine, how long do you let that instance of Windows run before the size of the virtual hard disk becomes unmanageable? (Perhaps you need dedupe?)

If we take persistency to mean a Windows install is immutable, we live with a false sense of security. The same applies to any general-purpose OS: manage the data and configuration as though that install will fail tomorrow. If you can run all non-persistent desktops, you’re way ahead.

Defining Terms

Let me first list the various terms that you might hear when discussing this topic:

  • Persistent, stateful, full clone: a Windows instance is persistent because we want to protect that Windows install. Rebuilding it from scratch can take time and effort. A physical PC or server is persistent because there’s no abstraction of the OS from the hardware.
  • Non-persistent, stateless, pooled, shared, linked-clone: a non-persistent virtual desktop is often destroyed at user logoff, reboot or shutdown. A Remote Desktop Services (RDS) environment can also be considered non-persistent, even though the underlying Windows instance may be persistent.

So many words to describe essentially the same thing. For clarity’s sake, let’s stick with persistent and non-persistent for the rest of this article.

Who Are We Talking To?

When discussing persistent and non-persistent, context is key: whose perspective are we using, the administrator’s (admin, engineer, architect, etc.) or the user’s (end user, IT manager, CIO, etc.)? These terms may have different meanings depending on the audience; non-persistent may sound scary to the uninitiated. Tell a user that their desktop is non-persistent and see what reaction you get.

An administrator, on the other hand, can choose either for his (or her) toolbox when delivering virtual desktops; however, non-persistent may require a rethink compared to traditional desktop management.

Regardless of what type of desktop a user receives, the user requires persistency of their data; some things are non-negotiable.

Ultimately we need to tailor the conversation to the audience and ensure we explain these concepts succinctly.

What Makes the Modern Desktop?

Data aside, we first need to establish whether a user requires a persistent state across sessions, before deciding on a way to manage the desktops. To do that, we should consider each of the major components of the modern Windows desktop:

  • Application data: where does the application store data? If it’s a web-based application or stores data in a database, then it’s unlikely that data also ends up on the user’s desktop.
  • User data: ideally, user data (e.g. documents) is not stored on the desktop or is at least synchronised to a remote location.
  • User preferences: do user preferences or their profiles need to persist across sessions? If you’re delivering just applications (and not desktops), do those applications have preferences that need to be saved? Could application settings be delivered as policies instead?
  • Applications: what is your application delivery strategy? VDI/RDS has historically been hard to manage as a result of application requirements, making persistent desktops the easy route.
  • User applications: do you need to provide an environment in which users can install applications? Do you have developers or IT pros in-house who often need administrative rights to get their jobs done?

Virtual Desktops

A user connects to an individual virtual machine running Windows (or perhaps soon this could be Linux). Virtual desktops usually run a desktop version of Windows, but this can also be Windows Server (Server VDI).

A virtual desktop can be delivered from a persistent virtual machine, and the user will typically connect to that same virtual machine each session.

Virtual desktops can also be provided from a pool of virtual machines that might be deleted or refreshed within a short amount of time. If the user connects to a pool of desktops, they could connect to any desktop in that pool (i.e. the assignment is random).

Remote Desktop Services

Users connect to a shared Windows instance running Windows Server (individually known as a Remote Desktop Session Host).

From the administrator’s point of view, Remote Desktop Session Hosts (RDSH) are managed as persistent virtual machines (or as Windows directly on a physical host), but they could also be managed as non-persistent VMs.

RDSH servers provide a pool (or farm) of Windows instances from which users receive their desktops or applications; a user could connect to any server in the pool. As such, they represent a non-persistent desktop, regardless of whether the underlying RDSH server is persistent or non-persistent.

Considerations

With a persistent desktop, each time the user connects to that desktop, their applications, data and user profile/preferences will be intact. No other management is required (that doesn’t mean management should be ignored, though), and no change in process from physical desktop management is needed (other than the introduction of a hypervisor).

On the other hand, if you can ensure that, even with non-persistent desktops, each time the user connects their applications are installed, their data is abstracted from the desktop (using folder redirection or file sync solutions) and their profile (and application preferences) is available at logon, the illusion of a persistent environment will be presented.

Delivering a persistent user environment on top of non-persistent desktops will take some effort and may require third-party tools to achieve the goal of running 100% non-persistent desktops.

Here’s a short breakdown of the various differences between, and considerations for, persistent and non-persistent desktops:




