1. Preparing the Lab

Planning application
deployment requires a lab environment for application repackaging.
Within an organization, different teams that work on deployment (image
engineering, application packaging, and so on) can and often should
share a single lab environment. Sharing a lab enables teams to share
deliverables and integration-test their work with other components more
easily. In a shared lab environment, however, each team must have its
own workspace on the file server and dedicated computers on which to
work. Although
the lab must have access to the Internet, it should be insulated from
the production network. However, separating the lab from the production
network is not a rigid requirement if you don't install server features,
such as Dynamic Host Configuration Protocol (DHCP), that could conflict
with production services. Application
repackaging does not require that the lab mirror the production
network. The lab must provide storage space for application source files
and repackaged applications. The following list describes the recommended requirements for a lab used to repackage applications:

- A lab server configured as follows:
  - Windows Server 2008 or Windows Server 2008 R2
  - An Active Directory Domain Services domain
  - DHCP services
  - Domain Name System (DNS) services
  - Windows Internet Naming Service (WINS) services (optional)
  - Microsoft SQL Server 2005 or SQL Server 2008
  - Microsoft Virtual Server 2005, Microsoft Virtual PC 2007, Microsoft Windows Virtual PC, or Microsoft Hyper-V
- Lab test accounts (for standard users and an administrator)
- Network hardware to provide connectivity (consider the routing and bandwidth so that moving large files doesn't affect users on the production network)
- Internet access (for downloading updates, files, and so on)
- Test computers that accurately reflect production computers
- Source files for all applications to be tested and repackaged
- Software repackaging tools

Note: MDT 2010 provides prescriptive guidance for building and using a deployment lab. For more information, see the "Getting Started Guide" in MDT 2010.

2. Planning Deployment

Creating an application inventory is the main task you must complete when planning application deployment. You use the inventory to prioritize applications—determining
which are not compatible with Windows 7, which you must repackage for
automatic installation, and so on. The Application Compatibility Toolkit
(ACT) provides tools for collecting an application inventory based on
the production network. After creating an application inventory, you must take the following planning steps for each application in the list:

Priorities
Prioritize the application inventory so that you can focus on the most
important applications first. Focus on the applications that help your
organization provide products and services to customers. While you are
prioritizing the inventory, you might discover duplicate applications
(different versions of the same application or different applications
fulfilling the same purpose) that you can eliminate. You may also
discover many applications that were used for a short-term project and
are no longer required.

Categories
Categorize each application in the inventory as a core application or a
supplemental application. A core application is common to most
computers (virus scanners, management agents, and so on), whereas a
supplemental application is not.

Installation method
Determine how to install the application automatically. Whether the
application is a core or supplemental application, you achieve the best
results by completely automating the installation. You cannot automate
the installation of some legacy applications; you must repackage them instead. In that case, the best time to choose a repackaging technology is while planning deployment.

Determine responsibility
Determine who owns and is responsible for the installation and support
of each application. Does IT own the application or does the user's
organization own it?

Subject matter experts
You will not have the in-depth understanding of all applications in the
organization that you will need to repackage them all. Therefore, for
each application, identify a subject matter expert (SME) who can help
you make important decisions. A good SME is not necessarily a highly
technical person. A good SME is the person most familiar with an
application, its history in the organization, how the organization uses
it, where to find the media, and so on.

Configuration
Based on feedback from each application's SME, document the desired
configuration of each application. You can capture the desired
configuration in transforms that you create for Windows Installer–based
applications or within packages that you create when repackaging older
applications. Configuring older applications is usually as easy as
importing Registration Entries (.reg) files on the destination computer
after deployment.
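
For example, a minimal Windows Script Host (VBScript) sketch like the following could apply a documented configuration on the destination computer. The UNC paths and file names are hypothetical placeholders, not part of any real package:

    ' Apply a documented configuration on the destination computer.
    ' All paths below are hypothetical placeholders.
    Set shell = CreateObject("WScript.Shell")

    ' Windows Installer-based application: install silently with a
    ' transform that captures the desired configuration.
    ' (0 = hidden window, True = wait for the command to finish.)
    shell.Run "msiexec.exe /i ""\\labserver\packages\App.msi""" & _
              " TRANSFORMS=""\\labserver\packages\App.mst"" /qn", 0, True

    ' Older application: import a .reg file after deployment
    ' (the /s switch suppresses the confirmation prompt).
    shell.Run "regedit.exe /s ""\\labserver\packages\App-settings.reg""", 0, True
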
ACT
5.5 provides data organization features that supersede the application
inventory templates in earlier versions of MDT. With ACT 5.5, you can
categorize applications in a number of ways: by priority, risk, department,
type, vendor, complexity, and so on. You can also create your own
categories for organizing the application inventory.

After
creating an application inventory, the next step is to prioritize the
list. Prioritizing the application inventory is not a task that you
perform unilaterally. Instead, you will want to involve other team
members, management, and user representatives in the review of
priorities. The priority levels you choose to use might include the following:

High
High-priority applications are most likely mission-critical or core
applications. These are applications that are pervasive in the
organization or are complex and must be addressed first. Examples of
high-priority applications include virus scanners, management agents,
Microsoft Office, and so on.

Medium
Medium-priority applications are nice to have but not essential. These
are applications that are not as pervasive or complex as high-priority
applications. For example, a custom mailing-list program might be a
medium-priority application, because you can replicate the functionality
in another application. To test whether an application is indeed a
medium priority, answer this question: What's the worst that would
happen if all the high-priority applications are deployed, but not this
application? If you foresee no major consequences, the application is a
medium priority.

Low
Low-priority applications are applications that deserve no attention in
the process. Examples of low-priority applications are duplicate
applications, applications that users have brought from home and
installed themselves, and applications that are no longer in use. When
prioritizing an application as low, record the reason for that status in
case you must defend the decision later.

Prioritizing
the application list helps you focus on the applications in an orderly
fashion. Within each priority, you can also rank applications by order
of importance. Ranking applications in an organization that uses thousands of applications is a daunting task, however. Instead, you might want
to rank only the high-priority applications or repeat the prioritization
process with only the high-priority applications.

After
prioritizing the application list, you must categorize each high- and
medium-priority application. You can drop the low-priority applications
from the list, as you have no intention of addressing them. The
following categories help you determine the best way to deploy an
application:

Core applications
Core applications are applications common to most of the computers in
the organization (typically 80 percent or more) or applications that
must be available the first time you start a computer after installing
the operating system. For example, virus scanners and security software
are usually core applications because they must run the first time you
start the computer. Mail clients are core applications because they are
common to all users and computers. The following list contains specific
examples of what most organizations might consider core applications:

- Adobe Acrobat Reader
- Corporate screen savers
- Database drivers and connectivity software
- Macromedia Flash Player
- Macromedia Shockwave
- Microsoft Office
- Network and client management software, such as OpenManage clients
- Terminal emulation applications, such as TN3270
- Various antivirus packages
- Various Windows Internet Explorer plug-ins
- Various Microsoft Office Outlook plug-ins
Supplemental applications
Supplemental applications are applications that aren't core
applications. These are applications that are not common to most
computers in the organization (department-specific applications) and
aren't required when you first start the computer after installing a new
operating system image. Examples of supplemental applications include
applications that are department specific, such as accounting software,
or role specific, such as dictation software. The following list
contains examples of what most organizations consider supplemental
applications:

- Microsoft Data Analyzer 3.5
- SQL Server 2005 Client Tools
- Microsoft Visual Studio 2005 and Visual Studio 2008
- Various Computer-Aided Design (CAD) applications
- Various Enterprise Resource Planning (ERP) systems
For each high- and medium-priority application, you must determine the best way to install it. For each, consider the following:

Automatic installation
Most applications provide a way to install automatically. For example,
if the application is a Windows Installer package file (with the .msi
file extension), you can install the application automatically; a typical silent installation command appears in the sketch after this list.

Repackaged application
If an application does not provide a way to install automatically, you
can repackage it to automate and customize installation by using one of
the available packaging technologies. Repackaging applications is a complex process and is quite often the most costly and tedious part of any deployment project. Make the decision to repackage applications only after exhausting other possibilities. Repackaging requires technical experience; if your organization lacks it, you can use a third-party company to repackage the application for you.

Screen scraping
You can automate most applications that have interactive installers by using a tool that simulates keystrokes, such as Windows Script Host; the sketch after this list shows the idea. Understand that this
method is more of a hack than a polished solution, but sometimes you're
left with no other choice. Occasionally, the installation procedure may
require the user to use the mouse or otherwise perform some complex task
that cannot be automated easily. In these circumstances, automating the
installation process may not be feasible.
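
The following VBScript sketch illustrates both approaches. The package paths, window title, and keystroke sequence are hypothetical; verify each application's actual silent-install switches and installer prompts before relying on a script like this:

    ' Silent installation of a Windows Installer package.
    ' Paths are hypothetical placeholders.
    Set shell = CreateObject("WScript.Shell")
    exitCode = shell.Run("msiexec.exe /i ""\\server\packages\App.msi""" & _
                         " /qn /l*v %TEMP%\App-install.log", 0, True)
    WScript.Echo "msiexec returned " & exitCode

    ' Screen-scraping fallback for a hypothetical legacy installer.
    ' SendKeys is fragile: window titles and timings vary by computer.
    shell.Run "\\server\packages\legacy-setup.exe", 1, False
    WScript.Sleep 5000                 ' wait for the setup window
    If shell.AppActivate("Legacy Application Setup") Then
        shell.SendKeys "{ENTER}"       ' accept the welcome page
        WScript.Sleep 2000
        shell.SendKeys "{ENTER}"       ' accept the default location
    End If
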
For
each application, record the installation method. Does the application
already support automated installation? If so, record the command
required to install the application. Are you required to repackage the
application? If so, record the packaging technology you'll use and the
command required to install the application. If you will use screen scraping to install the application, indicate that decision in the application inventory.

In
a small organization with a few applications, you might know them all
very well. In a large organization with thousands of applications, you
will know very few of them well enough to make good decisions about
repackaging applications. Therefore, for each application you must
identify a SME. This SME should be an expert with the application,
having the most experience with it. In other words, each application's
SME will have insight into how the organization installs, configures,
and uses that application. The SME will know the application's history
and where to find the application's source media. Record the name and
e-mail alias of each application's SME in the application inventory.

During planning, with the SME's help, you should review each application and record the following:

- The location of the installation media. Often, the SME is the best source of information about the location of the source media, such as CDs, disks, and so on.
- Settings that differ from the application's defaults and that are required to deploy the application in the desired configuration.
- External connections. For example, does the application require a connection to a database, mainframe, Web site, or other application server?
- Constraints associated with the application.
- Deployment compatibility. Is the application compatible with disk imaging and Sysprep? Is the application compatible with 32-bit systems? 64-bit systems?
- Application dependencies. Does the application depend on any patches or other applications?
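
A completed record might look like the following hypothetical example, which only illustrates the level of detail worth capturing:

    Application:          Contoso Timesheet 4.2 (hypothetical)
    Priority / Category:  High / Core
    SME:                  (name and e-mail alias)
    Install command:      msiexec.exe /i \\server\packages\Timesheet.msi /qn
    Configuration:        Timesheet.mst transform; Timesheet-settings.reg
    External connections: Timesheet database on SQL Server
    Constraints:          None recorded
    Compatibility:        Compatible with Sysprep; 32- and 64-bit clients
    Dependencies:         Requires .NET Framework 3.5
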
3. Choosing a Deployment Strategy

Most
companies share a common goal: create a corporate-standard desktop
configuration based on a common image for each operating system version.
They want to apply a common image to any desktop in any region at any
time and then customize that image quickly to provide services to users.

In
reality, most organizations build and maintain many images—sometimes
even hundreds of images. By making technical and support compromises and
disciplined hardware purchases, and by using advanced scripting
techniques, some organizations have reduced the number of images they
maintain to between one and three. These organizations tend to have the
sophisticated software distribution infrastructures necessary to deploy
applications—often before first use—and keep them updated. Business
requirements usually drive the need to reduce the number of images that
an organization maintains. Of course, the primary business requirement
is to reduce ownership costs. The following list describes costs
associated with building, maintaining, and deploying disk images:

Development costs
Development costs include creating a well-engineered image to lower
future support costs and improve security and reliability. They also
include creating a predictable work environment for maximum productivity
balanced with flexibility. Higher levels of automation lower
development costs.

Test costs
Test costs include testing time and labor costs for the standard image,
the applications that might reside inside it, and those applications
applied after deployment. Test costs also include the development time
required to stabilize disk images.

Storage costs
Storage costs include storage of the deployment
shares, disk images, migration data, and backup images. Storage costs
can be significant, depending on the number of disk images, number of
computers in each deployment run, and so on.

Network costs
Network costs include moving disk images to deployment shares and to desktops.

As
the size of image files increases, costs increase. Large images have
more updating, testing, distribution, network, and storage costs
associated with them. Even though you update only a small portion of the
image, you must distribute the entire file.

Thick images are monolithic images that contain core applications and other files.
Part of the image-development process is installing core applications
prior to capturing the disk image, as shown in Figure 1. To date, most organizations that use disk imaging to deploy operating systems are building thick images.

The
advantage of thick images is deployment speed and simplicity. You
create a disk image that contains core applications and thus have only a
single step to deploy the disk image and core applications to the
destination computer. Thick images also can be less costly to develop,
as advanced scripting techniques are not often required to build them.
In fact, you can build thick images by using MDT 2010 with little or no
scripting work. Finally, in thick images, core applications are
available on first start.

The disadvantages of thick images are higher maintenance, storage, and network costs. For example, updating a thick image with a new version of an application requires you to rebuild, retest, and redistribute the image. Thick images also require more storage and consume more network bandwidth in a short span of time when transferred.

The
key to reducing image count, size, and cost is compromise. The more you
put in an image, the bigger and less universal it becomes. Big images are
less attractive to deploy over a network, more difficult to update
regularly, more difficult to test, and more expensive to store. By
compromising on what you include in images, you reduce the number you
maintain and you reduce their size. Ideally, you build and maintain a
single, worldwide image that you customize post-deployment.

A key compromise is choosing to build thin images. Thin images contain few if any core applications; you install applications separately from the disk image, as shown in Figure 2.
Installing the applications separately from the image usually takes more time at the desktop and can transfer more total bytes over the network, but the transfers are spread over a longer period than a single large image transfer. You can mitigate the network transfer by using trickle-down technology that many software distribution infrastructures provide, such as Background Intelligent Transfer Service (BITS); the sketch at the end of this discussion shows one way to stage a package with BITS.

Thin
images have many advantages. First, they cost less to build, maintain,
and test. Second, network and storage costs associated with the disk
image are lower because the image file is physically smaller. The
primary disadvantage of thin images is that postinstallation
configuration can be more complex to develop initially, but this is
offset by the reduction in costs to build successive images. Deploying
applications outside the disk image often requires scripting and usually
requires a software distribution infrastructure. Another disadvantage
of thin images is that core applications aren't available on first
start, even though their availability might be necessary in high-security scenarios.

If you
choose to build thin images that do not include applications, you
should have a systems-management infrastructure, such as Microsoft
System Center Configuration Manager 2007, in place to deploy
applications. To use a thin
image strategy, you will use this infrastructure to deploy applications
after installing the thin image. You can also use this infrastructure
for other postinstallation configuration tasks, such as customizing
operating system settings.
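
As a rough illustration of the BITS-based staging mentioned earlier, the following VBScript sketch shells out to the bitsadmin tool to trickle a package down to the local disk before installing it. The share and package names are hypothetical placeholders, and on newer systems BITS is usually driven through PowerShell instead:

    ' Stage a package with BITS at low priority, then install it.
    ' Paths are hypothetical placeholders.
    Set shell = CreateObject("WScript.Shell")
    shell.Run "cmd.exe /c md %SystemDrive%\Staging", 0, True

    ' /transfer creates a job, waits for completion, and returns;
    ' LOW priority keeps the copy from competing with user traffic.
    shell.Run "bitsadmin.exe /transfer StageApp /download /priority low " & _
              "\\server\packages\App.msi %SystemDrive%\Staging\App.msi", 0, True

    ' Install from the locally staged copy.
    shell.Run "msiexec.exe /i %SystemDrive%\Staging\App.msi /qn", 0, True
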
Hybrid images mix thin- and thick-image strategies. In a hybrid image, you configure
the disk image to install applications on first run, giving the illusion
of a thick image but installing the applications from a network source.
Hybrid images have most of the advantages of thin images. However, they aren't as complex to develop and do not require a software distribution infrastructure. They do require longer installation times, though, which can raise initial deployment costs.

An
alternative is to build one-off thick images from a thin image. In this
case, you build a reference thin image. After the thin image is
complete, you add core applications and then capture, test, and
distribute a thick image. Testing is minimized because creating the
thick images from the thin image is essentially the same as a regular
deployment. Be wary of applications that are not compatible with the
disk-imaging process, however.

If you choose to build hybrid
images, you will store applications on the network but include the
commands to install them when you deploy the disk image. This is
different from installing the applications in the disk image. You are
deferring application installs that would normally occur during the
disk-imaging process to the image-deployment process. They become a
postinstallation task.
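
For example, one common way to defer application installs to first run is a RunOnce registry value written while you build the image, before capture. In this VBScript sketch, the value name, share, and script are hypothetical placeholders:

    ' Queue core application installs for the first logon after
    ' deployment; RunOnce entries execute once and are then deleted.
    ' The network path and value name are hypothetical placeholders.
    Set shell = CreateObject("WScript.Shell")
    shell.RegWrite _
        "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce\InstallCoreApps", _
        "cmd.exe /c \\server\packages\install-core-apps.cmd", "REG_SZ"

One design note: because the install runs from a network source at first logon, the deployed computer must be able to reach the share; a software distribution agent baked into the image is a more robust alternative.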