As with most things
Microsoft, there are multiple paths to the same destination, none of
them specifically wrong or right. For example, if you want to lock your
screen—always a good idea when you step away from your desk—there are a
number of ways you can accomplish this:
- Simultaneously press the Windows key and L.
- Press Ctrl-Alt-Del and select Lock this computer.
- Create a desktop shortcut with the command line RUNDLL32 USER32.DLL,LockWorkStation.
OSD includes
similar flexibility, allowing disparate organizations to use the same
tool differently to meet their needs. Nearly every step of the process
is customizable, and you can tailor it as necessary. Although this
flexibility sometimes leads to uncertainty and conflicting opinions as
to the best way to get things done, ultimately the only thing that
matters is if it works for you and fits your organization’s goals and
requirements.
Now that we have discussed the tools OSD uses, the next section covers OSD itself.
OSD Scenarios
Here are the three main scenarios for operating system deployment, and OSD addresses all three:
- New system
- In-place migration
- Side-by-side migration
The next sections describe these scenarios.
New System
The new system scenario is the easiest to deal with because you do not have to worry about user state—a user’s state
includes all the data, documents, and configuration of the system and
applications that are unique to that user. This scenario simply involves
wiping a system, whether it is straight from the vendor or previously
used inside your organization, and deploying the image and applications
to it.
In-Place Migration
An in-place migration
is one where the system is currently in use but needs to have its
operating system reloaded. This reload can be the result of a variety of
reasons:
- An upgrade, such as Windows XP to Windows Vista.
- The current operating system installation is broken beyond repair.
- The operating system installation does not meet current standards.
After a process is in place
to quickly rebuild systems using OSD, organizations typically choose to
re-image a system when the helpdesk spends a set amount of time
troubleshooting without resolving an issue. This approach helps decrease the helpdesk costs spent fixing operating systems.
Side-by-Side Migration
A side-by-side migration
usually occurs as the result of a hardware refresh. In this scenario, a
new system physically replaces a user’s system and might involve an
operating system switch. Both in-place and side-by-side migration
scenarios add the complexity of user state migration.
For the record, there are five scenarios in existing Microsoft documentation:
- New System— This is the same as the New System scenario just described in the “New System” section.
- Refresh— This is an in-place migration without upgrading the operating system.
- Replace— This is a side-by-side migration without upgrading the operating system.
- Upgrade— This is either an in-place or a side-by-side migration, including the upgrading of the operating system.
- OEM— This is a scenario available to Original Equipment Manufacturers (OEMs) using the MDT to prepare systems for customer or end-user delivery.
The primary difference between these scenarios and the ones previously presented in the “OSD Scenarios” section is the distinction made for upgrading the operating system. This distinction, although significant to the end user, does not affect the actual operation of OSD, which works the same regardless of the starting and ending operating systems.
Imaging Goals
The core building block on which OSD builds is an image of a fully installed reference Windows system. Reference systems
are systems used to build baseline images for deployment to the rest of
the systems in the organization. Because hardware differences between a
reference system and target deployment systems can cause issues, you
must often use multiple reference systems to model your environment and
thus create multiple images.
Enabling
creation and deployment of this image is what OSD focuses on. However,
OSD cannot automate the actual choice or definition of what goes into an
image because this is not a technical decision.
A general definition of an image is a single file that stores all the files and information for a specific disk drive volume on a computer system. This file is portable and can be copied or deployed to a destination system.
Deploying the
image file creates an exact duplicate of the original source volume.
This allows you to easily copy the content of a disk drive volume
containing an operating system, installed applications, and
customizations to multiple other destination systems. In effect, the
image clones the source system and allows rapid deployment of an
operating system on a large scale. The process of copying the image to
multiple machines is much quicker than doing a native Windows install
and requires little manual intervention relative to a full Windows
installation that includes applications and other miscellaneous
configurations.
A prerequisite to the
imaging process is inventorying all software and hardware in your
organization. This helps ensure you take into account all possible
variations—you must know all the possibilities to create the best
possible images.
A question often asked
is whether to include applications in the image and which ones. Do you
include Microsoft Office? Microsoft Silverlight? Questions like these
abound and fuel the continuing debate between using a thick or a thin
image. The distinction between thick and thin images is somewhat
subjective, so let’s start with some simplistic definitions:
Thick image— An image including the OS, OS updates and patches, miscellaneous components, drivers and applications
Thin image— An image containing the OS with only a minimal set of updates and patches
Conventional wisdom says that a thin image is the better choice—why is this the case? A thin image is easier to maintain; it contains a minimal set of components and thus a smaller set of components that require updates. Like many theories, this one sounds great until reality gets in the way: because you should automate the maintenance of your images anyway, ease of maintenance becomes a minor concern.
If you forget to
add something in an image or need to add something simple to an image
without having to create it again, never fear, ImageX is here.
Using ImageX, you can mount a WIM image file into an empty folder using the command imagex /mountrw <image_path> <image_index> <mount_path>,
where the mount path is an empty folder. This loads an image to that
empty folder, where you can access the entire file system contained in
the image file as if it were part of the file system of the host
operating system.
For example, if you have a WIM file called XPSP3.wim at the root of your C: drive, you can load that WIM file to an empty folder on your C: drive named mount with the following command:
imagex /mountrw c:\XPSP3.wim 1 c:\mount
This mounts the image in a read-write mode; if you want to mount the image in a read-only mode, use /mount instead of /mountrw.
Now you can open either Windows Explorer or a command prompt and
manipulate the contents of the WIM file by navigating to C:\Mount. Figure 1 shows a folder listing of a sample captured Vista WIM file mounted in this fashion.
For example, you can add a
bitmap file to the Windows folder (accessed at c:\mount\windows) or add a
ReadMe.txt file to the All Users desktop (accessed at
c:\mount\Documents and Settings\All Users\Desktop). You can make changes
to the default user’s Registry hive using reg.exe. The following example shows setting the wallpaper for the default user (note that paths containing spaces must be quoted):
1. Load the default user’s Registry hive: reg.exe load HKU\Mount "c:\mount\Documents and Settings\Default User\ntuser.dat".
2. Modify the desired setting: reg.exe add "HKU\Mount\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d %SystemRoot%\CompanyLogo.bmp.
3. Unload the Registry hive: reg.exe unload HKU\Mount.
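The mount, registry-edit, and commit steps can be collected into a single batch file sketch. The WIM path, mount folder, and CompanyLogo.bmp wallpaper file are assumptions carried over from the earlier examples; adjust them for your environment, and remember that ImageX requires the WAIK to be installed on the host system:

```
@echo off
rem Sketch only: assumes the WAIK is installed and XPSP3.wim exists at C:\.
set WIM=c:\XPSP3.wim
set MOUNT=c:\mount

rem 1. Mount the first image in the WIM read-write into an empty folder.
if not exist %MOUNT% mkdir %MOUNT%
imagex /mountrw %WIM% 1 %MOUNT%

rem 2. Load the default user's hive and set the wallpaper value.
rem    %SystemRoot% expands on the host when the script runs.
reg.exe load HKU\Mount "%MOUNT%\Documents and Settings\Default User\ntuser.dat"
reg.exe add "HKU\Mount\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d %SystemRoot%\CompanyLogo.bmp
reg.exe unload HKU\Mount

rem 3. Unmount, committing the changes back into the WIM file.
imagex /unmount /commit %MOUNT%
```

Without the /commit option on the final command, all changes made while the image was mounted are discarded.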
If you mount a Windows Vista WIM, you can also use the Windows SIM and Windows Package Manager from the WAIK to manipulate the image further. Microsoft discusses each of these methods in detail at http://technet.microsoft.com/en-us/library/cc732695.aspx.
To save changes that you
make to a file system contained in the WIM file using this mounting
method, use the following command: imagex /unmount /commit c:\mount.
Note the /commit option in the command line; without this option, no
changes made to the mounted WIM are saved.
The WAIK must
actually be installed on the host system to use ImageX to mount images.
You cannot simply copy the ImageX executable to a system and use it to
mount an image.
Here are several goals for the deployment images:
Hardware agnostic—
Few organizations can actually standardize on a single hardware system
for all their desktops, so this goal should be obvious. What might not
be as readily obvious is that it is achievable! The main obstacles to
this goal are drivers and the Hardware Abstraction Layer (HAL) in
Windows XP. Windows Vista (and Windows 7) changed the way mass-storage drivers are handled and automatically switch HALs as needed, so these concerns no longer apply to the newer operating systems.
Universal—
Images should be a baseline for all deployments in an organization;
they should contain the greatest common denominator of all the desktop
needs in an organization. If not everyone requires a specific
application, component, driver, and so on, it should not go into the
image—you want to layer it on after deploying the image. This simple but
important goal greatly affects your success with OSD. Creating an
optimal universal baseline relies on your knowledge of the hardware and
software in use at your organization and the accuracy of your inventory.
Deployment speed—
Although not as important as the previous goals, deployment speed is
still a valid goal and becomes important if the network is not as fast
as it should be or a wide area network (WAN) is involved. Applications
and components included in an image only slightly increase the time it
takes to deploy a system, because they are already installed and do not
have to be pulled across the network separately. Applications and
components layered on after the deployment might increase overall
deployment time significantly because they are pulled over the network.
Typically, installations include some files not even installed on the
system, such as setup.exe or alternate language resource files (in the
form of Dynamic Link Libraries or DLLs), which are installed only on
systems supporting those languages. This can have a greater impact than
is first realized.
Ease of maintenance—
In traditional, image-only deployment systems, ease of maintenance is
typically the most important factor. Creating and updating images is
often an intensive and lengthy manual process. Images created for these
systems are typically thinner, to avoid putting in any components that
might need updating. This ultimately increases overall deployment time
and can increase the complexity of
the deployment. ConfigMgr automates creating images, greatly easing
this burden and freeing you from making decisions about your images that
are based solely on maintaining the images.
An additional
consideration is whether you can install an application generically or
have its internal unique identifiers stripped. Sysprep does this for
Windows, and OSD properly prepares the ConfigMgr Client if installed,
but you must also think about the applications in the image. Some
centrally managed antivirus products have trouble when installed in an
image; they customize themselves to the specific system they are
installed on and do not behave well when copied to another system as
part of an image. This is something to verify with the vendors of the
products you plan to incorporate into the image and is an area you
should test.
Ultimately, thin versus
thick is a moot argument. Every deployment image will probably be
somewhere in the middle, and what is right for one organization might
not be right for another. Having a thin image, just for the sake of
having a thin image, should not be a primary goal. Maintaining images,
if it is automated and done correctly, is a minor concern.
Hardware Considerations
Sometimes, hardware differences between reference and target systems can cause problems. If you
create the image properly, it can truly be hardware-agnostic. This task
is sometimes more difficult in Windows XP than Windows Vista because of
HAL issues and SATA (Serial Advanced Technology Attachment) drivers, but
it is not impossible. To implement OSD successfully, you should derive a
full inventory of all hardware used in the targeted environment. From
this inventory, it can be determined if any anomalies exist, if all the
drivers are still available from the manufacturer, or if all the systems
meet the minimum requirements for the operating system you deploy.
When deploying Windows XP and Windows Server 2003, different HAL types are potentially the biggest obstacle to creating a hardware-agnostic image. Here are the six HAL types available:
- Standard PC
- MPS Uniprocessor PC
- MPS Multiprocessor PC
- Advanced Configuration and Power Interface (ACPI) PC
- ACPI Uniprocessor PC
- ACPI Multiprocessor PC
The non-ACPI HALs in
the preceding list are legacy types and normally needed only for very
old hardware. Based on your hardware inventory, you probably can rule
out their use completely.
You can identify the
exact HAL in a captured image by right-clicking the image in ConfigMgr
and choosing Properties. In the resulting dialog box, choose the Images
tab at the top; see Figure 2 for an example.
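On a running reference system, you can also determine the HAL before capture by checking the Computer node in Device Manager, or by inspecting the "Original File name" shown on the Version tab of hal.dll's Properties dialog in %SystemRoot%\system32. The mapping for the ACPI types is:

```
rem Original file name of hal.dll  ->  HAL type
rem   halacpi.dll                  ->  Advanced Configuration and Power Interface (ACPI) PC
rem   halaacpi.dll                 ->  ACPI Uniprocessor PC
rem   halmacpi.dll                 ->  ACPI Multiprocessor PC
```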
Eliminating legacy hardware typically leaves the three ACPI HAL types, which follow these rules for imaging:
Images created with ACPI Uniprocessor PC HAL— You can deploy these images to hardware requiring either ACPI Uniprocessor or ACPI Multiprocessor HALs.
Images created with ACPI Multiprocessor PC HAL— You can deploy these images to hardware requiring either ACPI Uniprocessor or ACPI Multiprocessor HALs.
Images created using the Advanced Configuration and Power Interface (ACPI) PC HAL type—
You cannot use these images on systems requiring either of the other
two HAL types. Luckily, hardware requiring this HAL type is outdated and
no longer common.
This means that if all your systems require either the ACPI Uniprocessor or ACPI Multiprocessor HAL, you have to create only one image to support them all. If through trial and error or through your hardware inventory you
discover that another HAL type is in use, the only currently supported
method of deploying images is to create multiple images, each containing
a different HAL.
Mass storage drivers present a similar challenge; because they are essential to booting a system, they are referred to as boot critical.
Neither Windows XP nor Windows Server 2003 includes many modern boot-critical drivers; in particular, both lack SATA drivers, which are becoming more and more common. You add boot-critical drivers to Windows XP and Windows Server 2003 differently than all other hardware drivers; you see this when manually installing a system requiring a boot-critical driver, because you need to press F6 to load the driver during the blue screen pre-installation phase. OSD gracefully handles this situation with little overhead or extra work. Some trial-and-error testing may be involved, though.
Both Windows Vista
and Windows Server 2008 include the most popular SATA drivers out of the
box. If you do encounter a drive controller requiring a driver not
included out of the box, you can load the driver the same way as other
hardware drivers—this is due to an architectural change made by
Microsoft in the handling of boot critical drivers in Windows Vista and
Server 2008.
Although creating
multiple images initially sounds like a hassle, it should not be. If you
have properly automated your image build process using a Build and
Capture task sequence, creating the multiple images is as simple as
running that sequence on a system supporting each type of HAL in your
inventory. The task sequence is automated, so the images will be
identical except for the HAL type that they contain.
In addition, using the magic that is ImageX, these images can be merged into a single file using the /append option: imagex /append <image_path> <image_file> <"image_name"> [<"description">].
Because of the single instancing of WIM images, the resulting WIM file
contains only one copy of each file in common between the images (which
will be every file except one, the hal.dll). The result is that the WIM
file will be only slightly larger than maintaining separate WIM files
for each version.
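If the HAL-specific images were captured into separate WIM files rather than appended during capture, ImageX's /export option can copy an image from one WIM file into another after the fact. The file names and image name below are hypothetical:

```
rem Copy image 1 from the multiprocessor WIM into the master WIM.
imagex /export c:\images\XP-Multiprocessor.wim 1 c:\images\XP-Master.wim "XP SP3 ACPI Multiprocessor"
```

Export honors WIM single instancing as well, so files common to both images are still stored only once in the destination file.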
The
only real pain point with this solution is finding a reference system
for each type of HAL. Because most of these HALs are legacy and only
used on aging or outdated hardware, chances are that you do not have any
in your lab and must be creative in procuring one from an active user.