The location of file, print, and application
servers depends on a variety of factors, including user population,
required accessibility, and more. The inclusion of application
partitions in Windows 2003 shows the potential of allowing applications to take advantage of Lightweight Directory Access Protocol (LDAP) services and to replicate application data via Active Directory (AD) replication.
Another solution is the Microsoft Active
Directory Application Mode (ADAM) product. ADAM could well be named
“Active Directory Lite” because it provides pure LDAP Directory Services
(DS) and replication for applications. With ADAM, Microsoft has
decoupled DS from the Network Operating System (NOS) directory that AD
represents. ADAM requires Windows 2003. This is similar to how
application partitions work, except ADAM does not require the
intervention of DCs or AD. Thus, applications can be installed on member
servers. Another advantage is that each ADAM instance has its own schema, so application owners can update their individual schemas without affecting other applications. Multiple instances of ADAM can run on a single server, making for very powerful and flexible application servers. The downside of ADAM is that it can't use Domain Name System (DNS) SRV records to locate servers. For additional information about
ADAM, download the whitepaper from Microsoft at http://www.microsoft.com/windowsserver2003/techinfo/overview/adam.mspx.
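Because ADAM exposes a standard LDAP interface, applications can talk to it with any LDAP client library. The following is a minimal sketch in Python using the third-party ldap3 library; the host name, port, bind DN, password, and partition name are hypothetical placeholders, not values from any real ADAM deployment.

from ldap3 import ALL, Connection, Server

# Hypothetical ADAM instance on a member server; each ADAM instance
# listens on its own port, so a non-standard port is assumed here.
server = Server('appsrv01.example.com', port=50000, get_info=ALL)

# Bind with an account defined in the ADAM instance (placeholder credentials).
conn = Connection(server,
                  user='CN=AppAdmin,CN=Roles,DC=app,DC=example,DC=com',
                  password='secret',
                  auto_bind=True)

# Query the application partition exactly as you would any LDAP directory.
conn.search(search_base='DC=app,DC=example,DC=com',
            search_filter='(objectClass=*)',
            attributes=['cn'])

for entry in conn.entries:
    print(entry.entry_dn)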
1. DNS Placement
Although the Windows community has more than four years of Windows 2000 experience, there is still a lot of difference of opinion regarding placement of DNS servers. A large number of deployments I've seen specify every DC to be a DNS server. I suppose it's a natural progression to make all DCs DNS servers as well, but this isn't necessary. Placing DNS servers (at least caching-only servers) in remote sites or sites across slow links is a good practice, because it provides DNS name resolution more reliably than connecting to a remote DNS server.
However, blindly making every DC a DNS server does not follow good design practice. My experience troubleshooting DNS problems over the past four-plus years has taught me three things about DNS and AD:
DNS is really a simplistic service, at least in relation to AD.
Sometimes we make the DNS structure more complicated than it needs to be. The simpler you make the DNS structure, the better it will work and the fewer problems you'll have.
The more ADI (Active Directory Integrated) DNS servers you have, the more potential problems you have.
This list would indicate that making every DC a DNS server is not a good idea. If you analyze each situation and it happens that every DC should be a DNS server, that's fine. For instance, if all of your remote sites have a single DC, it makes sense that those DCs should also be DNS servers if the links are slow or unreliable.
HP has three domains worldwide and employs three ADI DNS servers per domain. Thus, there are only three DNS servers for North, South, and Central America; three more for Europe, the Middle East, and Africa; and three more for Asia-Pacific. In the Qtest environment at HP, we have a similar configuration, and place those DNS servers in sites on, or one hop from, the corporate backbone.
Note that ADI DNS stores the DNS records in AD. Thus, even if all the DNS servers in a domain become unavailable, you can simply install DNS on another DC, and the zone will be populated. Of course, to complete the transition you would need to do one of the following:
Configure the new DNS server with the IP (Internet Protocol) address of the old one (and give the old DNS server a new IP address)
OR
Configure all the clients (workstations, servers, DCs, and other DNS servers) to point their resolvers to the new DNS server's IP address. This could be done easily for DHCP clients.
The first option is usually the easiest, because it avoids having to change the DNS resolver settings on all the clients. This built-in redundancy convinced one company I know of to move its corporate DNS structure from UNIX BIND to Windows 2000 DNS.
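If you do stand up a replacement DNS server this way, it is worth confirming that the server is answering for the zone before repointing anything at it. The following is a rough check in Python using the third-party dnspython library (2.x); the server address and zone name are placeholders.

import dns.resolver

# Point a resolver directly at the replacement DNS server (placeholder address).
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['10.1.2.53']

# Ask for the zone's SOA record; getting an answer shows the AD-integrated
# zone has populated on the new server (zone name is a placeholder).
answer = resolver.resolve('corp.example.com', 'SOA')
for record in answer:
    print(record.mname, record.serial)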
Consider these points when determining how many DNS servers to deploy and where to deploy them.
2. Site Affinity
Windows 2000 introduced the Site Affinity feature so that clients requesting services from a DC or a global catalog (GC) server contact a DC or GC in their local site. This could be for authentication, access to Distributed File System (DFS) shares, GC searches, Exchange GC access, and many other applications. Although DCs have to be associated with a site, a site doesn't necessarily have to have a DC in it. Sites that cannot justify a DC of their own often still want Site Affinity defined for the benefit of applications such as DFS.
When designing sites without DCs, it's important to note that AD employs auto site coverage. Site coverage is an algorithm that designates a DC in one site to provide services to clients in another site that has no DC. This DC responds to requests as if it were indeed in the same site as those clients. This means that clients in the DC-less site authenticate to the DC that is “covering” that site. Because DCs serve a single domain, this principle also applies to clients that are members of a domain that has no DCs in the client's site. Thus, if the clients in
the Atlanta site were members of the B.A.com domain, but there was only
a DC for the A.com domain in Atlanta, the clients would authenticate to
a DC in another site that is a member of B.A.com and is covering
Atlanta for that domain.
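One practical way to see coverage in action is through the DC locator SRV records: a DC that covers a site registers site-specific records in DNS under that site's name. As a rough illustration, a lookup such as the following (Python with the third-party dnspython library, reusing the example site and domain names from this section) lists the DCs registered for a given site:

import dns.resolver

# Site-specific DC locator record for the Atlanta site in the A.com domain
# (names borrowed from the example above; adjust for a real forest).
name = '_ldap._tcp.Atlanta._sites.dc._msdcs.a.com'

for srv in dns.resolver.resolve(name, 'SRV'):
    # target is the registering DC's host name; port is normally 389 for LDAP.
    print(srv.target, srv.port)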
In terms of design, Site Affinity follows the least-cost path rule. For instance, in Figure 1,
four sites—Atlanta, San Francisco, Charlotte, and San Jose—are all in a
single domain. Only Atlanta and San Francisco have DCs. The costing has
been constructed so that the DC in Atlanta covers Charlotte, and the DC
in San Francisco covers San Jose. This is true because the cost from
Charlotte to Atlanta is 50 and Charlotte to San Francisco is 70, and
likewise from San Jose to San Francisco is 50 and to Atlanta is 70.
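The coverage decision in Figure 1 boils down to a least-cost comparison. A short Python sketch of that arithmetic, using the costs quoted above, might look like this:

# Site link costs from Figure 1: DC-less site -> candidate covering sites.
costs = {
    'Charlotte': {'Atlanta': 50, 'San Francisco': 70},
    'San Jose':  {'San Francisco': 50, 'Atlanta': 70},
}

for site, candidates in costs.items():
    covering = min(candidates, key=candidates.get)  # lowest-cost site with a DC
    print(f'{site} is covered by {covering}')
# Charlotte is covered by Atlanta; San Jose is covered by San Francisco.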
One customer I worked with had been the victim of bad
information, which nearly caused the company to implement a poor
design. With more than 200 physical locations, the company had DCs in
only about 15 sites. However, the company wanted to implement Site
Affinity for clients in all sites. The “Automatic Site Coverage” section
of the Distributed Systems Guide (in the Windows 2000 Resource Kit)
describes the site-coverage process. It specifies that sites that have
no DCs for a particular domain are “covered” by a DC for that domain in
another site. The “closest” DC is defined as a DC in a site that has the
least-cost path to the DC-less site. For example, if a user from Domain
A logs into the Boston site, and there are no DCs for Domain A in
Boston (or none respond), the Knowledge Consistency Checker (KCC)
determines the closest site that has a DC for Domain A by evaluating the
site cost to that site from Boston. If the KCC determines that two sites have Domain A DCs and both have the same cost from Boston, a couple of tie-breaker rules apply: the site with the most DCs covers the DC-less site and, if that still results in a tie, the site whose name is highest in alphabetical order provides coverage.
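Taken together, the selection described here is least cost first, then most DCs, then site name as the final tie-breaker. A sketch of that ordering, with made-up candidate sites, might read as follows:

# Candidate covering sites for a DC-less site: (name, cost, number of DCs).
# The data is made up; the ordering follows the rules described above.
candidates = [
    ('Chicago', 100, 2),
    ('Dallas',  100, 4),
    ('Denver',  150, 6),
]

def pick_covering_site(candidates):
    lowest_cost = min(cost for _, cost, _ in candidates)
    tied = [c for c in candidates if c[1] == lowest_cost]
    most_dcs = max(dcs for _, _, dcs in tied)
    tied = [c for c in tied if c[2] == most_dcs]
    # Final tie-breaker: site name, per the rules quoted above.
    return max(tied)[0]

print(pick_covering_site(candidates))  # Dallas: same cost as Chicago, more DCs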
This customer decided to put all sites in a single
site link, DefaultIPSiteLink, and let the KCC use these tie-breaker
rules to determine site coverage. The company also wanted all the sites
in the United States to be affiliated with the New York site, all the
sites in Europe to be affiliated with the London site, and all the sites
in Asia to be affiliated with Singapore or Tokyo.
This could never work reliably. The company's own testing determined that the tie-breaker rules didn't always work as expected (remember, these are tie-breakers, not design points). Even if the rules did work, it's highly unlikely the company could ever get lucky enough to have Site Affinity resolve to New York, London, Singapore, or Tokyo as desired.
Throwing all sites in one basket, letting the KCC sort them out, and
having it all work out in a certain way is virtually impossible—unless
you happen to get lucky once or twice. The solution was simply to create
a multi-tier topology with site links, as shown in Figure 2.
Note that New York is the first tier; London, Singapore, Tokyo, and
Amsterdam are in the second tier; and DC-less sites are in the third
tier. Site links were assigned costs according to their tier level,
forcing replication up the tree to the core sites. This forced the DC-less site in Berlin to be covered by the Amsterdam DC, which replicates with the central hub in New York. Under the original design using the tie-breaker rules, all site link costs were equal, and because there were more DCs in New York than in Amsterdam, Berlin would have had automatic site coverage provided by the DCs in New York.
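To make the tiered idea concrete, here is a rough sketch of how explicit link costs steer coverage; the costs and the shortest-path helper are illustrative assumptions, not the company's actual configuration.

import heapq

# Illustrative site link costs for the tiered design: third-tier links cost
# more than second-tier links into the New York core (made-up numbers).
links = {
    ('Berlin', 'Amsterdam'): 300,
    ('Amsterdam', 'New York'): 200,
    ('London', 'New York'): 200,
}
sites_with_dcs = {'Amsterdam', 'London', 'New York'}

def path_cost(start, end):
    # Least-cost path between two sites over the site links (simple Dijkstra).
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue, seen = [(0, start)], set()
    while queue:
        cost, site = heapq.heappop(queue)
        if site == end:
            return cost
        if site in seen:
            continue
        seen.add(site)
        for neighbor, link_cost in graph.get(site, []):
            heapq.heappush(queue, (cost + link_cost, neighbor))
    return float('inf')

# Berlin's covering site is the DC site with the lowest total path cost.
print(min(sites_with_dcs, key=lambda s: path_cost('Berlin', s)))  # Amsterdam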
Note that designing Site Affinity for
DC-less sites follows the same rules as if they were sites with DCs. Use
explicit site links and costing to force replication (and site
coverage) in the way you want it to go. Designing Site Affinity really isn't that hard. This company spent a couple of weeks testing and still didn't have the answer; it took me about an hour and a half to do it the right way.