1. Workload groups
Classifying incoming database connections into a workload group offers a number of benefits, including the following:
Connections that share a similar property, for example, Application Name, can be grouped together for purposes of applying resource usage boundaries via a specific resource pool.
Application of resource usage constraints, such as a custom MAXDOP setting, can be made at a workload group level, thereby enabling more control over resources in mixed-purpose environments.
Resource usage can be monitored at the group level, providing visibility of current, aggregate, minimum, and maximum resource usage for a given group.
As covered earlier,
there are two preexisting system workload groups, default and internal.
The default group is used for any connections not classified into a
user-defined group or classified into a group that no longer exists. The
internal group, used for internal SQL Server operations, can be
monitored, but connections cannot be classified into this group, nor can
the group be modified in any way.
In addition to these system workload groups, user-defined groups can be created using the CREATE WORKLOAD GROUP
T-SQL command. To gain a deeper understanding of workload groups, let's
examine the syntax of this command before discussing its optional
arguments:
CREATE WORKLOAD GROUP group_name
[ WITH
( [ IMPORTANCE = { LOW | MEDIUM | HIGH } ]
[ [ , ] REQUEST_MAX_MEMORY_GRANT_PERCENT = value ]
[ [ , ] REQUEST_MAX_CPU_TIME_SEC = value ]
[ [ , ] REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value ]
[ [ , ] MAX_DOP = value ]
[ [ , ] GROUP_MAX_REQUESTS = value ] )
]
[ USING { pool_name | "default" } ]
IMPORTANCE—When multiple workload groups are set to use the same resource pool, the Importance
argument enables tasks from one group to be weighted ahead of others;
for example, if a Miscellaneous Query resource pool is used by two
workload groups called AdHoc and Admin, assigning a high importance to
the Admin group will place its access to resources before that of the
AdHoc group. A high importance should not be confused with priority
access to system resources; that is, importance is a simple weighting
mechanism to establish order among multiple groups in the same resource
pool.
REQUEST_MAX_MEMORY_GRANT_PERCENT—This
argument enables the specification of the largest allowable percentage
of resource pool memory that can be assigned to a single request from
the group; for example, a value of 10
would permit a maximum of 50MB of memory to be assigned to a query from a
pool with a 500MB memory size. A query that exceeds this value will be
met with an error message similar to that shown in figure 1.
REQUEST_MAX_CPU_TIME_SEC—Similar to the REQUEST_MAX_MEMORY_GRANT_PERCENT argument, this argument limits the amount of CPU time, in seconds, that a single request from the group can consume; however, rather than cancel the query, Resource Governor will allow it to continue and will generate an alert.
REQUEST_MEMORY_GRANT_TIMEOUT_SEC—This
argument represents the maximum amount of time in seconds that a query
from the group will wait for a memory grant. After the time period
expires, rather than the query failing, it will receive the minimum
memory grant, which may result in lower-than-expected performance.
MAX_DOP—A workload group can be configured with its own default MAXDOP level. Doing so allows commands from a group to use a MAXDOP setting that may differ from the server default, without specifying an explicit MAXDOP setting. If a query from the group is executed that does specify a MAXDOP value, it will be used, so long as the value does not exceed the group's MAXDOP
setting. This argument presents some interesting possibilities; for
example, Microsoft Report Builder provides no option to specify a MAXDOP value to assign to the queries it generates. Thus, the only option is to use a server default MAXDOP
setting to control the parallelism of its queries; however, changing
the server default may introduce unwanted results in other areas. Using
Resource Governor, we can classify Report Builder queries into a
workload group with its own MAXDOP setting.
GROUP_MAX_REQUESTS—This
argument allows a limit to be applied to the number of simultaneous
requests that can execute from the workload group. However, in some
cases, the SQL engine may allow this limit to be exceeded if doing so
prevents a blocking or deadlock scenario from occurring.
USING—This argument links a workload group to a resource pool. If this argument is excluded, the default pool is used.
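Bringing several of these arguments together, here's a minimal sketch of the AdHoc/Admin arrangement described above under IMPORTANCE. The pool name MiscQueryPool and the specific values are illustrative only, and the pool is assumed to already exist:

-- Illustrative only: MiscQueryPool is assumed to already exist
CREATE WORKLOAD GROUP Admin
WITH
    ( IMPORTANCE = HIGH                        -- weighted ahead of AdHoc within the same pool
    , REQUEST_MAX_MEMORY_GRANT_PERCENT = 25 )
USING MiscQueryPool;

CREATE WORKLOAD GROUP AdHoc
WITH
    ( IMPORTANCE = LOW
    , MAX_DOP = 2                              -- group-level default degree of parallelism
    , REQUEST_MAX_CPU_TIME_SEC = 60 )
USING MiscQueryPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;           -- apply the changes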
As with classifier functions, you should take care when naming workload groups and resource pools: the group and pool names are returned in error messages, potentially exposing information that could be used maliciously. Figure 1 contains an example of an error message that includes the group/pool names.
Finally, a user connection
classified into a workload group remains in the group for the life of
the connection, regardless of whether the classifier function is changed
while the connection is active.
As you've just seen, one of the major roles for a workload group is to define a connection's resource pool.
2. Resource pools
We
come now to the final component of Resource Governor, the resource pool,
created using the T-SQL command shown below. As you can see, the
command is fairly simple, with arguments for min and max values for CPU
and memory, the two resources under our control in SQL Server 2008.
CREATE RESOURCE POOL pool_name
[ WITH
( [ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ] )
]
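For example, the RP_Reporting pool referred to later in this chapter, which caps reporting queries at 15 percent CPU, could be created along the following lines; the RP_Sales values shown here are illustrative only:

CREATE RESOURCE POOL RP_Reporting
WITH
    ( MAX_CPU_PERCENT = 15 );        -- cap reporting queries at roughly 15 percent CPU under contention

CREATE RESOURCE POOL RP_Sales
WITH
    ( MIN_CPU_PERCENT = 30           -- illustrative: reserve CPU for sales queries under contention
    , MAX_CPU_PERCENT = 100 );

ALTER RESOURCE GOVERNOR RECONFIGURE;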
There are two system pools, default and internal, and as with workload groups, the internal pool is for system usage only, and its resource limits cannot be modified. Given the importance of internal SQL Server system processes, its resource usage is not constrained, regardless of the resources reserved in other pools. In contrast, the default pool can be modified, and it can be used by multiple user-defined workload groups in addition to the default workload group.
Before we look at what min
and max actually mean in the context of resource pool usage (there's
more to it than you may think), let's first define some important terms:
effective maximum percentage and shared percentage. In doing so, let's look at an example of the two resource pools defined in table 1. The values in this table can represent either CPU or memory; the terms apply in both cases.
Table 1. Example pool configuration—internal pool excluded
Pool name | Min % | Max %
----------|-------|------
Default   | 0     | 100
Pool A    | 30    | 100
Pool B    | 60    | 75
Have a look at Pool A; its maximum value is specified as 100 percent; however, Pool B is configured with a minimum of 60 percent. It follows that Pool A could never receive more than 40 percent, hence the term effective maximum.
In a similar vein, Pool B is configured with a maximum of 75 percent,
but given Pool A's minimum value of 30 percent, it will never receive
more than 70 percent.
The minimums of pools A and
B added together total 90 percent; therefore, only 10 percent is left
for pools to use over their minimum values. The 10 percent value is
referred to as the total shared percentage and is used in calculating the effective maximum values. Essentially, effective maximums decrease as minimum values increase.
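Put another way, a pool's effective maximum is the smaller of its configured maximum and 100 minus the sum of the other pools' minimums. Applying this to the values in table 1:

Pool A:  MIN(100, 100 - 60)        = 40 percent
Pool B:  MIN(75,  100 - 30)        = 70 percent
Default: MIN(100, 100 - (30 + 60)) = 10 percent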
A poorly configured
pool design with large minimum values may have the unwanted effect of
starving resources from certain pools. The important point to take from
this is that the best resource pool designs are usually the simplest,
and like other configuration settings, they should be changed only for a
good reason after a well-considered analysis. Table 2 includes the effective maximum values based on the pool design from table 1.
Table 2. Pool configuration with effective maximum values included
Pool name | Min % | Max % | Effective max %
----------|-------|-------|----------------
Default   | 0     | 100   | 10
Pool A    | 30    | 100   | 40
Pool B    | 60    | 75    | 70
With these points in mind, let's look at further ramifications of minimum resource values on CPU and memory.
2.1. Effective minimum: memory considerations
You must take special
care when configuring a resource pool with a minimum memory percentage.
When SQL Server starts, the memory minimums for each pool are reserved
up front, regardless of whether the memory is required, or even if there
are no active workload groups using the pool. It follows that in a case
where there are a number of unused pools with configured minimum memory
values, there is potentially a large amount of memory that's unable to
be accessed by pools that actually need it. In contrast, CPU limits are
more fluid.
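One way to observe this up-front reservation is to query the Resource Governor DMVs. The following is a minimal sketch; the column names are based on sys.dm_resource_governor_resource_pools and may vary between SQL Server versions:

-- Compare configured memory minimums/maximums with actual usage per pool
SELECT name
     , min_memory_percent
     , max_memory_percent
     , used_memory_kb
     , target_memory_kb
FROM   sys.dm_resource_governor_resource_pools;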
2.2. Effective minimum: CPU considerations
Consider figure 2, which shows the CPU usage of two resource pools in Performance Monitor.
What we're looking at here
is a running query in the RP_Reporting pool (represented by the line
that starts near 100 percent and drops down to around 15 percent). This
pool is configured with a maximum CPU usage of 15 percent. In the left
half of the screen, it's clearly using much more than that, in some
cases 100 percent. About halfway across, we see the emergence of a query
running in the RP_Sales pool (represented by the line that starts at 0
percent and increases to around 80 percent). At this point, the original
query's CPU usage is throttled back dramatically, to around the 15
percent average value.
What's actually happening
here is that Resource Governor is smart enough to figure out that
there's no CPU contention, so it lets the first query use as much of the
resource as it needs. As soon as a second query comes along, CPU
contention occurs, at which point resource limits are applied.
Perhaps the most important point to take from this relates to appropriate load testing. For example, testing the impact of a 30 percent CPU pool maximum is pointless unless something else is running to create CPU contention; only then will Resource Governor actually throttle the pool down to its 30 percent limit.
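For monitoring like that shown in figure 2, per-pool CPU usage is exposed through the SQLServer:Resource Pool Stats Performance Monitor object (its CPU usage % counter), and a similar group-level view is available from T-SQL. The following sketch uses sys.dm_resource_governor_workload_groups; the column names may vary between SQL Server versions:

-- Group-level resource usage since statistics were last reset
SELECT name
     , active_request_count
     , total_request_count
     , total_cpu_usage_ms
     , max_request_cpu_time_ms
FROM   sys.dm_resource_governor_workload_groups;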
Finally, note that the resource values are to be interpreted as averages,
not hard and fast limits; that is, monitoring will occasionally show
limits being exceeded, while the average values should be maintained.
With this background, let's walk through a script to set up a Resource Governor scheme from start to finish.