Learn how to improve the performance of your SQL Server database and understand when the recompilation of queries is necessary and when it is not.
Author: Lawrence Garvin, SolarWinds Head Geek, SQL Server Monitoring
There are many situations that can contribute to database performance issues. One that is quite often overlooked is unnecessary recompilation of query execution plans. It is probably overlooked because identifying the issue requires some deep diving into the database service and querying dynamic management views. It also requires an understanding of when recompilation is necessary and when it is not. Another contributing factor is that query performance is traditionally the domain of the database programmer, not the database administrator.
Understanding query compilation issues from an operational perspective requires intimate familiarity with two indispensable tools: Performance Monitor and SQL Profiler.
The Plan Cache
The first step to finding performance issues caused by unnecessary recompilation is understanding when a recompilation is necessary and when it isn’t. This requires a basic understanding of the SQL Server Plan Cache and why queries need to be recompiled.
Any time a query is executed, SQL Server performs an extensive pre-execution analysis of the various components of the query to define an execution plan for it. Once the execution plan is defined, it is cached in the Plan Cache, where it can be reused if the same query is submitted again. The Plan Cache is transient, however, and a number of events can destroy it. The obvious ones are a system restart, a database service restart, or a database restore. A less obvious condition occurs when the database service is short on memory: cached query plans are among the first things to go. The cache can also be cleared intentionally by a user with sysadmin privileges.
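As a sketch of that last point, the entire plan cache can be flushed with a single DBCC command. This is useful when testing recompilation behavior, but it forces every subsequent query to compile from scratch, so it should be reserved for non-production systems:

```sql
-- Clears the entire plan cache; every subsequent query will be
-- compiled from scratch. Requires ALTER SERVER STATE permission
-- (held by sysadmin). Avoid running this on a production server.
DBCC FREEPROCCACHE;
```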
Why are queries recompiled?
Queries are recompiled because the execution plan in the cache is no longer valid, or it is suspected to be no longer valid. Stored procedures are automatically compiled the first time they are called after the database service starts. Strictly speaking, this is not a RE-compilation because there is no execution plan in the plan cache after a service start. Initial compilations are tracked differently than recompilations of cached plans.
Queries are naturally recompiled when the underlying data and/or architecture of a database has changed. This occurs, for example, when indexes are added or the data in an existing index changes. In most cases, a query recompilation occurs because it needs to. Our concern here is to identify recompilations that do not need to occur.
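One way to surface candidates for investigation is the sys.dm_exec_query_stats DMV: its plan_generation_num column increments each time a cached plan is recompiled, so plans with a high value relative to their execution count are being recompiled repeatedly. A minimal sketch:

```sql
-- Plans whose plan_generation_num is high have been recompiled
-- repeatedly; compare against execution_count to judge whether
-- the recompilations are proportionate to actual use.
SELECT TOP (10)
    qs.plan_generation_num,
    qs.execution_count,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.plan_generation_num DESC;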
When should queries be recompiled?
The most prevalent scenario that triggers query recompilation is the use of parameterized queries. A parameterized query whose cached plan was built for a highly selective parameter value should be recompiled when it is executed with a non-highly-selective value, to ensure the proper query plan is built. "Highly selective" refers to the number of rows returned by the query compared to the total number of rows in the unfiltered rowset. Sometimes this recompilation does not occur because the cached query plan is improperly reused.
Two examples to demonstrate this would be:
- Highly selective: SELECT * FROM dbo.users WHERE LastName LIKE 'Olajuwon'
- Not highly selective: SELECT * FROM dbo.users WHERE LastName LIKE 'Smith'
It's a logical presumption that a table of users contains far more Smiths than Olajuwons. Recognizing these scenarios is the domain of the database programmer, and it is easily handled by forcing recompilation in the definition of the stored procedure when non-highly-selective parameter values are predictable.
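A minimal sketch of forcing recompilation in the procedure definition, using a hypothetical dbo.GetUsersByLastName wrapper around the example query above:

```sql
-- WITH RECOMPILE discards the cached plan and builds a fresh one
-- on every execution, so selectivity of @LastName is always honored.
CREATE PROCEDURE dbo.GetUsersByLastName
    @LastName nvarchar(50)
WITH RECOMPILE
AS
    SELECT * FROM dbo.users WHERE LastName LIKE @LastName;
GO
```

When only one statement inside a larger procedure is sensitive to parameter selectivity, the statement-level hint `OPTION (RECOMPILE)` on that SELECT is a narrower alternative, leaving the rest of the procedure's plan cached.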
When shouldn’t queries be recompiled?
From the above, the first scenario in which subsequent executions of a cached query should not be recompiled is when the selection criteria remain highly selective, i.e. the typical number of rows returned is minimal.
Another scenario in which queries should not be recompiled is when the result set is deterministic. These types of queries usually have fixed filter criteria rather than variable-driven filter criteria. The use of equality operators generally produces a deterministic result set, whereas operators such as LIKE or IN are typically variable.
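To illustrate the distinction with a hypothetical dbo.orders table (not part of the original example):

```sql
-- Fixed filter criteria: equality against a constant; the cached
-- plan remains appropriate and should not be recompiled.
SELECT * FROM dbo.orders WHERE status = 'CLOSED';

-- Variable-driven criteria: selectivity shifts with the supplied
-- pattern, so the cached plan may or may not remain appropriate.
SELECT * FROM dbo.orders WHERE customer_name LIKE @pattern;
```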
A third scenario that can contribute to unnecessary recompilation is the execution of queries in an ad-hoc fashion (i.e. interactively) that should be defined and called as a stored procedure.
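One common remedy when a full stored procedure is not warranted is to submit the statement through sp_executesql with explicit parameters, which lets SQL Server cache and reuse a single parameterized plan across different values. A sketch, reusing the dbo.users example from earlier:

```sql
-- Explicit parameterization via sp_executesql: the plan is cached
-- once and reused for any @LastName value, instead of compiling a
-- new ad-hoc plan per literal.
EXEC sp_executesql
    N'SELECT * FROM dbo.users WHERE LastName = @LastName',
    N'@LastName nvarchar(50)',
    @LastName = N'Smith';
```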
Eliminating recompiles caused by improper use of ad-hoc queries
For parameterized ad-hoc statements, SQL Server classifies the SQL statement as either "Safe" or "Unsafe" based on whether the query optimizer believes that changes in parameter values will affect the reliability of an execution plan. Monitoring the relevant Performance Monitor counters can help identify potential problem areas.
- If the Safe Auto-Params/sec counter is high, this indicates ad-hoc queries that would have been better implemented as stored procedures with cached query plans.
- Ad-hoc queries without parameters are reflected in the Failed Auto-Params/sec counter, and this value should always be low. A high value in this counter indicates ad-hoc queries without parameters that should be implemented as stored procedures to take advantage of a cached query plan.
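The same counters Performance Monitor exposes can be read from within SQL Server via sys.dm_os_performance_counters. Note that these "/sec" counters are stored as cumulative totals since service start, so sample them twice and compute the difference to get a rate:

```sql
-- Snapshot the auto-parameterization counters. Sample twice over a
-- known interval and diff the cntr_value columns to obtain per-second
-- rates; a single snapshot only shows totals since the service started.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Safe Auto-Params/sec', 'Failed Auto-Params/sec');
```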
Confirming that performance issues are caused by unnecessary recompilation
First, presumably we've identified an actual database performance issue. The question is whether the performance issue is attributable to unnecessary recompilation of queries, or to some other factor. There are numerous ways in which a database performance issue may manifest, but those related to query execution plan generation manifest as higher CPU utilization (from the effort of creating a new execution plan), increased memory writes (from replacing a cached plan with a recompiled plan), and longer query execution times (from rebuilding the execution plan before executing the query).
Relationship of recompilation to long-running queries
We need to be able to attribute performance issues to the query execution plan generation itself, as opposed to the actual query execution. Our concern here is unnecessary recompilation when the recompile would not materially affect the actual runtime of the query.
There are two complementary scenarios to consider. The first is when the runtime of a query is so much longer than its recompilation time that the improvement gained by not recompiling would be functionally insignificant. An example would be a query that performs a full table scan with no expectation of improving that behavior; in such cases there is probably no real value in recompiling the query, since the table scan is the de facto performance bottleneck. If the recompile time is significantly long, however, it may be beneficial to eliminate the recompilation for the benefit of other activities within the database server. It is more probable, though, that those long-running queries would run faster if they were recompiled when they should be, so it is equally important not to constrain recompilation of long-running queries unless an identifiable benefit will result.
Recompilation from excessive statistics calculations
Earlier we noted that recompiles can be triggered by changes in the data. It may be that changes to data are so frequent that the database cannot maintain current statistics, and recompilations are triggered merely by those data changes, even though the same query plan is continually rebuilt. Turning off the automatic updating of statistics on individual tables or indexes can be helpful in these scenarios, but you'll want to be certain that this is the precise cause of the recompilations. A comparison of the recompilation events to the statistics recalculation events can sometimes shed light on this condition.
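Two ways to suspend automatic statistics updates at the table or statistics level, using the dbo.users example table (remember to re-enable them once the investigation is complete):

```sql
-- Disable automatic statistics updates for all statistics on one table;
-- run again with 'ON' to restore the default behavior.
EXEC sp_autostats 'dbo.users', 'OFF';

-- Or refresh a single statistics object and mark it NORECOMPUTE so the
-- engine stops auto-updating just that one (IX_users_LastName is a
-- hypothetical index name for illustration).
UPDATE STATISTICS dbo.users IX_users_LastName WITH NORECOMPUTE;
```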
Identifying consumption from query recompilation
For the DBA, excessive resource consumption from query recompilation can be identified with the Dynamic Management View (DMV) sys.dm_exec_query_optimizer_info, which provides information on how much time is spent in the optimization process. The elapsed time value is the amount of time spent on query optimizations.

```sql
SELECT counter, [value]
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('optimizations', 'elapsed time', 'final cost');
```

| counter       | value             |
|---------------|-------------------|
| optimizations | 1                 |
| elapsed time  | 0.171963527219327 |
| final cost    | 0.603439265367659 |
Another way to identify potential recompilation issues is to compare the SQL Re-Compilations/sec counter to Batch Requests/sec. If SQL Re-Compilations are more than 1% of Batch Requests, there may be an excessive recompilation issue.
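A sketch of that comparison, again reading the cumulative counter values from sys.dm_os_performance_counters; the ratio of the two totals approximates the recompilation rate since service start:

```sql
-- Pivot the two cumulative counters side by side. Divide
-- recompilations by batch_requests; a result above roughly 0.01
-- (1%) suggests excessive recompilation worth investigating.
SELECT
    MAX(CASE WHEN counter_name = 'SQL Re-Compilations/sec'
             THEN cntr_value END) AS recompilations,
    MAX(CASE WHEN counter_name = 'Batch Requests/sec'
             THEN cntr_value END) AS batch_requests
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Re-Compilations/sec', 'Batch Requests/sec');
```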
In summary, we need to identify three key factors:
- Unnecessary recompilations are occurring.
- The recompilations are causing a negative impact on performance.
- The removal of the recompilations won’t make the situation worse.
Unless all three factors can be confirmed, you should likely let the recompilations continue to occur as they need to.
About the author
Lawrence Garvin is a Head Geek and technical product marketing manager at SolarWinds, a Microsoft Certified IT Professional (MCITP), and an eight-time consecutive recipient of the Microsoft MVP award in recognition of his contributions to the Microsoft TechNet WSUS forum. His area of focus is patch and server management, with an emphasis on Windows systems management. Over the last 40 years Lawrence has programmed in nearly every language, including work in database programming and database administration.