Wednesday, March 28, 2012

Intermittent Excessive Compilation and Recompilation

We have a SQL Server 2000 SP3 installation (version 8.00.760) that is used
by an ASP.NET application.
Normally the rate of SQL compilation and recompilation is as follows:
Compilations: 900 per minute.
Recompilations: 90 per minute.
On an intermittent basis, these rates jump to much higher levels:
Compilations: 15,000 per minute.
Recompilations: 15,000 per minute.
This behaviour severely impacts performance and does not seem to have any
specific trigger.
Any insights would be greatly appreciated.|||Sounds like you have some pretty poor code. See if this can get you
started:
http://support.microsoft.com/default.aspx?kbid=243586 Troubleshooting
Recompiles
Andrew J. Kelly SQL MVP
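The compilation and recompilation rates the original post cites can also be spot-checked from T-SQL rather than perfmon. A minimal sketch against SQL Server 2000's master.dbo.sysperfinfo (counter names are assumed from the SQL Statistics object; the values are cumulative since startup, so sample twice and take the difference to get a rate):

```sql
-- Spot-check the compilation counters exposed to perfmon.
-- Counter names assumed; cntr_value is cumulative, so diff two samples.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec')
```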
|||What are your relative Batch Requests per second at the indicated times?
Could be memory pressure causing SQL Server to page. What the last set of
metrics is telling you is that everything is being flushed from the Proc
Cache.
Take a look at Cache Manager, Cache Hit Ratio for all of the instances.
Also, take a look at DBCC MEMORYSTATS to see what the relative distribution
of the memory management of the Buffer Pool looks like.
What edition are you running? How much memory? CPUs? Etc., etc., etc.
Sincerely,
Anthony Thomas
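Anthony's Cache Manager check can be scripted as well. A sketch, assuming SQL Server 2000's counter layout in sysperfinfo, where each ratio counter is paired with a "Base" counter that supplies the denominator:

```sql
-- Compute Cache Hit Ratio per Cache Manager instance.
-- Ratio counters must be divided by their 'Base' counterpart.
SELECT r.instance_name,
       CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS DECIMAL(5, 2))
           AS hit_ratio_pct
FROM master.dbo.sysperfinfo r
JOIN master.dbo.sysperfinfo b
  ON b.instance_name = r.instance_name
 AND b.object_name = r.object_name
 AND b.counter_name = 'Cache Hit Ratio Base'
WHERE r.counter_name = 'Cache Hit Ratio'
  AND r.object_name LIKE '%Cache Manager%'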
|||Hi Anthony,
Thanks for your feedback.
"Anthony Thomas" wrote:

> What are your relative Batch Requests per second at the indicated times?
Between 6,000 and 9,000 batches per minute - so 100 to 150 per second.

> Could be memory pressure causing SQL Server to page. What the last set of
> metrics is telling you is that everything is being flushed from the Proc
> Cache.
That's the weird thing. The proc cache size does not change to any great
degree.
It sits at 1 GB to 1.5 GB throughout the periods of excessive compilation.
The cache hit ratio is 90% or above throughout this period also.
However, there are small "ripples" in the level of the proc cache - it's as
though every proc execution results in at least one compilation.

> Take a look at Cache Manager, Cache Hit Ratio for all of the instances.
> Also, take a look at DBCC MEMORYSTATS to see what the relative distribution
> of the memory management of the Buffer Pool looks like.
> What edition are you running? How much memory? CPUs? Etc., etc., etc.
Enterprise Edition
6,656 MB of RAM allocated to SQL Server.
IBM X335 - Twin Xeon 2.8 GHz with hyper-threading OFF.
Database is approx. 17 GB.

|||A cache size of 1.0 to 1.5 GB on a system that small reeks of ad-hoc queries
and hardly any stored procedures. You need to optimize your code so the plans
can be reused.
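Andrew's point about plan reuse can be illustrated with a minimal sketch (table and column names are hypothetical) contrasting a concatenated ad-hoc string, which compiles a new plan per distinct literal, with a parameterized call via sp_executesql, which lets one cached plan be reused across parameter values:

```sql
-- Ad-hoc concatenation defeats reuse: a new plan per distinct @id value.
-- EXEC('SELECT OrderID FROM dbo.Orders WHERE CustomerID = ' + @idstr)

-- Parameterized form: one plan, reused for every @id.
DECLARE @id INT
SET @id = 42
EXEC sp_executesql
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustID',
    N'@CustID INT',
    @CustID = @id
```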
Andrew J. Kelly SQL MVP
|||This may be so, but this alone does not explain why the rate of compilation
jumps to 15 times the normal rate on a random basis.
|||I'd have to agree with Andrew on this. If you have 6.5 GB allocated to SQL
Server, you MUST be running in AWE mode with the OS set to /PAE, or, you
really aren't using as much memory as you think you are.
Also, there is the Buffer Cache, which tends to make up the majority of the
BPool. If you are running in AWE, you should see that at least 50% - 80% of
the lower 3GB all dedicated to Buffer Cache, then the rest of the lower 3GB
should be allocated across the 4 other Memory Managers plus the MEM TO LEAVE
region, the bulk of those 4 dedicated to Proc Cache, which should be no more
than a few hundred MB. The only thing in the upper memory region, above
3 GB, should be mostly Data Pages.
What those metrics are telling you is that, yes, you are correct: the first
time a procedure, or ad-hoc query, is executed, it is compiled, and the
execution plan put in the Procedure Cache. Recompilations tell you that an
ad-hoc query that was cached is not appropriate for auto-parameterization, or
was explicitly recompiled, or the procedure wasn't found in the cache. This
means you have an enormous number or size of procedures and/or ad-hoc
queries, and many are being either flushed from the cache or paged out to the
swap file due to insufficient memory space to make room for new
requests. This doesn't mean that the Cache Size will change, but that the
current size is insufficient for the load you are throwing at it.
Do me a favor and execute the DBCC MEMORYSTATS, once for each scenario you
are describing now. It may be to your benefit to put this in a SQL Agent
job to append to a file once every 5 minutes or so for a few days; then you
can pick out a few around the events you are describing. Believe me, an
analysis of this output can be very useful.
Code the job for this execution:
SELECT RunTime = GETDATE()
DBCC MEMORYSTATS
On the Advanced Tab, specify an output file to push the results to and mark
it to append.
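The Agent job described above can also be created in T-SQL rather than through the UI. A sketch, assuming the msdb job procedures (the job name and output path are placeholders; @flags = 2 appends step output to the file; note that, as corrected later in the thread, the command is DBCC MEMORYSTATUS):

```sql
USE msdb
-- Create the job and a step that dumps memory stats to an appended file.
EXEC sp_add_job @job_name = N'Collect MEMORYSTATUS'
EXEC sp_add_jobstep @job_name = N'Collect MEMORYSTATUS',
     @step_name = N'Dump memory stats',
     @subsystem = N'TSQL',
     @command = N'SELECT RunTime = GETDATE() DBCC MEMORYSTATUS',
     @output_file_name = N'C:\Temp\memorystatus.txt',
     @flags = 2  -- append to output file
-- Run every 5 minutes.
EXEC sp_add_jobschedule @job_name = N'Collect MEMORYSTATUS',
     @name = N'Every 5 minutes',
     @freq_type = 4, @freq_interval = 1,
     @freq_subday_type = 4, @freq_subday_interval = 5
EXEC sp_add_jobserver @job_name = N'Collect MEMORYSTATUS'
```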
Also, the Batch Requests per second is a perfmon metric: SQL Server:Server
BatchRequest/sec. I believe the 6,000 to 9,000 were the jobs or code you
are executing. This metric measures the T-SQL statement batches that are
sent to the server. It tells you how busy you are. The highest recorded
TPC-C-type benchmarks are in the range of 1 - 3 million TPS. However,
something more along the lines of 1 - 4 million Batch Requests per HOUR
(300 - 1,000 Batch Requests per second) is more in line with a reasonably
busy server. You have about as much memory as some of our systems that run
in this range, but only a 2-way with HTT turned off (why did you do that?),
whereas ours are 4-way Dells with HTT turned on.
Your database is 17 GB, which is not tiny but not huge either. The machines
I am speaking of above run more than 100 concurrent databases with an
aggregate space consumption of about 250 GB, all flavors: OLTP, DSS, and
OLAP, which is a trick to manage on a single fail-over cluster, I assure
you.
We will be waiting to see the output above.
Best of luck.
Sincerely,
Anthony Thomas
|||I'm sorry, that perfmon counter is SQL Server:SQL Statistics, Batch Requests / sec.
Sincerely,
Anthony Thomas
|||Actually it does. Keeping in mind all the things Anthony stated in his post,
you need to consider what happens when SQL Server needs more memory for
something. Since you have such a large percentage of the memory taken up by
the proc cache, there will be times when the engine needs to clear out some
cache to make room for something else, probably data. I suspect that even
plans that are being reused are getting cleared and need to be compiled
again. I bet that if you look at your Page Life Expectancy counter in perfmon,
you will see large dips to almost 0 when this happens. This is when the
cache is getting cleared and basically has to start over again. The
Page Life Expectancy counter would ideally be 1000 or more on a properly tuned
system. I am willing to bet yours hovers around 0 more often than not. You
can look for some magic switch to fix your problems or you can attack the
root of it. You will never get peak performance with adhoc queries that do
not reuse plans and recompile on a regular basis. Once you attack the most
used or heaviest offenders you will see a dramatic impact in overall
performance. Fixing one query that gets called 100K times a day and gets
compiled each time it runs can make a huge impact on performance. Fix
several of those and you are a hero.
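The Page Life Expectancy dips Andrew describes can be spot-checked from T-SQL as well. A sketch, again assuming the counter name exposed by SQL Server 2000's Buffer Manager object in sysperfinfo:

```sql
-- Page Life Expectancy: seconds a page is expected to stay in the buffer
-- pool without being referenced. Dips toward 0 indicate cache flushes.
SELECT cntr_value AS page_life_expectancy_sec
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy'
```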
--
Andrew J. Kelly SQL MVP
|||I'm sorry, but instead of DBCC MEMORYSTATS, it is DBCC MEMORYSTATUS.
Sincerely,
Anthony Thomas
