
Tuning the ThreadPool

I've spent a lot of time recently looking at the performance of a number of web services. We found that they responded poorly under burst load: for example, twelve simultaneous requests resulted in very poor response times.

So, we set about recreating the problem under a load test using Visual Studio 2005 Tester Edition's load testing capabilities.

Let me quickly explain how the system fits together. It's made up of three web services and one web application. The web app calls Service 1, which calls Services 3 and 4 asynchronously. To ensure we got consistent results, we stubbed out Services 3 and 4 with a StubService that just sleeps for 10 seconds.

Logical architecture of the applications

At present, all four applications reside on the same machine (though they may be separated at some point). We'd therefore expect, all being well, that the response time of the web app should be just over 10 seconds: the duration of the two async service calls running in parallel, plus a little extra for the plumbing.

Here is the result of one of those (many) tests. This test hits the web app with a constant load of 12 concurrent users.

First Load test, 12 constant users showing a huge peak in response time at the start

The X axis is the elapsed test time and the Y axis varies depending on the colour of the line: the red line shows user load (constant at 12), the green line shows requests per second (x10) and the blue line shows the response time in seconds.

With some investigation (and a big helping of support from Microsoft) it turns out that the initial spike in response time is caused by the ThreadPool.

Let me explain.

"The advantage of using a Thread Pool over creating a new thread for each task, is that the thread creation and destruction overhead is negated, which may result in better performance and better system stability." (From wikipedia's article on the Thread pool pattern).

So thread pools are good. The .NET CLR's ThreadPool is used by ASP.NET to process incoming requests from IIS and is also used on the client side of web services by web service proxies (generated by Adding a Web Reference).

So you can imagine that the ThreadPool is used a lot by the application I outlined above. But how does that explain the performance issue?

The .NET ThreadPool is very actively managed by the runtime. If it doesn't have much work to do, threads are allowed to die so they don't eat up system resources. Put the ThreadPool under pressure and it will introduce new threads until it reaches the maximum value allowed (which is 25 per processor by default).

Note: a maximum limit on pool size is a good thing, as it restricts the number of threads the processor has to switch between and stops it spending all of its time context-switching rather than doing any actual work. Many developers are already aware of this and have increased their ThreadPool sizes using the settings described in Microsoft's Improving Web Service Performance article (look under the Threading section).

However, we weren't exhausting the ThreadPool with too much work, because the application recovered under the same sustained load (and we could monitor the free threads in the pool, and there were plenty). Nope: our problem is with burst load.
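
As an aside, if you want to do that kind of monitoring yourself, the ThreadPool class will tell you how big the pool is allowed to grow and how much headroom it currently has. Here's a minimal sketch (the class and variable names are just mine):

using System;
using System.Threading;

public class PoolMonitor
{
    public static void Main()
    {
        int maxWorker, maxIo;
        int freeWorker, freeIo;

        // The maximum number of threads the pool may grow to
        // (25 worker threads per processor by default).
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);

        // The headroom left: the maximum minus the threads currently busy.
        ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);

        Console.WriteLine("Worker threads: {0} available of {1}",
            freeWorker, maxWorker);
        Console.WriteLine("I/O threads: {0} available of {1}",
            freeIo, maxIo);
    }
}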

Consider: just before our load tests, the applications are idle, which means the threads in the ThreadPool will have died off. Then we hit it with twelve users at once and it suddenly needs A LOT OF THREADS.

And now the bombshell: the ThreadPool manager will only create a new thread every 0.5 seconds, no matter how extreme the load.

It's pretty easy to demonstrate this with a simple console application using Visual Notepad.

using System;
using System.Threading;

public class Test
{
    public static void Main()
    {
        // let's queue up a lot of work items
        for (int i=0; i < 25; i++)
        {
            ThreadPool.QueueUserWorkItem(
                new WaitCallback(WorkThread), DateTime.Now);
        }

        Console.WriteLine("All items queued, watch the results come in or hit enter to exit");
        Console.ReadLine();
    }
    
    public static void WorkThread(object queuedAt)
    {
        // Sleep for 10 seconds
        Thread.Sleep(10000);
        
        // Display how long it took for us to get here since we were
        // queued on the ThreadPool.
        Console.WriteLine("Time since I was queued: {0}",
         DateTime.Now - (DateTime)queuedAt);
        
        // Now we've finished, queue another item up to simulate the
        // round-robin effect of a load test.
        ThreadPool.QueueUserWorkItem(
            new WaitCallback(WorkThread), DateTime.Now);
    }
}

Output of the sample console application

Notice how the first item completes almost bang on 10 seconds after it was queued, but each subsequent one takes 0.5 seconds longer, because it had to wait its turn for the ThreadPool to create another thread. Eventually the timings calm down, once the ThreadPool contains enough threads to deal with the workload.
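
Incidentally, if you're running this on .NET 2.0 you can watch the staircase disappear by asking the pool for a bigger minimum before queuing the work, using ThreadPool.SetMinThreads. A minimal sketch (the value 25 simply matches the number of items the demo queues):

// Add this at the top of Main(), before the loop that queues the work.
// Below the minimum, the pool creates threads on demand without the
// 0.5 second delay; SetMinThreads returns false if the values are
// rejected (for example, if they exceed the current maximums).
if (!ThreadPool.SetMinThreads(25, 25))
{
    Console.WriteLine("Couldn't raise the ThreadPool minimums");
}

With that in place, every work item should report a time of almost exactly 10 seconds.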

If we approximate that our application needs seven threads per request (one for the incoming request to the web application, plus two for each of the three web service calls: one on the client side and one on the service side), then twelve simultaneous requests need something like 84 threads. With a sleepy pool only adding one new thread every 0.5 seconds, you can easily see where the delay comes from when 12 requests arrive at once.

Fortunately, the problem is as easy to fix as it is alarming. The kind people at Microsoft gave us a minWorkerThreads setting that we can add to the machine.config. Please note that this setting is only available in .NET 1.0 SP3 and above (don't worry, that includes 1.1 and 2.0).

So we changed that to the recommended setting of 50, and here are the results of our next load test:

Second Load test, 12 constant users showing a flat response time

Much better!

The settings

If you're wondering where I got the number 50 from, our Microsoft consultant found this knowledge base article: http://support.microsoft.com/?id=821268.
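
For reference, the relevant fragments of machine.config end up looking roughly like the sketch below, using the values suggested in that article. The article also explains which of the values should be multiplied by the number of processors, so treat this as a starting point rather than something to paste in blindly.

<!-- inside <system.web> -->
<processModel
    maxWorkerThreads="100"
    maxIoThreads="100"
    minWorkerThreads="50" />
<httpRuntime
    minFreeThreads="88"
    minLocalRequestFreeThreads="76" />

<!-- inside <system.net> -->
<connectionManagement>
    <add address="*" maxconnection="12" />
</connectionManagement>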

I now intend to use all these settings on all our staging and production servers.

At first I asked myself how I could trust that these recommended settings would be right for our system. "How do I know this is right for our circumstances?" The truth is, we don't. But we don't have time to play with all these settings and test every combination; there are just too many possibilities and too little time.

However, these settings have worked great for everything we've asked of our systems to date. If that changes we'll revisit them; until then, that's tuned enough.

In summary, then: a sleepy, unconfigured ThreadPool can devastate the performance of your app if it's suddenly met with a burst of activity. So make sure it's configured.

Tags: ASP.NET

 
Posted by Josh Twist
1:39 PM, 22 Mar 2006



Posted by Pandurang @ 22 Mar 2006 4:07 PM
This is also why most load testing tools have a "warmup time" before actually profiling the results.

Good work on the article though.

Posted by Josh @ 23 Mar 2006 4:15 AM
I should have mentioned that we made sure the application had "warmed up" beforehand, so that compilation or the starting of a new AppDomain wouldn't influence our results.

Glad you liked it.

Josh

Posted by Ahmad @ 09 May 2006 11:45 AM
Josh, great article, saved me a lot of testing time. Thank you.

Posted by roni @ 28 May 2006 11:10 AM
great example, is there a place to download your example?

Posted by Ravee @ 23 Apr 2007 3:11 AM
Hi Josh,

Would the same be applicable to the unmanaged C++ version of the thread pool [using QueueUserWorkItem()]?

We are trying to use the unmanaged version of the thread pool, but can't find much info about it.
For example:
- Would Windows adjust the thread pool size dynamically, considering the number of threads being used for other processing in the current process (not created by the pool)?
- Can the minimum number of threads be set to a reasonable number?

Thanks in advance,
Ravee

Posted by vineeth @ 06 Jan 2009 7:42 AM
I am not able to figure out how long the threads will stay idle before being killed off. Or can I track the number of suspended threads in the ThreadPool? Any info on this is very helpful. Thanks
