Category Archives: Troubleshooting

SQLDIY: Tracking Wait Stats

One of the most important methodologies to date for troubleshooting problems with the SQL Server engine is examining wait statistics. As with most of the DMVs provided in SQL Server 2005/2008, sys.dm_os_wait_stats is a cumulative view. To add to my series SQLDIY: Manage and Monitor SQL Server Yourself, here is my stored procedure for capturing deltas on an ongoing basis. It is modeled the same way I capture sys.dm_io_virtual_file_stats: the procedure captures both the delta and the raw cumulative data at a point in time, which lets you do your own roll-ups and look at the numbers in different ways. Another reason I sample this way is so I can see what was happening before a problem started, not just after the customer began reporting it. Starting with the slide deck from Joe Sack (blog|twitter) on Performance Tuning With Wait Statistics as a foundation and working through the links provided, you will be able to get up to speed quickly.
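The full script is linked below; as a rough sketch of the idea only (the snapshot table and names here are made up for illustration, not the schema the script actually uses), a simple snapshot of sys.dm_os_wait_stats that you can diff between samples looks like this:

-- Minimal illustration only; the linked GatherWaitStats script captures more detail.
CREATE TABLE dbo.WaitStatsSnapshot
  (
     SnapshotTime      DATETIME     NOT NULL,
     WaitType          NVARCHAR(60) NOT NULL,
     WaitingTasksCount BIGINT       NOT NULL,
     WaitTimeMs        BIGINT       NOT NULL,
     SignalWaitTimeMs  BIGINT       NOT NULL
  )
GO
-- Take a sample; schedule this and subtract consecutive samples to get the deltas.
INSERT INTO dbo.WaitStatsSnapshot
SELECT Getdate(),
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats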

Link to the script Gather Server Wait Stats

I’m hosting all my scripts from this series on GitHub as I do with all my open source projects.

As always, if you find any bugs please let me know and I will correct them!

Your Homework:
SQL Server 2005 Waits and Queues Tom Davidson
SQL Server Waits and Queues Robert Pearl (blog|twitter)
SQL Server 2008 DR, Testing, Waits and Queues Jose Barreto (blog|twitter)
Drum Roll, Please…The Debut of The SQL DMV All-Stars Dream Team! Jimmy May (blog|twitter)

Specific examples on using wait stats:
Diagnosing Transaction Log Performance Issues and Limits of the Log Manager Mike Ruthruff (blog|twitter)
How do you measure CPU pressure? Tom Davidson

The Mystery Of The Slow Last Row

Things Go South

Recently I was troubleshooting a piece of software that archives data out of a very active import table. It is a pretty simple app. It queries the table for any IDs that have data older than X days, then queries the table again to pull the detail data, generates a CSV file and a table schema script, and compresses them for storage. Well, it was running slow. Much slower than expected, in fact. It has to process around 5,000 IDs and it was taking 50 to 240 seconds per ID. That doesn’t sound slow, but it adds up in a hurry. I started digging into the code looking at the normal culprits. Network slow? No. Disk IO a problem? No. It must be the compression engine! It takes the CPU to 100% and leaves it there. That metric by itself isn’t a bad thing, but I just wasn’t seeing what the problem was. I plugged in other compression engines and tweaked the settings. Oddly enough, every engine performed about the same! I knew there was something I was missing. I stripped things away until I was left with one thing: the query that pulls the detail data from SQL Server.

The Big Boy(TM)

The table in question is big. Thanks to the archive tool slowing down, it has ballooned up to 240GB in a single table that shouldn’t be more than 10GB or 20GB at any one time. It gets most of its heft not from the row count but from the width of the table. I took my sample slow query and started running permutations on it. In the grand scheme of things it wasn’t returning a large row count, around 183,000 or so, though the data size was fairly large at 650MB. Still, nothing that would choke the network or our system by any means.

Turn The Knobs

I thought it might be that the application was choking on the data size. So, I stepped out of the app and ran the query in SSMS. It ran in the same time the program did as far as query execution. I put a TOP clause on it, cutting the number of rows in half, thinking I would get the data back in half the time. I was way off! Instead of the 40 seconds it was now taking around 5 seconds. I pulled the TOP clause off again and watched the data come back. I noticed it returned data pretty quickly. As a matter of fact, it returned almost all the rows in just a few seconds. It would then spend the next 30 seconds or so waiting for the last few rows. At this point I thought something was wrong with my workstation. I asked a co-worker to run the same two queries, one with the TOP and one without. He got the exact same results I did! I started to panic a little bit. What if something was wrong with the server, or there was corruption in the database? I decided to play with the TOP clause, getting the count to within a couple of records of the point where things would slow down dramatically. I looked at the plan and it was crazy simple: nothing to see here but a scan and return.
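The two queries looked roughly like this (the table name is a stand-in; the IDFK predicate and the timings come from the story above):

-- Full result set: ~183,000 rows, roughly 40 seconds
SELECT *
FROM   dbo.ImportDetail
WHERE  IDFK = 69468

-- Same query with TOP cutting the rows roughly in half: about 5 seconds
SELECT TOP (91000) *
FROM   dbo.ImportDetail
WHERE  IDFK = 69468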

 Original Plan

Plan with TOP operator

Digging Deep, Pinging my Tweeps

If you aren’t on twitter you are missing out on a valuable resource. There are members of the SQL Server community hanging out watching #sqlhelp who might just be qualified to offer an opinion on your problem. MVPs, MCMs and beyond! Sometimes they are just really smart people like Paul White (blog|twitter). He may not have any of the magic three letters in his title, but he is still extremely capable. If you aren’t reading his blog you need to start now. Anyway, I tweeted my dilemma and several people responded. I had already tried everything that people were suggesting, so Paul offered to take a look at the execution plans and see if he could figure it out. I was pretty sure it had something to do with the data layout on disk and the fact that I only had a single index, the clustered key, on the table, which I wasn’t searching on. What I expected was a full scan of the table every time. When I added the TOP clause, SQL Server was smart enough to say “stop scanning when X records are returned.” So, when I returned all 182,853 records it had no choice but to scan the entire table.

I think Paul explains it pretty well:

It seems that the first 182,852 matches on (IDFK = 69468) appear relatively early in the clustered index (scanning in clustered key order).

The 182,853rd record appears to be much later in the clustered index – perhaps quite close to the end of the table, so SQL Server scans a large number of pages after finding the 182,852nd match before it encounters the next one.  If you select the clustered index key plus IDFK from the table, and number the rows using ROW_NUMBER() OVER (ORDER BY (SELECT 0)), you should find a big gap in clustered key values between records 182,852 and 182,853.

Note – Which I did find – Wes

When you request the TOP (182852) records, the scan stops as soon as that many records are found – so the scan does not continue across the big gap in clustered index key values to find the 182,853rd value.

Note – I also played with SET ROWCOUNT and had the exact same results. – Wes

Side note.  The query does not request the results in clustered key order, but the storage engine has no other choice as the query is written.  If you add WITH (TABLOCK), WITH (NOLOCK), or otherwise modify the table so that no data changes are possible while the scan is running (perhaps by making the database read-only, or placing the table on a read-only file group) – the storage engine will be able to scan from the IAM pages, which will result in rows coming back in the order pages were allocated to the table.  Depending on the distribution of records with (IDFK = 69468), you might get better, or worse, performance – but the big gap between record #182,852 and #182,853 will almost certainly disappear.  Of course, the storage pattern might result in other such gaps!

Thanks Paul!
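For reference, the gap check Paul describes looks something like this (the table and clustered key names are made up; IDFK and the predicate come from the story):

-- Number the matching rows in the order the scan returns them and eyeball
-- the clustered key values around rows 182,852 and 182,853 for a big jump.
SELECT Row_number() OVER (ORDER BY (SELECT 0)) AS RowNum,
       ImportDetailID,  -- the clustered key (hypothetical name)
       IDFK
FROM   dbo.ImportDetail
WHERE  IDFK = 69468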

I did test the NOLOCK and TABLOCK hints; both had similar results and sped up the query. Alas, I couldn’t use either since there are data changes taking place during the archive process. I opted for an index on the search key instead; since it automatically carries the clustered primary key, the rest of the columns come back via a bookmark lookup. Now it is faster and consistent in speed.
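A sketch of the fix, using the same hypothetical names as above:

-- Seek on the search key instead of scanning the whole clustered index.
-- The clustered key rides along in the nonclustered index, and the remaining
-- columns come back through bookmark (key) lookups.
CREATE NONCLUSTERED INDEX IX_ImportDetail_IDFK
  ON dbo.ImportDetail (IDFK)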

And that is how the mystery of the slow last row was solved!

SQLDIY: Gather Virtual File Statistics Using T-SQL #TSQL2sDay 15

It’s that time again, T-SQL Tuesday is here! This time Pat Wright (blog|twitter) is hosting and has put forth automating tasks using either T-SQL or Powershell. I LOVE automating stuff. As a production DBA in some very large shops, you can’t do your job unless you make your servers work for you. I’ve been using T-SQL and *GASP* xp_cmdshell, OSQL and file shares to gather stats and push configurations to servers for decades, long before fancy things like C# and Powershell existed. These days I use a variety of home grown tools, but doing things with just T-SQL can be just as powerful. I’m going to leverage this post to start a series on a pure T-SQL implementation of configuration management, data gathering and utility procedure deployment.

Where Is Your Management Database?

Every DBA should have two things: a utility, or management, database on every server and a central repository where all the locally collected data in the management databases is pulled back to. What surprises people when I talk about this methodology is that I don’t always advocate using a licensed copy of SQL Server. By that I mean that SQL Express works just great as a central repository server. I usually put this on a virtual machine, Hyper-V or whatever flavor of virtual environment your company supports. This allows you to do things like enable CLR and xp_cmdshell on a server that technically is non-production and keep your security risk much lower. Every server that is deployed in my shop gets a local management database. From the central repository I can push out everything I need to manage a server and keep all my servers up to date as I make improvements or bug fixes to the management code. That’s all I’m really going to say about that in this post; I just wanted to give you an idea of how deep the rabbit hole can go.

DMVs give you the information, you have to use it.

Since SQL Server 2005, Microsoft let go of the black box mentality and started providing crazy useful information via Dynamic Management Views. Virtual file statistics, though, have been around for quite a while. They got a touch-up in the DMV, but the basic information was available in SQL Server 2000 via a function call. The DMV I’m after is sys.dm_io_virtual_file_stats. It has a ton of information in it. Its main problem is that it is an aggregation over time and doesn’t really tell you what is different from yesterday to today. To get around that we have to build our own sampling routine.
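If you just want to eyeball the raw, cumulative-since-restart numbers, the DMV (and its SQL Server 2000 ancestor) can be queried directly:

-- SQL Server 2005 and later: NULL, NULL returns every file in every database.
SELECT *
FROM   sys.dm_io_virtual_file_stats(NULL, NULL)

-- The SQL Server 2000 equivalent was a function call:
-- SELECT * FROM ::fn_virtualfilestats(-1, -1)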

The VirtualFileStats Table

We create a table in our local management database to collect the daily numbers. I try to keep things simple.


CREATE TABLE dbo.VirtualFileStats
  (
     RecordID                             INT IDENTITY(1,1) NOT NULL,
     ServerName                           VARCHAR(255) NOT NULL,
     DBID                                 INT NOT NULL,
     FileID                               INT NOT NULL,
     Reads                                BIGINT NULL,
     ReadsFromStart                       BIGINT NULL,
     Writes                               BIGINT NULL,
     WritesFromStart                      BIGINT NULL,
     BytesRead                            BIGINT NULL,
     BytesReadFromStart                   BIGINT NULL,
     BytesWritten                         BIGINT NULL,
     BytesWrittenFromStart                BIGINT NULL,
     IostallInMilliseconds                BIGINT NULL,
     IostallInMillisecondsFromStart       BIGINT NULL,
     IostallReadsInMilliseconds           BIGINT NULL,
     IostallReadsInMillisecondsFromStart  BIGINT NULL,
     IostallWritesInMilliseconds          BIGINT NULL,
     IostallWritesInMillisecondsFromStart BIGINT NULL,
     RecordedDateTime                     DATETIME NULL,
     IntervalInMilliseconds               BIGINT NULL,
     FirstMeasureFromStart                BIT NULL
  )
GO
CREATE TABLE dbo.VirtualFileStatsHistory
  (
     RecordID                             INT NOT NULL,
     ServerName                           VARCHAR(255) NOT NULL,
     DBID                                 INT NOT NULL,
     FileID                               INT NOT NULL,
     Reads                                BIGINT NULL,
     ReadsFromStart                       BIGINT NULL,
     Writes                               BIGINT NULL,
     WritesFromStart                      BIGINT NULL,
     BytesRead                            BIGINT NULL,
     BytesReadFromStart                   BIGINT NULL,
     BytesWritten                         BIGINT NULL,
     BytesWrittenFromStart                BIGINT NULL,
     IostallInMilliseconds                BIGINT NULL,
     IostallInMillisecondsFromStart       BIGINT NULL,
     IostallReadsInMilliseconds           BIGINT NULL,
     IostallReadsInMillisecondsFromStart  BIGINT NULL,
     IostallWritesInMilliseconds          BIGINT NULL,
     IostallWritesInMillisecondsFromStart BIGINT NULL,
     RecordedDateTime                     DATETIME NULL,
     IntervalInMilliseconds               BIGINT NULL,
     FirstMeasureFromStart                BIT NULL
  )

This is what we need to gather, and later analyze, the data. Since we are managing our own samples, we have to know when the sampling started and which row is the base measurement. FirstMeasureFromStart flags the first base measurement that the rest of the samples will delta off of.

GatherVirtualFileStats Stored Procedure

Next we need a stored procedure to do the sampling. One thing you will notice is the procedure executes continuously with a WAITFOR DELAY built into it so you can get finer grained than the one minute limitation of the SQL Agent. Sometimes I will do one-off sampling for a short period, say 30 minutes at a 30 second interval, but most often I just let it run and set the sample rate at one minute or larger depending on how busy the system is (a usage example follows the procedure below).


IF EXISTS (SELECT
             *
           FROM
             dbo.sysobjects
           WHERE
            id = Object_id(N'[dbo].[GatherVirtualFileStats]')
            AND Objectproperty(id, N'IsProcedure') = 1)
  DROP PROCEDURE [dbo].[GatherVirtualFileStats]

GO

--------------------------------------------------------------------------------------
--  GatherVirtualFileStats
--  by: Wesley D. Brown
--  date: 02/08/2011
--  mod:  04/14/2011
--  mod:  04/17/2013

--  description:
--	This stored procedure is used to sample sys.dm_io_virtual_file_stats to track
--	performance at a database file level. This is useful for finding
--	hotspots on SAN's or under performing IO systems.

--  parameters:
--    @Duration          = '01:00:00' How long to run before exiting
--    @IntervalInSeconds = 120 Number of seconds between samples
--    @DB                = -1 DB_ID to monitor, -1 for all
--    @DBFile            = -1 File_ID of file to monitor, -1 for all
--  usage:
--  	DECLARE @RC         INT,
--			@StartTime  DATETIME,
--			@databaseID INT

--	SELECT @StartTime = Getdate(),
--		   @databaseID = Db_id()

--	EXEC @RC = Gathervirtualfilestats
--	  '00:45:30',
--	  30,
--	  10,
--	  -1

--	SELECT *
--	FROM   dbo.VirtualFileStats
--	WHERE  DBID = 10
--	ORDER  BY RecordID

--  platforms:
--  SQL Server 2005
--  SQL Server 2008
--  SQL Server 2008 R2
--  SQL Server 2012
--  tested:
--  SQL Server 2005 SP2
--  SQL Server 2012
---------------------------------------------------------------------------------------
--  *** change log		***
--	Added history table and purge on start up if there is data in the main table
--  *** end change log	***
-------------------------------------------------------------------------------------
CREATE PROC dbo.Gathervirtualfilestats
  @Duration          DATETIME = '01:00:00',
  @IntervalInSeconds INT = 120,
  @DB                INT = -1,
  @DBFile            INT = -1
AS
  SET nocount ON

  DECLARE
    @StopTime                 DATETIME,
    @LastRecordedDateTime     DATETIME,
    @CurrentDateTime          DATETIME,
    @ErrorNumber              INT,
    @NumberOfRows             INT,
    @ErrorMessageText         NVARCHAR(4000),
    @CurrentServerName        VARCHAR(255),
    @DifferenceInMilliSeconds BIGINT,
    @WaitForDelay             DATETIME,
    @SQLVersion               VARCHAR(50)

  -- Grab just the engine version number (e.g. 11.0.2100.60) so the SQL Server 2012
  -- parameter handling below can key off of it.
  SELECT @SQLVersion = Cast(Serverproperty('ProductVersion') AS VARCHAR(50))

	if (charindex('11.',@SQLVersion,0) > 0)
	BEGIN
		IF (@DB = -1)
		BEGIN
			set @DB = NULL
		END
		IF (@DBFile = -1)
		BEGIN
			set @DBFile = NULL
		END
	END

  IF EXISTS (SELECT
               1
             FROM
               dbo.VirtualFileStats)
    BEGIN
        IF EXISTS (SELECT
                     *
                   FROM
                     dbo.sysobjects
                   WHERE
                    id = Object_id(N'[dbo].[VirtualFileStats]')
                    AND Objectproperty(id, N'IsTable') = 1)
          BEGIN
              INSERT INTO dbo.VirtualFileStatsHistory
              SELECT
                *
              FROM
                VirtualFileStats;

              TRUNCATE TABLE dbo.VirtualFileStats;
          END
    END

  SELECT
    @CurrentServerName = Cast(Serverproperty('servername') AS VARCHAR(255))

  SET @DifferenceInMilliSeconds = Datediff(ms, CONVERT(DATETIME, '00:00:00', 8), @Duration)

  SELECT
    @StopTime = Dateadd(ms, @DifferenceInMilliSeconds, Getdate())

  WHILE Getdate() <= @StopTime
    BEGIN
        SELECT
          @LastRecordedDateTime = @CurrentDateTime

        SELECT
          @CurrentDateTime = Getdate()

        INSERT INTO dbo.VirtualFileStats
                    (ServerName,
                     DBID,
                     FileID,
                     Reads,
                     ReadsFromStart,
                     Writes,
                     WritesFromStart,
                     BytesRead,
                     BytesReadFromStart,
                     BytesWritten,
                     BytesWrittenFromStart,
                     IostallInMilliseconds,
                     IostallInMillisecondsFromStart,
                     IostallReadsInMilliseconds,
                     IostallReadsInMillisecondsFromStart,
                     IostallWritesInMilliseconds,
                     IostallWritesInMillisecondsFromStart,
                     RecordedDateTime,
                     IntervalinMilliseconds,
                     FirstMeasureFromStart)
        SELECT
          @CurrentServerName,
          vfs.database_id,
          vfs.[file_id],
          vfs.num_of_reads - dbaf.ReadsFromStart                            AS Reads,
          vfs.num_of_reads                                                  AS ReadsFromStart,
          vfs.num_of_writes - dbaf.WritesFromStart                          AS Writes,
          vfs.num_of_writes                                                 AS WritesFromStart,
          vfs.num_of_bytes_read - dbaf.BytesReadFromStart                   AS BytesRead,
          vfs.num_of_bytes_read                                             AS BytesReadFromStart,
          vfs.num_of_bytes_written - dbaf.BytesWrittenFromStart             AS BytesWritten,
          vfs.num_of_bytes_written                                          AS BytesWrittenFromStart,
          vfs.io_stall - dbaf.IostallInMillisecondsFromStart                AS IostallInMilliseconds,
          vfs.io_stall                                                      AS IostallInMillisecondsFromStart,
          vfs.io_stall_read_ms - dbaf.IostallReadsInMillisecondsFromStart   AS IostallReadsInMilliseconds,
          vfs.io_stall_read_ms                                              AS IostallReadsInMillisecondsFromStart,
          vfs.io_stall_write_ms - dbaf.IostallWritesInMillisecondsFromStart AS IostallWritesInMilliseconds,
          vfs.io_stall_write_ms                                             AS IostallWritesInMillisecondsFromStart,
          @CurrentDateTime,
          CASE
            WHEN @LastRecordedDateTime IS NULL THEN NULL
            ELSE Datediff(ms, dbaf.RecordedDateTime, @CurrentDateTime)
          END                                                               AS IntervalInMilliseconds,
          CASE
            WHEN @LastRecordedDateTime IS NULL THEN 1
            ELSE 0
          END                                                               AS FirstMeasureFromStart
        FROM
          sys.Dm_io_virtual_file_stats(@DB, @DBFile) vfs
        LEFT OUTER JOIN VirtualFileStats dbaf
          ON vfs.database_id = dbaf.dbid
             AND vfs.[file_id] = dbaf.fileid
        WHERE
          ( @LastRecordedDateTime IS NULL
             OR dbaf.RecordedDateTime = @LastRecordedDateTime )

        SELECT
          @ErrorNumber = @@ERROR,
          @NumberOfRows = @@ROWCOUNT

        IF @ErrorNumber != 0
          BEGIN
              SET @ErrorMessageText = 'Error ' + CONVERT(VARCHAR(10), @ErrorNumber) + ' failed to insert file stats data!'

              RAISERROR (@ErrorMessageText,
                         16,
                         1) WITH LOG

              RETURN @ErrorNumber
          END

        -- WAITFOR DELAY needs a time value rather than a raw number of seconds,
        -- so convert the sample interval before waiting.
        SET @WaitForDelay = Dateadd(ss, @IntervalInSeconds, 0)

        WAITFOR DELAY @WaitForDelay
    END
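Kicking off the one-off 30 minute sample at a 30 second interval mentioned above looks like this:

-- Sample every database (-1) and every file (-1) for 30 minutes, every 30 seconds.
EXEC dbo.Gathervirtualfilestats
  @Duration          = '00:30:00',
  @IntervalInSeconds = 30,
  @DB                = -1,
  @DBFile            = -1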

I Have Data, Now What?

This is where the fun begins! If you just query the table the data doesn’t make much sense.

[Screenshot: raw rows from dbo.VirtualFileStats]

We need to do some simple math and get the data into a metric that is meaningful.


SELECT TOP 10 Db_name(dbid)                                          AS 'databasename',
              File_name(fileid)                                      AS 'filename',
              Reads / ( IntervalInMilliSeconds / 1000 )              AS 'readspersecond',
              Writes / ( IntervalInMilliSeconds / 1000 )             AS 'writespersecond',
              ( Reads + Writes ) / ( IntervalInMilliSeconds / 1000 ) AS 'iopersecond',
              CASE
                WHEN ( Reads / ( IntervalInMilliSeconds / 1000 ) ) > 0
                     AND IostallReadsInMilliseconds > 0 THEN IostallReadsInMilliseconds / Reads
                ELSE 0
              END                                                    AS 'iolatencyreads',
              CASE
                WHEN ( Writes / ( IntervalInMilliSeconds / 1000 ) ) > 0
                     AND IostallWritesInMilliseconds > 0 THEN IostallWritesInMilliseconds / Writes
                ELSE 0
              END                                                    AS 'iolatencywrites',
              CASE
                WHEN ( ( Reads + Writes ) / ( IntervalInMilliSeconds / 1000 ) > 0
                       AND IostallInMilliseconds > 0 ) THEN IostallInMilliseconds / ( Reads + Writes )
                ELSE 0
              END                                                    AS 'iolatency',
              RecordedDateTime
FROM   management.dbo.VirtualFileStats
WHERE  DBID = 10
       AND FirstMeasureFromStart = 0
ORDER  BY RecordID

This gives us reads per second, writes per second, IO per second and IO latency results.

[Screenshot: per-second IO and latency query results]

Now we are cooking! We can now see that this database has some spikes in latency (the number of milliseconds it takes to complete a single IO request) that may warrant investigation. As a general rule of thumb, if I see IO latency above 20 milliseconds consistently I start looking deeper into the IO system to see what is wrong. A single modern hard disk is capable of around 130 random IOs a second. Another thing to consider is how many databases are on the same disk. I will pull the database files together with the volume they are on to get a true aggregate number of IOs and latencies. You may find that a single database is dominating the disks and causing other databases to slow down even when the number of IOs for those databases is small. On a SAN, being able to get down to the file level can help you locate the physical disks for that LUN, which lets your SAN administrators look at a very specific spot instead of the overall health of the SAN and actually fix your performance issues.
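A minimal sketch of that volume-level rollup, assuming the drive letter of the physical file name is a close enough stand-in for the volume on your layout:

-- Aggregate the per-file samples up to the drive they live on.
SELECT Upper(Left(mf.physical_name, 1))  AS DriveLetter,
       Sum(vfs.Reads + vfs.Writes)       AS TotalIO,
       Sum(vfs.IostallInMilliseconds)    AS TotalIostallInMilliseconds
FROM   management.dbo.VirtualFileStats vfs
       JOIN sys.master_files mf
         ON mf.database_id = vfs.DBID
            AND mf.[file_id] = vfs.FileID
WHERE  vfs.FirstMeasureFromStart = 0
GROUP  BY Upper(Left(mf.physical_name, 1))
ORDER  BY TotalIostallInMilliseconds DESC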

Lastly, I run a nightly job that performs these aggregations for me and moves them into a table that I keep long term, so I can see performance over time. This is a great way to see if you are getting near your IO capacity and whether the steps you have taken correct the issues.
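A trimmed-down version of that nightly roll-up might look like this (the summary table here is hypothetical; mine keeps a few more columns):

-- Roll the last day's samples up per file and keep them long term.
INSERT INTO management.dbo.VirtualFileStatsSummary
            (SummaryDate, DBID, FileID, TotalReads, TotalWrites, TotalIostallInMilliseconds)
SELECT Dateadd(day, Datediff(day, 0, RecordedDateTime), 0) AS SummaryDate,
       DBID,
       FileID,
       Sum(Reads),
       Sum(Writes),
       Sum(IostallInMilliseconds)
FROM   management.dbo.VirtualFileStats
WHERE  FirstMeasureFromStart = 0
       AND RecordedDateTime >= Dateadd(day, -1, Getdate())
GROUP  BY Dateadd(day, Datediff(day, 0, RecordedDateTime), 0),
          DBID,
          FileID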

So, That’s it?

Not by a long shot. Measuring IO latencies is just one piece of a much larger puzzle for troubleshooting performance problems. It is a valuable one though, and keeping this information over time is just priceless. Now that you have the tools to gather virtual file statistics on your SQL Server instance I’d love to hear how you are using the data.

Update 4/14/2011

I added a history table and a check at the beginning to move all data to history before starting up. Mike Kaplan reported that multiple runs caused issues if there was data in the table from a previous run.

Update 4/17/2013

Added a check for the version of SQL Server. In 2012 the behavior changed so that calling the DMV with -1, -1 simply gives you no data. Previously, it returned all database files.

Idera SQL Doctor First Look

I recently saw a tweet about Idera’s new SQL Doctor tool, which is currently in beta. This differs from other tools you may think of, like Diagnostic Manager. DM and tools like it gather some of the same information but are geared for real-time alerts. This is more like the Best Practices Analyzer. It takes a look at several key points on your SQL Server instance and the OS and makes recommendations on how to improve any problems it identifies. You may ask, does this do anything I can’t do on my own? No, it doesn’t. If you are doing this in house with your own tool set you may not need this, or any other tool like it. To be honest, you aren’t the target audience for SQL Doctor. This is really designed for shops that either don’t have enough DBAs to watch everything, or don’t have a full time DBA at all. That isn’t to say it has no use even in a highly monitored and optimized shop. I picked a horribly abused development box and let SQL Doctor tell me just how bad things were.

Idera_sql_doctor1

The interface is clean and simple. First thing we have to do is pick a server and connect.

Idera_sql_doctor2

Next it starts the interview process. It isn’t extremely in depth, and it isn’t supposed to be.

Idera_sql_doctor3

I would be a little worried if I didn’t know if the server was in production or not. But, the person running SQL Doctor may not be the DBA.

Idera_sql_doctor4

I’m sure they wrestled with this one. How do you describe OLTP workload vs. OLAP workload if the person running the tool doesn’t know if the server is production or not?

Idera_sql_doctor5

That’s pretty much all the questions. You can change some things in the settings tab, like which databases to analyze.

Idera_sql_doctor6

I’d say the time estimate can be a bit optimistic. It took over an hour to run on this box. The progress bar is a tad misleading as well since it spent the bulk of that time basically at 99% complete.

Idera_sql_doctor7

Looks scary! Some of these problems are. They do rank issues by severity and push the less likely stuff to the bottom for you. It is a lot of information to look at though. 600 recommendations, wow. Lots of index issues and query issues. Time to dig in.

Idera_sql_doctor8

They do a good job of grouping issues together. Like other tools in this category, sometimes the recommendations aren’t 100%. They throw disk queue length around quite a bit but not disk latency, which is really what we are worried about here. It does pick up some pretty nifty stuff that other tools I have looked at don’t really catch. It warned me about a NIC setting being optimized for file sharing and that it could put memory pressure on the box. It also hit on some other fun stuff at the OS level for memory, like lock pages in memory, that I bet plenty of folks may not be using. I was pretty happy with the level of OS recommendations over all. I am concerned with some of the recommendations that seem to be out of date. It recommended a file per CPU core on tempdb, for example. I would recommend that they get a few more SQL Server folks to look at the recommendations and submit tweaks to them. It also will script the SQL Server changes for you or point you to Microsoft KBs to get them fixed up. Again, my worry with this kind of tool is that someone is just going to blanket-run every recommendation, and some of them, like disabling the CLR, could be detrimental if that feature is on for a reason. I know it’s hard to get everyone to read the warnings. Let’s be honest though, if they don’t know whether it is a production OLTP system, they probably don’t know if the CLR is on for a reason either.

I have to say, for a beta and for a tool of this type, I am impressed. With a little extra work I think it will be a worthy addition to any shop that is lacking in deep SQL Server expertise.

The Dangers of Sub-queries!!!!

Ok, now that I have your attention: this really should be titled “The Danger of Not Qualifying All Objects,” but that isn’t quite sensational enough to make you click through to here 🙂

Imagine, if you will: a developer expects ten records to be inserted into a table and the query has been running for almost an hour. A quick glance at sp_who2 shows that it is using a lot of IO, and I mean a lot. The first thing that happens is a command decision from the boss to kill the query. I took a quick peek and found the plan still in the plan cache while we waited for this thing to roll back. As I looked at the plan I saw something really wrong. The table receiving the inserts wasn’t getting the ten records we thought but 169 million records, every record from the table in the select clause.

I look at the query and on the surface everything looks good. When I hit the check mark it compiles without error. Looking at the plan in detail I notice a little icon that tipped me off on where to look next.

[Screenshot: execution plan operator showing a warning icon]

Oh, that’s bad. Warnings: No join predicate. Ouch. But this is a sub-query, how can I have no join predicate? Let’s take a look at an example.

IF Object_id('tempdb..#t1') IS NOT NULL
  DROP TABLE #t1
IF Object_id('tempdb..#t2') IS NOT NULL
  DROP TABLE #t2
GO
CREATE TABLE #t1 (
  t1id    INT   NOT NULL,
  t1name  VARCHAR(50)   NULL,
  t1place VARCHAR(50)   NULL)
  
ALTER TABLE #t1
 ADD   PRIMARY KEY ( t1id )
 
CREATE TABLE #t2 (
  t2id    INT   NOT NULL,
  t1name  VARCHAR(50)   NULL,
  t1place VARCHAR(50)   NULL)
  
ALTER TABLE #t2
 ADD   PRIMARY KEY ( t2id )
 
INSERT INTO #t1
SELECT 
  spid,
  loginame,
  hostname
FROM   
  MASTER.dbo.sysprocesses
WHERE  hostname <> ''

INSERT INTO #t2
SELECT 
  spid,
  loginame,
  hostname
FROM   
  MASTER.dbo.sysprocesses
WHERE  hostname <> ''

This gives us two tables with some data. We want to find all the records in #t1 that also exist in #t2 but instead of a join we will use an IN and a sub-query.

SELECT 
  t1name,
  t1place
FROM   
  #t1
WHERE  t1id IN (SELECT 
                 t1id
                FROM   
                 #t2)
 

Looks simple enough, and it will pass the compile test all day long even though t1id doesn’t exist in #t2. Since you can reference columns from the outer query inside the sub-query, this is a perfectly valid piece of T-SQL.

It gives us this plan when we look at it though.

[Screenshot: execution plan with a join but no join predicate]

And there you have it: a join without an ON clause as far as the optimizer is concerned. By not qualifying all the columns in the sub-query we opened ourselves up to this error. This isn’t the optimizer’s fault! If we re-write it as it was meant to be:

SELECT 
  t1name,
  t1place
FROM   
  #t1
WHERE  t1id IN (SELECT 
                 t2id
                FROM   
                 #t2)
 

We get a plan that is more to our liking.

[Screenshot: the corrected execution plan]

You could also re-write this as a join instead of using the IN clause, which would have avoided the problem as well. It’s a scary way to learn that SQL Server will do exactly what you tell it to, even if you are wrong!
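For completeness, the join version of the corrected query would look something like this:

-- An inner join makes the relationship between the two tables explicit.
SELECT t1.t1name,
       t1.t1place
FROM   #t1 t1
       JOIN #t2 t2
         ON t2.t2id = t1.t1id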