Category Archives: Learning

Changing Directions

I See Dead Tech….

Knowing when a technology is dying is always a good skill to have. Like most of my generation, I wasn't first on the computer scene, but I lived through several of its more painful transitions. As a college student I was forced to learn antiquated technologies and languages. I had to take a semester of COBOL. I also had to take two years of assembler for the IBM 390 mainframe and another year of assembler for the x86, focused on the i386 when the Pentium was already on the market. Again and again I've been forced to invest time in dying technologies. Well, not any more!

Hard drives are dead LONG LIVE SOLID STATE!

I set the data on a delicate rinse cycle

I'm done with spinning disks. Since IBM invented them in nineteen and fifty seven, they haven't improved much over the years. They got smaller and faster, yes, but they never got sexier than the original. I mean, my mom was born in the fifties; I don't want to be associated with something that old and way uncool. Wouldn't you much rather have something at least invented in the modern age in your state of the art server?

Don’t you want the new hotness?

I mean seriously, isn't this much cooler? I'm not building any new servers or desktop systems unless they are sporting flash drives. But don't think this will last. You must stay vigilant; NAND flash won't age like a fine wine either. There will be something new in a few years and you must be willing to spend whatever it takes to deploy the "solid state killer" when it comes out.

Tell Grandpa Relational is Soooo last century

The relational model was developed by Dr. E.F. Codd while at IBM in 1970, two years before I was born. Using some fancy math called tuple calculus he proved that the relational model was better at seeking data on these new "hard drives" that IBM had laying around. That later turned into the relational algebra that is used today. Holy cow! I hated algebra AND calculus in high school, why would I want to work with that crap now?

NoSQL Is The Future!

PhD’s, all neck ties and crazy gray hair.

Internet Scale, web 2.0 has a much better haircut.

In this new fast paced world of web 2.0 and databases that have to go all the way to Internet scale, the old crusty relational databases just can't hang. Enter, NoSQL! I know that NoSQL covers a lot of different technologies, but one of the core things they do very well is scale up to millions of users, and I need to scale that high. They do this by side-stepping things like relationships, transactions and verified writes to disk. This makes them blazingly fast! Plus, I don't have to learn any SQL languages; I can stay with what I love best, JavaScript and JSON. Personally, I think MongoDB is the best of the bunch; they don't have a ton of fancy PhD's, they are getting it done in the real world! Hey, they have a Success Engineer for crying out loud!!! Plus if you are using Ruby, Python, Erlang or any other real Web 2.0 language it just works out of the box. Don't flame me about your NoSQL solution and why it is better, I just don't care. I'm gearing up to hit all the major NoSQL conferences this year and canceling all my SQL Server related stuff. So long PASS Summit, no more hanging out with people obsessed with outdated skills.

Head in the CLOUD

Racks and Racks of Spaghetti photo by: Andrew McKaskill

Do you want to manage this?

Or this?

With all that said, I probably won't be building too many more servers anyway. There is a new way of getting your data and servers without the hassle of buying hardware and securing it: THE CLOUD!

“Cloud computing is computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid where end-users consume power resources without any necessary understanding of the component devices in the grid required to provide the service.” http://en.wikipedia.org/wiki/Cloud_computing

Now that’s what I’m talking about! I just plug in my code and out comes money. I don’t need to know how it all works on the back end. I’m all about convenient, on-demand network access to a shared pool of configurable computing resources. You know, kind of like when I was at college and sent my program to a sysadmin to get a time slice on the mainframe. I don’t need to know the details just run my program. Heck, I can even have a private cloud connected to other public and private clouds to make up The Intercloud(tm). Now that is sexy!

To these new ends I will be closing this blog and starting up NoSQLServerNoIOTheCloud.com to document my new journey. I'll only be posting once a year though, on April 1st.

See you next year!

Fast File Copy With Managed Code: UBCopy update!

If you have been following my trials working with C# and files, you know it hasn't been a bed of roses. I ran into a roadblock when I was trying to build a high performance file copy tool in the CLR. I eventually found a solution. It works, but it isn't very clean. So, I did a little more work, removed a broken bit and simplified another bit to make it a little more stable. I'm not done yet. I know there are a couple of places I can clean the code up a bit more, and a way to speed things up a bit more at the cost of using more memory, but that is for another blog post.

Logging, to the rescue

The first major change was adding a logging facility to the UBCopy program via log4net. I had stayed away from adding it fearing too much overhead, but after some tweaking it seems pretty darn fast. This allowed me to do some tight timings and log debug information to a log file to track long running copies without much fuss. It also allows me to turn off all console messaging; since this is used in an automated fashion, if something fails I've got the log file to look at.
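
If you want to wire up something similar in your own tool, the setup is tiny. Here is a rough sketch of the kind of log4net bootstrap I'm describing; the class name, messages and config source are placeholders for illustration, not the exact UBCopy startup code:

using log4net;
using log4net.Config;

internal static class LoggingSetupSketch
{
    //grab a logger for this type, the same way the copy class further down does
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingSetupSketch));

    private static void Main()
    {
        //read the appender and level settings from the application's config file
        //(BasicConfigurator.Configure() also works if you just want console output)
        XmlConfigurator.Configure();

        //cache the flag once so hot code paths can skip building log strings
        //when debug logging is switched off
        var isDebugEnabled = Log.IsDebugEnabled;

        if (isDebugEnabled)
        {
            Log.Debug("Debug logging is enabled");
        }
        Log.Info("UBCopy starting");
    }
}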

Please don’t lock my file

There are a few places where the target file being written to could potentially be locked. The first is right after I create the file and set the file length: I have to close it before reopening it for unbuffered writes, and I haven't done anything about that gap yet. The second is when I am writing the last block to the file. Since you have to write in page aligned multiples, I was closing the file, reopening it in a buffered mode and flushing the last partial buffer to disk. I haven't fixed these up yet, but I am working on a solution. I haven't encountered a problem yet where the file gets locked during this process, but it is still a potential threat that must be dealt with. Even though the file stream is seekable and writeable, you can't issue a SetLength command on it. I plan on looking deeper into the FileStream code to see what they are doing when you issue a SetLength. If I can eliminate the open, close and open routine it will speed things up a bit and keep anyone from grabbing my file handle.
Here's an example of what I'm talking about; after the file size is set there is a very small window of opportunity here.

            //open output file set length to prevent growth and file fragmentation and close it.
            //We do this to prevent file fragmentation and make the write as fast as possible.
            try
            {
                _outfile = new FileStream(_outputfile, FileMode.Create, FileAccess.Write, FileShare.None, 8,
                                          FileOptions.WriteThrough);
                //set file size to minimum of one buffer to cut down on fragmentation
                _outfile.SetLength(_infilesize > CopyBufferSize ? _infilesize : CopyBufferSize);
                _outfile.Close();
                _outfile.Dispose();
            }
            catch (Exception e)
            {
                throw;
            }
            //open file for write unbuffered
            try
            {
                _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                          FileOptions.WriteThrough | FileFlagNoBuffering);

            }
            catch (Exception e)
            {
                throw;
            }

Unbuffered, mostly

Since unbuffered writes prevent you from writing data that isn't memory page aligned (usually 4096 bytes), you get an error when writing to the file. To get around this all writes are the same size, period. This really only affects the last write, which may have a partially full buffer. It will write down the full buffer, including invalid data. Luckily, an easy fix is to reset the end of file pointer back to the correct location and everything is happy again. This allows me to eliminate an additional bit of code. Here is the old code.

//open file for write buffered. We do this so we can write the tail of the file
//it is a kludge but hey, you get what you get in C#
outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
             FileOptions.WriteThrough);
//go to the right position in the file
outfile.Seek(infilesize - bytesRead1, 0);
//flush the last buffer synchronously and buffered.
outfile.Write(Buffer1, 0, bytesRead1);

And now the new, faster and better way!

//close the file handle that was using unbuffered and write through and move the EOF pointer.
Log.Debug("Close Write File Unbuffered");
_outfile.Close();
_outfile.Dispose();

try
{
    if (IsDebugEnabled)
    {
        Log.Debug("Open File Set Length");
    }
    _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                FileOptions.WriteThrough);
    _outfile.SetLength(_infilesize);
    _outfile.Close();
    _outfile.Dispose();
}
catch (Exception e)
{
    if (IsDebugEnabled)
    {
        Log.Debug("Failed to open for write set length");
        Log.Debug(e);
    }
    throw;
}

Buffered or Unbuffered

If the file was smaller than the size of the buffer, I was just copying the file buffered. This was a check and a route to a different code path for copying files. Not a huge deal, but just another place for problems to creep up over time. So, when I changed the way unbuffered multi-threaded writes were happening I was able to get rid of the buffered copy routine altogether. Now, it will initially set the file size to a minimum of a single buffer, write the data, and then reset the file length to the original size, as in the sketch below.
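
To make that concrete, here is a condensed, hypothetical sketch of the single code path with the threading and error handling stripped out. The names are made up for illustration, but the three steps mirror what the full listing further down does:

using System.IO;

internal static class SinglePathSketch
{
    //same undocumented flag UBCopy uses to get unbuffered IO
    private const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    //pre-size, write whole page-aligned buffers unbuffered, then truncate to the real size
    public static void WritePreSizedUnbuffered(string outputfile, byte[] buffer, long realsize)
    {
        //1. create the file and pre-size it to at least one buffer to cut down on fragmentation
        using (var presize = new FileStream(outputfile, FileMode.Create, FileAccess.Write,
                                            FileShare.None, 8, FileOptions.WriteThrough))
        {
            presize.SetLength(realsize > buffer.Length ? realsize : buffer.Length);
        }

        //2. reopen write through and unbuffered, and only ever write full buffers,
        //   even though the tail of the last buffer may hold stale data
        //   (in UBCopy the read thread refills the buffer before each write)
        using (var outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write,
                                            FileShare.None, 8,
                                            FileOptions.WriteThrough | FileFlagNoBuffering))
        {
            long written = 0;
            while (written < realsize)
            {
                outfile.Write(buffer, 0, buffer.Length);
                written += buffer.Length;
            }
        }

        //3. reopen buffered and set the length back to the real size,
        //   pulling the end of file pointer back over the garbage tail
        using (var truncate = new FileStream(outputfile, FileMode.Open, FileAccess.Write,
                                             FileShare.None, 8, FileOptions.WriteThrough))
        {
            truncate.SetLength(realsize);
        }
    }
}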

You want a MOVE but get a COPY

There was also a bug where, on small files, UBCopy would do a copy instead of a move in some cases. An error in the code prevented the delete of the source file after the copy to the new location, due to a lock that was being held by the program. Not the end of the world, it just wasn't working like it should 100% of the time.

So, here is the new AsyncUnbuffCopy routine. Calling it is super easy, as you can see in the example after the listing. If you want to use the code as-is you will need to download log4net and include it in your project.

//
// AsyncUnbuffCopy.cs
//
// Authors:
//  Wesley D. Brown <wes@planetarydb.com>
//
// Copyright (C) 2010 SQLServerIO (http://www.SQLServerIO.com)
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
//

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Threading;
using log4net;

namespace UBCopy
{
    internal class AsyncUnbuffCopy
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(AsyncUnbuffCopy));
        private static readonly bool IsDebugEnabled = Log.IsDebugEnabled;

        //file names
        private static string _inputfile;
        private static string _outputfile;

        //checksum holders
        private static string _infilechecksum;
        private static string _outfilechecksum;

        //show write progress
        private static bool _reportprogress;

        //cursor position
        private static int _origRow;
        private static int _origCol;

        //number of chunks to copy
        private static int _numchunks;

        //track read state and read failed state
        private static bool _readfailed;

        //syncronization object
        private static readonly object Locker1 = new object();

        //buffer size
        public static int CopyBufferSize;
        private static long _infilesize;

        //buffer read
        public static byte[] Buffer1;
        private static int _bytesRead1;

        //buffer overlap
        public static byte[] Buffer2;
        private static bool _buffer2Dirty;
        private static int _bytesRead2;

        //buffer write
        public static byte[] Buffer3;

        //total bytes read
        private static long _totalbytesread;
        private static long _totalbyteswritten;

        //filestreams
        private static FileStream _infile;
        private static FileStream _outfile;

        //secret sauce for unbuffered IO
        const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

        private static void AsyncReadFile()
        {
            //open input file
            try
            {
                _infile = new FileStream(_inputfile, FileMode.Open, FileAccess.Read, FileShare.None,
                                         CopyBufferSize, FileFlagNoBuffering);
            }
            catch (Exception e)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Failed to open for read");
                    Log.Debug(e);
                }
                throw;
            }
            //if we have data read it
            while (_totalbytesread < _infilesize)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Read _buffer2Dirty    : " + _buffer2Dirty);
                }
                _bytesRead1 = _infile.Read(Buffer1, 0, CopyBufferSize);
                Monitor.Enter(Locker1);
                try
                {
                    while (_buffer2Dirty) Monitor.Wait(Locker1);
                    Buffer.BlockCopy(Buffer1, 0, Buffer2, 0, _bytesRead1);
                    _buffer2Dirty = true;
                    _bytesRead2 = _bytesRead1;
                    _totalbytesread = _totalbytesread + _bytesRead1;
                    Monitor.PulseAll(Locker1);
                    if (IsDebugEnabled)
                    {

                        Log.Debug("Read       : " + _totalbytesread);
                    }
                }
                catch (Exception e)
                {
                    Log.Fatal("Read Failed.");
                    Log.Fatal(e);
                    _readfailed = true;
                    throw;
                }
                finally { Monitor.Exit(Locker1); }
            }
            //clean up open handle
            _infile.Close();
            _infile.Dispose();
        }

        private static void AsyncWriteFile()
        {
            //open output file set length to prevent growth and file fragmentation and close it.
            //We do this to prevent file fragmentation and make the write as fast as possible.
            try
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Open File Set Length");
                }
                _outfile = new FileStream(_outputfile, FileMode.Create, FileAccess.Write, FileShare.None, 8,
                                          FileOptions.WriteThrough);

                //set file size to minimum of one buffer to cut down on fragmentation
                _outfile.SetLength(_infilesize > CopyBufferSize ? _infilesize : CopyBufferSize);

                _outfile.Close();
                _outfile.Dispose();
            }
            catch (Exception e)
            {
                Log.Fatal("Failed to open for write set length");
                Log.Fatal(e);
                throw;
            }

            //open file for write unbuffered
            try
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Open File Write Unbuffered");
                }
                _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                          FileOptions.WriteThrough | FileFlagNoBuffering);

            }
            catch (Exception e)
            {
                Log.Fatal("Failed to open for write unbuffered");
                Log.Fatal(e);
                throw;
            }

            var pctinc = 0.0;
            var progress = pctinc;

            //progress stuff
            if (_reportprogress)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Report Progress : True");
                }
                pctinc = 100.00 / _numchunks;
            }
            if (IsDebugEnabled)
            {
                Log.Debug("While Write _totalbyteswritten          : " + _totalbyteswritten);
                Log.Debug("While Write _infilesize - CopyBufferSize: " + (_infilesize - CopyBufferSize));
            }
            while ((_totalbyteswritten < _infilesize) && !_readfailed)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Write Unbuffered _buffer2Dirty    : " + _buffer2Dirty);
                }
                lock (Locker1)
                {
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Write Unbuffered Lock");
                    }
                    while (!_buffer2Dirty) Monitor.Wait(Locker1);
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Write Unbuffered _buffer2Dirty    : " + _buffer2Dirty);
                    }
                    Buffer.BlockCopy(Buffer2, 0, Buffer3, 0, _bytesRead2);
                    _buffer2Dirty = false;
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Write Unbuffered _buffer2Dirty    : " + _buffer2Dirty);
                    }
                    _totalbyteswritten = _totalbyteswritten + CopyBufferSize;
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Written Unbuffered : " + _totalbyteswritten);
                    }
                    Monitor.PulseAll(Locker1);
                    //fancy dan in place percent update on each write.

                    if (_reportprogress && !IsDebugEnabled)
                    {
                        Console.SetCursorPosition(_origCol, _origRow);
                        if (progress < 101 - pctinc)
                        {
                            progress = progress + pctinc;
                            Console.Write("%{0}", Math.Round(progress, 0));
                        }
                    }
                }
                try
                {
                    _outfile.Write(Buffer3, 0, CopyBufferSize);
                }
                catch (Exception e)
                {
                    Log.Fatal("Write Unbuffered Failed");
                    Log.Fatal(e);
                    throw;
                }
            }

            //close the file handle that was using unbuffered and write through
            Log.Debug("Close Write File Unbuffered");
            _outfile.Close();
            _outfile.Dispose();

            try
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Open File Set Length");
                }
                _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                          FileOptions.WriteThrough);
                _outfile.SetLength(_infilesize);
                _outfile.Close();
                _outfile.Dispose();
            }
            catch (Exception e)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Failed to open for write set length");
                    Log.Debug(e);
                }
                throw;
            }
        }

        public static int AsyncCopyFileUnbuffered(string inputfile, string outputfile, bool overwrite,
                                                  bool movefile, bool checksum, int buffersize, bool reportprogress)
        {
            if (IsDebugEnabled)
            {
                Log.Debug("inputfile      : " + inputfile);
                Log.Debug("outputfile     : " + outputfile);
                Log.Debug("overwrite      : " + overwrite);
                Log.Debug("movefile       : " + movefile);
                Log.Debug("checksum       : " + checksum);
                Log.Debug("buffersize     : " + buffersize);
                Log.Debug("reportprogress : " + reportprogress);
            }
            //report write progress
            _reportprogress = reportprogress;

            //set file name globals
            _inputfile = inputfile;
            _outputfile = outputfile;

            //setup single buffer size, remember this will be x3.
            CopyBufferSize = buffersize * 1024 * 1024;

            //buffer read
            Buffer1 = new byte[CopyBufferSize];

            //buffer overlap
            Buffer2 = new byte[CopyBufferSize];

            //buffer write
            Buffer3 = new byte[CopyBufferSize];

            //clear all flags and handles
            _totalbytesread = 0;
            _totalbyteswritten = 0;
            _bytesRead1 = 0;
            _buffer2Dirty = false;

            //if the overwrite flag is set to false check to see if the file is there.
            if (File.Exists(outputfile) && !overwrite)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Destination File Exists!");
                }
                Console.WriteLine("Destination File Exists!");
                return 0;
            }

            //create the directory if it doesn't exist
            if (!Directory.Exists(outputfile))
            {
                try
                {
                    // ReSharper disable AssignNullToNotNullAttribute
                    Directory.CreateDirectory(Path.GetDirectoryName(outputfile));
                    // ReSharper restore AssignNullToNotNullAttribute
                }
                catch (Exception e)
                {
                    Log.Fatal("Create Directory Failed.");
                    Log.Fatal(e);
                    Console.WriteLine("Create Directory Failed.");
                    Console.WriteLine(e.Message);
                    throw;
                }
            }

            //get input file size for later use
            var inputFileInfo = new FileInfo(_inputfile);
            _infilesize = inputFileInfo.Length;

            //get number of buffer sized chunks used to correctly display percent complete.
            _numchunks = (int)((_infilesize / CopyBufferSize) > 0 ? (_infilesize / CopyBufferSize) : 1);

            if (IsDebugEnabled)
            {
                Log.Debug("File Copy Started");
            }
            Console.WriteLine("File Copy Started");

            //create read thread and start it.
            var readfile = new Thread(AsyncReadFile) { Name = "ReadThread", IsBackground = true };
            readfile.Start();

            if (IsDebugEnabled)
            {
                //debug show if we are an even multiple of the file size
                Log.Debug("Number of Chunks: " + _numchunks);
            }

            //create write thread and start it.
            var writefile = new Thread(AsyncWriteFile) { Name = "WriteThread", IsBackground = true };
            writefile.Start();

            if (_reportprogress)
            {
                //set fancy curor position
                _origRow = Console.CursorTop;
                _origCol = Console.CursorLeft;
            }

            //wait for threads to finish
            readfile.Join();
            writefile.Join();

            //leave a blank line for the progress indicator
            if (_reportprogress)
                Console.WriteLine();

            if (IsDebugEnabled)
            {
                Log.Debug("File Copy Done");
            }

            Console.WriteLine("File Copy Done");

            if (checksum)
            {
                if (IsDebugEnabled)
                {
                    Log.Debug("Checksum Source File Started");
                }
                Console.WriteLine("Checksum Source File Started");
                //create checksum read file thread and start it.
                var checksumreadfile = new Thread(GetMD5HashFromInputFile) { Name = "checksumreadfile", IsBackground = true };
                checksumreadfile.Start();

                if (IsDebugEnabled)
                {
                    Log.Debug("Checksum Destination File Started");
                }
                Console.WriteLine("Checksum Destination File Started");
                //create checksum write file thread and start it.
                var checksumwritefile = new Thread(GetMD5HashFromOutputFile) { Name = "checksumwritefile", IsBackground = true };
                checksumwritefile.Start();

                //hang out until the checksums are done.
                checksumreadfile.Join();
                checksumwritefile.Join();

                if (_infilechecksum.Equals(_outfilechecksum))
                {
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Checksum Verified");
                    }
                    Console.WriteLine("Checksum Verified");
                }
                else
                {
                    if (IsDebugEnabled)
                    {
                        Log.Debug("Checksum Failed");
                        Log.DebugFormat("Input File Checksum : {0}", _infilechecksum);
                        Log.DebugFormat("Output File Checksum: {0}", _outfilechecksum);
                    }
                    Console.WriteLine("Checksum Failed");
                    Console.WriteLine("Input File Checksum : {0}", _infilechecksum);
                    Console.WriteLine("Output File Checksum: {0}", _outfilechecksum);
                }
            }

            if (movefile && File.Exists(inputfile) && File.Exists(outputfile))
                try
                {
                    File.Delete(inputfile);
                }
                catch (IOException ioex)
                {
                    if (IsDebugEnabled)
                    {
                        Log.Error("File in use or locked cannot move file.");
                        Log.Error(ioex);
                    }
                    Console.WriteLine("File in use or locked");
                    Console.WriteLine(ioex.Message);
                }
                catch (Exception ex)
                {
                    if (IsDebugEnabled)
                    {
                        Log.Error("File Failed to Delete");
                        Log.Error(ex);
                    }
                    Console.WriteLine("File Failed to Delete");
                    Console.WriteLine(ex.Message);
                }
            return 1;
        }

        //hash input file
        public static void GetMD5HashFromInputFile()
        {
            var fs = new FileStream(_inputfile, FileMode.Open, FileAccess.Read, FileShare.None,
                                    CopyBufferSize);
            MD5 md5 = new MD5CryptoServiceProvider();
            byte[] retVal = md5.ComputeHash(fs);
            fs.Close();

            var sb = new StringBuilder();
            for (var i = 0; i < retVal.Length; i++)
            {
                sb.Append(retVal[i].ToString("x2"));
            }
            _infilechecksum = sb.ToString();
        }

        //hash output file
        public static void GetMD5HashFromOutputFile()
        {
            var fs = new FileStream(_outputfile, FileMode.Open, FileAccess.Read, FileShare.None,
                                    CopyBufferSize);
            MD5 md5 = new MD5CryptoServiceProvider();
            byte[] retVal = md5.ComputeHash(fs);
            fs.Close();

            var sb = new StringBuilder();
            for (var i = 0; i < retVal.Length; i++)
            {
                sb.Append(retVal[i].ToString("x2"));
            }
            _outfilechecksum = sb.ToString();
        }
    }
}
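
For reference, calling it from inside the same project looks something like this. The paths and buffer size are made-up values, but the parameters line up with the AsyncCopyFileUnbuffered signature above:

using System;
using UBCopy;

internal static class Program
{
    private static void Main()
    {
        //copy (not move) a file with a 16MB buffer, overwrite the target if it exists,
        //MD5 both sides when the copy finishes, and show the in-place progress counter
        var result = AsyncUnbuffCopy.AsyncCopyFileUnbuffered(
            @"C:\temp\bigfile.bak",              //inputfile
            @"\\backupserver\dump\bigfile.bak",  //outputfile
            true,                                //overwrite
            false,                               //movefile
            true,                                //checksum
            16,                                  //buffersize in megabytes (times three buffers)
            true);                               //reportprogress

        Console.WriteLine(result == 1 ? "Copy completed" : "Copy skipped");
    }
}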

At The End of the IO Road With C#? Pave New Road!

Not being one for letting a problem get the best of me, I took another look at the asynchronous overlapped IO problem. If you read my last post on the subject, you know I’ve done a lot of work on this already. None of the things I said last time have changed at all. If you want to do asynchronous and un-buffered IO in C# using the native file stream calls you can’t… So, I rolled my own. The kicker is, I don’t use any unmanaged code to do this. No call to VirtualAlloc() or anything else using DLL imports. Oh, and the speed is spectacular.

The Goal

My ultimate goal was to build a routine that would do un-buffered asynchronous IO. That means I don't want the OS doing any buffering or funny stuff with the IOs I issue, for reads or writes. SQL Server uses this method to harden writes to the disk, and it also performs well with excellent predictability. If you have ever used Windows to do a regular copy you will see it eating up memory to buffer both reads and writes. If you copy the same file a couple of times you will notice that the first time it runs at about the speed you expect, but the second time it may run twice as fast. This is all Windows, buffering as much data as it can and holding on to that buffer. That's great for smaller files, but if you are pushing around multi-gigabyte files it is a disaster. As the system becomes starved for memory it pages, then starts throttling back. Your 100MB/sec copy is now crawling along at 20MB/sec.

Where we left off…

I had settled on a simple routine that would allow me to do un-buffered reads from a file and write to a buffered file either on disk or across the network.

internal class UnBufferedFileCopy
{
	public static int CopyBufferSize = 8 * 1024 * 1024;
	public static byte[] Buffer = new byte[CopyBufferSize];
	const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

	public static int CopyFileUnbuffered(string inputfile, string outputfile)
	 {
		var infile = new FileStream(inputfile, FileMode.Open, FileAccess.Read
		, FileShare.None, 8, FileFlagNoBuffering | FileOptions.SequentialScan);
		var outfile = new FileStream(outputfile, FileMode.Create, FileAccess.Write
		, FileShare.None, 8, FileOptions.WriteThrough);

		int bytesRead;
		while ((bytesRead = infile.Read(Buffer, 0, CopyBufferSize)) != 0)
		{
			outfile.Write(Buffer, 0, bytesRead);
		}

		outfile.Close();
		outfile.Dispose();
		infile.Close();
		infile.Dispose();
		return 1;
	}
}

There are two problems with this routine. First off, only the read from source is truly un-buffered. C# offers the write through flag and I thought that would be enough. I fired up Process Monitor and watched the IO issued on writes, and it wasn't buffer sized requests; it was always broken up into 64k chunks. So, the read request would fetch say 16MB of data and pass that to the write request, which would then break that up into chunks. This wasn't the behavior I was going for! Doing some additional research I found that adding the no buffering flag to the write through flag gave me the results I was after, almost. You can't do fully un-buffered writes, synchronous or asynchronous. To do an un-buffered write, the buffer you build from the byte array must be page aligned in memory and all calls must return a multiple of the page size. Again, this just isn't possible in managed code. So, I investigated a horrible kludge of a solution: I do un-buffered writes until I get to the last block of data, then I close and reopen the file in a buffered mode and write the last block. It isn't pretty but it works. It also means that I can't use write through and un-buffered on a file smaller than the buffer size. Not a huge deal, but something to be aware of if you are doing a lot of small files. If you are going the small file route the first routine will probably be OK.

internal class UnBufferedFileCopy
{
	public static int CopyBufferSize = 8 * 1024 * 1024;
	public static byte[] Buffer1 = new byte[CopyBufferSize];
	const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

	public static int CopyFileUnbuffered(string inputfile, string outputfile)
	{
		var infile = new FileStream(inputfile, FileMode.Open, FileAccess.Read
			, FileShare.None, 8, FileFlagNoBuffering | FileOptions.SequentialScan);
		//get the input file size so we know where the last buffer starts
		var infilesize = new FileInfo(inputfile).Length;
		//open output file set length to prevent growth and file fragmentation and close it.
		//We have to do it this way so we can do unbuffered writes to it later
		var outfile = new FileStream(outputfile, FileMode.Create, FileAccess.Write
			, FileShare.None, 8, FileOptions.WriteThrough);
		outfile.SetLength(infilesize);
		outfile.Dispose();

		//open file for write unbuffered
		outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write
			, FileShare.None, 8, FileOptions.WriteThrough | FileFlagNoBuffering);
		long totalbyteswritten = 0;
		int bytesRead1 = 0;
		//hold back one buffer
		while (totalbyteswritten < infilesize - CopyBufferSize)
		{
			bytesRead1 = infile.Read(Buffer1, 0, CopyBufferSize);
			totalbyteswritten = totalbyteswritten + CopyBufferSize;
			outfile.Write(Buffer1, 0, bytesRead1);
		}

		//close the file handle that was using unbuffered and write through
		outfile.Dispose();

		//open file for write buffered. We do this so we can write the tail of the file
		//it is a kludge but hey, you get what you get in C#
		outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
		FileOptions.WriteThrough);

		//read the tail of the file so we have the last partial buffer to flush
		bytesRead1 = infile.Read(Buffer1, 0, CopyBufferSize);
		//go to the right position in the file
		outfile.Seek(infilesize - bytesRead1, 0);
		//flush the last buffer synchronously and buffered.
		outfile.Write(Buffer1, 0, bytesRead1);

		outfile.Dispose();
		infile.Dispose();
		return 1;
	}
}

This is as close to fully un-buffered IO on both the read and write side of things as you can get in managed code. There is a lot going on, but it is still synchronous all the way. If you look at Performance Monitor it will show you a saw tooth pattern as you read then write, since you are only ever doing one or the other. Using this to copy a file across the LAN to another server never got better than 75MB/sec throughput. Not horrible, but a long way from the 105MB/sec I get from something like FastCopy or TeraCopy. Heck, it's not even close to the theoretical 125MB/sec a gigabit connection could support. That leaves the last piece of the puzzle: going asynchronous.

Threading in C#, To Produce or Consume?

We know that using the asynchronous file IO built into C# isn't an option. That doesn't mean we can't pattern something of our own like it. I've done quite a bit of threading in C#. It isn't as difficult as C/C++, but you can still blow your foot off. It adds a whole other level of complexity to your code. This is where a little thought and design on paper and using a flow chart can help you out quite a bit. Also, it's good to research design patterns and multi-threading; a lot of smart people have tackled these problems and have developed well designed solutions.

Our particular problem is a classic producer consumer pattern, a simple one at that. We have a producer, the read thread, putting data in a buffer. We have a consumer, the write thread, that takes that data and writes it to disk. My first priority is to model this as simply as possible. I'm not worried about multiple readers or writers. I am concerned with locking and blocking. Keeping the time something has to be locked to a minimum is going to be key. That led me to a simple solution: one read thread and the buffer it reads into, one write thread and the buffer it reads from, and one intermediate buffer to pass data between them. Basically, an overlap buffer that is the same size as the read and write buffers. To give you a better visual example before showing you the code, here are a couple of flow charts.

Read File
http://www.lucidchart.com/documents/view/4cac057f-d81c-472e-9764-52c00afcbe04

Write File
http://www.lucidchart.com/documents/view/4cac0726-dd14-46a6-8d44-53710afcbe04

There are a few things you need to be aware of. There is no guarantee of order on thread execution. That is why I'm using a lock object and a semaphore flag to let me know if the buffer is actually available to be written or read from. Keep the lock scope small; the lock can be a bottleneck and basically drop you back into a synchronous mode. Watch for deadlocks. With the lock and the semaphore flag in play, if your ordering is wrong you can get into a deadlock between the two threads where they just sit and spin waiting for either the lock or the flag to clear. At this point I'm confident I don't have any race or deadlocking situations.

Here is a simplified sample. I'm serious, this is as small a sample as I could code up.

using System;
using System.IO;
using System.Threading;

internal class AsyncUnbuffCopy
{
	//file names
	private static string _inputfile;
	private static string _outputfile;
	//syncronization object
	private static readonly object Locker1 = new object();
	//buffer size
	public static int CopyBufferSize;
	private static long _infilesize;
	//buffer read
	public static byte[] Buffer1;
	private static int _bytesRead1;
	//buffer overlap
	public static byte[] Buffer2;
	private static bool _buffer2Dirty;
	private static int _bytesRead2;
	//buffer write
	public static byte[] Buffer3;
	//total bytes read
	private static long _totalbytesread;
	private static long _totalbyteswritten;
	//filestreams
	private static FileStream _infile;
	private static FileStream _outfile;
	//secret sauce for unbuffered IO
	const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

	private static void AsyncReadFile()
	{
		//open input file
		_infile = new FileStream(_inputfile, FileMode.Open, FileAccess.Read, FileShare.None, CopyBufferSize,
		FileFlagNoBuffering);
		//if we have data read it
		while (_totalbytesread < _infilesize)
		{
			_bytesRead1 = _infile.Read(Buffer1, 0, CopyBufferSize);
			lock (Locker1)
			{
				while (_buffer2Dirty)Monitor.Wait(Locker1);
				Buffer.BlockCopy(Buffer1, 0, Buffer2, 0, _bytesRead1);
				_buffer2Dirty = true;
				Monitor.PulseAll(Locker1);
				_bytesRead2 = _bytesRead1;
				_totalbytesread = _totalbytesread + _bytesRead1;
			}
		}
		//clean up open handle
		_infile.Close();
		_infile.Dispose();
	}

	private static void AsyncWriteFile()
	{
		//open output file set length to prevent growth and file fragmentation and close it.
		//We have to do it this way so we can do unbuffered writes to it later
		_outfile = new FileStream(_outputfile, FileMode.Create, FileAccess.Write, FileShare.None, 8,
		FileOptions.WriteThrough);
		_outfile.SetLength(_infilesize);
		_outfile.Close();
		_outfile.Dispose();
		//open file for write unbuffered
		_outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
		FileOptions.WriteThrough | FileFlagNoBuffering);
		while (_totalbyteswritten < _infilesize - CopyBufferSize)
		{
			lock (Locker1)
			{
				while (!_buffer2Dirty) Monitor.Wait(Locker1);

				Buffer.BlockCopy(Buffer2, 0, Buffer3, 0, _bytesRead2);
				_buffer2Dirty = false;
				Monitor.PulseAll(Locker1);
				_totalbyteswritten = _totalbyteswritten + CopyBufferSize;
			}
			_outfile.Write(Buffer3, 0, CopyBufferSize);
		}
		//close the file handle that was using unbuffered and write through
		_outfile.Close();
		_outfile.Dispose();
		lock (Locker1)
		{
			while (!_buffer2Dirty) Monitor.Wait(Locker1);
			//open file for write buffered. We do this so we can write the tail of the file
			//it is a kludge but hey, you get what you get in C#
			_outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
			FileOptions.WriteThrough);
			//this should always be true but I haven't run all the edge cases yet
			if (_buffer2Dirty)
			{
				//go to the right position in the file
				_outfile.Seek(_infilesize - _bytesRead2, 0);
				//flush the last buffer synchronously and buffered.
				_outfile.Write(Buffer2, 0, _bytesRead2);
			}
		}
		//close the file handle that was using unbuffered and write through
		_outfile.Close();
		_outfile.Dispose();
	}
	
	public static int AsyncCopyFileUnbuffered(string inputfile, string outputfile, int buffersize)
	{
		//set file name globals
		_inputfile = inputfile;
		_outputfile = outputfile;
		//setup single buffer size, remember this will be x3.
		CopyBufferSize = buffersize * 1024 * 1024;
		//buffer read
		Buffer1 = new byte[CopyBufferSize];
		//buffer overlap
		Buffer2 = new byte[CopyBufferSize];
		//buffer write
		Buffer3 = new byte[CopyBufferSize];
		//get input file size for later use
		var f = new FileInfo(_inputfile);
		long s1 = f.Length;
		_infilesize = s1;

		//create read thread and start it.
		var readfile = new Thread(AsyncReadFile) { Name = "ReadThread", IsBackground = true };
		readfile.Start();

		//create write thread and start it.
		var writefile = new Thread(AsyncWriteFile) { Name = "WriteThread", IsBackground = true };
		writefile.Start();

		//wait for threads to finish
		readfile.Join();
		writefile.Join();
		Console.WriteLine();
		return 1;
	}
}

As you can see, we have gotten progressively more complex with each pass until we have finally arrived at my goal. With zero unmanaged code and only one undocumented flag, I've built a C# program that actually does fast IO like the low level big boys. To handle the small file issue I just drop back to my old copy routine to move those files along. You can see a working sample at http://github.com/SQLServerIO/UBCopy. It also has MD5 verification built in.

So, how well does it work?

FastCopy 1.99r4
TotalRead = 1493.6 MB
TotalWrite = 1493.6 MB
TotalFiles = 1 (0)
TotalTime= 15.25 sec
TransRate= 97.94 MB/s
FileRate = 0.07 files/s

UBCopy 1.5.2.1851 — Managed Code
File Copy Started
%100
File Copy Done
File Size MB : 1493.62
Elapsed Seconds : 15.26
Megabytes/sec : 102.63
Done.

I think it will do just fine.

The Dangers of Sub-queries!!!!

Ok, now that I have your attention, this really should be titled "the danger of not qualifying all objects," but that isn't quite sensational enough to make you click through to here 🙂

Imagine if you will, a developer expects ten records to be inserted into a table and the query has been running for almost an hour. A quick glance at sp_who2 shows that it is using a lot of IO, I mean a lot. The first thing that happens is a command decision from the boss to kill the query. I took a quick peek and found the plan still in the plan cache while we waited for this thing to roll back. As I looked at the plan I saw something really wrong. It looked like the table receiving the inserts wasn't getting the ten records we thought, but 169 million records, every record from the table in the select clause.

I looked at the query, and on the surface everything looked good. When I hit the check mark it compiled without error. Looking at the plan in detail, I noticed a little icon that tipped me off on where to look next.

[Execution plan screenshot: operator showing the "No Join Predicate" warning]

Oh, that's bad. Warnings: No Join Predicate. Ouch. But this is a sub-query, how can I have no join predicate? Let's take a look at an example.

DROP TABLE #t1
DROP TABLE #t2
GO
CREATE TABLE #t1 (
  t1id    INT   NOT NULL,
  t1name  VARCHAR(50)   NULL,
  t1place VARCHAR(50)   NULL)
  
ALTER TABLE #t1
 ADD   PRIMARY KEY ( t1id )
 
CREATE TABLE #t2 (
  t2id    INT   NOT NULL,
  t1name  VARCHAR(50)   NULL,
  t1place VARCHAR(50)   NULL)
  
ALTER TABLE #t2
 ADD   PRIMARY KEY ( t2id )
 
INSERT INTO #t1
SELECT 
  spid,
  loginame,
  hostname
FROM   
  MASTER.dbo.sysprocesses
WHERE  hostname <> ''

INSERT INTO #t2
SELECT 
  spid,
  loginame,
  hostname
FROM   
  MASTER.dbo.sysprocesses
WHERE  hostname <> ''

This gives us two tables with some data. We want to find all the records in #t1 that also exist in #t2 but instead of a join we will use an IN and a sub-query.

SELECT 
  t1name,
  t1place
FROM   
  #t1
WHERE  t1id IN (SELECT 
                 t1id
                FROM   
                 #t2)
 

Looks simple enough, and it will pass the compile test all day long even though t1id doesn't exist in #t2. Since you can use columns from the top-level query in the sub-query, this is a perfectly valid piece of T-SQL.

It gives us this plan when we look at it though.

[Execution plan screenshot: the sub-query turned into a join with no join predicate]

And there you have it, a join without an ON clause as far as the optimizer is concerned. By not qualifying all the columns in the sub-query we opened ourselves up to this error. This isn't the optimizer's fault! If we re-write this as it was meant to be:

SELECT 
  t1name,
  t1place
FROM   
  #t1
WHERE  t1id IN (SELECT 
                 t2id
                FROM   
                 #t2)
 

We get a plan that is more to our liking.

[Execution plan screenshot: the corrected plan with a proper join predicate]

You could also re-write this as a join instead of using the IN clause, as shown below, and avoid this problem as well. A scary way to learn that SQL Server will do exactly what you tell it to, even if you are wrong!
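
For completeness, the join version would look something like this; it's a sketch against the same temp tables as above, and since t2id is the primary key of #t2 it returns the same rows as the corrected IN query:

SELECT 
  t1.t1name,
  t1.t1place
FROM   
  #t1 t1
  INNER JOIN #t2 t2
    ON t1.t1id = t2.t2id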

SQL Saturday #35 Notes and Observations

First of all, I want to congratulate all the volunteers that made this happen. It was a very well organized event and ran smoothly. I had a great time. It was nice meeting people that I couldn’t have met any other way.

 

As A Vendor…

The Good:

Ryan Adams did a very good job keeping things coordinated up to the event, making sure we got everything we were entitled to. He was always very responsive to emails and questions.

The day of the event I always had Ryan or one of the volunteers stop by between sessions and check that everything was good. I have always had a good experience with PASS events, but I've never had so many people checking on us before!

Needs Improvement:

Table placements. I just didn't understand the flow and layout of the event until I saw the venue first hand; I would have picked a different table. I don't think it hurt us, though, we had crazy foot traffic and lots of conversations.

It did bottleneck up sometimes around the vendor tables as sessions let out, but I think overall the placement was OK. There isn't much room to work with, and I don't know if I would have done much better in their shoes!

Overall:

As a vendor I was very happy with the event and the amount of time I got to spend talking to folks about Nitrosphere and what we do. As a new company getting out and meeting people is very important.

Only having one or two big conferences a year is difficult, and they cost 5 to 10 times what a SQL Saturday does.

 

As a Speaker…

The Good:

Again, well coordinated, with no scheduling issues or anything like that. I found the different tracks laid out well. The meet and greet the night before was nice.

The speaker room was big enough. Internet access seemed fine to me.

Again, I was checked on by the staff over and over to make sure things were OK.

We also had a handler feeding us time to help keep us on track.

Needs Improvement:

Session recording was spotty. It was a last minute thing and most of us could have used a little hand holding getting it right.

Overall:

As a speaker I was happy again with the organization and attention to detail.

 

As an attendee…

The Good:

Lots of tracks and sessions for everyone. I enjoyed seeing so many local and new speakers making the break.

Plenty of interaction between people and speakers.

The food was great, I NEVER get the chicken salad, I ate two for lunch :). Oh the ice cream…. so evil.

Needs Improvement:

Bathroom queue lengths were a little long but did clear up.

Finding the stairs to the second floor was fun.

Overall:

Yet again, no real complaints. Plenty of seating, solid flow, and awesome choices. I still can't believe this was a free day of training!

 

I will be making room for other SQL Saturdays going forward.