Category Archives: Rant

Why does change control for SQL Server have to be so hard?

I’ve been dealing with change control and source code repositories for most of my professional career. While change control and integration have advanced steadily for application development, the database side of things feels stuck in the stone age. For months now I’ve been researching solutions for source control, change management, and deployment of database objects. The conclusion I’ve come to is that there is no solution. Well, no easy solution. I was very happy in the early days of SQL Server 2005 when they announced source control integration in Management Studio. It was a great pain for me personally to have Visual Studio, with the solution architecture it offered, and not have that on the database side of things. Alas, it wasn’t meant to be. What they meant by source control was taking the previous generation of integration and crippling it.

Really?

[Screenshot: the source control options SSMS actually offers]

This doesn’t look like much of a solution to me.

I know what most of you are thinking: if you have Visual Studio, use it. That works for me, but not for the people on my team who only have access to SSMS. It also means I have to jump between two tools to do one thing: work with SQL Server. I have been told that Microsoft is basically pushing you to Visual Studio for all of your development needs, leaving SSMS as a management tool only. If Visual Studio did everything SSMS did, it wouldn’t be that big a deal for me personally.


Options Available

SQL Server Management Studio Hacks

I tried several things to work around the limitations SSMS has. I found you could manually edit the solution file to get extra folders. The only problem with that is they all show up as either Queries or Miscellaneous. Other than that one, and the old fix for sorting files by name, there aren’t any other hacks I can find.

Toad for SQL Server

[Screenshot: Toad for SQL Server]

Toad generally has a nice look and feel. It has all the development and management features to be a true replacement for Management Studio. I tried all the normal things that I do in SSMS, and several of them were better in Toad. The debugger was nice, and the statement optimizer is also a nice addition. It does fall flat in some basic, key areas, though. I never could get it to display an execution plan. As a T-SQL guy, the plan is a must. I know it is a bug somewhere, but having something this fundamental broken during an evaluation is a big red light.

The other downside is that it doesn’t support SourceGear Vault/Fortress, which is a real shame. Lots of SMBs use Vault for source control, since it is miles better than Visual SourceSafe and much cheaper than Team System.

ApexSQL Edit

[Screenshot: ApexSQL Edit]

That left only one other contender in this fight. ApexSQL Edit has been around quite a while as well. Initially, it has a similar look and feel to Toad; I know there isn’t a lot you can do there, since both look like Office. It is also missing the management features, but I can live with that. The goal is to get the developers a tool they can develop in and use our code repository from easily. ApexSQL Edit did include support for Vault, and it worked as expected. Again, I started using it daily like I would SSMS. Everything I tried worked, for the most part. 95% of the time it would generate an execution plan. Not as clean as SSMS, but it had more options on how to display the plan, which I liked. I did have a few crashes, but this was a beta build and I will let that go until I test the full release. Since this was a beta I did provide feedback, and initially the folks at ApexSQL were very responsive. Eventually, though, everything just went quiet except for the sales guys asking me how things were going. Right now they are a no-go until the stability issues are addressed and the RTM is out, so I can do a full evaluation again.


Final Thoughts

What I hoped would be a pretty easy exercise turned out to be a real workout. For all of SSMS’s problems, it is stable and familiar. I was really hoping that either Toad or ApexSQL Edit would solve my problems. I haven’t given up on ApexSQL Edit yet; we will just have to play the waiting game and keep using an inadequate solution until someone comes up with something better.

When Technical Support Fails You – UPDATE and Answers!

As promised, an update on what has happened so far. First, a correction needs to be made: the P800 is a PCIe 1.0 card, so the bus bandwidth is cut in half, from 4GB/sec to 2GB/sec.

My CDW rep did get me in contact with an HP technical rep who actually knew something about the hardware in question and its capabilities. It was one of those good news, bad news situations. We will start with the bad news: the performance isn’t off. These numbers are what the hardware actually delivers. My worst fears were confirmed.

The Hard Disks

The HP Guy (changing the names to protect the innocent) told me their rule of thumb for the performance of the 2.5” 73GB 15K drives is 10MB/sec. I know what you are thinking: NO WAY! But I’m not surprised at all. What I was told is that the drives ship with the on-board write cache disabled. They do this for data integrity reasons: since the cache on the drive isn’t battery backed, if there were any kind of failure the potential for data loss is there. There are three measurements of hard disk throughput: disk to cache, cache to system, and disk to system. Disk to cache is how fast data can be transferred between the platters and the drive’s internal cache, usually sequentially; on our 15K drive this should average about 80MB/sec. Cache to system, also referred to as burst speed, is almost always as fast as the connection type; since we are using SAS, that will be close to 250MB/sec. Disk to system is with no caching at all. Without the cache, several IO-reordering schemes can’t be used and there is no buffer between you and the system, so you are effectively limited by the areal density and the rotational speed of the disk. That gets us down to 10 to 15 megabytes a second. Write caching has a huge impact on performance. I hear you saying the controller has a battery-backed cache on it, and you would be right.
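
To put that rule of thumb in context, here is a quick back-of-the-envelope sketch. The per-drive rates are the figures quoted above; everything else is my arithmetic, not anything HP publishes.

```python
# Rough array-level sequential write estimates from the per-drive
# figures above. 46 data drives in RAID 10 means 23 effective writers.
CACHED_MB_S = 80             # disk-to-cache rate with write cache enabled
UNCACHED_MB_S = (10, 15)     # HP's rule of thumb with write cache disabled
EFFECTIVE_WRITERS = 46 // 2  # RAID 10 mirroring halves write throughput

print(f"cached:   {EFFECTIVE_WRITERS * CACHED_MB_S} MB/sec")  # 1840
low, high = (EFFECTIVE_WRITERS * r for r in UNCACHED_MB_S)
print(f"uncached: {low}-{high} MB/sec")                       # 230-345
```

That uncached range brackets the ~320MB/sec of sequential writes I actually measured, which is about as damning as evidence gets.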

The Disk Controller

The P800 controller was the top of the line that HP had for quite a while. It is showing its age now, though. The most cache you can get at the moment is 512MB. It is battery backed, so if there is a sudden loss of power the data in cache will stay there for as long as the battery holds out; when the system comes back on, the controller will attempt a flush to disk. The problem with this scheme is twofold. First, the cache is effectively shared across all your drives; since I have 50 drives total attached to the system, that is around 10.5 megabytes per drive. Comparable drives normally ship with 16 to 32 megabytes of cache on them. Second, the controller can’t offload the IO-sorting algorithms to the disk drives, effectively limiting its throughput. It does support native command queuing and elevator sorting, but applied at the controller level they just aren’t as fast as at the disk level. If I had configured this array as a RAID 6 stripe, the loss of performance from that would have masked the other bottlenecks in the controller. Since I’ve got this in a RAID 10, the bottleneck is hit much sooner, with fewer drives. On the P800 this limit appears to be between 16 and 32 disks. I won’t know until I do some additional testing.

It’s All My Fault

If you have been following my blog or coming to the CACTUSS meetings, you know I tell you to test before you go into production. With the lack of documentation, I went with a set of assumptions that weren’t valid in this situation. At that point I should have stopped and done the testing myself. In a perfect world I would have set up the system in a test lab, run a series of controlled IO workloads, and come up with the optimal configuration. I didn’t do as much testing as normal, and now I’m paying the price for that. I will have to bring a system out of production as I run benchmarks to find the performance bottlenecks.
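
When I say controlled IO workloads, even something as crude as the sketch below would have flagged this before go-live. In practice I’d reach for IOMeter or SQLIO, and keep in mind that plain Python writes still go through the Windows file cache (you need FILE_FLAG_NO_BUFFERING for the real story), so treat this as a rough floor-check only. The path and sizes are placeholders.

```python
# Crude sequential-write floor-check. T:\ is a placeholder for the array.
import os, time

PATH = r"T:\iotest.dat"
BLOCK = 64 * 1024        # 64KB writes, matching the NTFS cluster size
TOTAL = 4 * 1024**3      # 4GB, big enough to blow through most caches
buf = os.urandom(BLOCK)

start = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())  # force it to the disks before stopping the clock
elapsed = time.time() - start

print(f"{TOTAL / elapsed / 1024**2:.0f} MB/sec sequential write")
os.remove(PATH)
```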

The Good News

I have two P800’s in the system and will try moving one of the MSA70’s to the other controller. This will also allow me to test overall system performance across multiple PCIe buses. I have another system that is an exact duplicate of this one and originally had the storage configured this way, but it ran into some odd performance issues as well.

HP has a faster external-only controller out right now, the P411. This controller supports the new SAS II 6G protocols, has faster cache memory, and is PCIe 2.0 compliant. I am told it also has a faster IO processor. We will be testing these newer controllers out soon. There is also a replacement for the P800 coming out next year. Since we are only using external chassis with this card, the P411 may be a better fit.

We are also exploring a Fusion-io option for our tempdb space. We have an odd workload, and tempdb accounts for half of our write operations on disk. By speeding up this aspect of the system and moving tempdb completely away from the data, we should see a marked improvement overall.
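
If you want to see where your own workload lands, the cumulative counters in sys.dm_io_virtual_file_stats make this an easy check. Below is a minimal sketch using pyodbc; the server name and connection details are placeholders, and remember the counters reset when the instance restarts.

```python
# What share of write IO goes to each database, per
# sys.dm_io_virtual_file_stats (cumulative since instance start).
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                      "Trusted_Connection=yes")  # placeholder server name
sql = """
SELECT DB_NAME(database_id) AS db,
       SUM(num_of_bytes_written) AS bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY DB_NAME(database_id);
"""
rows = conn.cursor().execute(sql).fetchall()
total = sum(r.bytes_written for r in rows)
for r in sorted(rows, key=lambda r: -r.bytes_written):
    print(f"{r.db:20} {r.bytes_written / total:6.1%}")
```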

Lessons Learned or Relearned

Faced with the lack of documentation, don’t make assumptions based on past experiences. Test your setup thoroughly. If you aren’t getting the information you need, try different avenues early. Don’t assume your hardware vendor has all the information. In my case, HP doesn’t tell you that the disks come with the write cache disabled. They also don’t give you the full performance specifications for their disk controllers. Not even my HP Guy had that information. We talked about how there was much more detailed information on the EVA SAN than there was on the P800.

Now What?

Again, I can’t tell you how awesome CDW was in this case. My rep, Dustin Wood, went above and beyond to get me as much help as he could, and in the end was a great help. It saddens me that I couldn’t get this level of support directly from HP technical support. You can rest assured I will be giving HP feedback to that effect. Not giving the customer, or even their own people, all the information sets everyone up for failure.

I’m not done yet. There is a lot of work ahead of me, but at least I have some answers. You can bet I’ll be over at booth #414 next week at PASS, asking HP some hard questions!

When Technical Support Fails You

I have had the pleasure of being a vendor, and of providing technical support for both hardware and software products. I know it isn’t easy. I know it isn’t always possible to fix everything. But the level of support I’ve received from HP on my current issue is just unacceptable. This is made more frustrating by the lack of documentation. The technical documents show capacity: how many drives in an array, maximum volume size, but nothing on throughput. Every benchmark they have seems to be relative to another product, with no hard numbers. For example, the P800 is 30% faster than the previous generation.

I’m not working with a complicated system. It’s a DL380 G5 with a P800 and two MSA70’s fully populated with 15K 73GB hard drives. 46 of them are in a RAID 10 array with a 128k stripe. I formatted it NTFS with a 64k block size and sector-aligned the partition. Read/write cache is set at 25%/75%. This server originally had just one MSA70. We added the second for capacity expansion and expected to see a boost in performance as well. As you can probably guess, there wasn’t any increase in performance at all.

Here is what I have as far as numbers. Some of these are guesses based on similar products.

The P800 uses two external miniSAS 4x connectors, for a maximum throughput of 2,400MB/sec (2,400Mbit/sec, roughly 300MB/sec, per link x 4 links per connector x 2 connectors).
The P800 connects to the system over PCIe x8 at 4,000MB/sec (PCIe 2.0, 2.5GHz, 4GB/sec each direction).
Attached to the controller are 46 15K 73GB 2.5” hard drives, for a raw sequential read or write speed of 3,680MB/sec (46 drives x 80MB/sec, based on the Seagate 2.5” 73GB 15K SAS drive, across two MSA70’s; in RAID 10 the effective sequential write rate is 23 mirrored pairs x 80MB/sec = 1,840MB/sec).

Expected write speed should be around 1200 megabytes a second.

We get around 320 MB/Sec sequential write speed and 750MB/sec in reads.

Ouch.
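
To make the mismatch concrete, here is the whole chain laid end to end. This is my own arithmetic on the figures above; real throughput should land near the smallest ceiling, minus overhead.

```python
# Candidate write ceilings in MB/sec, from the figures above.
ceilings = {
    "2 miniSAS 4x connectors":      2400,
    "PCIe x8 slot":                 4000,     # the update above corrects
                                              # this to ~2000 (PCIe 1.0)
    "23 mirrored pairs @ 80MB/sec": 23 * 80,  # RAID 10 effective writers
}
bound = min(ceilings.values())                # 1840
print(f"best-case ceiling: {bound} MB/sec")
print(f"measured 320 MB/sec is {320 / bound:.0%} of that")  # 17%
```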

Did I mention I also have an MSA60 with 8 7.2K 500GB SATA drives that burst to 600MB/sec and sustain 160MB/sec of writes in a RAID 10 array? Yeah, something is rotten in the state of Denmark.
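
Run the per-spindle arithmetic on those two arrays (my math, not a vendor figure) and the smell gets stronger:

```python
# Effective sequential write rate per spindle, from the measurements above.
sas_15k  = 320 / (46 // 2)  # ~13.9 MB/sec per 15K SAS drive in the MSA70s
sata_7k2 = 160 / (8 // 2)   # 40.0 MB/sec per 7.2K SATA drive in the MSA60
print(f"15K SAS: {sas_15k:.1f} MB/sec, 7.2K SATA: {sata_7k2:.1f} MB/sec")
```

The cheap 7.2K SATA drives are outrunning the 15K SAS drives nearly three to one per spindle. That should never happen on a healthy setup.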

With no other options before me I picked up the phone and called.

I went through HP’s automated phone system, which isn’t that painful at all, to get to storage support. Hold times in the queue were very acceptable. A level one technician picked up the call and started the normal run of questions. It only took about two minutes to realize the L1 didn’t understand my issue, and he quickly told me that they don’t fix performance issues, period. He told me to update the driver and firmware and reboot. Of course none of that had worked the first time, but what the heck, I’ll give it the old college try. Since this is a production system, I am limited in when I can just do these kinds of things. This imposed lag makes it very difficult to keep an L1 just sitting on the phone for five or so hours while they wait for me to complete the assigned tasks. I let him go, with the initial action plan in place and an agreement that he would follow up.

Twice I got automated emails saying the L1 had tried to call and left voicemails for me. Twice, there were no voicemails. I sent him my numbers again just to be on the safe side. Next, I was told to run the standard Array Diagnostic Utility and a separate utility that they send you to gather all the system information and logs; think PSSDiag or SQLDiag. After reviewing the logs he didn’t see anything wrong, and had me update the Array Configuration Utility. I was then told they would do a deeper examination of the logs I had sent and get back to me. Three days later I got another email saying the L1 had tried to call and left me a message. Again, there was no voicemail on my cell or my desk phone. I sent a note back to the automated system, only to find the case had been closed!

I called back into the queue and gave the L1 who answered my case number; he of course told me it was closed. He read the case notes to me: the previous L1 had logged it as a network issue and closed the case. If I had been copying files over the network, and not to another local array, I could see why it had been logged that way. I asked to open a new case and to speak to a manager. I was then told the manager was in a meeting. No problem, I’ll stay on the line. After 45 minutes I was disconnected. Not one to be deterred, I called back again. The L1 who answered was professional and understanding. Again, I was put on hold while I waited for the manager to come out of his meeting. About 10 minutes later I was talking to him. He apologized and told me my issues would be addressed.

I now had a new case number and a new L1. Again, we dumped the diagnostic logs and started from the beginning. This time he saw things that weren’t right. There was new firmware for the hard drives, a new driver for the P800, and a drive that was showing some errors. Finally, I felt like I was getting somewhere! At this point it had been ten days since I opened the previous case. We did another round of updates. A new drive was dispatched and installed. The L1 did call back, and actually managed to either talk to me or leave a message. When nothing had made any improvement, he went silent. I added another note to the case requesting escalation.

That was eight days ago. At this point I have sent seven sets of diagnostic logs, spent several hours on the phone, and worked after hours for several days. The last time I talked to my L1, the L2’s were refusing to accept the escalation: it was clearly a performance problem, and they don’t cover that. The problem is, I agree. Through this whole process I have begged for additional documentation on configuration and setup options, something that would help me configure the array for maximum performance.

They do offer a higher level of support that covers performance issues, for a fee of course. But this isn’t a cluster or a SAN. It is a basic setup in every way. The GUI walks you through the setup: click, click, click, monster RAID 10 array done. What would this next level of paid support tell me?

My last hope is CDW will be able to come through with documentation or someone I can talk to. They have been very understanding and responsive through this whole ordeal.

Thirty-one days later, I’ve still got the same issue. I have now ordered enough drives to fill up the MSA60. The plan is to transfer enough data to free up one of the MSA70’s. Through trial and error, I will figure out what the optimum configuration is. Once I do, I’ll post my findings here.

If any of you out there in internet-land have any suggestions I’m all ears.

The Price of Convenience is Non-Ownership and Loss of Privacy.

This is as far as I ever plan to stray from writing purely technical posts. It just strikes so close to home. I am completely conflicted on this subject. As a user of all of these things, I rail against limits imposed on me by greedy companies and the governments that serve them. As a businessman and developer, I really want to get paid, and I get upset when people assume that using my work has value to them, but not in a way worth paying for.

Most of us don’t remember not having a phone. We were connected to our neighbors, friends, and, for the right price, the world. Some of us in rural areas had to make compromises and shared our phone line with others, the ever-favorite “Party Line”. It was a line, but not much of a party. What you could rely on, though, is that at some point you listened in on your neighbor, and they in turn listened in on you. We gave up some privacy for the awesome ability to call someone whenever we felt like it.

Things have come a long way. Now you are expected to be accessible via cell phone pretty much 24×7, and with that come all the wonderful data services, so people also expect you to be reachable via email, chat, or whatever you like. More and more, we have a tether to each other like never before in our history. If I can’t reach out and touch you, something must be wrong! My mom didn’t panic if she called the house and I didn’t answer. I was outside running around and knew to check in every so often. Now, if I get an email and don’t answer it within minutes, I also get a text message checking up on me. If that goes unheeded, the phone rings; that may escalate to all my friends’ or co-workers’ phones ringing, on the assumption that one of them is nearby. If all of that comes up empty, the worst is assumed. Long gone is the pleasure of just dropping out for a few days, or even hours, without someone noticing your absence. We are on the cusp of a permanently connected society, for those lucky enough to afford it in the beginning, and later mandated for those that need watching.

The second effect of all this technology is more and more things becoming non-physical in nature. With this loss also go the traditional methods of controlling them, and by extension, of creating scarcity and demand for those items. It also means that the physical thing has less value compared to the idea behind it. From the beginning, software and digital-only creations have had to balance the desire to protect them and extract value from them against people’s desire to share those items freely, like they would anything else.

So we tried to restrict the digital goods. To me, Software as a Service has been around since day one; just not the ability to enforce the concept completely. This idea is now being transferred to everything we “use”. I once owned books; now I have a right under license to read one, but not to share it. I can’t resell it or even give it away. Fair use is going to go away, because in a world where anything can be duplicated for free, the item has no value: listening to the song has value, reading the book has value, so whenever you “loan” that book or song to someone, it has inherently lost its value. Mainly because the notion of “loaning” something by duplicating it strips away scarcity altogether. So everyone had to put in a key to unlock something they had already purchased; the physical item has zero value, and the use and ideas behind it are the commodity. But this fell well short when we became “mostly on”. The number of people hammering away at the old key system meant it fell in hours or minutes, compared to the days or months it took before.

This gave rise to our favorite worst solution: Digital Rights Management. This misguided concept was an attempt to make the digital artifact (music, books, and other entertainment) act just like the old physical things, but with more restrictions, of course. Tighter control equals more profit. But this very quickly showed a flaw: if the company in question went under, or simply wished to stop supporting it, you lost the use of it. All-digital equated to non-ownership. This was just a stopgap until a better solution, total control, came along. Total control isn’t really possible unless everyone is “always on”. Once that is achieved, you have a sense of freedom again: I can loan someone a book or a song, control is transferred to them, and I lose the use of it but retain “ownership” of it.

Or do I?

Ever since we started keying software, the concept of licensing use, rather than actually owning a thing, became real; even though that license may be a perpetual one, ownership is never implied. I don’t own a copy of Windows; I have the right to use it under terms dictated to me by the owner of the work in question. If I violate these one-sided terms, I lose the privilege, and my money by the way, to use the software. Just as if I had purchased a pirated copy, my money would be forfeit unless I could get it back from the scoundrels that sold the software to me. These concepts aren’t new; again, they just weren’t really enforceable at the single-person level. The companies and governments would target larger organizations who were making illegal copies for the sole purpose of making money for next to nothing. The closer we get to always on, the more we in essence become that organization of thieves: not selling to other people, just simply denying the owner of the work any money off the item. Always on can now give us the illusion of scarcity and bring back the things we used to do with physical-only items.

These two things, persistent personal connection and the purely digital object, have come together in the perfect storm. The buzz this week has been the news that Amazon has total control of the items you purchased via their store and Whispernet to transfer to your Kindle. This, to me, is the thing that brings all the ideas of license, ownership, and your rights to a thing into clear focus. Amazon shouldn’t have sold you the ability to read a digital representation of an artist’s ideas, you know, a book. Is that a problem? Nope. They simply reach into your Kindle, remove the offending material, refund your money for the loss of the privilege to use the item, and poof, things are all better. People weren’t happy with this at all. This wasn’t like they broke the terms of service, or even that Amazon went out of business and the DRM they use is now offline. Amazon simply made some mistakes: they sold you the use of something they had bought the right to use, but the person who sold them the rights didn’t have it to sell. Amazon won’t do it again though, honest.

Wow, this isn’t going to get any clearer or easier as time marches on. What will we do when we can manufacture things directly in our homes, the physical things we need like chairs, TVs, or computers? If someone screwed up, could they reach in and dissolve that item, refunding your money? Another little blurb that has been making the rounds is StarCraft II not having a LAN mode; you can only play it through their service, Battle.net. It’s just like an MMO without the massively part. People are up in arms about it, but I think it will blow over.

Lastly, the RIAA has stated DRM is not the way. No joke: the enforcers on the music side of things, who wholeheartedly embraced DRM, have now turned their backs on it. Oh, and they aren’t suing people anymore; well, not most people. Why bother with lawsuits and DRM when you can just reach in and take it back? Eventually, if everything plays out, there simply won’t be a way to get an illegal copy of anything. Always on also means always accessible. I’m not saying it will happen tomorrow, but it will happen. Some people will say that losing the revenue of those who want to enjoy something “off line” will keep this from happening. Hogwash; the gain in revenue from the act of control greatly offsets that loss.

This is the future, and it is now.

As we become, as a society, more and more connected, the concept of ownership will slip farther and farther away, reserved for those things that have true rarity. Just to be a total geek about it, I think Star Trek: TNG has it right. When you can manufacture anything at the push of a button and energy is fundamentally free, the need for money really loses its meaning. The world moves back into a much purer state of barter for those things that have real value. Do I think this will be the ultimate outcome? Nope. To pull from another great thinker, service equals citizenship. At any point in our past when a culture has had more consumers than producers, and services have had as much or more value than goods, it has collapsed. I don’t think that will happen this time. We have the missing ingredients: the always-on connection and the willingness to give up privacy, and ultimately freedom, to those that control the use of an item or enforce the ability to use it. So, until Star Trek zips in and I don’t need money anymore, I guess I’ll just have to try and find the balance.

I, for one, welcome our new digital overlords. They tell me we have always been at war with Eastasia.