
Pliant Technology, Enterprise Flash Drives For Your SQL Server: Part 2

Adding In Others For Contrast

In our first part we introduced Pliant and the LS 300 drive. In part 2 we get down to the details. To give a better idea of where the setup described last time stands, I'm throwing in two other storage setups for contrast.

A RAID 10 array made up of 12 500GB 7200 RPM drives attached via SATA II controllers. In a RAID 0 configuration I was able to get 800MB/sec in sequential throughput, so it isn't horrible, just not “enterprise” worthy.

A Patriot Torqx 128GB based on the Indilinx Barefoot SSD controller. It isn't the greatest SSD on the consumer market, but Indilinx was the king of the previous generation. I will be using the LSI controller for it, just like I did for the Pliant LS 300.

Patriot Torqx Specifications:
Available in 64GB, 128GB and 256GB capacities
Interface: SATA I/II
RAID Support: 0, 1, 0+1
256GB and 128GB: Sequential Read: up to 260MB/s, Sequential Write: up to 180MB/s
MTBF: >2,500,000 Hours
Data Retention: 5 years at 25°C
Data Reliability: Built in BCH 8, 12 and 16-bit ECC
10 Year Warranty

RAID support? I'm not sure what they are saying here other than don't put this drive in a RAID 5 or RAID 6 setup at all. Mean time between failures (MTBF) is a pretty useless number; I would rather have seen a maximum write life or writes-per-day metric. It has built-in ECC error checking, which doesn't surprise me at all since this is an MLC based drive. 10 year warranty, yep 10 YEARS! This was one of the reasons I bought this drive. And I'm glad I did, since it has already been replaced once.

The Setup

Since we are just testing storage systems I'm not as concerned with the host machine; it is more than up to the task of generating IOs. I used Iometer 2008.06.18-RC2 for testing along with my trusty Iometer SQL Server IO Patterns File. After the test runs I used my other tool, the Iometer output parser and importer, to process the results and import them into a SQL Server table (a stripped-down sketch of that step follows the pattern descriptions below). The tests consisted of two different patterns. They are close to what I've seen in the real world and loosely based on the Intel database test pattern. I ran these tests at different queue depths with a single worker.
OLTP Heavy Read:
A mix of 8KB and 64KB sized requests, with 90% of them being read requests and 10% being write requests. This test is 100% random access.

OLTP Moderate Read:
A mix of 8KB and 64KB sized requests, with 65% of them being read requests and 35% being write requests. This test is 100% random access.
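
Here is a stripped-down sketch of that parse-and-import step in Python. This is not my actual Iometer output parser and importer; the file name, the column names, and the dbo.IometerRuns table are placeholders you would adjust, and it assumes the Iometer output has already been trimmed down to a plain CSV with a single header row.

import csv
import pyodbc

RESULTS_CSV = "results.csv"   # trimmed Iometer results file (placeholder name)
CONN_STR = ("DRIVER={SQL Server};SERVER=localhost;"
            "DATABASE=IometerResults;Trusted_Connection=yes;")

def load_results(path):
    # Yield one dict per result row from the trimmed CSV.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row

def import_results(rows):
    conn = pyodbc.connect(CONN_STR)
    cur = conn.cursor()
    # Placeholder table; the real columns depend on what you keep from the results file.
    cur.execute("""
        IF OBJECT_ID('dbo.IometerRuns') IS NULL
        CREATE TABLE dbo.IometerRuns (
            AccessSpec    varchar(128),
            TargetName    varchar(128),
            IOps          float,
            MBps          float,
            AvgResponseMs float)""")
    for r in rows:
        cur.execute(
            "INSERT INTO dbo.IometerRuns VALUES (?, ?, ?, ?, ?)",
            r["Access Specification Name"],   # assumed column names
            r["Target Name"],
            float(r["IOps"]),
            float(r["MBps"]),
            float(r["Average Response Time"]))
    conn.commit()
    conn.close()

import_results(load_results(RESULTS_CSV))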

Lots And Lots of Graphs

This first set is OLTP Heavy Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

Interesting to see the Torqx drive actually performing better than the Pliant drive. Since this is an extremely light load and mostly read only we can assume that the Torqx is tuned more towards that kind of workload. The hard disks put in a respectable showing, for hard disks.

OLTP Heavy Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

As soon as we put any kind of load on them, the Pliant drive just walks away from the other two drives. The Torqx is still five times faster than the RAID 10 setup.

OLTP Heavy Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Again, as the workload ramps up the Pliant really just ends up in a category all its own. We are still in a decent zone for the RAID setup but the single Torqx drive still is four to five times faster.

OLTP Heavy Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now we are pushing past the bounds of the SATA based Torqx and the SATA based RAID setup. The Pliant drive just keeps getting faster, jumping from 13,000 IO/sec to 22,000 IO/sec. Response times are still very impressive as well.

OLTP Heavy Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

This is what we would call a “worst case scenario” for the RAID setup. With only 12 drives we are at a queue depth of roughly 10 per drive. Response times show it too, with the average sitting at 110ms. Even the Torqx drive can't shed the IO load at this point, while the Pliant drive pushes past 26,000 IO/sec and inches up on 500MB/sec as well. That last number is accurate: since this is a dual-port drive, even though it is a SAS 300 drive, it is able to use both ports for reads and writes. I did run the test up to 256 outstanding IOs, but the Pliant drive had topped out and was starting to add to its response time. The RAID array and the Torqx drive were getting so slow that the Pliant drive was hard to see on the average response time graph.
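
If you want a rough way to tie queue depth, IO/sec, and response time together, Little's Law gets you close: average response time is roughly the number of outstanding IOs divided by the IO/sec rate. The ~1,160 IO/sec figure for the array below is back-calculated from the 110ms average rather than read off the charts.

def avg_latency_ms(queue_depth, iops):
    # Little's Law: average response time ~= outstanding IOs / IOPS
    return queue_depth / iops * 1000.0

# RAID 10 array at a queue depth of 128, roughly 1,160 IO/sec:
print(round(avg_latency_ms(128, 1160), 1))    # ~110.3 ms, matching the average above

# Pliant LS 300 at the same queue depth and roughly 26,000 IO/sec:
print(round(avg_latency_ms(128, 26000), 1))   # ~4.9 ms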

This second set is OLTP Moderate Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

This workload is much more write intensive and the Pliant LS 300 jumps out in front very quickly. Even at a queue depth of 1 it is shaming the Torqx on write performance. The RAID array is performing pretty well, with lower than expected response times.

OLTP Moderate Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

Quickly the Pliant drive starts to walk away with this contest. It clearly has much more capacity for write workloads than the Torqx or RAID array.

OLTP Moderate Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Here we are again at the end of the road for the RAID array. The Torqx drive is holding on but response times are getting long; it is only managing a twofold increase in performance over the RAID array.

OLTP Moderate Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now things are just embarrassing for the RAID array and the Torqx drive. Both show that write-heavy workloads aren't the best fit for them. Again, the Pliant drive is starting to post response times in the millisecond range, but at 320MB/Sec and 18,000 IO/Sec I would have to call that a fair trade.

OLTP Moderate Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

At last we have hit a wall with the RAID array and the Torqx drive. With the Torqx drive posting numbers less than twice those of the RAID array, it is starting to show its real weaknesses. The Pliant drive, however, is pulling a solid 22,000 IO/Sec and creeping up on 430MB/Sec of throughput. All of this from a single SAS 3.5″ drive.

Final Thoughts

I've had the Pliant LS 300 in my lab for quite a while now, along with the Patriot Torqx and this particular RAID array setup. All three have been running hard during the last three months. The Pliant drive did show some signs of slowing down as it settled into the workloads. The RAID array lost three drives total and, as I stated earlier, the first Torqx drive I had gave up the ghost in the first month. I've said it before, and I will say it again: if you need an enterprise drive then buy an enterprise drive! Don't get a drive that has a SATA interface and is dressed up like it is ready for the big show. I can say without a doubt that the Pliant LS 300 is one of the finest solid state disks I've ever worked with.

Pliant Technology, Enterprise Flash Drives For Your SQL Server: Part 1

Pliant Technology, New Kid On The Block

If you have been reading my storage series, and in particular my section on solid state storage, you know I have a pretty rigid standard for enterprise storage. Several months ago I contacted Pliant Technology about their Enterprise Flash Drives. It didn't surprise me when they made the recent announcement about being acquired by SanDisk. Between Pliant's enterprise-ready technology and SanDisk's track record at the consumer level, I think they will be a new force to be reckoned with for sure. Pliant drives are already being sold by Dell, and the acquisition will open up much larger channel partnerships. They are one of the very few offering a 2.5″, or even rarer 3.5″, form factor using a dual-port SAS interface. I have been hammering on this drive for months now. It has taken everything I can throw at it and asked for more.

Enterprise Flash Drives

Pliant sent me a Lightning LS 3.5″ 300S in a nondescript box. What surprised me is how heavy the drive is. I was expecting a featherweight drive like all the rest of the 2.5″ SSDs I've worked with. This drive is very well made indeed. Another thing that stood out was the fins on top of the drive, something I'm used to seeing on 15,000 RPM drives but not on something with no moving parts. It never got hot to the touch, so I'm not sure if they are really needed. The bottom of the drive has all the details on a sticker.

If you look closely at the SAS connector you will see many more wires than visible pins. This is because it is a true dual port drive. If you could see the other side of the SAS connector you would see another set of little pins in the center divider for the second port.

Normally, this second port is used as a redundant path to the drive so you can lose a host bus adapter and still function just fine. Technically, you could use Multi-Path IO (MPIO) to use both channels in a load balancing configuration, something I've never done on a traditional hard drive since you get zero benefit from the extra bandwidth. Solid state drives are a different beast though. A single drive can easily use the 300 megabytes per second available to a SAS 1.0 port. If you look at the specification sheet for this drive you will see they list read speeds of 525 MB/Sec and write speeds of 320 MB/Sec, both above the 300 MB/Sec available to a single SAS port. MPIO load balancing makes the magic happen. Since this drive was finalized before the 600 MB/Sec SAS 2.0 standard was in wide production, it only makes sense to use both ports for reads and writes. Since it doesn't seem to be hitting more than 525 MB/Sec for reads, I don't know how much the drive would benefit from an upgrade to SAS 2.0.
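
A quick back-of-the-envelope check in Python shows why both ports matter. The 300 MB/Sec figure is the nominal SAS 1.0 ceiling, and the per-port split assumes an ideal round-robin MPIO policy rather than anything measured here.

SAS1_PORT_MB_S = 300                  # nominal bandwidth of one 3Gb/sec SAS 1.0 port
spec = {"read": 525, "write": 320}    # LS 300S spec sheet throughput, MB/sec

for op, total in spec.items():
    per_port = total / 2              # ideal round-robin split across both ports
    verdict = "fits under" if per_port <= SAS1_PORT_MB_S else "exceeds"
    print(f"{op}: {total} MB/sec total is about {per_port:.0f} MB/sec per port, "
          f"which {verdict} a single SAS 1.0 link")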

Meet The HBA Eater

The big problem isn't the MB/Sec throughput, it is the number of IOs this beast is capable of. Again, according to the spec sheet a single drive can generate 160,000 IO/Sec. That isn't a typo. Even the latest and best consumer-grade SSDs aren't getting anywhere near that number; most top out in the 35,000 range, with a few getting as high as 60,000. Lucky for us, LSI has released a new series of host bus adapters capable of coping. The SAS 9211-4i boasts four lanes of SAS 2.0 and a throughput of more than 290,000 IO/Sec, more than enough to test a single LS 300S.

That answers the IO question, but we still have to deal with the dual-port issue if we wish to get every ounce out of the LS 300S. I tried several different approaches to get the second port to show up in Windows as a usable active port. The drive chassis I had claimed to support the feature, but all of them had issues. I actually bought an additional drive cage that was also reported to support dual-port drives in an active/active configuration. Alas, it had issues as well. I was beginning to think there might be something wrong with the drive Pliant sent me! I finally just bought a mini-SAS cable that supported dual-port drives.

As you can see, this cable is different. The two yellow wires are each a single SAS channel; the other wires are for power. That means on my four-port card I can hook up two dual-port drives. Finally, Windows saw two drives and I was able to configure MPIO in an active/active configuration!

Until Next Time….

Now that we have all the hardware in place and configured we will take a look at the benchmarks and long term stress tests in the next article.