Quick Tip: SSIS and SAP BW

Frustration.

If you have ever had to work with Integration Services and data sources other than the Microsoft variety, you know how frustrating it can be. I recently started a project using SQL Server 2012 and SAP BW 7. The customer had already decided that the other commercial options weren’t viable and settled on the Microsoft Connector 1.1 for SAP BW. A test package was built and it worked.

I had already read through Using SQL Server 2008 Integration Services with SAP BI 7.0 and felt comfortable with the instructions. Even though we were using SSIS 2012, I knew there were very few differences between the 1.0 connector and the 1.1 connector. I built out our new package and all seemed well. The SAP BW connection in the connection manager hooked up and tested out. The SAP BW source also tested out, pulling the metadata and even delivering preview data just fine. Next, I attempted to run the package. Just as expected, it ran and returned the data I was after. The next three attempts all resulted in SAP BW connector timeouts. I kept poking and prodding at it. The SAP admin didn’t see any issues either.

So, I did what I always do. I stopped tussling with the package and walked through all the steps again. I went back and read the doc again. I made a few notes and shot an email off to the SAP administrator. I settled on one thing pointed out in the doc:

“We want to keep the number of parallel processes to a reasonable value for the overall DTP process type DTP_LOAD, but this parallelism can lead to a timeout error during the Open Hub DTP extraction through Microsoft Connector for SAP BI”

Sure enough, the number of parallel processes for our DTP was set to something other than 1.

VICTORY!

Now we get all our records and don’t have the timeout issues. The downside seems to be somewhat slower performance. If the choice is between a faster data pump that only runs once every five or so attempts and one that runs slower but runs every time, I’ll slow up a bit. The fact that everything verified, and that I even got preview data on every attempt, made this hugely frustrating and time consuming. Knowing when to stop and reevaluate the problem, and having a good relationship with the SAP administrator, probably saved me hours if not days of cursing and shaking my fist at both Microsoft and SAP.

Solid State Storage: Enterprise State Of Affairs

Here In A Flash!

It’s been a crazy last few years in the flash storage space. Things really started taking off around 2006 when NAND flash and Moore’s Law got together. By 2010 it was clear that flash storage was going to be a major part of your storage makeup in the future. It may not be NAND flash specifically, but it will be some kind of solid state memory and not spinning disks.

Breaking The Cost Barrier.

For the last few years, I’ve always told people to price out the cost of IO, not the cost of storage. Buying flash storage used to be a niche product solving a niche problem, like speeding up random-IO-heavy tasks. With the cost of flash storage at or below standard disk-based SAN storage, with all the same connectivity features and the same software features, I think it’s time to put flash storage on the same playing field as our old stalwart SAN solutions.

Right now, at the end of 2012, you can get a large amount of flash storage. There is still this perception that it is too expensive and too risky to build out all-flash storage arrays. I am here to show that cost, at least, isn’t as limiting a factor as you may believe. Traditional SAN storage can run you from 5 dollars a Gigabyte to 30 dollars a Gigabyte for spinning disks. You can easily get into an all-flash array in that same range.

Here’s Looking At You Flash.

This is a short list of flash vendors currently on the market. I’ve thrown in a couple of non-SAN types and a couple of traditional SANs that have integrated flash storage in them. Please don’t email me complaining that X vendor didn’t make this list or that Y vendor has different pricing. All the pricing numbers were gathered from published sources on the internet. These sources include the vendors’ own websites, published costs from TPC executive summaries and official third party price listings. If you are a vendor and don’t like the prices listed here, then publicly publish your price list.

There are always two cost metrics I look at: dollars per Gigabyte of raw capacity and dollars per Gigabyte of usable capacity. The first number is pretty straightforward. The second metric can get tricky in a hurry. On a disk-based SAN it pretty much comes down to what RAID or protection scheme you use. Flash storage almost always introduces deduplication and compression, which can muddy the waters a bit.
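To make the difference between the two metrics concrete, here’s a quick back-of-the-napkin sketch in Python using the IBM DS3524 numbers from later in this post. The RAID 10 overhead is just an assumption for illustration, not how any particular vendor actually carves up their arrays.

    # Raw vs. usable cost per Gigabyte (illustrative numbers)
    raw_gb = 112 * 200            # 112 drives at 200 GB each
    price = 702_908.00            # total array cost in dollars

    raw_cost = price / raw_gb     # dollars per raw GB

    # Usable capacity depends on the protection scheme. Assume RAID 10
    # mirroring here, which halves capacity; dedupe and compression on
    # flash arrays can push the effective number back up.
    usable_gb = raw_gb * 0.5
    usable_cost = price / usable_gb

    print(f"${raw_cost:.2f}/GB raw, ${usable_cost:.2f}/GB usable")
    # prints: $31.38/GB raw, $62.76/GB usable

Same array, double the per-Gigabyte cost once protection overhead enters the picture, which is why I always ask which number a vendor is quoting.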

Fibre Channel/iSCSI vendor list

Nimbus Data

Appearing on the scene in 2006, they have two products currently on the market: the S-Class storage array and the E-Class storage array.

The S-Class seems to be their lower end entry but does come with an impressive software suite. It provides 10GbE and Fibre Channel connectivity. Looking around at the cost for the S-Class, I found a 2.5TB model for 25,000 dollars. That comes out to 9.7 dollars per Gigabyte of raw space. The E-Class is their super scalable and totally redundant unit. I found a couple of quotes that put it in at 10.00 dollars a Gigabyte of raw storage. Already we have a contender!

Pure Storage

In 2009 Pure Storage started selling their flash-only storage solutions. They include deduplication and compression in all their arrays and factor that into the cost per Gigabyte. I personally find this a bit fishy, since I always like to test with incompressible data as a worst case for any array, which would also drive their cost back up. They claim between 5.00 and 10.00 dollars per usable Gigabyte, and I haven’t found any solid source for public pricing on their arrays yet to dispute or confirm that number. They also have a generic “compare us” page on their website that is at best misleading and at worst plain lies. Since they don’t call out any specific vendor on their comparison page it’s hard to pin them for falsehoods, but you can read between the lines.

Violin Memory

Violin Memory started in earnest around 2005, selling not just flash based but memory based arrays. Very quickly they transitioned to all-flash arrays. They have two solutions on the market today. The 3000 series allows some basic SAN style setups but also offers direct attachment via external PCIe channels. It comes in at 10.50 dollars a Gigabyte raw and 12.00 dollars a Gigabyte usable. The 6000 series is their flagship product and the pricing reflects it. At 18.00 dollars per Gigabyte raw it is getting up there on the price scale. Again, not the cheapest, but they are well established and are used and resold by HP.

Texas Memory Systems/IBM

If you haven’t heard, TMS was recently purchased by IBM. They are based in Houston, TX, and I’ve always had a soft spot for them. They were also the first non-disk based storage solution I ever used. The first time I put a RamSan in and got 200,000 IOPS out of the little box I was sold. Of course, it was only 64 Gigabytes of space and cost a small fortune. Today they have a solid flash based lineup with both Fibre Channel and iSCSI attachment. I couldn’t find any pricing on the current flagship RamSan 820, but the 620 has been used in TPC benchmarks and is still in circulation. It is a heavyweight at 33.30 dollars a Gigabyte of raw storage.

Skyera

A new entrant into this space, they are boasting some serious cost savings. They claim 3.00 dollars per usable Gigabyte on their currently shipping product. The unit also includes options for deduplication and compression, which can drive the cost down even further. It is also a half-depth 1U solution with a built-in 10GbE switch. They are working on a fault tolerant unit, due out the second half of next year, that will up the price a bit but add Fibre Channel connectivity. They have a solid pedigree: they are made up of the guys who brought the SandForce controllers to market. They aren’t a proven company yet, and I haven’t seen a unit or been granted access to one either. Still, I’d keep an eye on them. At those price points, and with that crazy small footprint, it may be worth taking a risk on them.

IBM

I’m putting the DS3524 in as a separate entry to give you some contrast. This is a traditional SAN frame that has been populated with all SSD drives. With 112 200 GB drives and a total cost of 702,908.00 dollars, it comes in at 31.38 dollars a Gigabyte of raw storage. On the higher end, but still in the price range I generally look to stay in.

SUN/Oracle

I couldn’t resist putting the Sun F5100 in the mix. At 3,099,000.00 dollars it is the most expensive array I found listed. It has 38.4 Terabytes of raw capacity, giving it a price tag of just over 80.00 dollars per Gigabyte. Yikes!

Dell EqualLogic

Dell gobbled up EqualLogic, a SAN manufacturer focused on iSCSI solutions, back in 2008, well before the 3Par deal fell apart and Dell picked up Compellent instead. This isn’t a flash array; I wanted to add it as contrast to the rest of the list. I found a 5.4 Terabyte array with a 7.00 dollar per Gigabyte raw storage price tag. Not horrible, but still more expensive than some of our all-flash solutions.

Fusion-io

What list would be complete without the current king of the PCIe flash hill, Fusion-io. I found a retail price listing for their 640 Gigabyte Duo card at 19,000 dollars, giving us 29.00 dollars per usable Gigabyte. The next card down, the 320 Gigabyte Duo at 7,495.00 dollars, ups the price to 32.20 dollars per usable Gigabyte. They are wicked fast though 🙂

So Now What?

Armed with a bit of knowledge you can go forth and convince your boss and storage team that a SAN array fully based on flash is totally doable from a cost perspective. It may mean taking a bit of a risk but the rewards can be huge.


SSIS and Oracle: All Your Non-options

Why do some things have to be so hard?

I have been asking myself that question for the better part of two weeks as I wrestle with SSIS 2008 R2 and getting data out of Oracle and into SQL Server.

It’s like the shell game, only with drivers.

Like, go native man!

Like anyone else working in SSIS and dealing with Oracle, I started with the default drivers that ship with SSIS. Technically, they work. There are a couple of glaring caveats. They only work in 32 bit mode, a huge non-starter on our 64 bit system. They are slow. I know that’s like saying the sky is blue, especially if you don’t have any context, but I do have some context: migrating packages from DTS to SSIS, I’ve got historical run times and also did some test runs before actually converting the packages. The native 32 bit drivers were slower than, or at best equal to, the equivalent on the SQL Server 2000 box running the Oracle 8 drivers. They also don’t return the proper metadata column data types. Everything comes back as a wide string, a.k.a. DT_WSTR, or varchar for you table creating types. Decimal(18,2)? That’s a varchar(50) for you. varchar(10)? You guessed it, varchar(50) should do it! This, beyond anything else, was probably the biggest problem.

No, only use what Oracle provides!

I decided to install the Oracle drivers. Let the pain begin!

First, you have to create an account on the Oracle Developer Network site. Really? I just need some drivers. I guess it could have been worse, like a sharp stick to the eye. You need to download the 32 bit and the 64 bit driver packs. Each one weighs in at 700MB compressed. They do include a ton of tools that I have no clue how to use; that’s a bonus. Here is your sharp stick to the eye: you get to use one of the worst installers in the history of installers. After about a dozen tries I finally found the magic combination to get only the drivers, cutting about 1.3GB that would otherwise have been piled onto my server. Oh, and you get to do it twice. Next, when you look at where it put the drivers, you realize each install is named client, client_1 and so on, with no clue at all which holds the 32 bit bits and which the 64 bit. Once you have that figured out, you can go fix your borked path. The installer will gladly stick its path right at the beginning, giving you hours of fun trying to figure out exactly what is broken. That is, if your path isn’t already too long and it doesn’t just skip this bit for you. And finally, you get to manually add a system variable (TNS_ADMIN) pointing at the directory that holds your tnsnames.ora file, usually something like /app/<nt login>/11.2/client/network/
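If you’d rather script that last step than click through dialogs, something along these lines from an elevated prompt should do it. The path here is purely illustrative; it needs to match wherever your install actually landed:

    setx TNS_ADMIN "C:\app\<nt login>\11.2\client\network" /M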

After all your hard work, you are rewarded with OLE DB, ADO and ADO.NET drivers. HUZZAH!
The first thing I found out is that the OLE DB drivers handle metadata just as badly as the native drivers. Performance was better, though. The ADO and ADO.NET drivers do take it up a notch. You get some additional metadata goodness from them: I did get decimal and float types back, but pretty much every string still came back as DT_WSTR. At least they were sized correctly this time, which was nice. It did mean I had to add a conversion from wide string to string for each one. On a table with 86 columns and 64 million records, that was a bad combination. You do get to run in both 32 bit and 64 bit, though.
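For the curious, that conversion is the standard SSIS cast syntax you end up typing into a Derived Column transform (or clicking together in a Data Conversion transform). A sketch with a made-up column name, repeated for every affected column:

    (DT_STR, 50, 1252)[CUSTOMER_NAME]

The 50 is the length and the 1252 is the target code page, and on an 86 column table you get to do that little dance a lot.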

Use what random people on twitter recommend!

Well, not quite that bad. I posted a 140 character version of this post to the mighty #sqlhelp hashtag and lo’, my friend Merrill Aldrich (twitter|blog) simply said “Can you use the Attunity connector? #sqlhelp” Huh? When I did a search for just Attunity it brought me to their website http://www.attunity.com/ but I didn’t see exactly what Merrill was talking about. Doing a search for Attunity Connector brought me to the gold I’d been looking for: http://www.attunity.com/products/attunity-connect/ssis-connectors-for-oracle-and-teradata. Apparently, Attunity makes connectors for Oracle and Teradata and releases them for F R E E. There are two versions currently: 1.2 for SSIS 2008 R2 and 2.0 for SSIS 2012. The installer is rough, but not Oracle rough. One of the nice things is that installing the 64 bit drivers also installs the 32 bit drivers. Unless you have a problem with the installer, that is, and the 64 bit installer only installs the 32 bit drivers. After a few searches and a few more failed install attempts I found that you need to install the Visual C++ 2008 SP1 redistributable package first. I only needed to do this on my server, since I had Visual Studio 2010 already installed on my laptop. Once that was done the installer worked just fine. Except it didn’t really “install” everything: you still have to manually add the Oracle source and destination connectors to your toolbox sidebar for data flows before you will see them. It’s totally worth it.

First off, with a bang: all the metadata returned was 100% spot on. No more fussing with conversion steps or guessing what the data type should be. They are also faster, and not by a small margin either. Across ~50 packages they were around 25% or more faster than the Oracle provided drivers. You still need to have a TNS names file; you can’t use the machine name, port and service name directly (that I know of).
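For anyone who hasn’t had the pleasure, a tnsnames.ora entry looks something like this; the alias, host and service name here are all made up:

    MYORADB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oraserver01)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = myoradb))
      )

The alias on the first line is what you hand to the connection manager.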

And there was much joy to be had.

I do think it is sad that neither Microsoft nor Oracle has a good solution to this issue. I’m glad Microsoft is supporting Attunity; I just wish they would ship the connectors by default. As for Oracle, now I know why Oracle developers and DBAs get paid so much. If getting the drivers installed is this hard, I can’t imagine getting the whole database set up and running, ugh.

The Fundamentals of Storage Systems – Shared Consolidated Storage Systems

Shared Consolidated Storage Systems – A Brief History

Hey, “Shared Consolidated Storage Systems” did you just make that up? Why yes, yes I did.

For as long as we have had computers there has been a need to store and retrieve data. We have covered the basics of hard disks, RAID and solid state storage. We have looked at all of this through the lens of storage directly attached to a single server. It’s time we expand to attaching storage pools to servers via some kind of network. The reason I chose to say shared and consolidated storage instead of just SAN or Storage Area Network was to help define, broaden and give focus to what we really mean when we say SAN, NAS, Fibre Channel or even iSCSI. To understand where we are today we need to take a look back at how we got here.

Once, There Were Mainframes…

Yep, I know you have heard of these behemoths. They still roam the IT Earth today. Most of us live in an x86 world though. We owe much to mainframes, and one of those debts is networked storage. Way back when, I’m talking the 1980’s now, mainframes would attach to their storage via a system bus. This storage wasn’t internal the way we think of direct attached storage though; they had massive cables running from the mainframe to the storage pods. The good folks at IBM and other big iron builders wanted to simplify the standard for connecting storage and other peripherals.

(Image: bus & tag cabling. Who doesn’t love working with these cables?)

(Image: a bus & tag terminator. You could never lose this terminator!)

Out With The 1960’s And In With The 1990’s!

Initially, IBM introduced its own standard in the late 80’s to replace the well aged bus & tag and other similar topologies with something more robust that could communicate over optical fiber: ESCON was born. Then the rest of the industry backed Fibre Channel, a protocol that works over optical fiber or copper based networks and, more importantly, one driven by a standards body and not a single vendor. Eventually, Fibre Channel won out. In 1994 Fibre Channel was ratified and became the de facto standard; even IBM got on board. Again, we are still talking about connecting storage to a single mainframe, though longer connections were possible and the cabling got a lot cleaner. To put this in perspective, SQL Server 4.2 was shipping at that point, with 6.0 right around the corner.

High Performance Computing and Editing Video.

One of the other drivers for Fibre Channel was the emerging field of High Performance Computing (HPC) and the need to connect multiple mainframes or other compute nodes to backend storage. Now we were really starting to see storage attached via a dedicated network and shared among many computers. High end video editing and rendering farms also drove Fibre Channel adoption. Suddenly, those low end PC-based servers had the ability to connect to large amounts of storage just like the mainframers.

Commodity Servers, Enterprise Storage.

Things got interesting when Moore’s Law kicked into high gear. Suddenly you could buy a server from HP, Dell or even Gateway. With the flood of cheaper yet powerful servers containing an Intel, MIPS, PPC or Alpha chip, you didn’t need to rely on the mainframe so heavily. Couple that with Fibre Channel and you had the makings of a modern system. One of the biggest challenges in this emerging commodity server space was storage management. Can you deal with having hundreds of servers and thousands of disks without any real management tools? What if you needed to move some unused storage from server A to server B? People realized quickly that maintaining all these islands of storage was costly and also dangerous. Even with RAID systems, if someone didn’t notice the warnings you could lose whole systems, and the only people who knew something was up were the end users.

Simplify, Consolidate, Virtualize and Highly Available

Sound familiar? With the new age of networked storage we needed new tools and methodologies. We also gained some nifty new features. Network attached storage became much more than a huge hard drive. To me, if you are calling your storage solution a SAN it must have a few specific features.

Simplify

Your SAN solution must use standard interconnects. That means if it takes a special cable that only your vendor sells, it doesn’t qualify. In this day and age, if a vendor is trying to lock you into specific interface cards and cables, they are going to go the way of the dodo very quickly. Right now the two main flavors are fiber optics and copper twisted pair, a.k.a. Ethernet. It must also reduce your management overhead; this usually means a robust software suite above and beyond your normal RAID card interface.

Consolidate

It must be able to bring all your storage needs together under one management system. I’m not just talking disks; tape drives and other storage technologies like deduplication appliances fall into that category too. The other benefit of consolidation is generally much better utilization of these resources. Again, this comes back to how robust a software stack your SAN or NAS ships with.

Virtualize

It must be able to abstract low level storage objects away from the attached servers, allowing things like storage pools. This plays heavily into the ability to manage the storage that is available to a server and maintain consistency and uptime. How easily can I add a new volume? Is it possible to expand a volume at the SAN level without having to take the volume offline? Can other resources share the same volumes, enabling fun things like clustering?

Highly Available

If you are moving all your eggs into one HUGE basket, it had better be one heck of a basket. Think redundant controllers, where one controller head can fail but the SAN stays online without any interruption to the attached servers, and multiple paths into and out of the SAN so you can build redundant network paths to the storage. Other features, like SAN to SAN replication to move your data to a completely different storage network in the same room or across the country, may be available for a small (read: phenomenal) add-on fee.

If your SAN or NAS hardware doesn’t support these pillars, then you may be dealing with something as simple as a box of disks in a server with a network card. Realize that most SANs and NASes are just that: specialized computers with lots of ways to connect to them and some really kick-ass software to manage it all.

Until Next Time…

Now that we have a bit of history and a framework we will start digging deep into specific SAN and NAS implementations. Where they are strong and where they fall flat.

One Great Day And Mixed Feelings

If it happens two times then you know the first time wasn’t a fluke.

Today was my anniversary date for the SQL Server MVP award. I wasn’t expecting to be renewed. I was though. Three of my friends weren’t added to the MVP roster. All three of them have put in the time and work. If this were as simple as “do X and Y, get MVP” it would be easy to say you didn’t do X enough or Y enough. That’s not the nature of an award.

a·ward verb (used with object)
1. To give as due or merited; assign or bestow: to award prizes.
dictionary.com

An award is given. Let me say that one more time. An award is GIVEN. You may have done enough to earn an award. That doesn’t guarantee you will be given it.

I deserved it!

I dare say many have deserved it and not been granted MVP status. I thought I was one of those people. In 2004 I did a ton of crazy traveling and promotion for SQL Server 2005. I was a user group coordinator for two user groups almost two hours apart from each other. I sacrificed a lot. I felt I was entitled to the MVP award. I had been passed over before, but this time I deserved it. I didn’t get it. Was I mad? Did I feel a bit betrayed? You bet I did. Why should someone who hangs out in a user forum all day be more worthy than me? I had a hard time accepting that I was passed over, AGAIN. It changed me. It changed my outlook on things.

I sat back and evaluated why I was giving so much of my time supporting a product, made by one of the wealthiest software companies in the world, FOR FREE. Eventually, I realized it wasn’t the product or the company. I was supporting my career, my desire to learn more, and the people around me who just wanted to learn as much as I did. So, for the most part, I got over it. I quit flinching every time I was introduced as an MVP or former MVP. I stopped getting angry every time someone would say “I was sure you were an MVP!” I stopped letting my world revolve around achieving MVP status, even though many of my friends and colleagues were current or former MVPs. I just put it aside. I said if all the work I did in 2004 wasn’t enough, then I can’t imagine how much more I could do, or what else I could give up, to prove I was MVP material. I just kept doing what I loved to do: working with a product I was passionate about, teaching what I knew, and learning from others whenever I could.

Being happy for others.

Eventually, I just got really zen about it. I watched others get the MVP and I was always happy for them. My favorite was when Jen McCown (@midnightdba) got her award. I watched it live on DBAs@Midnight. I was so happy for her I cried. I cheered at the screen and realized just how awesome Sean can be sometimes. I had several conversations about how Jen “came out of nowhere” and was awarded “early” in her efforts. Had Jen been community driven as long as I had? No. Did she take a sabbatical to have a family? Yes. Did she F**KING CRUSH IT when she got back in the swing of things? Oh hell yeah. She started blogging, recording videos and speaking in 2008. In January of 2011 she was awarded. She didn’t write a few blog posts; she wrote HUNDREDS. She didn’t record a few vids; she (and Sean) started a live show on Fridays. She was just everywhere, for TWO YEARS SOLID. To say she just popped fully formed into her MVP in 2011 is a great injustice to the amount of work she put in. Did others work harder during that time than Jen? Maybe, but I couldn’t name them.

I Finally made it.

When I was awarded last year I was unbelievably fortunate to be surrounded by my friends and the community I support. I was, and am, extremely humbled to be an awardee. Today, when I hadn’t yet received my nod, I was ready to pull the MVP logo from my intro slide and give the best presentation I possibly could. MVP or no MVP, I love what I do. I love teaching. I love community. If I don’t get renewed next year it won’t change a thing. I will still travel on my own dime, give my time and do my best. If you think that is “lip service” then I am sad for you. If not being an MVP keeps you from doing the things you love then maybe you really aren’t doing what you love.

We are all human.


It’s not wrong to want the MVP award. It’s not wrong to work towards that goal. If you think you earned it and didn’t get it, that’s your fault. You aren’t alone in the “I should be an MVP” club. As a former member, I know just how bitter it can make you sometimes. If you want to earn something, go get your Microsoft Certified Master. It shows you are technically one of the best with SQL Server. You don’t have to speak, blog, record videos or hang out on the TechNet forums for years hoping to be recognized. If you work your ass off for it and you earn it, Microsoft hands your certification right over.

For those of you who haven’t been awarded yet, please don’t stop trying. More importantly, don’t stop giving to the community, which appreciates it more than Microsoft ever will. Realize you change lives when you teach others. Yours and theirs.

So, now that I’ve ranted and rambled about the MVP, what is it? Again, Jen wrote it up well.