That’s not my job…maybe…

October 25th, 2010
by The Grateful DBA
[Image: "Not my job"]

I found myself in a small, most likely very insignificant little situation at work today. When I looked back on it afterward, I felt unexpectedly fresh: the kind of fresh one experiences after an epiphany of minor proportions. I had to reflect, and really think twice, about why doing seemingly nothing more than “my job” left me feeling like a better DBA and person in general.

In this particular situation, I had two responsibilities: to ensure the code conformed to company standards/syntax, and to verify the procedure affected the expected number of records BEFORE it got run against prod. Simple enough, right? It’s not my role (at this point in the process, at least) to ensure correct logic is being followed. After a 9-hour day (not gargantuan by DBA standards, but long enough to be exhausted), a script came across my desk for review. This script contained an innocent enough local variable, “@CustomerID”. The variable was declared but never assigned a value. Fifteen lines down the script, @CustomerID was checked to see if it was NOT NULL; and if so, an UPDATE statement was run. Looking it over a second time, the logical error became clear: the UPDATE could never run. Yet syntactically it was correct, and company standards were all being met. My options flashed across my mind. Do “my job” and advance it forward, or reject it and cause myself and perhaps several others to stay late fixing the script and sending it back through the pipeline?
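For the curious, the offending pattern boiled down to something like this (reconstructed from memory, with hypothetical table and column names):

DECLARE @CustomerID INT     -- declared, but never assigned; stays NULL

-- ...fifteen lines of unrelated work later...

IF @CustomerID IS NOT NULL  -- NULL never passes this test...
BEGIN
     UPDATE Customer
     SET isActive = 0
     WHERE ID = @CustomerID -- ...so this UPDATE can never run
END

Syntactically flawless, standards-compliant, and guaranteed to do absolutely nothing.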

I suppose I wouldn’t be writing about this had I taken the easy way out. And I must stress, this is NOT about patting myself on the back. This was a very small thing in reality. I rejected the ticket for the logic error, despite that not being part of what I was asked to do in this situation. A VP came by my desk 5 minutes later and immediately inquired why I rejected it. Once I explained the logic error, his initial annoyance at his guys having to stay late because I bounced the ticket disappeared completely. He thanked me for my diligence and basically said that if his team had been more careful in the first place, they’d be home by now. Small victory for the Grateful DBA.

As I drove home a bit later, it occurred to me just how good it felt to go that extra step. How the satisfaction of doing the “right thing” outweighed the small irritation of putting in an extra 45 mins on a Monday evening. How it not only was the best thing for the organization, but bonus, was probably a fairly good thing for my career.

So I ask; did you bite your tongue today when something came across your desk that wasn’t your job? What did you do extra today?

Have a grateful day…Troy

Posted in Best Practices, Musings | Comments (2)

COUNTing with the DISTINCT Clause

October 4th, 2010
by The Grateful DBA
[Image: "One ringy-dingy..."]

Ok, that’s not the count I’m referring to, but one of the many important tasks we DBAs are charged with from time to time is to convert INSERTs/UPDATEs/DELETEs (IUDs) into “count scripts” in order to verify the number of records a DML statement would affect before it goes into production. While this may seem like one of the most basic things we have to do daily, its importance cannot be stressed enough. A proper and thorough series of checks and balances is vital not only in the biggest of organizations, but in small companies as well. Nobody wants to run scripts against prod only to find out later that instead of affecting the 100 rows you thought it would, your dev left out the WHERE clause and you just updated every row in the table. Eeek! Now you have to take time away from other important things you could/should be doing and test out your data recovery strategy. People (especially Vice Presidents) get extremely irritated at situations like this. Jobs WILL be on the line. Even if you were not the dev who wrote the script, you were still the DBA who let it slide past unchecked. This can be very hazardous to your career.
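For a garden-variety UPDATE or DELETE, the conversion is mechanical: keep the FROM and WHERE clauses intact and swap the action for a count. A quick hypothetical example:

-- The DML as submitted:
UPDATE Customer
SET isActive = 0
WHERE lastOrderDate < '20100101'

-- The count script you run against prod first:
SELECT COUNT(*)
FROM Customer
WHERE lastOrderDate < '20100101'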

And since The Grateful DBA wants you to have a long and prosperous SQL career, I offer this little tip regarding one count script situation you might find yourself in: that of an IUD based on a SELECT DISTINCT statement. It’s a bit trickier than you might suspect, but since my readers are way above average intelligence, I have no doubts it’ll be eazy-peazy for you.

Here’s an example of a typical customer INSERT you may have to convert to a count script:

INSERT INTO Customer
     (ID,
     firstName,
     lastName,
     nickname,
     address1,
     address2,
     city,
     stateID,
     zip,
     entryDate)
SELECT DISTINCT
     1,
     t.firstName,
     t.lastName,
     t.firstName,
     t.address1,
     t.address2,
     @city,
     s.longStateName,
     ISNULL(t.zip,'00000'),
     getdate()
FROM CustomerTemp t
     INNER JOIN States s ON s.ID = t.ID

We have a little of everything here: constants, joined values, repeats, functions, & local variables. Let’s see what it would take to convert this to an accurate count of exactly how many records we’re inserting. Here are all the rules you need to remember…

CONSTANTS – All constants must be removed from the list. This includes things like numbers and string literals. A constant returns the identical value for every row, so it can never change how many DISTINCT rows come back (and, left in the wrapper, it would need an alias anyway). In the case of our example script, this means we’ll need to remove the “1”.

REPEATS – No value may appear in the final count list more than once, or the derived table will throw a duplicate column name error. In our example, we must remove one of the two “t.firstName”s.

FUNCTIONS – Function calls such as the ISNULL in our example are allowed and valid, but must be aliased; even if the alias is exactly the same as the fieldname itself, one is required. As well, functions such as “getdate()” must be removed: getdate() is evaluated once per statement, so like a constant it returns the same value for every row and adds nothing to the DISTINCT.

LOCAL VARIABLES – Similar to the rule concerning constants, local variables also hold a single value for every row, so they must be removed from the script. In our example, the “@city” is a goner.

THE WRAPPER – No, I’m not referring to Jay-Z or Eminem; I mean how the count statement is formed. From my (albeit brief) research, there are apparently several “legal” ways of forming the final statement. The method I’ve always used, and honestly found easiest to read and grasp, is to wrap the new, slimmer SELECT in its own “SELECT COUNT(*) FROM”. The inside SELECT must be surrounded by parentheses and must have its own unique alias. Even if you have no intention of ever using the alias for anything important, it’s nonetheless required.

This is what our example above turns into after we’ve trimmed it down…

SELECT COUNT(*)
FROM
     (SELECT DISTINCT t.firstName,
          t.lastName,
          t.address1,
          t.address2,
          s.longStateName,
          ISNULL(t.zip,'00000') zip
     FROM CustomerTemp t
          INNER JOIN States s ON s.ID = t.ID) A
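If you’re wondering why the alias and no-repeats rules exist, it’s because a derived table must expose uniquely named columns. Leave a bare constant or an unaliased function inside the wrapper and SQL Server balks immediately (message quoted from memory; the exact text may vary by version):

SELECT COUNT(*)
FROM
     (SELECT DISTINCT 1, t.firstName
     FROM CustomerTemp t) A

-- Msg 8155: No column name was specified for column 1 of 'A'.

And removing the constants and local variables is always safe: a value identical on every row can never change how many DISTINCT rows come back.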

I hope this helps when you find yourself faced with a similar task to accomplish. As always, I welcome feedback.

Have a Grateful day…Troy

Posted in T-SQL | Comments (1)

Grateful DBA Updates

August 8th, 2010
by The Grateful DBA

Howdy all. A couple pieces of housekeeping on the blog to share with you.

First, I have the pleasure of announcing that I’ve been asked to speak on failover options for SQL Saturday #49 in Orlando, FL on 10/16/10. Andy and the gang down there always do a bang-up job with their annual event. I have no doubt this year will be just as memorable. More on that as we get closer to the event, but make sure to be on the lookout for the #SQLSat49 Twitter hashtag.

Next I’m happy to announce that I’ve been asked to present on failover/DR options for the 11/17/10 Jacksonville SQL Server Users Group monthly meeting. This will be my first presentation to a user group after more than a half-dozen SQL Saturdays. So while slightly nervous, I accept the task head-on and look forward to the night with great anticipation. We always go out for munchies and beverages afterward, so if you’re in the Jacksonville area, I would love to see you there.

Permanent links to both these events are across the silver bar at the top of this page.

And finally, I’ve updated the resume attached to this site via the Google Docs link at the top to reflect my most recent position here in Jacksonville.

Thanks and have a grateful day…Troy

Posted in Life, Speaking | Comments (0)

SSMS Default Save Location

July 22nd, 2010
by The Grateful DBA

Now, I certainly understand that when a monstrous piece of software as complicated as SQL Server 2008 R2 is released, there are literally 3.2 gajillion details to get right. And I freely admit that Microsoft got 99% of them right with this latest release, but this one’s really got me scratching my head.

SSMS allows us to specify a default save location for query execution result sets, just as you’d expect from an enterprise-level product.

[Screenshot: the Query Results options dialog]

But what SSMS doesn’t allow us to do is specify a default save location for the query definition itself. Notice what’s conspicuously missing…

[Screenshot: the Query Execution options dialog]

My current position requires me to save scripts to the same folder throughout the day, then create a new folder the next day and save that day’s scripts in there. This folder is of course buried 5 or 6 levels deep on my local machine. Having to navigate through all those levels EVERY single time just to save a script is maddening to say the least. So, what’s a modern DBA to do to alleviate this situation? Well, my work-around saves me loads of time, and really wasn’t as difficult to set up as I at first thought it might be.

The “Save File As” menu option does indeed provide us with this handy shortcut bar on its left side:

[Screenshot: the shortcut bar in the “Save File As” dialog]

It occurred to me that since they were kind enough to offer me an expressway to my desktop, I could leverage that to resolve my problem. And sure enough, all it takes is an ordinary shortcut on the desktop whose target I change each morning, and voilà, I’m two clicks at most away from saving my script into my multi-nested location.

Notice I’ve labeled it “Today,” so that part never has to change. I simply create the new folder with today’s date (the actual nested one…which I have to do anyway) and then quickly change the path the desktop shortcut points to. That’s really all there is to it. Two clicks and I’m saving my script to the proper place.

I know none of this is mind-blowing cutting edge stuff, but it’s certainly made my life easier and I wanted to share this little epiphany with you kind people.

Have a grateful day…Troy

Posted in SSMS | Comments (3)

Spring/Summer Speaking Schedule

April 12th, 2010
by The Grateful DBA

The Grateful DBA is happy to announce that I’ll be speaking at several SQL conferences this spring and summer. Not exactly what one would designate a “whirlwind tour,” yet nonetheless I’m truly looking forward to each for many specific reasons. I will be speaking at each of these conferences on a subject which consumes much of my day: transactional replication. So… if you happen to catch it more than once and it sounds familiar, now you know why.

The schedule and a couple notes about each event…

  • 4/24/10, SQL Saturday #41, Atlanta, GA – Besides the normal “I’m-REALLY-looking-forward-to-this-SQL-Saturday” reasons, this weekend has an extra bonus attached, and it’s called Railroad Earth. For those not familiar with this wonderful bluegrass-rock-roots-jam band, search them out on MySpace, YouTube, etc. for a taste of their style. Anyway, they just so happen to be at the Variety Playhouse that evening after the conference, and I will of course be making my way downtown to boogie.
  • 5/8/10, SQL Saturday #38, Jacksonville, FL – This event holds a special place in my heart only because it’s put on by my local SQL Server user group, JSSUG.  I will be taking an active role in not just presenting, but helping Brian, Devin, & Scott prepare in any way needed.  It takes place right around the corner here in Jacksonville on the lovely campus of The University of North Florida.
  • 6/5/10, SQL Saturday #22, Pensacola, FL – You may have guessed by now that these SQL Saturdays are numbered sequentially, and this one being #22 tells me they’ve been working on getting it together for quite a while.  The official presenter’s list has not been decided upon yet as I believe the call for speakers hasn’t technically closed.  I have of course applied to present and certainly hope to be asked to speak.  Either way, I will attend this fine event as a day of free training with some of the top SQL minds in the country is just too good to pass up. (Edited 4/19/10:  Speaker’s list appeared over the weekend…it’s official!  See ya there!!)
  • 7/31/10, SQL Saturday #40, Miami, FL – Same thing here as for Pensacola; the call for speakers won’t close for another month or so, but I certainly hope to be asked to present.  Despite the expectation of oppressive heat (c’mon, it is South Florida in July), I always love traveling south the length of my home state as well as catching up with several close friends I happen to have in the Boca/Palm Beach area.

If you attend any or all of these events, please please make sure to find me and say hello.  I truly look forward to meeting the 3 people who read my blog.  😉

Have a Grateful day…Troy

Posted in Replication, Speaking | Comments (1)

Missing Clustered Indexes w/ Row Counts

April 1st, 2010
by The Grateful DBA

This post is a follow-up to Thomas LaRock’s excellent article here, so first go read that. I’ll wait.

You back? Good.

As most of you know, tables missing clustered indexes (CIs) are fairly bad performance-wise, but why? Set aside, even, the fact that SQL Azure actually requires a CI on each and every table in the cloud database, so heaps may trip you up there in the future. More importantly, SQL Server reads the data in a CI sequentially, one extent at a time. Brad McGehee states, “This makes it very easy for the disk subsystem to read the data quickly from disk, especially if there is a lot of data to be retrieved.” Heaps force the server to do so much extra work, given the unordered nature of the underlying data (whether reached through NCIs or no indexes at all), that performance is bound to suffer.

My goal here is to extend the functionality of Thomas’ little script to do two additional things which I find extremely helpful. First, have the script work all at once for every database on a particular server. Certainly this is not an issue for those of you in environments with one database per server, but I’d bet that’s more the exception than the rule for most of us.

Second, I find it very useful to have a row count included to assist me in deciding exactly how to proceed with the data provided. While I don’t disagree with the need to potentially trash any tables you find with 0 rows, that cleanup project is for another day. I’m concerned today with missing CIs on tables that truly need them. Unfortunately, row count is not included in either of the joined tables in Thomas’ script. I solve that problem by creating a temp table filled from sp_spaceused and joining it to the original resultset.

A couple caveats for you before checking out this script. First, being that I work in an environment that’s 95% SQL Server 2000, this script was written specifically for 8.0. Now, I did indeed test it on 2005 and it does work, but I’d bet a lot of coin there are much easier ways of doing this using DMVs and other new features. Second, the script has a hard time with schemas other than ‘dbo’. While I’m fully aware it could probably be changed to take the current schema into account (perhaps from the ‘information_schema.tables’ system view), I don’t have the need in my environment, so honestly, I just never took the time to handle alternate schemas. Third, don’t hassle me for using cursors. I don’t use them in production, but am not averse to using them carefully in maintenance scripts that get run “occasionally.” I know all about the performance implications, but they’re not really an issue for a script that takes less than a second to run… and on an ad-hoc basis at that.

Thanks again to the SQLRockstar for inspiring me to clean this up and post it out there for all.

Have a Grateful day…Troy


USE master
DECLARE cur1 CURSOR FOR
  SELECT [name] FROM sysdatabases
  WHERE [name] NOT IN ('tempdb','master','msdb','model')
        
DECLARE @db VARCHAR(75)
DECLARE @sql1 NVARCHAR(3000)
OPEN cur1
FETCH NEXT FROM cur1 INTO @db
WHILE @@FETCH_STATUS <> -1
  BEGIN
    SET @sql1 = N'SET NOCOUNT ON

      -- CREATE TEMP TABLE TO HOLD ALL TABLE NAMES
      CREATE TABLE #so (dbName VARCHAR(75),
        tableName VARCHAR(75),
        tableRows INT)
    
      -- FILL FROM sysobjects
      INSERT INTO #so
      SELECT ''' + @db + ''' [db], name [table], 0
      FROM ' + @db + '.dbo.sysobjects
        INNER JOIN ' + @db + '.information_schema.tables ist ON ist.table_name = ' + @db + '.dbo.sysobjects.name
      WHERE xtype = ''u''
        AND id NOT IN
          (SELECT id FROM ' + @db + '.dbo.sysindexes WHERE indid = 1)
        AND ist.table_schema = ''dbo''
        AND ' + @db + '.dbo.sysobjects.name NOT IN (''cdata'',''cglobal'')

      -- CREATE CURSOR TO ITERATE TABLE NAMES AND RUN sp_spaceused AGAINST THEM
      DECLARE cur2 cursor for
        SELECT tableName FROM #so
      OPEN cur2
      DECLARE @tn VARCHAR(75)
      DECLARE @sql2 NVARCHAR(1000)
      FETCH NEXT FROM cur2 INTO @tn
      WHILE @@FETCH_STATUS <> -1
        BEGIN
          -- CREATE TEMP TABLE TO HOLD ROW AMOUNTS (AND OTHER STUFF WE DONT CARE ABOUT NOW)
          CREATE TABLE #su
            (tableName varchar(100),
                NOR varchar(100),
                RS varchar(50),
                DS varchar(50),
                IxS varchar(50),
                US varchar(50))
          
          -- FILL THE NEW TEMP TABLE
          SET @sql2 = N''INSERT INTO #su EXEC ' + @db + '.dbo.sp_spaceused '''''' + @tn + ''''''''
          EXEC sp_EXECUTESQL @sql2
            
          -- JOIN THE 2 TEMP TABLES TOGETHER TO UPDATE THE FIRST WITH THE ROW NUMBER COUNT
          UPDATE #so
          SET tableRows = su.NOR
          FROM #su su
            INNER JOIN #so so ON so.tableName = su.tableName
          WHERE ISNULL(su.NOR,0) <> 0
          
          -- CLEAN UP
          DROP TABLE #su

          -- ITERATE        
          FETCH NEXT FROM cur2 INTO @tn
        END
  
      -- CLEAN UP
      CLOSE cur2        
      DEALLOCATE cur2

      -- OUR OUTPUT!!
      SELECT * FROM #so
      WHERE tableRows > 0
                  ORDER BY tableRows DESC

      -- CLEAN UP
      DROP TABLE #so'
    
    -- EXECUTE THE ENTIRE BLOCK OF DYNAMIC SQL
    EXEC SP_EXECUTESQL @sql1  
    
    -- ITERATE
    FETCH NEXT FROM cur1 INTO @db
  END

-- CLEAN UP
CLOSE cur1
DEALLOCATE cur1
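And since I teased the 2005 DMVs in the caveats above, here’s a hedged sketch of what the same report could look like on a single 2005+ database using the catalog views instead. Untested in my 95%-SQL-2000 shop, so kick the tires before trusting it:

-- Heaps (no clustered index) with row counts, current database only
SELECT DB_NAME() AS dbName,
     SCHEMA_NAME(t.schema_id) AS schemaName,
     t.name AS tableName,
     SUM(p.rows) AS tableRows
FROM sys.tables t
     INNER JOIN sys.indexes i ON i.object_id = t.object_id
          AND i.type = 0   -- type 0 = heap
     INNER JOIN sys.partitions p ON p.object_id = i.object_id
          AND p.index_id = i.index_id
WHERE t.is_ms_shipped = 0
GROUP BY t.schema_id, t.name
HAVING SUM(p.rows) > 0
ORDER BY tableRows DESC

No cursors, no sp_spaceused, and it handles non-dbo schemas for free; wrap it in your favorite per-database iterator and you’re done.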

Posted in Index Tuning | Comments (1)

What Teachers Make

March 29th, 2010
by The Grateful DBA

Apparently this video has been around for three years, and this morning was the first time it made its way across my eyes. I think it’s absolutely wonderful, and Taylor Mali says it all so so perfectly. I have several close friends who are actual college-level teachers, and it’s a career my undergraduate son is considering. Not only that, but many of YOU, my SQL Server colleagues, are outstanding teachers in your own right.

And thank you to Seth Godin for bringing this video to my attention this morning.

So teachers, stand tall, be proud, and the next time someone laughs about how much teachers make, smile securely, knowing ignorance truly is bliss.

Have a grateful day…Troy

Posted in Life | Comments (0)

FTI Catalog Relocation

March 1st, 2010
by The Grateful DBA
[Image: relocation]

Had a relatively simple maintenance to perform this past weekend; the kind so simple, it’s the fodder of DBA dreams everywhere. I was assigned the task of moving the data and log files for four separate databases to another drive on the same server. The server designation itself for these four databases wasn’t to change; simply the file path to the mdf and ldf. To make it even easier, none of the four were involved with replication; actually quite a rarity in our environment. Our network operations team provided me with a new SAN share on this box, which afforded me the opportunity to ease some of the drive space issues I’d been running into occasionally.

At first glance, a simple detach-move-reattach was all that appeared necessary for this straightforward operation. But a deeper look prior to the maintenance window’s arrival indicated all four of these databases were full text enabled and populating six separate catalogs each. Ok, no need for panic, this doesn’t turn it from a half hour operation into a six-hour one, but dropping the 24 catalogs then recreating and repopulating them did indeed add a substantial amount of time to the project. It got me wondering just how efficient my approach really was.

Detaching and reattaching a database does by default disable full-text indexing on that database. After the files have been properly moved, the task at hand is to move the full-text catalogs to a folder alongside the new mdf/ldf location. For example, the “Users” table is FTI’d and the original catalog is stored on the M drive, same as the data and t-log files. After the move, the mdf and ldf are now on the N drive, but right-clicking the “Users” table shows the catalogs still mapped to M. Not only this, but they are grayed out in the full-text editing dialog box, disallowing a simple remapping of the catalog location. My solution was to carefully document the fields/names/primary keys/etc. used for each table’s FTIs, then remove all catalogs and re-add them individually, mapped to N as needed. Not a mind-numbingly horrible operation as some DBA stuff goes, but complicated enough that one truly has to concentrate and not miss ANY details whatsoever.
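For the broad strokes, the drop-and-recreate dance looks roughly like this in SQL Server 2000 T-SQL. This is a sketch only, with hypothetical database/catalog/index names, so verify the sp_fulltext_* parameters against Books Online before running anything like it:

USE CustomerDB   -- hypothetical database

-- Tear down the index and the catalog still rooted on M:
EXEC sp_fulltext_table 'dbo.Users', 'drop'
EXEC sp_fulltext_catalog 'UsersCatalog', 'drop'

-- Recreate the catalog rooted on the new drive...
EXEC sp_fulltext_catalog 'UsersCatalog', 'create', 'N:\FTData'

-- ...re-add the index from your notes (the unique key index is required)...
EXEC sp_fulltext_table 'dbo.Users', 'create', 'UsersCatalog', 'PK_Users'
EXEC sp_fulltext_column 'dbo.Users', 'lastName', 'add'

-- ...and kick off a full population.
EXEC sp_fulltext_table 'dbo.Users', 'start_full'

Multiply that by 24 catalogs and you can see why the documentation step matters so much.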

While I accomplished the whole thing with hours still to spare in the maintenance window, I told myself there just has to be an easier way to accomplish the FTI portion. I suppose I could script out the whole process from dropping of the original catalogs to the recreation at the new location in T-SQL. Now while that would indeed save me mountains of time during the window, just how long would it take to create those very scripts? Have I really saved myself work? I don’t think so with that option.

It occurred to me that this catalog location must of course be stored in one of the system tables somewhere. What if I simply ran an update against that row then repopulated the index? Or maybe there is some non-documented-super-duper-secret hidden system stored procedure I could fire off? Could it be that simple?
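You can at least peek at where each catalog thinks it lives; in 2000, each full-text-enabled database carries a sysfulltextcatalogs system table (a NULL path meaning the default FTData directory, if memory serves):

SELECT name, path
FROM sysfulltextcatalogs

Peeking is easy; safely updating is another matter entirely.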

Turns out that it isn’t that simple after all.

Sudarshan offers a SQL Server 2005 solution towards the bottom of this forum post, which seems to indicate the magic “Keep Full Text Indexes” checkbox included with the detach dialog takes care of the issue very easily. There are a couple gotchas according to Sudarshan, but his solution appears quick and elegant. Let it be noted I haven’t actually tested it, because I currently work in a predominantly SQL Server 2000 environment. This looks viable for ’05 and most likely ’08, but proceed cautiously, since I don’t know the author personally.

Microsoft offers up two support pages that address this very issue; here and here. As your typical MSDN pages do, these offer tremendous detail into the process; more than most humans need. They recommend not just a simple “point the catalogs somewhere else” solution, but registry changes as well. Now, while I’ve been around a bit and registry changes don’t scare me as much as they used to, I still try to avoid them whenever possible. Too many horror stories have surfaced over the years of even the slightest registry change wreaking total havoc across multiple systems; I’m ALWAYS going to be hesitant to make those changes manually. If that’s what goes on behind the scenes when I complete a wizard or close a dialog box, fine, that I trust. If I have a choice that’s slightly more laborious yet means I don’t have to visit regedit, then I’m all for it.

While my “manual” solution appears to possibly be the quickest methodology for a small number of databases, obviously one’s going to have major issues when it comes to scaling and trying to accomplish this over 20, 30 or more databases simultaneously. Like so so many other SQL Server topics, this comes down to an “It depends” answer once again. Take everything into account. How many databases? How long is your maintenance window? Your ability to produce similar scripts to run at a later time? Your tolerance for making direct registry changes? Weigh all these factors and I’m sure the solution you end up with will be much more appropriate for your situation.

I’m very interested in your feedback on this topic. Have I over-complicated a relatively simple issue? Or have I stumbled upon one of those timeless aspects of SQL Server that simply needs dealing with on a manual basis? How have you accomplished this in the past? I look forward to your responses.

Have a Grateful day…Troy

Posted in Full Text Indexing | Comments (0)

What NOT To Do. True SQL Confessions.

February 24th, 2010
by The Grateful DBA

I just love irony…
[Image: irony]

Flipping through the channel guide last night, I came across TLC’s What Not to Wear. It reminded me of the hilarious meme floating around the SQL Twitter community yesterday regarding SQL confessions. Basically, we were “baring our souls” and confessing to committing some of the most cardinal of database sins. And while the conversation was 110% tongue-in-cheek, it occurred to me that just like the yentas on TLC telling you what not to wear, this thread is in essence an excellent guide on what NOT to do with SQL. Seriously, here we have some of the best SQL minds in the world not only having a blast and amusing the crap out of anyone following the hashtag, but also teaching us valuable lessons. Albeit in a backwards, offhanded manner, but nonetheless I see a tremendous amount of wisdom if one can remember to look at it correctly.

So, I repost just a small percentage of the wackiness that ensued yesterday, with a few stipulations. First and MOST IMPORTANT: this is not a list of advice from me. PLEASE don’t go running into the IT Director’s office and insist all your company’s clients’ databases get converted to Access, just because the Grateful DBA posted it. But… maybe it will get you to think once or twice when a client starts hinting that maybe they’d prefer an Access solution because they’ve already forked out coin for Office.

Second, while I’m all for giving credit where credit is due… 99% of the time, I think this may be the exception to the rule. While this is all in good fun, I shudder at the thought of some lazy slob happening upon this blog, seeing a highly respected MVP’s name next to something outrageously ridiculous, and taking it out of context. To avoid any possibility of that, I’m not including any author names here. Understand these are all “overheards,” and only a couple of them were written by me. If you’re truly industrious, I suppose you could do Twitter searches for the phrases below and determine who wrote them. If you do, I say you have too much time on your hands.

So, enjoy this list of true SQL confessions. And like I stated, while these are all meant as jokes, I honestly believe there’s a lot to be gained by slapping on the ol’ irony glasses and examining exactly what they’re trying to tell you.

  • “I shrunk a database last week.”
  • “I ran DBCC CHECKDB in production without telling anyone.”
  • “I go in the server room and pull random wires as soon as I go off call.”
  • “I sneak cursors into others’ code.”
  • “I have a production account whose username and password are the same.”
  • “I have tlogs on RAID 5.”
  • “My sa password is sa.”
  • “A query I wrote yesterday not only uses dynamic sql but in a loop executing it hundreds of times.”
  • “I’ve made developers sysadmin in production so they could push code changes without testing them.”
  • “I used rollback within a trigger.”
  • “I eat bacon 3 meals a day.”
  • “I sometimes do exactly the thing I tell you not to in my blogs.”
  • “I run CHECKPOINT every 2 seconds.”
  • “I used a cross join to fill a table variable.”
  • “When my tempdb drive fills up, I hook up a USB drive and create a new tempdb file there.”
  • “I randomly kill other people’s queries, just to make my day interesting.”
  • “I’ve turned off database mirroring on all production sql servers as a quick fix for performance issues.”
  • “I’m not really a DBA…but I did stay at a Holiday Inn Express last night.”
  • “I am not a DBA, but I sleep with one.”
  • “I don’t enforce refferential integrity . Chaos is much more fun.”
  • “I play Lady Gaga without remorse before presentations now.” (Breaking my rule; giving @SQLChicken credit for this one)
  • “I’ve given a developer db_owner just to shut them up.”
  • “I like clustered GUIDs. They make it really easy to speed up a database. By unclustering them.”
  • “I tried installing SQL 2008 on Microsoft Bob.”

Have a Grateful day…Troy

Posted in Best Practices | Comments (3)

Winner Winner Chicken Dinner…not quite.

February 16th, 2010
by The Grateful DBA

While it’s all in good fun, it’s very cool that I made it into the semi-finals of Thomas LaRock’s “Name That Caption” contest. At this point, the public will vote for the top three, and from there, Thomas is going to choose the one he likes best. The winner gets a free book (one I would most likely pay for anyway, but don’t tell Thomas), but more importantly, the winner gets bragging rights amongst all my noisy Twitter friends. Oh yeah, that’s worth it all!

Links:
    – The original post
    – My entries
    – The voting page

So if you haven’t already, head on over to the third link there and throw your vote into the mix. Sure, I’d prefer to have you vote for me, but the other four are clever enough that I certainly wouldn’t blame you for voting for ANY of the five. It’s all for fun, so best of luck to all.

Have a Grateful day…Troy

Posted in Funny Stuff | Comments (0)
