Ola Hallengren’s Maintenance Solution Command Log Analysis

October 23rd, 2015
by The Grateful DBA

Yeah yeah yeah, it’s been a while since I blogged last. Spent a glorious two years in New York City, but am very happily back in Jacksonville now. That whole story is a blog post in itself, so won’t get into it here. Suffice to say, it’s nice to be back home…full of renewed energy and raring to go. This is me…
To say that a large number of shops use Ola Hallengren’s maintenance solution is probably the understatement of the decade. It has proven itself over the years to be as thorough, easy-to-use, and flexible as any free, custom, or paid backup/index maintenance solution out there…and there are MANY out there, so that’s saying a lot. The balance of this post assumes you’re familiar with its workings, capabilities, & output. While there’s a lot of verbiage out there in blogland about how to use and set up the routines, there isn’t a lot to be found on how one might want to analyze the output. So, I’ve put together over the years a few scripts I’d love to share.

Within Ola’s scripts, there exists the ability to log all activity to a table called “CommandLog”. I highly recommend turning this option on for every part of the solution you employ. This means the same table will contain rows for each backup (full, t-log, or diff) taken, each index maintained (rebuilt or reorganized), each statistic updated, as well as for each time you use the scripts to perform DBCC integrity checks. This table of metadata is extremely valuable in analyzing exactly what took place the night before, the last month, etc. For the most part, turning on the logging is simply a matter of including a @LogToTable = 'Y' parameter in the procedure call. Easy peasy.
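For example, turning logging on for a backup job and an index maintenance job looks something like this (the backup directory is a placeholder; adjust the other parameters to your own setup):

```sql
-- Full backups of all user databases, logged to dbo.CommandLog
EXECUTE dbo.DatabaseBackup
  @Databases = 'USER_DATABASES',
  @Directory = 'X:\Backup',     -- placeholder path
  @BackupType = 'FULL',
  @LogToTable = 'Y'

-- Index & statistics maintenance, also logged
EXECUTE dbo.IndexOptimize
  @Databases = 'USER_DATABASES',
  @UpdateStatistics = 'ALL',
  @LogToTable = 'Y'
```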

One thing I really like to know is how many index maintenance operations were performed. It may be the case that normal business processes (combined with less-than-stellar schema) fragment your indexes on a regular basis and you don’t have any control over it. This might be so, for instance, when you do a lot of writes against a clustering key that places new rows in one of the middle extents, not at the end. This causes page splits and natural fragmentation. As DBAs, it’s our job to limit fragmentation as much as possible. Maybe we alter the cluster key. Maybe we alter the fill factor. Maybe we change the INSERT process to place new rows at the end. Whatever methodology you employ, you’d like to see some sort of reduction in the number of rebuilds and then, eventually, reorgs as well. This is what I use to track them…

SELECT CONVERT(DATE, c.starttime) Work_Date, COUNT(*) Reorg_Count
FROM commandlog c
WHERE c.command like '%reorganize%'
  AND NOT EXISTS (SELECT 1 FROM IndexExceptions ie WHERE ie.IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.starttime)

SELECT CONVERT(DATE, c.starttime) Work_Date, COUNT(*) Rebuild_Count
FROM commandlog c
WHERE c.command like '%rebuild%'
  AND NOT EXISTS (SELECT 1 FROM IndexExceptions ie WHERE ie.IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.starttime)

The “IndexExceptions” table reference you see in the 4th row of each SELECT refers to a small table I keep to exclude any indexes from my counts that I know for 100% sure I’m going to leave be, and I don’t want them in my report. Its simple schema is…

CREATE TABLE [dbo].[IndexExceptions](
  [IndexExceptionID] [int] IDENTITY(1,1) NOT NULL,
  [IndexName] [varchar](200) NOT NULL,
  [Reason] [varchar](max) NULL
)
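Adding an exclusion is then just a quick INSERT (the index name and reason below are made-up examples):

```sql
INSERT INTO dbo.IndexExceptions (IndexName, Reason)
VALUES ('IX_Customer_LastName', 'Write-heavy staging index; fragmentation is expected and harmless')
```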

When I want to analyze exactly how long my index maintenance operations are taking, I use the following…

SELECT commandtype Activity,
  CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) Work_Date,
  COUNT(*) How_Many,
  AVG(DATEDIFF(ms,starttime,endtime)) Avg_ms,
  AVG(DATEDIFF(s,starttime,endtime)) Avg_Secs,
  AVG(DATEDIFF(mi,starttime,endtime)) Avg_Mins
FROM commandlog
WHERE commandtype NOT IN ('xp_create_subdir','xp_delete_file','RESTORE_VERIFYONLY','BACKUP_DATABASE')
GROUP BY commandtype
  ,CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME)
ORDER BY CommandType, CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) DESC

…use the WHERE clause to change what you see and what you don’t.

This snippet will tell you how many times individual indexes were rebuilt/reorganized in a specific period. Adjust both parts of the WHERE clause as necessary…

SELECT DatabaseName, ObjectName, IndexName, COUNT(*) CommandCount
FROM dbo.CommandLog
WHERE commandtype = 'ALTER_INDEX'
  AND DATEDIFF(d,starttime,GETDATE()) < 14
GROUP BY DatabaseName, ObjectName, IndexName
ORDER BY CommandCount DESC

And this SELECT tells us important information about a single index, including its current fill factor (in case you want to lower it and, in theory, reduce splits and fragmentation), the current fragmentation percentage, as well as the size of the index. Again, change the WHERE clause as necessary…

SELECT c.starttime,
  c.objectname [table],
  i.fill_factor,
  CAST(CAST(c.extendedinfo AS XML).query('data(/ExtendedInfo/Fragmentation)') AS VARCHAR(10)) AS fragmentation,
  CAST(CAST(c.extendedinfo AS XML).query('data(/ExtendedInfo/PageCount)') AS VARCHAR(10)) AS [pagecount]
FROM commandlog c
JOIN sys.indexes i ON i.name = c.indexname
WHERE c.indexname = 'IndexNameHere'
  AND c.commandtype LIKE '%alter%'
ORDER BY c.databasename, c.objectname, c.starttime DESC

These are just some of the things you can do with the CommandLog table and the plethora of useful information it contains. Of course, feel free to use these scripts, adjust them, and let me know how it works for you.

Thanks and have a grateful day…Troy

Posted in Backups, Index Tuning, SSMS, T-SQL | Comments (0)

Converting a SQL Execution Plan from XML to Graphical

March 29th, 2012
by The Grateful DBA

While I’ve been familiar for many years with the ‘sys.dm_exec_query_plan’ DMV that will produce an XML-formatted version of an execution plan (using the ‘query_plan’ column), I had to refresh my memory today on just how to convert it to something a bit more readable.  Well, today I get to refresh your memory as well.

I’m probably not exaggerating when I say 99% of us DBAs prefer to look at the graphical version of an (actual) execution plan over the XML gobbledygook.  While even the graphical version can get extremely large and hard to deal with, it’s nothing like trying to find what you need from an XML file of the same plan.  Today I wrote a small script to gather performance metrics by joining ‘sys.dm_exec_query_stats’, ‘sys.dm_exec_sql_text’, and ‘sys.dm_exec_query_plan’.  The query_plan column produces a very handy hyperlink to the execution plan currently in cache, but by default it comes out in XML format.  So, how can I change it to a graphical version so that I can much more easily iterate through it and see what scans, seeks, etc. are being used?

Just like a waltz, it turns out it is indeed a very simple three-step process…

    1. Click the actual hyperlink and let the XML plan load in SSMS
    2. Click File > Save Plan.xml As…
    3. Save it locally with the file extension “.sqlplan”

Really, that’s all there is to it.  Just double-click this new “.sqlplan” file and it should open in SSMS as a graphical plan….ready for you to peruse to your heart’s content.  Told you it was easy.
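The kind of script I mentioned above can be sketched like this (the metric columns I chose here are illustrative; pick whichever ones you care about):

```sql
-- Top cached plans by CPU, each row with a clickable query_plan XML link
SELECT TOP (25)
  qs.execution_count,
  qs.total_worker_time  AS total_cpu,
  qs.total_elapsed_time,
  st.text               AS sql_text,
  qp.query_plan         -- the XML hyperlink; save it as .sqlplan per the steps above
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC
```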

Have a grateful day…Troy

Posted in DMV's, SSMS, T-SQL | Comments (0)

Which JOIN Should I Use?

February 16th, 2011
by The Grateful DBA

A couple years ago, I purchased Paul Nielsen’s wonderful book “SQL Server 2005 Bible” published by Wiley.  It includes a small sidenote-type box discussing an excellent parable regarding joins.  Paul’s analogy comes to mind almost every time I create a JOIN.  Remembering it alone is sufficient to refresh my memory on which JOIN does what.  It’s easy to remember, makes its point, and is succinct; what more could you ask?

So, I thought I would re-tell Paul’s example in order that this may possibly reach an individual or two that doesn’t have this book.  This will help both new and old SQL’ites alike.

Consider a very old style, ultra-conservative church where men are expected to sit on one side of the aisle (the right for the sake of this example), and women on the other (the left).  Married couples can indeed attend this church, but they still must sit on their respective sides.  The minister instructs all married individuals whose spouse is on the other side of the aisle to stand up, grab their partner’s hand, and file out of the church.  This is our INNER JOIN.  Only matched “pieces” from each side are considered in the output.

Now imagine that instead of married individuals only, the minister instructs all the men to stand up.  Those that are married are to grab the hand of their spouse on the other side of the aisle, and then all men and only the married women are exited from the church.  Here is our OUTER JOIN; a RIGHT one in this case.  Our output consists of all members of the right group and only matches from the left.

Conversely,  perhaps the minister instead does the opposite.   He instructs all women to stand, grab their husband’s hand, and then all women and their partners exit the building.  This is our LEFT OUTER JOIN.  All members from the left group are included and as well only matches from the right group come along.

Or instead, the minister instructs everyone to stand up, all the married folks to grab hands, and then everyone leaves the building.  This is our FULL [OUTER] JOIN!  Everything is included in the output, but matched couples are still sent out together.
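In T-SQL terms, with a hypothetical pair of tables for the two sides of the aisle (matched by a shared CoupleID, NULL for singles), the four scenarios differ only in the JOIN keyword:

```sql
-- Women sit on the left side of the aisle, Men on the right
SELECT w.Name, m.Name
FROM Women w
INNER JOIN Men m ON m.CoupleID = w.CoupleID   -- married couples only

-- LEFT JOIN:  all women, plus their husbands where matched
-- RIGHT JOIN: all men, plus their wives where matched
-- FULL JOIN:  everyone, with couples still paired up
SELECT w.Name, m.Name
FROM Women w
FULL JOIN Men m ON m.CoupleID = w.CoupleID
```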

That’s it.  Simple, but extremely useful when any bit of doubt creeps into your mind about which JOIN may be appropriate in a specific situation.  Again, I give credit where it is due; and this time it is to Paul Nielsen for teaching me this wonderful memory aid.

Another post coming in the next couple days to announce a job change for me that I’m very very excited about.  Look for that by this time next week.

Have a grateful day…Troy

Posted in T-SQL | Comments (0)

SQL Saturday #62 – Tampa, FL 2011

January 4th, 2011
by The Grateful DBA

Many of my colleagues have produced short blog posts regarding their participation in SQL Saturday #62 in Tampa next weekend.  Here are just a few I’ve seen so far, but I’m sure there are many more:

As you may have guessed, I too have the honor and pleasure to have been asked to speak at this fine event.  I will be doing my session on “An Introduction to Transactional Replication.”  Ok, I know that topic isn’t going to bring sexy back, but TR is a very real option for many SQL shops for redundancy, reporting, offloading, and many other potential reasons.  This is a beginners session, so if you do attend it, understand the level of the intended audience.

This occasion kicks off what I expect to be a banner year for the SQL Saturday franchise as it is (naturally) the first one of 2011.  I know that Pam & Jose are working harder than ever to make this weekend a grand one; for both the speakers and attendees.  Sometimes it gets lost that the speakers are attendees as well.  I’ll be speaking from 11:00AM – 12:00 Noon, but will be attending sessions the entire rest of the day.  This year’s line-up is fantastic from start to finish; with sessions covering topics from SSIS, to replication, to Powershell, and a whole lot more.  The real problem is how do you pick just one when there are 3 or 4 per time slot you’d like to see!!  Decisions decisions.

They are doing something a bit different this year on the Friday before the regular conference; what’s being called a “Day of Data”.  For $99, one can attend day-long sessions with either Denny Cherry on storage and virtualization, or Stacia Misner’s introduction to SSIS.  Both these folks are excellent speakers and really know their stuff.  If you have the time and resources, I highly recommend attending one of these sessions.

And most importantly, if you do attend this event, your mandatory assignment is to introduce yourself to the Grateful DBA (if we don’t know each other already.)  Please don’t be shy and do say hello.  I’d love to meet the three people that read my blog.  See you there!!

Have a grateful day…Troy

Posted in Speaking | Comments (1)

2010 – The Grateful DBA Year in Review

December 30th, 2010
by The Grateful DBA

Hello there all!  Hippiest and healthiest of holiday seasons to each and every one of you.  Several of my colleagues have recently penned reviews of their past year in not only the blogging, but the speaking, job, and family arenas as well.  And after all, while indeed this is a technical blog, work and SQL Server are just small pieces of this puzzle we call life.  I thought I’d do the same.  Certainly this can become an annual tradition where every 12 months I get the “pleasure” of a summary of my professional life.  This will be an excellent way to gauge both personal and professional undertakings year over year.  See, the statistical nerd in me always comes out eventually.  So, we brave forward…

  • January
    • 1.12.10 – Blog post – “Shrinking SQL Server Transaction Logs in Simple Recovery Mode”
    • 1.17.10 – Blog post – “What Three Events Brought You Here?”
    • 1.23.10 –  SQL Saturday #32 – Tampa, FL –  “Introduction to Transactional Replication”
  • February
    • 2.2.10 – Blog post – “DBA Survivor Name That Caption Contest”
    • 2.16.10 – Blog post – “Winner Winner Chicken Dinner….not quite.”
    • 2.24.10 – Blog post – “What NOT To Do. True SQL Confessions”
  • March
    • 3.1.10 – Blog post – “FTI Catalog Relocation”
    • 3.29.10 – Blog post – “What Teachers Make”
  • April
    • 4.1.10 – Blog post – “Missing Clustered Indexes w/ Row Counts”
    • 4.12.10 – Blog post – “Spring / Summer Speaking Schedule”
    • 4.24.10 – SQL Saturday #41 – Atlanta, GA – “Introduction to Transactional Replication”
  • May
    • 5.8.10 – SQL Saturday #38 – Jacksonville, FL – “Introduction to Transactional Replication”
    • 5.17.10 – Started new job at LPS in Jacksonville, FL
  • June
    • 6.5.10 – SQL Saturday #22 – Pensacola, FL – “Introduction to Transactional Replication”
  • July
    • 7.22.10 – Blog post – “SSMS Default Save Location”
    • 7.31.10 – SQL Saturday #40 – Miami / Ft. Lauderdale, FL – “Introduction to Transactional Replication”
    • 7.31.10 – SQL Saturday #40 – Miami / Ft. Lauderdale, FL – “Log Shipping vs. Replication; the Great Debate”
  • August
    • 8.8.10 – Blog post – “Grateful DBA Updates”
  • September
  • October
  • November
    • 11.4.10 – Blog post – “What’s  Your Favorite SSMS Font?”
    • 11.19.10 – Blog post – “UnSQL-Friday 001 – Branding & Bunny Murder”
    • 11.26.10 – Blog post – “Dropping Multiple Tables at Once via T-SQL”
    • 11.29.10 – Blog post – “Copy/Paste from SSMS to Excel”
  • December
    • 12.2.10 – Blog post – “Which is Faster; the Chicken or the Egg?”
    • 12.30.10 – Blog post – “2010 – The Grateful DBA Year in Review”

I suppose I should examine this list closely and make some spur-of-the-moment conclusions.  Ok then…

Firstly, I’m very happy with the number of times I’ve had the honor of speaking in front of my colleagues and new SQL professionals at the various SQL Saturdays & user group meetings I’ve attended.  I consider myself extremely blessed to be asked by the event organizers to have the opportunity to pass along my subject-knowledge on seven different occasions in the last 12 months.  Looking at it, I’m not sure seven is a number that can be improved upon too much during 2011.  There’s only one SQL Saturday per user group per year, and JSSUG rotates its speakers and has an excellent pool of local resources to choose from.  Some months it seems as if half of Pragmatic Works attends our meetings and that company is chock-full of very smart individuals.  I suppose I could travel a little further and attend additional SQL Saturdays (Baton Rouge, Houston, Columbia, etc).  As well, there are larger conferences (SQLRally, PASS Summit) that I have yet to attend or speak at; those options are still open as well I guess.  I only have so many vacation days people!!

I’m less pleased with the number of technical articles I posted this year.  Nineteen isn’t anywhere near enough.  There are specific months I’m pretty happy about my output; November for instance.  But then there are months that I went completely through without a single post (June, September), and there isn’t ANY valid excuse why not.  It’s all a matter of will-power to sit at the desk, turn off Warcraft for a change, and pump out some serious verbosity.  This is one area I’m extremely serious about improving over the next 12 months.

Lastly, my new job with LPS.  Making the change to come over to LPS from Taleo was one of the easiest things I’ve had to do this year.  It’s no surprise to anyone who knows me, how unhappy I was at Taleo.  As much as 95% of the people there were very cool and awesome to be around; the other 5% made my life miserable.  Names aren’t important; what is, is that I corrected the situation and now I’m in a place I’m MUCH happier professionally.  Some days I don’t feel like I’m being tested enough, but then, some days I am, and other days I simply don’t want to be.  For the most part, I truly like the people I work with here and I’ve made some close friends that I know will be that way for a long time in the future.  The only thing I’m less-than-pleased about is the 23 mile journey from the beach where I live to downtown Jacksonville.  I could definitely do without that hour to an hour and a half in my car each day, but sometimes one has to make sacrifices.  And this is exactly what I did.

Overall, it was a great 2010.  My career has moved forward in leaps and bounds, and I’ve had the honor of the company of some of the best SQL minds in the country by my side at SQL Saturdays.  In 2011, I resolve to at least double my blogging output and to speak at least the same number of professional events (which btw, will commence with SQL Saturday #62 in a couple weeks in Tampa!)

Mahalo everyone and have a grateful new year…Troy

Posted in Musings, Speaking | Comments (0)

Which is Faster; the Chicken or the Egg?

December 2nd, 2010
by The Grateful DBA

Due to a company policy, all UPDATEs at my office eventually headed to prod are required to be based solely off a primary key and must be written out one line at a time.  This applies whether the PKs are sequential or not.  Yesterday I received a script for approval that contained 14,000 individual UPDATE statements based off an identity key and comprising 100% of the table.  So imagine if you will (said in my best Rod Serling voice) 14,000 of these…

UPDATE Customer SET firstName = 'Foo' WHERE id = 1
UPDATE Customer SET firstName = 'Foo' WHERE id = 2
UPDATE Customer SET firstName = 'Foo' WHERE id = 3

It occurred to me that as long as we have sequential ID’s to update off of, then it had to be substantially quicker to do a single UPDATE using a BETWEEN clause. So, 14,000 of the above statements turn in to just one…

UPDATE Customer SET firstName = 'Foo' WHERE id BETWEEN 1 AND 14000

But just how much faster is the BETWEEN vs. individual UPDATEs? Turns out the answer is "way way way". Unfortunately, "way way way" isn’t extremely scientific. So the geek in me had to see exactly how much faster and wrote some scripts to test it out.

On a local SQL Server 2008 R2 installation (no network interference), I filled two local temp tables with random numbers between 1 & 50. I then simply updated this value to ’51’ both ways. First one row at a time based off the incrementing PK, and secondly all at once using BETWEEN. The difference even surprised me. At all levels I tested, it was over 99% faster, and the larger the sample size, the more significant the time savings. The results…

Rows    Individual (ms)   BETWEEN (ms)   BETWEEN vs. Individual   Time Saved
500     641.8             4.8            0.75%                    99.25%
5000    6837.8            19.6           0.29%                    99.71%
10000   15742.4           51.6           0.33%                    99.67%
15000   28546.4           63.0           0.22%                    99.78%
20000   63462.6           95.8           0.15%                    99.85%

Ok, so what conclusions can we make from this data?

  1. Not only is it faster using the BETWEEN operator, but it’s significantly faster. Over 99% faster in the sample sizes I used
  2. The time savings (percentage-wise) tends to increase as the number of UPDATEs increase as well. Good to know because this may be something to take into account when doing that pesky 1,000,000 row update; but not so much when just on a couple rows.
  3. The back-and-forth, back-and-forth, back-and-forth REALLY adds up over a larger sample size. For example; going from just 500 to 5000 UPDATEs, we’ve increased the sample size 10-fold, the BETWEEN was 4x as long, yet the individual UPDATEs took over 10x as long!
  4. We bloggers will write about anything SQL related no matter how obvious it may be if it offers even a glimmer of additional insight into our product.
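A sketch of the kind of test I ran (the row count shown is just one of the sample sizes; table and column names here are made up):

```sql
-- Two identically filled temp tables, one per method
CREATE TABLE #one_by_one  (id INT IDENTITY(1,1) PRIMARY KEY, val INT)
CREATE TABLE #all_at_once (id INT IDENTITY(1,1) PRIMARY KEY, val INT)

INSERT INTO #one_by_one (val)
SELECT TOP (5000) ABS(CHECKSUM(NEWID())) % 50 + 1   -- random 1-50
FROM sys.all_objects a CROSS JOIN sys.all_objects b

INSERT INTO #all_at_once (val)
SELECT val FROM #one_by_one

-- Method 1: one UPDATE per row
DECLARE @start DATETIME2 = SYSDATETIME(), @i INT = 1
WHILE @i <= 5000
BEGIN
  UPDATE #one_by_one SET val = 51 WHERE id = @i
  SET @i += 1
END
SELECT DATEDIFF(ms, @start, SYSDATETIME()) AS individual_ms

-- Method 2: a single UPDATE with BETWEEN
SET @start = SYSDATETIME()
UPDATE #all_at_once SET val = 51 WHERE id BETWEEN 1 AND 5000
SELECT DATEDIFF(ms, @start, SYSDATETIME()) AS between_ms

DROP TABLE #one_by_one, #all_at_once
```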

Have a Grateful day…Troy

Posted in Performance Tuning, T-SQL | Comments (3)

Copy/Paste From SSMS to Excel

November 29th, 2010
by The Grateful DBA

From the “did you know” category…

Did you know you could highlight an object in SSMS, right-click it, and drag it to Excel? I often have the need to list procedures in Excel and the “old” copy/paste methodology becomes cumbersome when you have to do it over and over again. But as I said; if you right-click and drag an object across your desktop to Excel, you can then un-click and have the option to move or copy it at your destination. This is very handy, particularly in a dual monitor setup where Excel’s open on one side and SSMS on the other.

By the way, before you ask, yes it works just as well with your destination being any of Microsoft’s Office Suite. I tested it in Word & Outlook just now and it works dandy. Not sure why I’m so surprised these Office applications work well together like that; I guess I shouldn’t be.

Have a Grateful day…Troy

Posted in Office, SSMS | Comments (3)

Dropping Multiple Tables at Once Via T-SQL

November 26th, 2010
by The Grateful DBA

From time to time we have the need to drop multiple tables within a single stored procedure. Were you aware that it can be done in single DROP TABLE statement? SELECTing multiple columns, constants, variables, etc. in a single SELECT is one of the first things we learn as SQL newbies. But dropping two or more tables at the same time, while slightly less common, is just as convenient.

So, instead of…

DROP TABLE Table1
DROP TABLE Table2
DROP TABLE Table3

we can…

DROP TABLE Table1, Table2, Table3

Not only can we do this with normal tables, we can also drop temp tables, both local and global, at the same time. How convenient is that?

This little block of code will create normal, local temp, & global temp tables, fill them in with sample data, show the data’s there, drop the tables using this “new” syntax, then show that all three tables are indeed gone.

USE tempdb

-- Create a local temp table
CREATE TABLE #localTempTable (i INT)

-- Fill the local temp table with sample data
INSERT INTO #localTempTable
SELECT id FROM sysobjects

-- Show that the local temp table is filled
SELECT COUNT(i) localTempTable FROM #localTempTable


-- Create a global temp table
CREATE TABLE ##globalTempTable (i INT)

-- Fill the global temp table with sample data
INSERT INTO ##globalTempTable
SELECT id  FROM sysobjects

-- Show that the global temp table is filled
SELECT COUNT(i) globalTempTable FROM ##globalTempTable


-- Create a real table
CREATE TABLE RealTable (i INT)

-- Fill the real table with sample data
INSERT INTO RealTable
SELECT id  FROM sysobjects

-- Show that the real table is filled
SELECT COUNT(i) realTable FROM RealTable


-- Drop the tables using the "new" syntax
DROP TABLE #localTempTable, ##globalTempTable, RealTable


-- Show that the objects are indeed gone
IF OBJECT_ID(N'tempdb..#localTempTable', N'U') IS NULL
  SELECT '#localTempTable is gone'
ELSE
  SELECT '#localTempTable is still there'
IF OBJECT_ID(N'tempdb..##globalTempTable', N'U') IS NULL
  SELECT '##globalTempTable is gone'
ELSE
  SELECT '##globalTempTable is still there'
IF OBJECT_ID(N'tempdb..RealTable', N'U') IS NULL
  SELECT 'RealTable is gone'
ELSE
  SELECT 'RealTable is still there'

And while these days it seems like we DELETE less and archive a lot more, it’s still nice to know that we do have a couple options when it comes to performing this most basic, but vital operation in SQL Server.

Have a grateful day…Troy

Posted in T-SQL | Comments (1)

Un-SQL Friday 001 – Branding & Bunny Murder

November 19th, 2010
by The Grateful DBA

Jen McCown in her SQL Awesomesauce blog today raised the challenge for us to wax poetic on the topic of ‘branding’ for the ceremonious very first Un-SQL Friday. I’m sure more will follow as the day progresses, but as of now, I’ve seen only two fellow bloggers chiming in; Andy Leonard & Thomas LaRock. Well, here’s my two cents on this very interesting topic.

Those six of you who come here and also follow me on Twitter, may be already aware of my appreciation for the musings of Seth Godin. While I’ve never been employed in any formal capacity as a marketer, I’ve recently had quite an internal debate over branding; and specifically my own. As he often does, Seth puts it quite succinctly:

Take the time and money and effort you'd put into an expensive logo and put them into creating a product and experience and story that people remember instead.

And while a free Twitter account and a $4.99 monthly investment to Go-Daddy for this blog don’t necessarily qualify as “money and effort” or an “expensive logo”, on my smaller scale, the decision to brand myself as the Grateful DBA ultimately comes down to just what Seth preaches. Who are you and what do you want to be known for? What does your brand say about you? And maybe most importantly, what will your brand say about you in the future? We live in an electronic age; nothing goes away forever anymore. Those familiar with the saga of our poor friend “@SQLTech2” on Twitter can certainly attest to this.

On the most basic level, we need to think about whether we should spend effort and time with a brand that may or may not even be appropriate one, two, or even 10 years down the line. While for the foreseeable future I see myself continuing in my career as a SQL Server DBA, who knows what next year may hold. Maybe I’ll move into a developer role sometime again; perhaps not even with good ol’ T-SQL, maybe even C#, Java, or who knows what. Similarly, what if in the future it comes out that Jerry Garcia was the owner of an international consortium of bunny-killers? Well, I may have to seriously reconsider being associated with such a character. Well, maybe not, bunnies CAN get pretty vicious you know…

We also have to consider if people are even going to understand what your brand means, right? It occurs to me that perhaps in some third-world corner of the globe, they’ve never even heard of the Grateful Dead. Blasphemy…I know! But let’s play make believe then. The next question when they hit my site via a Bing search is of course, “what the heck is he all grateful about?” I concede this possibility tongue-in-cheek, but it’s a real issue to consider.

Once your audience knows what your brand means/refers to, what connotations does that carry with it? Are these connections something you wish to carry with you? Who would you rather do business with, “The #1 Microsoft DBA 5-years Running” or “The Crackhead DBA”?? Impressions before knowledge is dangerous territory, but unfortunately reality as well. By the way, the first guy was just arrested for embezzlement and is a polygamist. The second gal’s actual given name is “Crackhead”; don’t blame her, it’s mom & dad’s fault.

Ultimately, what it came down to for me was that I’m not ashamed in any way to be associated with the band and people that have meant so much to me in the last 20 years of my life. Whereas some may carry the notion with them that the Dead are only about sex, drugs, & rock-n-roll; any truly informed person knows the band was about peace, order, & caring for your fellow man. None of those are EVER anything to be embarrassed about. So, the decision to brand myself “The Grateful DBA” was made quite easily. The band, its scene, & followers have actually MADE me what I am today; a more-than-competent DBA and a valued contributor to the SQL community and the world in general.

So, what does your brand say about you?

Have a Grateful day…Troy

Posted in Branding, Grateful Dead | Comments (2)

What’s Your Favorite SSMS Font?

November 4th, 2010
by The Grateful DBA

A lot of us DBAs spend day after day looking at our SSMS windows in the “Courier New” font. It’s the default. It’s the one you learned from. It’s comfortable because it’s what we’re used to. Microsoft provides us a big, long list for us to choose from should we wish to mix things up.

Do you indeed change it from the default, or are you one of those “leave well enough alone” kind of DBAs? Choose one below and I’ll post the results in a week or two.

Have a grateful day…Troy

What is your favorite SSMS font?



Posted in SSMS | Comments (1)
