Posts Tagged ‘2008’

Ola Hallengren’s Maintenance Solution Command Log Analysis

October 23rd, 2015

Yeah yeah yeah, it’s been a while since I blogged last. I spent a glorious two years in New York City, but am very happily back in Jacksonville now. That whole story is a blog post in itself, so I won’t get into it here. Suffice it to say, it’s nice to be back home…full of renewed energy and raring to go. This is me…
To say that a large number of shops use Ola Hallengren’s maintenance solution is probably the understatement of the decade. It has proven itself over the years to be as thorough, easy-to-use, and flexible as any free, custom, or paid backup/index maintenance solution out there…and there are MANY out there, so that’s saying a lot. The balance of this post assumes you’re familiar with its workings, capabilities, & output. While there’s a lot of verbiage out there in blogland about how to set up and use the routines, there isn’t much to be found on how one might analyze the output. So, I’ve put together over the years a few scripts I’d love to share.

Within Ola’s scripts, there exists the ability to log all activity to a table called “CommandLog”. I highly recommend turning this option on for every part of the solution you employ. This means the same table will contain rows for each backup (full, t-log, or diff) taken, each index maintained (rebuilt or reorganized), each statistic updated, as well as for each time you use the scripts to perform DBCC integrity checks. This table of metadata is extremely valuable in analyzing exactly what took place the night before, the last month, etc. For the most part, turning on the logging is simply a matter of including a “@LogToTable = ‘Y'” parameter in the procedure call. Easy peasy.
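For example, an IndexOptimize job step with logging turned on might look like the following. The parameter values here are just illustrative; check Ola’s documentation for the full list and the defaults that fit your environment.

```sql
-- Illustrative IndexOptimize call with CommandLog logging enabled
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @LogToTable = 'Y'
```

The same @LogToTable = 'Y' parameter works on DatabaseBackup and DatabaseIntegrityCheck as well, so all three parts of the solution can feed the one CommandLog table.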

One thing I really like to know is how many index maintenance operations were performed. It may be the case that normal business processes (combined with less-than-stellar schema) fragment your indexes on a regular basis and you don’t have much control over it. This might be so, for instance, when you do a lot of writes against a clustering key that places new rows in one of the middle extents and not at the end. This will cause page splits and natural fragmentation. As DBAs, it’s our job to limit fragmentation as much as possible. Maybe we alter the cluster key. Maybe we alter the fill factor. Maybe we change the INSERT process to place new rows at the end. Whatever methodology you employ, you’d like to see some sort of reduction in the number of rebuilds and, eventually, reorgs as well. This is what I use to track them…

SELECT CONVERT(DATE, c.starttime) Work_Date, COUNT(*) Reorganizes
FROM CommandLog c
WHERE c.command LIKE '%reorganize%'
  AND NOT EXISTS (SELECT 1 FROM IndexExceptions WHERE IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.starttime)

SELECT CONVERT(DATE, c.starttime) Work_Date, COUNT(*) Rebuilds
FROM CommandLog c
WHERE c.command LIKE '%rebuild%'
  AND NOT EXISTS (SELECT 1 FROM IndexExceptions WHERE IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.starttime)

The “IndexExceptions” table reference you see in the 4th row of each SELECT refers to a small table I keep to exclude from my counts any indexes I know for 100% sure I’m going to leave be; I don’t want them in my report. Its simple schema is…

CREATE TABLE [dbo].[IndexExceptions](
  [IndexExceptionID] [int] IDENTITY(1,1) NOT NULL,
  [IndexName] [varchar](200) NOT NULL,
  [Reason] [varchar](max) NULL
)

When I want to analyze exactly how long my index maintenance operations are taking, I use the following…

SELECT commandtype Activity,
  CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) Work_Date,
  COUNT(*) How_Many,
  AVG(DATEDIFF(ms,starttime,endtime)) Avg_ms,
  AVG(DATEDIFF(s,starttime,endtime)) Avg_Secs,
  AVG(DATEDIFF(mi,starttime,endtime)) Avg_Mins
FROM commandlog
WHERE commandtype NOT IN ('xp_create_subdir','xp_delete_file','RESTORE_VERIFYONLY','BACKUP_DATABASE')
GROUP BY commandtype
  ,CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME)
ORDER BY CommandType, CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) DESC

…use the WHERE clause to change what you see and what you don’t.

This snippet will tell you how many times individual indexes were rebuilt/reorganized in a specific period. Adjust both parts of the WHERE clause as necessary…

SELECT DatabaseName, ObjectName, IndexName, COUNT(*) CommandCount
FROM dbo.CommandLog
WHERE commandtype = 'ALTER_INDEX'
  AND DATEDIFF(d,starttime,GETDATE()) < 14
GROUP BY DatabaseName, ObjectName, IndexName
ORDER BY CommandCount DESC

And this SELECT tells us important information about a single index, including its current fill factor (in case you want to lower it and, in theory, reduce splits and fragmentation), the current fragmentation percentage, and the size of the index. Again, change the WHERE clause as necessary…

SELECT c.starttime,
  c.objectname [table],
  c.indexname [index],
  i.fill_factor,
  CAST(CAST(c.extendedinfo AS XML).query('data(/ExtendedInfo/Fragmentation)') AS VARCHAR(10)) AS fragmentation,
  CAST(CAST(c.extendedinfo AS XML).query('data(/ExtendedInfo/PageCount)') AS VARCHAR(10)) AS [pagecount]
FROM CommandLog c
JOIN sys.indexes i ON = c.indexname
WHERE c.indexname = 'IndexNameHere'
  AND c.commandtype LIKE '%alter%'
ORDER BY c.databasename, c.objectname, c.starttime DESC

These are just some of the things you can do with the CommandLog table and the plethora of useful information it contains. Of course, feel free to use these scripts, adjust them, and let me know how it works for you.

Thanks and have a grateful day…Troy

Posted in Backups, Index Tuning, SSMS, T-SQL | Comments (0)

Which JOIN Should I Use?

February 16th, 2011

A couple years ago, I purchased Paul Nielsen’s wonderful book “SQL Server 2005 Bible,” published by Wiley.  It includes a very small sidenote-type box discussing an excellent parable regarding joins.  Paul’s analogy comes to mind almost every time I create a JOIN.  Remembering it alone is sufficient to refresh my memory on which JOIN does what.  It’s easy to remember, makes its point, and is succinct; what more could you ask?

So, I thought I would re-tell Paul’s example in order that this may possibly reach an individual or two that doesn’t have this book.  This will help both new and old SQL’ites alike.

Consider a very old-style, ultra-conservative church where men are expected to sit on one side of the aisle (the right, for the sake of this example), and women on the other (the left).  Married couples can indeed attend this church, but they still must sit on their respective sides.  The minister instructs all married individuals whose spouse is on the other side of the aisle to stand up, grab their partner’s hand, and file out of the church.  This is our INNER JOIN.  Only matched “pieces” from each side are considered in the output.

Now imagine that instead of married individuals only, the minister instructs all the men to stand up.  Those that are married are to grab the hand of their spouse on the other side of the aisle, and then all men and only the married women are exited from the church.  Here is our OUTER JOIN; a RIGHT one in this case.  Our output consists of all members of the right group and only matches from the left.

Conversely,  perhaps the minister instead does the opposite.   He instructs all women to stand, grab their husband’s hand, and then all women and their partners exit the building.  This is our LEFT OUTER JOIN.  All members from the left group are included and as well only matches from the right group come along.

Or instead, the minister instructs everyone to stand up, all the married folks to grab hands, and then everyone leaves the building.  This is our FULL [OUTER] JOIN!  Everything is included in the output, but matched couples are still sent out together.
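For those who like to see the parable in code, here is a minimal sketch. The tables, names, and the marriageID column are mine, not Paul’s; a NULL marriageID marks a single congregant.

```sql
-- The congregation: men and women, matched up by a marriage ID.
CREATE TABLE #Men   (name VARCHAR(20), marriageID INT);
CREATE TABLE #Women (name VARCHAR(20), marriageID INT);

INSERT INTO #Men   VALUES ('Adam', 1), ('Bob', NULL);
INSERT INTO #Women VALUES ('Alice', 1), ('Carol', NULL);

-- INNER JOIN: only the married couples file out.
SELECT, FROM #Men m
INNER JOIN #Women w ON m.marriageID = w.marriageID;

-- LEFT OUTER JOIN: every man leaves; wives come along only where matched.
-- (With #Men written first, this is the "all the men stand up" scene.)
SELECT, FROM #Men m
LEFT JOIN #Women w ON m.marriageID = w.marriageID;

-- FULL OUTER JOIN: everyone leaves; couples still exit hand in hand.
SELECT, FROM #Men m
FULL OUTER JOIN #Women w ON m.marriageID = w.marriageID;

DROP TABLE #Men, #Women;
```

Note the parable’s “right side of the aisle” is about where people sit, not which side of the JOIN keyword a table appears on; in the query it’s the table order around the JOIN that decides LEFT vs. RIGHT.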

That’s it.  Simple, but extremely useful when any bit of doubt creeps into your mind about which JOIN may be appropriate in a specific situation.  Again, I give credit where it is due; and this time it is to Paul Nielsen for teaching me this wonderful memory aid.

Another post coming in the next couple days to announce a job change for me that I’m very very excited about.  Look for that by this time next week.

Have a grateful day…Troy

Posted in T-SQL | Comments (0)

Which is Faster; the Chicken or the Egg?

December 2nd, 2010

Due to a company policy, all UPDATEs at my office eventually headed to prod are required to be based solely off a primary key and must be written out one line at a time.  This applies whether the PKs are sequential or not.  Yesterday I received a script for approval that contained 14,000 individual UPDATE statements based off an identity key and comprising 100% of the table.  So imagine if you will (said in my best Rod Serling voice) 14,000 of these…

UPDATE Customer SET firstName = 'Foo' WHERE id = 1
UPDATE Customer SET firstName = 'Foo' WHERE id = 2
UPDATE Customer SET firstName = 'Foo' WHERE id = 3

It occurred to me that as long as we have sequential IDs to update off of, it had to be substantially quicker to do a single UPDATE using a BETWEEN clause. So, 14,000 of the above statements turn into just one…

UPDATE Customer SET firstName = 'Foo' WHERE id BETWEEN 1 AND 14000

But just how much faster is the BETWEEN vs. individual UPDATEs? Turns out the answer is "way way way". Unfortunately, "way way way" isn’t extremely scientific. So the geek in me had to see exactly how much faster and wrote some scripts to test it out.

On a local SQL Server 2008 R2 installation (no network interference), I filled two local temp tables with random numbers between 1 & 50. I then simply updated this value to ’51’ both ways. First one row at a time based off the incrementing PK, and secondly all at once using BETWEEN. The difference even surprised me. At all levels I tested, it was over 99% faster, and the larger the sample size, the more significant the time savings. The results…
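The harness doesn’t need to be elaborate. Here is a simplified sketch of the approach (one temp table rather than the two I used, with the 5,000-row size as an example):

```sql
SET NOCOUNT ON;

-- Fill a temp table with random values between 1 and 50.
CREATE TABLE #t (id INT IDENTITY(1,1) PRIMARY KEY, val INT);

INSERT INTO #t (val)
SELECT TOP (5000) ABS(CHECKSUM(NEWID())) % 50 + 1
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

DECLARE @start DATETIME2 = SYSDATETIME(), @i INT = 1;

-- Method 1: one UPDATE per row, keyed off the identity PK.
WHILE @i <= 5000
BEGIN
    UPDATE #t SET val = 51 WHERE id = @i;
    SET @i += 1;
END
SELECT DATEDIFF(ms, @start, SYSDATETIME()) AS individual_ms;

-- Method 2: a single UPDATE with BETWEEN.
SET @start = SYSDATETIME();
UPDATE #t SET val = 51 WHERE id BETWEEN 1 AND 5000;
SELECT DATEDIFF(ms, @start, SYSDATETIME()) AS between_ms;

DROP TABLE #t;
```

Running in SSMS instead of a tight client loop removes the network round-trips, so if anything this understates how much slower 5,000 individually submitted statements would be.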

Rows     Individual (ms)   BETWEEN (ms)   BETWEEN vs. Individual   Time Saved
500      641.8             4.8            0.75%                    99.25%
5000     6837.8            19.6           0.29%                    99.71%
10000    15742.4           51.6           0.33%                    99.67%
15000    28546.4           63.0           0.22%                    99.78%
20000    63462.6           95.8           0.15%                    99.85%

Ok, so what conclusions can we make from this data?

  1. Not only is it faster using the BETWEEN operator, it’s significantly faster; over 99% faster in the sample sizes I used.
  2. The time savings (percentage-wise) tends to increase as the number of UPDATEs increase as well. Good to know because this may be something to take into account when doing that pesky 1,000,000 row update; but not so much when just on a couple rows.
  3. The back-and-forth, back-and-forth, back-and-forth REALLY adds up over a larger sample size. For example; going from just 500 to 5000 UPDATEs, we’ve increased the sample size 10-fold, the BETWEEN was 4x as long, yet the individual UPDATEs took over 10x as long!
  4. We bloggers will write about anything SQL related no matter how obvious it may be if it offers even a glimmer of additional insight into our product.

Have a Grateful day…Troy

Posted in Performance Tuning, T-SQL | Comments (3)

Copy/Paste From SSMS to Excel

November 29th, 2010

From the “did you know” category…

Did you know you could highlight an object in SSMS, right-click it, and drag it to Excel? I oft have the need to list procedures in Excel, and the “old” copy/paste methodology becomes cumbersome when you have to do it over and over again. But as I said: if you right-click and drag an object across your desktop to Excel, you can then release the button and have the option to move or copy it at your destination. This is very handy, particularly in a dual-monitor setup where Excel’s open on one side and SSMS on the other.

By the way, before you ask, yes it works just as well with your destination being any of Microsoft’s Office Suite. I tested it in Word & Outlook just now and it works dandy. Not sure why I’m so surprised these Office applications work well together like that; I guess I shouldn’t be.

Have a Grateful day…Troy

Posted in Office, SSMS | Comments (3)

Dropping Multiple Tables at Once Via T-SQL

November 26th, 2010

From time to time we have the need to drop multiple tables within a single stored procedure. Were you aware that it can be done in a single DROP TABLE statement? SELECTing multiple columns, constants, variables, etc. in a single SELECT is one of the first things we learn as SQL newbies. But dropping two or more tables at the same time, while slightly less common, is just as convenient.

So, instead of…

DROP TABLE Table1
DROP TABLE Table2
DROP TABLE Table3

we can…

DROP TABLE Table1, Table2, Table3

Not only can we do this with normal tables, we can also drop temp tables, both local and global, in the same statement. How convenient is that?

This little block of code will create normal, local temp, & global temp tables, fill them in with sample data, show the data’s there, drop the tables using this “new” syntax, then show that all three tables are indeed gone.

USE tempdb

-- Create a local temp table
CREATE TABLE #localTempTable (i INT)

-- Fill the local temp table with sample data
INSERT INTO #localTempTable
SELECT id FROM sysobjects

-- Show that the local temp table is filled
SELECT COUNT(i) localTempTable FROM #localTempTable


-- Create a global temp table
CREATE TABLE ##globalTempTable (i INT)

-- Fill the global temp table with sample data
INSERT INTO ##globalTempTable
SELECT id  FROM sysobjects

-- Show that the global temp table is filled
SELECT COUNT(i) globalTempTable FROM ##globalTempTable


-- Create a real table
CREATE TABLE RealTable (i INT)

-- Fill the real table with sample data
INSERT INTO RealTable
SELECT id FROM sysobjects

-- Show that the real table is filled
SELECT COUNT(i) realTable FROM RealTable


-- Drop the tables using the "new" syntax
DROP TABLE #localTempTable, ##globalTempTable, RealTable


-- Show that the objects are indeed gone
IF OBJECT_ID(N'tempdb..#localTempTable', N'U') IS NULL
  SELECT '#localTempTable is gone'
  SELECT '#localTempTable is still there'
IF OBJECT_ID(N'tempdb..##globalTempTable', N'U') IS NULL
  SELECT '##globalTempTable is gone'
  SELECT '##globalTempTable is still there'
IF OBJECT_ID(N'tempdb..RealTable', N'U') IS NULL
  SELECT 'RealTable is gone'
  SELECT 'RealTable is still there'

And while these days it seems like we DROP less and archive a lot more, it’s still nice to know that we have a couple of options when it comes to performing this most basic, but vital, operation in SQL Server.

Have a grateful day…Troy

Posted in T-SQL | Comments (1)

What’s Your Favorite SSMS Font?

November 4th, 2010

A lot of us DBAs spend day after day looking at our SSMS windows in the “Courier New” font. It’s the default. It’s the one you learned on. It’s comfortable because it’s what we’re used to. Microsoft provides a big, long list for us to choose from should we wish to mix things up.

Do you indeed change it from the default, or are you one of those “leave well enough alone” kind of DBAs? Choose one below and I’ll post the results in a week or two.

Have a grateful day…Troy

What is your favorite SSMS font?



Posted in SSMS | Comments (1)

COUNTing with the DISTINCT Clause

October 4th, 2010
One ringy-dingy...

Ok, that’s not the count I’m referring to, but one of the many important tasks we DBAs are charged with from time to time is converting INSERTs/UPDATEs/DELETEs (IUDs) into “count scripts” in order to verify the number of rows a DML statement would affect before it goes into production. While this may seem like one of the most basic things we have to do daily, its importance cannot be stressed enough. A proper and thorough series of checks and balances is vital not only in the biggest of organizations, but in small companies as well. Nobody wants to run scripts against prod only to find out later that instead of affecting the 100 rows you thought it would, your dev left out the WHERE clause and you just updated every row in the table. Eeek! Now you have to take time away from other important things you could/should be doing to test out your data recovery strategy. People (especially Vice Presidents) get extremely irritated at situations like this. Jobs WILL be on the line. Even if you were not the dev who wrote the script, you were still the DBA who let it slide past unchecked. This can be very hazardous to your career.

And since The Grateful DBA wants you to have a long and prosperous SQL career, I offer this little tip regarding one count-script situation you might find yourself in: that of an IUD based off a SELECT DISTINCT statement. It’s a bit trickier than you might suspect, but since my readers are way above average intelligence, I have no doubt it’ll be easy-peasy for you.

Here’s an example of a typical customer INSERT you may have to convert to a count script:

INSERT INTO Customer
SELECT DISTINCT 1,
     t.firstName,
     t.firstName,
     ISNULL(,'00000'),
FROM CustomerTemp t
     INNER JOIN States s ON s.ID = t.ID

We have a little of everything here: constants, joined values, repeats, functions, & local variables. Let’s see what it would take to convert this to an accurate count of exactly how many records we’re inserting. Here are all the rules you need to remember…

CONSTANTS – All constants must be removed from the list. This includes things like numbers and string literals for instance. These will naturally return the same value each and every time iterated. In the case of our example script, this means we’ll need to remove the “1”.

REPEATS – Values in the final count list cannot be repeated more than once or else an error will indeed be thrown. In our example, we must remove one of the two “t.firstName”s.

FUNCTIONS – Function calls such as the ISNULL in our example are allowed and valid, but must be aliased. Even if the alias is exactly the same as the fieldname itself, it’s required. As well, functions such as GETDATE() that return the same value for every row of the statement must be removed.

LOCAL VARIABLES – Similarly to the rule concerning constants, local variables will also naturally insert the same value each time, so must be removed from the script. In our example, the “@city” is a goner.

THE WRAPPER – No, I’m not referring to Jay-Z or Eminem; I mean how the count statement is formed. From my (albeit brief) research, there are apparently several “legal” ways of forming the final statement. The method I’ve always used, and to be honest, have found easiest to read and grasp, is to wrap the new, slimmer SELECT in its own “SELECT COUNT(*) FROM”. The inside SELECT must be surrounded by parentheses and must have its own unique alias. Even if you have no intention of ever using the alias for anything important, it’s nonetheless required.

This is what our example above turns into after we’ve trimmed it down…

SELECT COUNT(*)
FROM
     (SELECT DISTINCT t.firstName,
          ISNULL(,'00000') zip
     FROM CustomerTemp t
          INNER JOIN States s ON s.ID = t.ID) A

I hope this helps when you find yourself faced with a similar task to accomplish. As always, I welcome feedback.

Have a Grateful day…Troy

Posted in T-SQL | Comments (1)

SSMS Default Save Location

July 22nd, 2010

Now, I certainly understand that when a monstrous piece of software as complicated as SQL Server 2008 R2 is released, that there are literally 3.2 gajillion details to get right. And I freely admit that Microsoft got 99% of them right on with this latest release, but this one’s really got me scratching my head.

SSMS allows us to specify a default save location for query execution result sets, just as you would expect an enterprise level product should.


But what SSMS doesn’t allow us to do, is to specify a default save location for the query definition itself. Notice what’s conspicuously missing…


My current position requires me to consistently save scripts to the same folder throughout the day, then create a new folder the next day and save that day’s scripts in there. This folder is of course buried 5 or 6 levels deep on my local machine. The process of having to navigate EVERY time through all six levels just to save a script is maddening to say the least. So, what’s a modern DBA to do to alleviate this situation? Well, my work-around saves me loads of time and really wasn’t as difficult to set up as I at first thought it might be.

The “Save File As” menu option does indeed provide us with this handy bar on its left side:

It occurred to me that since they were kind enough to offer me an expressway to my desktop, I could leverage that to resolve my problem. And sure enough, all it takes is an ordinary shortcut on the desktop whose definition I change each morning, and voilà, I’m at most two clicks away from being able to save my script into my multi-nested location.

Notice I’ve labeled it “Today,” so that part never has to change. I simply create the new folder with today’s date (the actual nested one…which I have to do anyway) and then quickly change the path the desktop shortcut points to. That’s really all there is to it. Two clicks and I’m saving my script to the proper place.

I know none of this is mind-blowing cutting edge stuff, but it’s certainly made my life easier and I wanted to share this little epiphany with you kind people.

Have a grateful day…Troy

Posted in SSMS | Comments (3)

FTI Catalog Relocation

March 1st, 2010

Had a relatively simple maintenance to perform this past weekend; the kind so simple, it’s the fodder of DBA dreams everywhere. I was assigned the task of moving the data and log files for four separate databases to another drive on the same server. The server designation itself for these four databases wasn’t to change; simply the file path to the mdf and ldf. To make it even easier, none of the four were involved with replication; actually quite a rarity in our environment. Our network operations team provided me with a new SAN share on this box, which afforded me the opportunity to ease some of the drive space issues I ran into occasionally.

At first glance, a simple detach-move-reattach was all that appeared necessary for this straightforward operation. But a deeper look prior to the maintenance window’s arrival indicated all four of these databases were full text enabled and populating six separate catalogs each. Ok, no need for panic, this doesn’t turn it from a half hour operation into a six-hour one, but dropping the 24 catalogs then recreating and repopulating them did indeed add a substantial amount of time to the project. It got me wondering just how efficient my approach really was.

Detaching and reattaching a database does by default disable full-text indexing on that database. After the files have been properly moved, the task at hand is to move the full-text catalogs to a folder alongside the new mdf/ldf location. For example, the “Users” table is FTI’d and the original catalog is stored on the M drive, same as the data and t-log files. After the move, the mdf and ldf are now on the N drive, but right-clicking the “Users” table shows the catalogs still mapped to M. Not only this, but they are grayed out in the full-text editing dialog box, disallowing a simple remapping of the catalog location. My solution was to carefully document the fields/names/primary keys/etc. used for each table’s FTIs, then remove all catalogs and re-add them individually, mapped to N as needed. Not a mind-numbingly horrible operation as some DBA stuff goes, but complicated enough that one truly has to concentrate and not miss ANY details whatsoever.
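In a SQL Server 2000 environment the documentation step means sp_help_fulltext_catalogs and sp_help_fulltext_columns, but on 2005 and later you can pull the whole inventory in one query from the catalog views. A hedged sketch, using the 2005+ view and column names:

```sql
-- Inventory of full-text catalogs, their file paths, and the indexed
-- columns; worth saving off before any detach/move.
SELECT AS catalog_name,
       c.path AS catalog_path,   -- file-based catalogs (pre-2008 style)
       OBJECT_NAME(i.object_id) AS table_name, AS column_name
FROM sys.fulltext_catalogs c
JOIN sys.fulltext_indexes i
  ON i.fulltext_catalog_id = c.fulltext_catalog_id
JOIN sys.fulltext_index_columns ic
  ON ic.object_id = i.object_id
JOIN sys.columns col
  ON col.object_id = ic.object_id
 AND col.column_id = ic.column_id
ORDER BY, table_name, column_name;
```

Having that result set in hand turns the “carefully document everything” step into a copy/paste, and gives you something to diff against after the re-add to prove nothing was missed.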

While I accomplished the whole thing with hours still to spare in the maintenance window, I told myself there just has to be an easier way to accomplish the FTI portion. I suppose I could script out the whole process from dropping of the original catalogs to the recreation at the new location in T-SQL. Now while that would indeed save me mountains of time during the window, just how long would it take to create those very scripts? Have I really saved myself work? I don’t think so with that option.

It occurred to me that this catalog location must of course be stored in one of the system tables somewhere. What if I simply ran an update against that row then repopulated the index? Or maybe there is some non-documented-super-duper-secret hidden system stored procedure I could fire off? Could it be that simple?

Turns out that it isn’t that simple after all.

Sudarshan offers a SQL Server 2005 solution towards the bottom of this forum post which seems to indicate the magic “Keep Full Text Indexes” checkbox included with the detach dialog takes care of the issue very easily. There are a couple gotchas according to Sudarshan, but his solution appears quick and elegant. Lest it be noted I haven’t actually tested it because I currently work in a predominantly SQL Server 2000 environment. This looks viable for ’05 and most likely ’08, but proceed cautiously due to me not knowing this author personally.

Microsoft offers up two support pages that address this very issue; here and here. As your typical MSDN pages do, these offer tremendous detail into the process; more than most humans need. They recommend not just a simple “point the catalogs somewhere else” solution, but registry changes as well. Now, while I’ve been around a bit and registry changes don’t scare me as much as they used to, I still try to avoid them whenever possible. Too many horror stories have surfaced over the years of even the slightest registry change wreaking total havoc across multiple systems, so I’m ALWAYS going to be hesitant to make those changes manually. If that’s what goes on behind the scenes when I complete a wizard or close a dialog box, fine, that I trust. If I have a choice that’s slightly more laborious yet means I don’t have to visit regedit, then I’m all for it.

While my “manual” solution appears to be the quickest methodology for a small number of databases, obviously one’s going to have major issues when it comes to scaling and trying to accomplish this over 20, 30 or more databases simultaneously. Like so many other SQL Server topics, this comes down to an “It depends” answer once again. Take everything into account. How many databases? How long is your maintenance window? Your ability to produce similar scripts to run at a later time? Your tolerance for making direct registry changes? Weigh all these factors and I’m sure the solution you end up with will be much more appropriate for your situation.

I’m very interested in your feedback on this topic. Have I over-complicated a relatively simple issue? Or have I stumbled upon one of those timeless aspects of SQL Server that simply needs dealing with on a manual basis? How have you accomplished this in the past? I look forward to your responses.

Have a Grateful day…Troy

Posted in Full Text Indexing | Comments (0)

LiteSpeed Compression Ratios

December 14th, 2009

My company uses Quest Software’s LiteSpeed for SQL Server for both nightly and ad-hoc backups.  The DBAs here before me set everything up using a compression ratio of 1, and we get actual compression and backup times that are more than adequate for our needs.  I’ve often wondered exactly what effect raising that ratio in the LiteSpeed settings would have on both how long the backup takes and the actual compression itself.  What trade-off between size and efficiency do I have to make should I decide to raise that setting up the scale?

Let me first make it VERY clear.  I am not an employee of Quest.  I have nothing whatsoever to gain or lose by this examination.  I happen to be an extremely big fan of LiteSpeed and have found it by far to be the best backup solution I’ve had the pleasure to work with during my short professional career.  I have tremendous respect for everyone from Quest I’ve come in contact with; that includes Kevin Kline, Brent Ozar & others.  This is simply a non-scientific, ground-level look at LiteSpeed’s compression ratios.  Please by all means visit Quest’s website to get all the product details and info you need.  Take nothing I say here as gospel.  We clear, folks?  I don’t wanna wake up to a cease-and-desist order from Quest tomorrow, ok?

The results graph first, then a discussion on my conclusions…

The blue line and left y-axis represent actual compression for the ratios along the bottom.  The red line and right y-axis likewise represent the time taken vs. a native non-LS backup.  So for example, when I did my sample at compression level 3, it compressed almost 81% and took just about half the time of a full native backup.

Ok, so what observations can I make from these results?  As far as compression goes, there seems to be a significant jump from level one to two; that 6% could make a tremendous difference when extending these results to an entire company’s worth of backups.  From there, the compression indeed inches up, but never more than another 1/4 or 1/2 percent at a time.

Looking at the red line (time taken for the backup to complete vs a native one), we see time is even quicker than native all the way through level four.  Even five and six completed in about the same time as a native backup.  From there though, total time taken ramps up pretty quickly; specially at the highest compression ratio which took over five times as long as a native backup.

What compression ratio is best for your situation?  Like Paul Randal is so fond of saying, “It depends.”  This falls in with the dozens of other decisions we make as DBAs on a daily basis; a trade-off between factors.  Are we severely limited in backup drive space yet have plenty of time to accomplish the backup?  If so, I might choose level 7 or 8.  Is drive space never an issue, yet your evening backup window is?  Perhaps level 1 would be the optimal choice for you.

I think 90% of us fall somewhere between those two extremes.  Yes, storage is cheap, but rarely ever unlimited.  While some of us may have multi-terabyte server farms to backup to; it’s still irresponsible to waste it needlessly.  I wouldn’t be awfully surprised though if many of us fall into the “not-enough-hours-in-a-night” caveat where backup time is much more a significant factor.  From my testing and actual experience, I’ve found a compression factor of two or three brings the right balance into a variety of environments.

I would love some feedback and/or some opposing viewpoints from anyone; particularly if someone from Quest happens upon this.

Have a Grateful day…Troy

Posted in Backups | Comments (4)
