Archive for the ‘Backups’ Category

Ola Hallengren’s Maintenance Solution Command Log Analysis

October 23rd, 2015

Yeah yeah yeah, it’s been a while since I blogged last. Spent a glorious two years in New York City, but am very happily back in Jacksonville now. That whole story is a blog post in itself, so won’t get into it here. Suffice to say, it’s nice to be back home…full of renewed energy and raring to go. This is me…
WIDE AWAKE!!!
To say that a large number of shops use Ola Hallengren’s maintenance solution is probably the understatement of the decade. It has proven itself over the years to be as thorough, easy-to-use, and flexible as any free, custom, or paid backup/index maintenance solution out there…and there are MANY out there, so that’s saying a lot. The balance of this post assumes you’re familiar with its workings, capabilities, and output. While there’s a lot of verbiage out there in blogland about how to use and set up the routines, there isn’t much to be found on how one might analyze the output. So, over the years I’ve put together a few scripts I’d love to share.

Within Ola’s scripts, there exists the ability to log all activity to a table called “CommandLog”. I highly recommend you turn this option on for every part of the solution you employ. That way the same table will contain rows for each backup (full, t-log, or diff) taken, each index maintained (rebuilt or reorganized), each statistic updated, as well as each time you use the scripts to perform DBCC integrity checks. This table of metadata is extremely valuable in analyzing exactly what took place the night before, over the last month, etc. For the most part, turning on the logging is simply a matter of including a @LogToTable = 'Y' parameter in the procedure call. Easy peasy.
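
For example, the same parameter works on each of Ola’s stored procedures. The directory and database values below are just placeholders…adjust for your environment:

-- Nightly full backups, logged to dbo.CommandLog
EXECUTE dbo.DatabaseBackup
  @Databases = 'USER_DATABASES',
  @Directory = 'X:\Backup',     -- placeholder path
  @BackupType = 'FULL',
  @LogToTable = 'Y'

-- Index and statistics maintenance, also logged
EXECUTE dbo.IndexOptimize
  @Databases = 'USER_DATABASES',
  @LogToTable = 'Y'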

One thing I really like to know is how many index maintenance operations were performed. It may be the case that normal business processes (combined with less-than-stellar schema) fragment your indexes on a regular basis and you don’t have much control over it. This might be so, for instance, when you do a lot of writes against a clustering key that places new rows in one of the middle extents rather than at the end; that causes page splits and natural fragmentation. As DBAs, it’s our job to limit fragmentation as much as possible. Maybe we alter the cluster key. Maybe we alter the fill factor. Maybe we change the INSERT process to place new rows at the end. Whatever methodology you employ, you’d like to see some sort of reduction in the number of rebuilds and then, eventually, reorgs as well. This is what I use to track them…


-- Reorganize operations per day, excluding anything listed in dbo.IndexExceptions
SELECT COUNT(*) AS REORGS, CONVERT(DATE, c.StartTime) AS [DATE]
FROM dbo.CommandLog c
WHERE c.Command LIKE '%reorganize%'
  AND NOT EXISTS (SELECT 1 FROM dbo.IndexExceptions ie WHERE ie.IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.StartTime)
ORDER BY CONVERT(DATE, c.StartTime) DESC;

-- Rebuild operations per day, same exclusions
SELECT COUNT(*) AS REBUILDS, CONVERT(DATE, c.StartTime) AS [DATE]
FROM dbo.CommandLog c
WHERE c.Command LIKE '%rebuild%'
  AND NOT EXISTS (SELECT 1 FROM dbo.IndexExceptions ie WHERE ie.IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.StartTime)
ORDER BY CONVERT(DATE, c.StartTime) DESC;
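
If you’d rather see both counts side by side per day, a conditional-count variant (just a sketch, using the same CommandLog columns) also works:

SELECT CONVERT(DATE, c.StartTime) AS [DATE],
  COUNT(CASE WHEN c.Command LIKE '%rebuild%' THEN 1 END) AS REBUILDS,
  COUNT(CASE WHEN c.Command LIKE '%reorganize%' THEN 1 END) AS REORGS
FROM dbo.CommandLog c
WHERE c.CommandType = 'ALTER_INDEX'
  AND NOT EXISTS (SELECT 1 FROM dbo.IndexExceptions ie WHERE ie.IndexName = c.IndexName)
GROUP BY CONVERT(DATE, c.StartTime)
ORDER BY CONVERT(DATE, c.StartTime) DESC;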

The “IndexExceptions” table referenced in the NOT EXISTS clause of these queries is a small table I keep to exclude any indexes I know for 100% sure I’m going to leave alone and don’t want in my report. Its simple schema is…

CREATE TABLE [dbo].[IndexExceptions](
  [IndexExceptionID] [int] IDENTITY(1,1) NOT NULL,
  [IndexName] [varchar](200) NOT NULL,
  [Reason] [varchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
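
Populating it is just a matter of inserting the index name and, optionally, a note about why it’s excluded. The index name and reason below are obviously made up:

INSERT INTO dbo.IndexExceptions (IndexName, Reason)
VALUES ('IX_SomeStagingTable_LoadDate',  -- hypothetical index name
        'Staging table is truncated and reloaded nightly; fragmentation is irrelevant');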

When I want to analyze exactly how long my index maintenance operations are taking, I use the following…

SELECT commandtype Activity,
  CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) Work_Date,
  COUNT(*) How_Many,
  AVG(DATEDIFF(ms,starttime,endtime)) Avg_ms,
  AVG(DATEDIFF(s,starttime,endtime)) Avg_Secs,
  AVG(DATEDIFF(mi,starttime,endtime)) Avg_Mins
FROM commandlog
WHERE commandtype NOT IN ('xp_create_subdir','xp_delete_file','RESTORE_VERIFYONLY','BACKUP_DATABASE')
GROUP BY commandtype
  ,CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME)
ORDER BY CommandType, CAST(CONVERT(CHAR(11), starttime, 113) AS DATETIME) DESC

…use the WHERE clause to change what you see and what you don’t.
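
As an example of that, here’s the same pattern narrowed to just the database backups (the BACKUP_DATABASE command type is one of the ones excluded above):

-- Average daily backup duration per database
SELECT DatabaseName,
  CAST(CONVERT(CHAR(11), StartTime, 113) AS DATETIME) AS Work_Date,
  COUNT(*) AS How_Many,
  AVG(DATEDIFF(s, StartTime, EndTime)) AS Avg_Secs
FROM dbo.CommandLog
WHERE CommandType = 'BACKUP_DATABASE'
GROUP BY DatabaseName, CAST(CONVERT(CHAR(11), StartTime, 113) AS DATETIME)
ORDER BY DatabaseName, Work_Date DESC;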

This snippet will tell you how many times individual indexes were rebuilt or reorganized in a specific period. Adjust both parts of the WHERE clause as necessary…


SELECT DatabaseName, ObjectName, IndexName, COUNT(*) CommandCount
FROM dbo.CommandLog
WHERE commandtype = 'ALTER_INDEX'
  AND DATEDIFF(d,starttime,GETDATE()) < 14
GROUP BY DatabaseName, ObjectName, IndexName
ORDER BY CommandCount DESC

And this SELECT tells us important information about a single index, including its current fill factor (in case you want to lower it and, in theory, reduce splits and fragmentation), the fragmentation percentage recorded when each operation ran, and the size of the index. Again, change the WHERE clause as necessary…


-- Run this in the database that owns the index; sys.indexes is per-database
-- and the join below matches on index name only.
SELECT c.StartTime,
  c.ObjectName AS [table],
  c.IndexName,
  i.fill_factor,
  CAST(CAST(c.ExtendedInfo AS XML).query('data(/ExtendedInfo/Fragmentation)') AS VARCHAR(10)) AS Fragmentation,
  CAST(CAST(c.ExtendedInfo AS XML).query('data(/ExtendedInfo/PageCount)') AS VARCHAR(10)) AS [PageCount],
  c.ExtendedInfo,
  c.Command
FROM dbo.CommandLog c
JOIN sys.indexes i ON i.name = c.IndexName
WHERE c.IndexName = 'IndexNameHere'
  AND c.CommandType LIKE '%alter%'
ORDER BY c.DatabaseName, c.ObjectName, c.StartTime DESC;
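
If you’d rather pull those ExtendedInfo values out as real numbers so you can filter or sort on them, a .value() variant along these lines should work; the 30% threshold is just an arbitrary example:

SELECT c.DatabaseName,
  c.ObjectName,
  c.IndexName,
  c.StartTime,
  CAST(c.ExtendedInfo AS XML).value('(/ExtendedInfo/Fragmentation)[1]', 'float') AS Fragmentation,
  CAST(c.ExtendedInfo AS XML).value('(/ExtendedInfo/PageCount)[1]', 'int') AS [PageCount]
FROM dbo.CommandLog c
WHERE c.CommandType = 'ALTER_INDEX'
  AND c.ExtendedInfo IS NOT NULL
  AND CAST(c.ExtendedInfo AS XML).value('(/ExtendedInfo/Fragmentation)[1]', 'float') > 30  -- arbitrary cutoff
ORDER BY Fragmentation DESC;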

These are just some of the things you can do with the CommandLog table and the plethora of useful information it contains. Of course, feel free to use these scripts, adjust them, and let me know how it works for you.

Thanks and have a grateful day…Troy

Posted in Backups, Index Tuning, SSMS, T-SQL | Comments (0)

LiteSpeed Compression Ratios

December 14th, 2009

My company uses Quest Software’s LiteSpeed for SQL Server for both nightly and ad-hoc backups. The DBAs here before me set everything up using a compression ratio of 1, and we get compression and backup times that are more than adequate for our needs. I’ve often wondered exactly what effect raising that ratio in the LiteSpeed settings would have, both on how long the backup takes and on the compression itself. What trade-off between size and efficiency do I have to make should I decide to raise that setting up the scale?
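
For reference, the ratio in question is the compression-level parameter on LiteSpeed’s backup procedure. From memory, a call looks roughly like this; the names and values are illustrative only, so check Quest’s documentation for your version:

EXEC master.dbo.xp_backup_database
  @database = 'MyDatabase',                -- placeholder database name
  @filename = 'X:\Backup\MyDatabase.bkp',  -- placeholder backup path
  @compressionlevel = 1;                   -- the setting this post is about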

Let me first make it VERY clear.  I am not an employee of Quest.  I have nothing whatsoever to gain or lose by this examination.  I happen to be an extremely big fan of LiteSpeed and have found it by far to be the best backup solution I’ve had the pleasure to work with during my short professional career.  I have tremendous respect for everyone from Quest I’ve had the pleasure to come in contact with; that includes Kevin Kline, Brent Ozar & others.  This is simply a non-scientific, ground-level look at LiteSpeed’s compression ratios.  Please by all means visit Quest’s website to get all the product details and info you need.   (http://www.quest.com/)  Take nothing I say here as gospel.  We clear folks??   I don’t wanna wake up to a cease-and-desist order from Quest tomorrow, ok?

The results graph first, then a discussion on my conclusions…
[Graph: compression achieved (blue) and backup time relative to a native backup (red), by LiteSpeed compression level]

The blue line and left y-axis represent the actual compression achieved at each ratio along the bottom.  The red line and right y-axis likewise represent the time taken vs. a native non-LiteSpeed backup.  So for example, when I sampled compression level 3, the backup compressed almost 81% and took just about half the time of a full native backup.

Ok, so what observations can I make from these results?  As far as compression goes, there seems to be a significant jump from level one to two; that 6% could make a tremendous difference when extending these results to an entire company’s worth of backups.  From there, the compression indeed inches up, but never more than another 1/4 or 1/2 percent at a time.

Looking at the red line (time taken for the backup to complete vs. a native one), we see the backup is actually quicker than native all the way through level four.  Even five and six completed in about the same time as a native backup.  From there though, total time taken ramps up pretty quickly, especially at the highest compression ratio, which took over five times as long as a native backup.

What compression ratio is best for your situation?  Like Paul Randal is so fond of saying, “It depends.”  This falls in with the dozens of other decisions we make as DBAs on a daily basis: a trade-off between factors.  Are we severely limited in backup drive space yet have plenty of time to accomplish the backup?  If so, I might choose level 7 or 8.  Is drive space never an issue, yet your evening backup window is?  Perhaps level 1 would be the optimal choice for you.

I think 90% of us fall somewhere between those two extremes.  Yes, storage is cheap, but rarely ever unlimited.  While some of us may have multi-terabyte server farms to back up to, it’s still irresponsible to waste space needlessly.  I wouldn’t be awfully surprised, though, if many of us fall into the “not-enough-hours-in-a-night” camp where backup time is the more significant factor.  From my testing and actual experience, I’ve found a compression factor of two or three brings the right balance to a variety of environments.

I would love some feedback and/or some opposing viewpoints from anyone; particularly if someone from Quest happens upon this.

Have a Grateful day…Troy

Posted in Backups | Comments (4)
