Most DBAs are aware that when a SQL Server database is set to the simple recovery model, its transaction log is truncated automatically at each CHECKPOINT. This is all fair and good, but truncation only marks log space as reusable; it doesn't shrink the file on disk. How many times have you viewed the folder containing your mdf's and ldf's and seen that many of the ldf's are countless times bigger than their minimum size?
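If you want to see this for yourself before shrinking anything, a quick sanity check is to list each database's recovery model and then look at how full the log files actually are. This sketch assumes SQL Server 2005 or later for the sys.databases catalog view:

```sql
-- Which recovery model is each database using?
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;

-- Log file size and percent-used for every database on the instance.
-- A big file with a low "Log Space Used (%)" is a prime shrink candidate.
DBCC SQLPERF(LOGSPACE);
```

A simple-recovery database whose log shows, say, 2 GB on disk but only 1% used is exactly the case described above: the space was truncated internally but never given back to the drive.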
In a production environment, there are generally so many active processes happening simultaneously that it's usually best to leave well enough alone. But it's a different story in a development, staging, or R&D environment. Some mornings I navigate to the 'mssql/data' folder on my dev server and see 90% of the ldf's taking up huge amounts of room. In some of these cases there is active development work going on, so the fluctuation in transaction log size is expected. But more often than not, three quarters of those files haven't been touched for days or longer!
Now, I don't know about you, but drive space is always at a premium in our environment. A client requests a refresh of the development database from production, and you barely have enough room on the drive for the backup file, let alone room to keep the old one just in case. Yes, you can pick away at the contents of the drive and perhaps find an old file or two you could 86, but I usually go right for the jugular: shrink all the transaction logs and see if that frees up enough space.
At the risk of redundancy, I'll say it again: this is not something I would do in production. In a development or staging environment, though, there is no harm whatsoever in shrinking the transaction logs. I was using the tried-and-true method of right-clicking the database > 'All Tasks' > 'Shrink Database' > 'Files' > choosing the log file from the dropdown > 'OK' > 'Cancel'. That worked well enough, but it was laborious on some of my dev servers, which literally have hundreds of databases on them. So I threw together a simple script that does this automatically for every database on the server. All it takes is a single execute and a minute or two, and you can reclaim all that potential space. The script is nothing complicated, but nevertheless I wanted to share it with you all.
I ran this on one of my dev servers just this morning and freed up almost 10% of the drive's space in a matter of seconds. No digging through the file structure to examine ldf sizes. No manual shrinking. Just a couple of seconds, and all of a sudden I had enough room to accomplish the refresh I needed to do. Absolutely perfect.
Would love any feedback or comments you may have.
In the meantime, be sure to have a grateful day…Troy
SET NOCOUNT ON;

DECLARE @db  SYSNAME,
        @sql NVARCHAR(MAX);

-- Loop over every online database on the server.
DECLARE cur CURSOR FOR
    SELECT [name]
    FROM master.dbo.sysdatabases
    WHERE DATABASEPROPERTYEX([name], 'Status') = 'ONLINE';

OPEN cur;
FETCH NEXT FROM cur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Inside each database, shrink every log file.
    -- groupid = 0 in sysfiles identifies log files, which is more reliable
    -- than matching on a name like '%log%'.
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        DECLARE @logname SYSNAME, @sf NVARCHAR(300);
        DECLARE logs CURSOR FOR
            SELECT RTRIM(LTRIM([name])) FROM sysfiles WHERE groupid = 0;
        OPEN logs;
        FETCH NEXT FROM logs INTO @logname;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sf = N''DBCC SHRINKFILE ('' + QUOTENAME(@logname) + N'') WITH NO_INFOMSGS'';
            PRINT @sf;
            EXEC sp_executesql @sf;
            FETCH NEXT FROM logs INTO @logname;
        END;
        CLOSE logs;
        DEALLOCATE logs;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM cur INTO @db;
END;

CLOSE cur;
DEALLOCATE cur;
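For what it's worth, on SQL Server 2005 and later the same job can be sketched without a cursor over databases at all, because sys.master_files already lists every file for every database and flags log files with type_desc = 'LOG'. This is just an alternative sketch under that assumption, not a replacement for the script above:

```sql
-- Build one batch of DBCC SHRINKFILE commands, one per log file,
-- for every online user database, then execute it.
DECLARE @cmd NVARCHAR(MAX) = N'';

SELECT @cmd = @cmd
    + N'USE ' + QUOTENAME(d.name)
    + N'; DBCC SHRINKFILE (' + QUOTENAME(mf.name) + N') WITH NO_INFOMSGS;'
    + CHAR(13)
FROM sys.master_files AS mf
JOIN sys.databases    AS d ON d.database_id = mf.database_id
WHERE mf.type_desc = 'LOG'
  AND d.state_desc = 'ONLINE'
  AND d.database_id > 4;   -- skip master, tempdb, model, msdb

PRINT @cmd;                -- review what will run before running it
EXEC sp_executesql @cmd;
```

The PRINT before the EXEC is a cheap safety net: you get to eyeball the generated commands, which is exactly the behavior of the PRINT in the cursor version above.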