Space, the final frontier ....
The common wisdom on database files is to NEVER shrink your data files. Space is cheap, but it's still worth knowing how to shrink the size of your data files, especially when you are running out...
How to Truncate Log files in Sql Server 2008
If your transaction log files are outgrowing their disk, it's important to consider why your log file keeps growing.
A database's log file uses space to quickly write the many operations it performs to disk, so some file growth is expected once you move beyond the initial sizing of your database. When you notice continual growth of your transaction log file, you need to start looking at your backup strategy to help manage it. Did you set the appropriate recovery model? Should you be performing more frequent transaction log backups?
For more information on the SHRINKFILE operation, consider reviewing the tech doc at Microsoft. You can also read a little more on your database recovery model here to ensure you are using the right one for your set of circumstances!
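As a rough sketch of the approach described above (the database name, log file name, and backup path here are placeholders, not anything from a real system), you can check your recovery model in sys.databases, back up the log so its inactive portion can be released, and then shrink the log file:

```sql
-- Check which recovery model the database is using
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';          -- placeholder database name

-- Under the FULL recovery model, back up the log first so the
-- inactive portion becomes reusable...
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';

-- ...then shrink the log file down to a sensible target size (in MB)
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 512);
```

Remember, if the log keeps regrowing after you shrink it, the fix is more frequent log backups (or a different recovery model), not more shrinking.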
Wednesday, July 24, 2013
Thursday, July 19, 2012
Monday, January 17, 2011
If you are familiar with TSQL and want to get a row count from your tables, you might be tempted to run a simple
Select count(*) from myTable
This, however, can be a lengthy wait if you have millions and millions of records, not to mention the potential impact on a production system. Instead, consider selecting your row counts from the system tables:
SELECT OBJECT_NAME(id), [rows], indid
FROM sysindexes
WHERE indid < 2
AND OBJECT_NAME(id) IN ('myTableName')
Looking forward beyond Sql Server 2005, the following select should be used instead and will provide you with the needed information, since direct system table access may be deprecated:
SELECT DISTINCT OBJECT_NAME(P.object_id) AS [Name], P.rows FROM sys.indexes I
INNER JOIN sys.partitions P ON P.object_id = I.object_id AND P.index_id = I.index_id
WHERE I.index_id < 2
AND OBJECT_NAME(P.object_id) IN ('myTableName')
So why am I searching for indexes smaller than 2? In short, from the article below you'll find that pure data pages (heaps) have indid = 0, and clustered index pages, which are the data pages, have indid = 1, so filtering on < 2 covers us. I also took the liberty of adding a DISTINCT. I've not yet read why there may sometimes be duplicate rows, but doing a DISTINCT helps narrow the results to the data you care about.
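If you'd rather not join sys.indexes to sys.partitions yourself, a sketch of the same lookup against the sys.dm_db_partition_stats DMV (available from Sql Server 2005 onward; the table name is a placeholder) looks like this:

```sql
-- Fast row count without scanning the table: partition stats
-- already track a row count per partition
SELECT OBJECT_NAME(object_id) AS [Name],
       SUM(row_count)         AS [rows]
FROM sys.dm_db_partition_stats
WHERE index_id < 2                          -- heap (0) or clustered index (1)
  AND OBJECT_NAME(object_id) = 'myTableName'
GROUP BY object_id;
```

The SUM also handles partitioned tables, where a single index is split across several partition rows.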
Monday, August 23, 2010
Hi everyone... So some new things are going to be happening with my blog. I'm going to pick some helpful hints that help me and my fellow DBAs with what we do on a daily or weekly basis. These should be helpful for all, but you are more than welcome to share the tips that help you the most. One thing that is coming is an article on wait times... I had the pleasure of trying out Confio's Ignite software, and while it does some really great things, such as pointing out where the performance bottlenecks are on your server, it can be pretty pricey and really does not introduce anything you can't already do yourself.
I'm planning on rolling out a short but continuous list of I/O wait times, what they are, and how to use them to boost performance on your SQL Server within the coming weeks. I will not concentrate on Sql Server 2005/2008 alone; I know that is the trend in the tech community, but there are still many organizations that use Sql Server 2000. I mean... why upgrade legacy systems and pay more money when those systems are perfectly fine, and finely tuned? Sometimes all that is needed is a little tuning :)
Monday, April 26, 2010
I had recently been reading a lot about the wifi issues surrounding the iPad, and though this post is not SQL related, it kind of is for me. You see, I was using a Remote Desktop app on my iPad when the connection started to drop out on me, something new that I had started to experience. I determined that it had to do with bluetooth being enabled. I tried disabling it and the iPad returned to its normal bandwidth-loving behavior... the thing was that I was using a bluetooth keyboard with it to type up my sql... so I had to find an alternate solution...
Solution one... you can actually just turn up the brightness on your iPad and it will keep the wifi from going into sleep mode... weird, right?
Or, in my case, so I didn't need to fumble around, I decided to change some wifi settings on my TrendNet router. I found the settings at the macrumors forum;
these are the settings on my wireless that fixed my problem (apparently permanently):
Under the Advanced Wireless Settings...
Change the Beacon Interval to 75
Fragmentation Threshold to 2304
RTS Threshold to 2307
To this I say to the original poster (WesleyB)... great find! Oh sure, I could just use the virtual keyboard and not mess with my router, and sure, I could just turn up the brightness on my device... but I love just picking up the iPad, quickly logging in to my Sql Server, and picking up right where I left off. No fuss, no muss...
Wednesday, February 17, 2010
This is kinda cool. I currently do not have a need for it, as my job no longer requires me to write new tsql code on a daily basis. However, when I used to write ranking code it was always a LONG and TEDIOUS process, not really because of the ranking code itself, but mostly because of what it takes to get to the ranking code. ARG! My ranking routine, believe it or not, consisted of a small segment of code with a CURSOR!
This is a snippet of the code I used to use, and it is actually still in production today:
declare @rank as integer
declare @tID as integer

declare tstcur CURSOR FOR SELECT id FROM @temp
OPEN tstcur
FETCH NEXT FROM tstcur INTO @tID
WHILE @@FETCH_STATUS = 0
BEGIN
	-- rank = number of rows with a higher score, plus one
	select @rank = count(*)+1 FROM @temp WHERE final_score > (select final_score from @temp where id = @tID)
	insert into @temp (id, Division, responses, max_points, total_points, final_score, rank)
	select id, Division, 0, max_points, total_points, final_score, 0 from @temp where id = @tID
	update @temp set rank = @rank where id = @tID
	FETCH NEXT FROM tstcur INTO @tID
END
CLOSE tstcur
DEALLOCATE tstcur
As you can see, it's not really elegant, but it DID get the job done. If you need this type of code, you may certainly reproduce it for your own Sql Server 2000 needs.
I do prefer the new method of ranking in Sql Server 2005 (or 2008): the RANK function built right into the TSQL language. With it, the above code would look like this:
SELECT RANK() OVER (
	ORDER BY final_score DESC) AS [Rank],
	id, Division, responses, max_points, total_points, final_score
FROM @temp
I quite like the non-cursor solution, and if you've read other articles by me you'll know that I'm not at all a fan of cursors (they're just plain evil!) :)
For a full tutorial on how to use the new Ranking feature... check out
Ranking Functions in Sql Server 2005 and 2008
Wednesday, December 02, 2009
Often, when a database needed to be renamed, the tool I've always reached for in the past was the sp_renamedb procedure. And that's one of the reasons I love the internet: there is always a better way to do something. Take the tips below on how to rename your database without the use of this procedure. What is neat about the following article is that it also changes the logical and physical names of the database.
see more at this link...
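As a rough sketch of that approach (the database and file names below are placeholders, and repointing the physical files assumes you can take the database offline), ALTER DATABASE handles both the database name and the logical file names:

```sql
-- Rename the database itself (the modern replacement for sp_renamedb)
ALTER DATABASE OldDbName MODIFY NAME = NewDbName;

-- Rename the logical file names to match the new database name
ALTER DATABASE NewDbName MODIFY FILE (NAME = OldDbName_Data, NEWNAME = NewDbName_Data);
ALTER DATABASE NewDbName MODIFY FILE (NAME = OldDbName_Log,  NEWNAME = NewDbName_Log);

-- Changing the physical .mdf/.ldf names additionally requires taking the
-- database offline and moving the files on disk, for example:
-- ALTER DATABASE NewDbName MODIFY FILE
--     (NAME = NewDbName_Data, FILENAME = 'D:\Data\NewDbName.mdf');
```

The logical renames take effect immediately; only the physical file move needs the offline step.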
Tuesday, August 11, 2009
Fixing smallish databases of less than 1-2 GB may be merely annoying when you are dealing with multiple indexes, but try managing some of those larger ERP databases with literally thousands of tables! Talk about a database from hell: having to sift through tens of thousands of indexes can be a real chore when you're searching for performance bottlenecks. There are some great solutions out there that all cost money per instance or per site license and can get quite pricey, but just about all of those products are charging you for something you can do on your own.
The article below is an extension of my previous blog on maintaining those indexes. It's the script that has evolved from some very basic loops and the DMV dm_db_index_physical_stats. In my script (follow the link below) you'll find that I chose to stick to SAMPLED stats, which scans a sample of the index pages rather than every page. If you're talking millions of rows of data and a very short maintenance window, you're going to want to stick to SAMPLED; otherwise, you'll opt for the full DETAILED scan if
A) your maintenance window is greater than the amount of time it takes for a DETAILED scan to be returned, or
B) your database is small enough.
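The core of that fragmentation check can be sketched like this (the thresholds are placeholder values, and you can swap 'DETAILED' in for 'SAMPLED' when your maintenance window allows):

```sql
-- Fragmentation report for the current database, SAMPLED mode
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5    -- placeholder threshold
  AND ips.page_count > 100                    -- skip tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

From there, a loop can decide per index whether to REORGANIZE or REBUILD based on the fragmentation percentage.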
One of the problems I've run into with my production databases that use the FULL recovery model is maintaining the size of the transaction logs. I COULD switch them over to the SIMPLE recovery model, and then the transaction log would not grow out of control during the rebuild/reorganization (defrag) of the indexes, but I've found this not to be ideal. In my organization the business comes first: even though I have a maintenance window on the weekends, if they require access to their data, I must make it available. Having good backups that allow the company to restore up to the minute before a failure using the FULL recovery model is important to us. So in the script (see link below)
Keep Your SQL Server Indexes Fragmentation Free
My DevX article was all about Sql Server 2005 and Sql Server 2008, but what about Sql Server 2000? Surely you don't want to be stuck just running a performance monitor on all your 2000 servers, right? Here is the updated 2000 script to find all fragmented tables in 2000, along with the DBCC DBREINDEX which will rebuild them for you.
Catch-22: if the table has fewer than 1000 rows, a rebuild will not change anything, and you may be left with having to drop and recreate the index manually anyway.
...I created a temp table manually to capture the results of SHOWCONTIG, which I pipe to a grid result using the WITH TABLERESULTS option; next, this is joined against the information_schema tables view. In the WHERE clause I specified that we are interested in 'BASE TABLE's only, so that we don't get a lot of system-table-specific stuff. I also added the criteria that we want to see only tables fragmented 5% or more that have over 1000 rows. In my case my legacy databases are all well under 50 GB, and thus this query makes sense. This example uses a cursor, as it's an old script, but I'll post up a cursor-less version later, as I prefer cursor-less solutions.
As always watch those log files when you are defragging indexes!
USE YourDatabaseName --Enter the name of the database you want to reindex

-- Capture the output of DBCC SHOWCONTIG WITH TABLERESULTS;
-- the columns below match the TABLERESULTS result set
CREATE TABLE #db_defrag
( ObjectName VARCHAR(255), ObjectId INT, IndexName VARCHAR(255), IndexId INT,
  [Level] INT, Pages INT, [Rows] INT, MinimumRecordSize INT, MaximumRecordSize INT,
  AverageRecordSize FLOAT, ForwardedRecords INT, Extents INT, ExtentSwitches INT,
  AverageFreeBytes FLOAT, AveragePageDensity FLOAT, ScanDensity FLOAT,
  BestCount INT, ActualCount INT, LogicalFragmentation FLOAT, ExtentFragmentation FLOAT )

DECLARE @sql AS VARCHAR(100)
SET @sql ='DBCC SHOWCONTIG WITH TABLERESULTS'
INSERT INTO #db_defrag
EXEC (@sql)

-- Loop over base tables that are fragmented 5% or more and have over 1000 rows
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables tbl
INNER JOIN #db_defrag dfrg ON tbl.table_name = dfrg.ObjectName
WHERE table_type = 'base table' AND dfrg.LogicalFragmentation >= 5 AND [Rows] > 1000
ORDER BY LogicalFragmentation DESC

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, ' ', 90) -- rebuild all indexes at fill factor 90
    SET @sql = 'UPDATE STATISTICS ' + @TableName
    EXEC (@sql)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
DROP TABLE #db_defrag
Tuesday, July 21, 2009
It's been a while since I've updated the blog, but I did want to mention that I am working on a neat article, which focuses on my passion for performance. Stay tuned for the latest details...
In the meantime, have you ever been stuck with someone else's database? Or how about a vendor database where someone needs you to extend a task? Well, finding the stored procedures is relatively simple: just go through Profiler, run the process, and you can monitor which stored procedures are called; sometimes this also gives you some feedback on which tables are being accessed. Other times you may need to report on some of this information, so you may need to search the database for where they decided to store it. I extended my own version of Narayana's SearchAllTables procedure; in this new version you'll notice that you also get to search text (and ntext) fields, with only a single while loop.
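As a quick first pass before reaching for a full search procedure, you can also search the routine definitions themselves for a keyword (the search term here is just a placeholder):

```sql
-- Find stored procedures and functions whose definition
-- mentions a given string
SELECT routine_name, routine_type
FROM information_schema.routines
WHERE routine_definition LIKE '%myColumnName%'   -- placeholder search term
ORDER BY routine_name;
```

One caveat: routine_definition only holds the first 4000 characters of the routine text, which is one reason a dedicated search procedure is still worth having.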
Check out the latest script and article here:
Searching Unidentified Tables and Columns for Specific Content
Tuesday, June 23, 2009
Email. You use it, your colleagues use it, even your systems use it. It's a part of everyday business. If you are a Sql developer, you have probably figured out how to implement email already; often I've seen DBAs and developers implement it from outside of SQL Server in rather ingenious ways...
This article outlines how to set up Sql Server mail in Sql Server 2000 and 2005 (2008 is the same as Sql Server 2005). By bringing mail inside of your server, you can send reports, alerts, and other needed information based on the triggers and alerts that matter most to you. Execution calls to xp_cmdshell with an open-source program (Blat) for Sql Server 2000, and sp_send_dbmail calls for Sql Server 2005, help leverage reporting from Sql Server.
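On the 2005/2008 side, a minimal sp_send_dbmail call looks like the sketch below (the profile name, recipient, and query are placeholders, and Database Mail must already be enabled and configured on the server):

```sql
-- Send a simple query result by email via Database Mail
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',      -- placeholder mail profile
    @recipients   = 'dba@example.com',       -- placeholder recipient
    @subject      = 'Nightly row count report',
    @query        = 'SELECT COUNT(*) FROM MyDatabase.dbo.myTableName',
    @attach_query_result_as_file = 0;        -- inline the result in the body
```

Wrap a call like this in a job step or an alert response and the reports come to you, instead of you going to them.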
Sending E-mail from SQL Server 200X