Step 1:
Clear the suspect status of the database using sp_resetstatus DatabaseName. This clears the suspect flag so the database can be brought back online.
Step 2:
Change the database status to Emergency using the following command. Emergency mode allows you to access the database as normal, but with no consistency guarantees. It also lets us export the table data so that we can minimize the damage.
ALTER DATABASE DatabaseName SET EMERGENCY;
Step 3:
Restrict the database to a single user by changing the access mode as shown below:
ALTER DATABASE DatabaseName SET SINGLE_USER;
Step 4:
Run the CHECKDB command with the REPAIR_ALLOW_DATA_LOSS option. Treat this as a last resort: the option behaves exactly as its name suggests, and we cannot be sure in advance which data it will remove.
DBCC CHECKDB (DatabaseName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
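Once the repair finishes, the database is still in single-user mode. A minimal follow-up sketch, assuming CHECKDB completed without unrepairable errors, to return it to normal use:
ALTER DATABASE DatabaseName SET MULTI_USER;
ALTER DATABASE DatabaseName SET ONLINE; -- no-op if the database is already online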
There are some simple best practices that protect us from such failures. Below are some of them:
* Back up your data frequently (daily or every other day)
* Have multiple backups. Move the backups to external drives or tapes frequently
* Validate that your backups are good by doing trial restores to an alternate server (see the sketch after this list)
* Run CHECKDB regularly
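As a sketch of that backup validation step (file paths and the logical file names MyDatabase/MyDatabase_log are placeholders; check yours with RESTORE FILELISTONLY):
-- Verify the backup media is readable (this does not check logical consistency).
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\MyDatabase.bak';
-- Trial restore under a different name on an alternate server.
RESTORE DATABASE MyDatabase_Verify
FROM DISK = 'D:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'D:\Data\MyDatabase_Verify.mdf',
MOVE 'MyDatabase_log' TO 'D:\Data\MyDatabase_Verify_log.ldf',
RECOVERY;
-- Finish with a consistency check on the restored copy.
DBCC CHECKDB (MyDatabase_Verify) WITH NO_INFOMSGS;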
Tuesday, December 20, 2011
Automatically Reindex All Tables in a SQL Server Database
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
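DBCC DBREINDEX is deprecated in newer versions of SQL Server. A sketch of the equivalent loop using ALTER INDEX, keeping the same 90 fill factor (schema-qualified names avoid failures on non-dbo tables):
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT QUOTENAME(table_schema) + '.' + QUOTENAME(table_name)
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC ('ALTER INDEX ALL ON ' + @TableName + ' REBUILD WITH (FILLFACTOR = 90)')
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor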
Tuesday, December 6, 2011
Calculate Average Resource Usage Numbers for Stored Procedures That Have Metrics on My System:
SELECT CASE WHEN database_id = 32767 then 'Resource' ELSE DB_NAME(database_id)END AS DBName
,OBJECT_SCHEMA_NAME(object_id,database_id) AS [SCHEMA_NAME]
,OBJECT_NAME(object_id,database_id)AS [OBJECT_NAME]
,cached_time
,last_execution_time
,execution_count
,total_worker_time / execution_count AS AVG_CPU
,total_elapsed_time / execution_count AS AVG_ELAPSED
,total_logical_reads / execution_count AS AVG_LOGICAL_READS
,total_logical_writes / execution_count AS AVG_LOGICAL_WRITES
,total_physical_reads / execution_count AS AVG_PHYSICAL_READS
FROM sys.dm_exec_procedure_stats
ORDER BY AVG_LOGICAL_READS DESC
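To focus on just one database, a small variation (a sketch, assuming you are connected to the database you care about):
SELECT OBJECT_NAME(object_id, database_id) AS [OBJECT_NAME]
,execution_count
,total_worker_time / execution_count AS AVG_CPU
,total_logical_reads / execution_count AS AVG_LOGICAL_READS
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID() -- current database only
ORDER BY AVG_LOGICAL_READS DESC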
Find Top 10 Most Expensive Queries In SQL Server
SELECT TOP 10 SUBSTRING(qt.TEXT,(qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC
Friday, November 18, 2011
How to Delete Data Collector Jobs
UPDATE [msdb].[dbo].[syscollector_collection_sets_internal]
SET [collection_job_id] = NULL
,[upload_job_id] = NULL
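Clearing the job references only detaches the jobs from the collection sets; the SQL Agent jobs themselves still exist. A hedged follow-up sketch (the LIKE pattern and job name below are examples; verify against your own msdb):
-- List the leftover collector jobs first.
SELECT job_id, name FROM msdb.dbo.sysjobs WHERE name LIKE 'collection_set%'
-- Then drop each one by name (example name; use the names returned above).
EXEC msdb.dbo.sp_delete_job @job_name = N'collection_set_1_noncached_collect_and_upload'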
Thursday, November 3, 2011
How to kill open transactions in SQL Server
USE master
GO
SET NOCOUNT ON
DECLARE @DBName varchar(50)
DECLARE @spidstr varchar(8000)
DECLARE @ConnKilled smallint
SET @ConnKilled=0
SET @spidstr = ''
SET @DBName = 'Testing'
IF db_id(@DBName) < 4
BEGIN
PRINT 'Connections to system databases cannot be killed'
RETURN
END
SELECT @spidstr = coalesce(@spidstr,',' )+'kill '+convert(varchar, spid)+ '; '
FROM master..sysprocesses WHERE dbid = db_id(@DBName)
IF LEN(@spidstr) > 0
BEGIN
EXEC(@spidstr)
SELECT @ConnKilled = COUNT(1)
FROM master..sysprocesses WHERE dbid=db_id(@DBName)
END
GO
Friday, October 14, 2011
Query to find Avg Fragmentation In percent
DECLARE @dbname VARCHAR(64)
SET @dbname = 'SunlightTest'
SELECT A.object_id
, A.index_id
, [name]
, Avg_Fragmentation_In_Percent
FROM sys.dm_db_index_physical_stats (db_id(@dbname),NULL,NULL, NULL, NULL) AS A
JOIN sys.indexes B WITH(NOLOCK) ON A.Object_id = B.Object_id
AND A.Index_id = B.Index_id
WHERE Avg_Fragmentation_In_Percent >= 60
AND B.[name] IS NOT NULL
When to Rebuild Your SQL Server Index!
SELECT 'ALTER INDEX [' + ix.name + '] ON [' + s.name + '].[' + t.name + '] ' +
CASE WHEN ps.avg_fragmentation_in_percent > 40 THEN 'REBUILD' ELSE 'REORGANIZE' END +
CASE WHEN pc.partition_count > 1 THEN ' PARTITION = ' + cast(ps.partition_number as nvarchar(max)) ELSE '' END
FROM sys.indexes AS ix INNER JOIN sys.tables t
ON t.object_id = ix.object_id
INNER JOIN sys.schemas s
ON t.schema_id = s.schema_id
INNER JOIN (SELECT object_id, index_id, avg_fragmentation_in_percent, partition_number
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL)) ps
ON t.object_id = ps.object_id AND ix.index_id = ps.index_id
INNER JOIN (SELECT object_id, index_id, COUNT(DISTINCT partition_number) AS partition_count
FROM sys.partitions
GROUP BY object_id, index_id) pc
ON t.object_id = pc.object_id AND ix.index_id = pc.index_id
WHERE ps.avg_fragmentation_in_percent > 10 AND
ix.name IS NOT NULL
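The query above only generates the ALTER INDEX text; nothing is executed. One way to run the output (a sketch: the inner SELECT is a simplified stand-in, aliased AS cmd, for the full fragmentation-aware generator above):
DECLARE @cmd nvarchar(max)
DECLARE cmd_cursor CURSOR FAST_FORWARD FOR
SELECT 'ALTER INDEX [' + ix.name + '] ON [' + s.name + '].[' + t.name + '] REORGANIZE' AS cmd
FROM sys.indexes ix
INNER JOIN sys.tables t ON t.object_id = ix.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE ix.name IS NOT NULL -- substitute the full query above here
OPEN cmd_cursor
FETCH NEXT FROM cmd_cursor INTO @cmd
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC (@cmd)
FETCH NEXT FROM cmd_cursor INTO @cmd
END
CLOSE cmd_cursor
DEALLOCATE cmd_cursor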
Saturday, September 3, 2011
Database Restore Problem --- One or More Users Are Using This Particular Database
DECLARE @DatabaseName nvarchar(50)
DECLARE @SPId int
DECLARE @SQL nvarchar(100)
--SET @DatabaseName = N'AdventureWorks2008'
SET @DatabaseName = 'AWALTesting'
DECLARE my_cursor CURSOR FAST_FORWARD FOR
SELECT SPId FROM MASTER..SysProcesses
WHERE DBId = DB_ID(@DatabaseName) AND SPId <> @@SPId
OPEN my_cursor
FETCH NEXT FROM my_cursor INTO @SPId
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQL = 'KILL ' + CAST(@SPId as nvarchar(10))
print @SQL
EXEC sp_executeSQL @SQL
--KILL @SPId -- Causing Incorrect syntax near '@spid'.
FETCH NEXT FROM my_cursor INTO @SPId
END
CLOSE my_cursor
DEALLOCATE my_cursor
Friday, August 19, 2011
KILL SPID
To quickly disconnect all non-admins from a database, I use this:
ALTER DATABASE Database1 SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE Database2 SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
One of the development teams recently requested that their production databases be copied to a development server on a weekly basis. In the development environment, developers have db_owner access to all of their databases.
In order to restore a database, no one can be connected to it if it already exists. Since my ALTER DATABASE statements only disconnect non-admins, I also need to run this to kick out the developers (who have db_owner):
DECLARE @spid varchar(10)
SELECT @spid = spid
FROM master.sys.sysprocesses
WHERE dbid IN (DB_ID('Database1'), DB_ID('Database2'))
WHILE @@ROWCOUNT <> 0
BEGIN
EXEC('KILL ' + @spid)
SELECT @spid = spid
FROM master.sys.sysprocesses
WHERE
dbid IN (DB_ID('Database1'), DB_ID('Database2')) AND
spid > @spid
END
Most SQL Server DBAs have a similar script to this, but I thought I'd post mine in case you don't have one already.
NOTE: I can't run the following commands as we are using SQL Litespeed. Their extended stored procedure to restore databases, master.dbo.xp_restore_database, uses multiple connections to the database.
ALTER DATABASE Database1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE Database2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Thursday, August 18, 2011
SQL SERVER AUDIT
Description: Shows what I look for when I take over existing servers.
Don't just hit F5 - read through each step to learn what I'm
checking for, and how to fix things when they indicate problems.
/*
Backups! First and foremost, before we touch anything, check backups.
Check when each database has been backed up. If databases aren't being
backed up, check the maintenance plans or scripts. If you don't have
scripts, check http://ola.hallengren.com.
*/
SELECT d.name, MAX(b.backup_finish_date) AS last_backup_finish_date
FROM master.sys.databases d
LEFT OUTER JOIN msdb.dbo.backupset b ON d.name = b.database_name AND b.type = 'D'
WHERE d.database_id NOT IN (2, 3) -- Bonus points if you know what that means
GROUP BY d.name
ORDER BY 2 DESC
/*
Where are the backups going? Ideally, we want them on a different server.
If the backups are being taken to this same server, and the server's RAID
card or motherboard goes bad, we're in trouble. We sort by media_set_id
descending because it's the primary key on the table, so it'll fly even
when MSDB is on really slow drives.
For more information about where your backups should go, check out:
http://www.brentozar.com/sql/backup-best-practices/
*/
SELECT TOP 100 physical_device_name, * FROM msdb.dbo.backupmediafamily ORDER BY media_set_id DESC
/*
Transaction log backups - do we have any databases in full recovery mode
that haven't had t-log backups? If so, we should think about putting them in
simple recovery mode or doing t-log backups.
*/
SELECT d.name, d.recovery_model, d.recovery_model_desc
FROM master.sys.databases d
LEFT OUTER JOIN msdb.dbo.backupset b ON d.name = b.database_name AND b.type = 'L'
WHERE d.recovery_model IN (1, 2) AND b.type IS NULL AND d.database_id NOT IN (2, 3)
/*
If there are any databases in full recovery mode with no t-log backups,
show the file size of the corresponding LDF.
*/
SELECT
d.name AS [db_name]
, f.name AS [ldf_name]
, f.physical_name
, f.size * 8 / 1024.00 AS [size_in_mb]
, d.recovery_model_desc
FROM sys.master_files f
JOIN sys.databases d ON f.database_id = d.database_id
LEFT OUTER JOIN msdb.dbo.backupset b ON d.name = b.database_name AND b.type = 'L'
WHERE d.recovery_model IN (1, 2) AND b.type IS NULL AND d.database_id NOT IN (2, 3) AND f.type = 1 -- only show log files
ORDER BY f.size DESC
/*
Is the MSDB backup history cleaned up? If you have data older than a couple
of months, this is a problem. You need to set up backup cleanup jobs.
For more information on why this can be a problem, see:
http://www.brentozar.com/archive/2009/09/checking-your-msdb-cleanup-jobs/
*/
SELECT TOP 1 backup_start_date, *
FROM msdb.dbo.backupset
ORDER BY backup_set_id ASC
/*
When was the last time DBCC finished successfully?
DBCC CHECKDB checks databases for corruption. You won't know a database is corrupt until CHECKDB (or an unlucky user) finds it.
Script is from http://sqlserverpedia.com/wiki/Last_clean_DBCC_CHECKDB_date
To get sample corrupt databases - http://sqlskills.com/pastConferences.asp
*/
CREATE TABLE #temp (
ParentObject VARCHAR(255)
, [Object] VARCHAR(255)
, Field VARCHAR(255)
, [Value] VARCHAR(255)
)
CREATE TABLE #DBCCResults (
ServerName VARCHAR(255)
, DBName VARCHAR(255)
, LastCleanDBCCDate DATETIME
)
EXEC master.dbo.sp_MSforeachdb
@command1 = 'USE [?] INSERT INTO #temp EXECUTE (''DBCC DBINFO WITH TABLERESULTS'')'
, @command2 = 'INSERT INTO #DBCCResults SELECT @@SERVERNAME, ''?'', Value FROM #temp WHERE Field = ''dbi_dbccLastKnownGood'''
, @command3 = 'TRUNCATE TABLE #temp'
--Delete duplicates due to a bug in SQL Server 2008
;WITH DBCC_CTE AS
(
SELECT ROW_NUMBER() OVER (PARTITION BY ServerName, DBName, LastCleanDBCCDate ORDER BY LastCleanDBCCDate) RowID
FROM #DBCCResults
)
DELETE FROM DBCC_CTE WHERE RowID > 1;
SELECT
ServerName
, DBName
, CASE LastCleanDBCCDate
WHEN '1900-01-01 00:00:00.000' THEN 'Never ran DBCC CHECKDB'
ELSE CAST(LastCleanDBCCDate AS VARCHAR) END AS LastCleanDBCCDate
FROM #DBCCResults
ORDER BY 3
DROP TABLE #temp, #DBCCResults;
/*
If any databases have never experienced the magic of DBCC, consider doing that
as soon as practical. DBCC CHECKDB is a CPU & IO intensive operation, so
consider doing it after business hours. For more information:
http://www.sqlskills.com/blogs/paul/post/CHECKDB-From-Every-Angle-Consistency-Checking-Options-for-a-VLDB.aspx
For the demo, we'll run it on a small database right away. This won't work on
your machine unless you have a database named TimeTracking.
*/
--DBCC CHECKDB(TimeTracking)
/*
Maybe there are DBCC jobs that aren't running.
Speaking of which, are jobs failing, and who owns them?
*/
SET NOCOUNT ON
DECLARE @MaxLength INT
SET @MaxLength = 50
DECLARE @xp_results TABLE (
job_id uniqueidentifier NOT NULL,
last_run_date nvarchar (20) NOT NULL,
last_run_time nvarchar (20) NOT NULL,
next_run_date nvarchar (20) NOT NULL,
next_run_time nvarchar (20) NOT NULL,
next_run_schedule_id INT NOT NULL,
requested_to_run INT NOT NULL,
request_source INT NOT NULL,
request_source_id sysname
COLLATE database_default NULL,
running INT NOT NULL,
current_step INT NOT NULL,
current_retry_attempt INT NOT NULL,
job_state INT NOT NULL
)
DECLARE @job_owner sysname
DECLARE @is_sysadmin INT
SET @is_sysadmin = isnull (is_srvrolemember ('sysadmin'), 0)
SET @job_owner = suser_sname ()
INSERT INTO @xp_results
EXECUTE sys.xp_sqlagent_enum_jobs @is_sysadmin, @job_owner
UPDATE @xp_results
SET last_run_time = right ('000000' + last_run_time, 6),
next_run_time = right ('000000' + next_run_time, 6)
SELECT j.name AS JobName,
j.enabled AS Enabled,
sl.name AS OwnerName,
CASE x.running
WHEN 1
THEN
'Running'
ELSE
CASE h.run_status
WHEN 2 THEN 'Inactive'
WHEN 4 THEN 'Inactive'
ELSE 'Completed'
END
END
AS CurrentStatus,
coalesce (x.current_step, 0) AS CurrentStepNbr,
CASE
WHEN x.last_run_date > 0
THEN
convert (datetime,
substring (x.last_run_date, 1, 4)
+ '-'
+ substring (x.last_run_date, 5, 2)
+ '-'
+ substring (x.last_run_date, 7, 2)
+ ' '
+ substring (x.last_run_time, 1, 2)
+ ':'
+ substring (x.last_run_time, 3, 2)
+ ':'
+ substring (x.last_run_time, 5, 2)
+ '.000',
121
)
ELSE
NULL
END
AS LastRunTime,
CASE h.run_status
WHEN 0 THEN 'Fail'
WHEN 1 THEN 'Success'
WHEN 2 THEN 'Retry'
WHEN 3 THEN 'Cancel'
WHEN 4 THEN 'In progress'
END
AS LastRunOutcome,
CASE
WHEN h.run_duration > 0
THEN
(h.run_duration / 1000000) * (3600 * 24)
+ (h.run_duration / 10000 % 100) * 3600
+ (h.run_duration / 100 % 100) * 60
+ (h.run_duration % 100)
ELSE
NULL
END
AS LastRunDuration
FROM @xp_results x
LEFT JOIN
msdb.dbo.sysjobs j
ON x.job_id = j.job_id
LEFT OUTER JOIN
msdb.dbo.syscategories c
ON j.category_id = c.category_id
LEFT OUTER JOIN
msdb.dbo.sysjobhistory h
ON x.job_id = h.job_id
AND x.last_run_date = h.run_date
AND x.last_run_time = h.run_time
AND h.step_id = 0
LEFT OUTER JOIN sys.syslogins sl ON j.owner_sid = sl.sid
/*
Right up there with data integrity, security's really important.
Who else has sysadmin or securityadmin rights on this instance?
I care about securityadmin users because they can add themselves to the SA
role at any time to do their dirty work, then remove themselves back out.
Don't think of them as other sysadmins.
Think of them as users who can get you fired.
*/
SELECT l.name, l.denylogin, l.isntname, l.isntgroup, l.isntuser
FROM master.sys.syslogins l
WHERE l.sysadmin = 1 OR l.securityadmin = 1
ORDER BY l.isntgroup, l.isntname, l.isntuser
/*
Now would be an excellent time to open up a Word doc and start documenting
your findings, which helps you prove your worth as a DBA. And for every
SQL authentication user in that list, try logging in with a blank password.
In your Blitz document, if SA includes Builtin\Administrators, list the
server's local administrators.
*/
/*
Let's review some server-level security & configuration settings.
*/
EXEC dbo.sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC dbo.sp_configure
/*
Look for anything that's been changed from the default value.
What? You don't know the defaults by heart? Well, me neither.
In SSMS, go into the Object Explorer, then right-click on the server name.
Click Reports, Standard Reports, Server Dashboard, and then expand the
section Non Default Configuration Options. It'll show everything that
deviates from the defaults.
Below, I set advanced options off, but that's just for demo purposes.
You can leave that on if you like.
EXEC dbo.sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
*/
/*
Check for startup stored procedures. These live in the master database, and
they run automatically when SQL Server starts up. They're sometimes left
behind by ambitious auditors or evil employees.
For more information about startup stored procs, read:
http://www.mssqltips.com/tip.asp?tip=1574
*/
USE master
GO
SELECT *
FROM master.INFORMATION_SCHEMA.ROUTINES
WHERE OBJECTPROPERTY(OBJECT_ID(ROUTINE_NAME),'ExecIsStartup') = 1
/*
SQL Server 2008 & above only - is auditing enabled? If so, it might be
writing to an audit path that will fill up, or the server might be set to
stop if the file path isn't available. Let's see if there are any audits.
For a video explaining the SQL Server auditing options, check out:
http://sqlserverpedia.com/blog/sql-server-2008/guest-podcast-auditing-your-database-server/
*/
SELECT * FROM sys.dm_server_audit_status
/*
Server settings can be made outside of sp_configure too. The easiest way
to check out the service settings is to go into Start, Programs,
Microsoft SQL Server, Configuration Tools, SQL Server Configuration Manager.
Go there now, and drill into SQL Server Services, then right-click on each
service and hit Properties. The advanced properties for the SQL Server
service itself can hide some startup parameters.
Next, check Instant File Initialization. Take a note of the service account
SQL Server is using, and then run secpol.msc. Go into Local Policy, User
Rights Assignment, Perform Volume Maintenance Tasks. Double-click on that
and add the SQL Server service account. This lets SQL Server grow data
files instantly. For more info:
http://www.sqlskills.com/blogs/kimberly/post/Instant-Initialization-What-Why-and-How.aspx
There's a few more server-level things I like to check, but I use the SSMS
GUI. Go into Server Objects, and check out what's under Endpoints, Linked
Servers, Resource Governor, and Triggers. If any of these objects exist, you
want to research to find out what they're being used for.
*/
SELECT * FROM sys.endpoints WHERE type <> 2
SELECT * FROM sys.resource_governor_configuration
SELECT * FROM sys.server_triggers
SELECT * FROM sys.servers
/*
Then go into Management, Resource Governor - that's another landmine.
Now as long as we're in the SSMS GUI, let's set up Database Mail. It's
possible to script that out, but I like the GUI for that since I often use
wildly different parameters for different clients or departments.
Once Database Mail is set up, the below script will test it.
*/
EXEC msdb.dbo.sp_send_dbmail
@recipients = 'email@domain.com',
@body = @@SERVERNAME,
@subject = 'Testing SQL Server Database Mail - see body for server name';
GO
/*
After enabling Database Mail, the below script sets up a default set of
notifications for problems. In this section, replace these strings:
- 'The Database Administrator' - your name goes here
- 'YourEmailAddress@Hotmail.com' - your email
- '8005551212@cingularme.com' - your phone/pager's email address
*/
USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'The Database Administrator',
@enabled=1,
@weekday_pager_start_time=0,
@weekday_pager_end_time=235959,
@saturday_pager_start_time=0,
@saturday_pager_end_time=235959,
@sunday_pager_start_time=0,
@sunday_pager_end_time=235959,
@pager_days=127,
@email_address=N'YourEmailAddress@Hotmail.com',
@pager_address=N'8005551212@cingularme.com',
@category_name=N'[Uncategorized]'
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 016',
@message_id=0,
@severity=16,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
@message_id=0,
@severity=17,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 017', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 018',
@message_id=0,
@severity=18,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 018', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 019',
@message_id=0,
@severity=19,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 019', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 020',
@message_id=0,
@severity=20,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 020', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 021',
@message_id=0,
@severity=21,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 021', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 022',
@message_id=0,
@severity=22,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 022', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 023',
@message_id=0,
@severity=23,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 023', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 024',
@message_id=0,
@severity=24,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 024', @operator_name=N'The Database Administrator', @notification_method = 7
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 025',
@message_id=0,
@severity=25,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 025', @operator_name=N'The Database Administrator', @notification_method = 7
GO
/* Specific Alert for Error 825
http://www.sqlskills.com/BLOGS/PAUL/post/A-little-known-sign-of-impending-doom-error-825.aspx
*/
EXEC msdb.dbo.sp_add_alert @name=N'Error 825',
@message_id=825,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error 825', @operator_name=N'The Database Administrator', @notification_method = 7
GO
/*
A few more checks at the server level. Go into the Windows Event Logs,
and review any errors in the System and Application events. This is where
hardware-level errors can show up too, like failed hard drives.
*/
/*
I don't like any surprises in the system databases. Let's check the list of
objects in master and model. I don't want to see any rows returned from
these four queries - if there are objects in the system databases, I want to
ask why, and get them removed if possible.
*/
SELECT * FROM master.sys.tables WHERE name NOT IN ('spt_fallback_db', 'spt_fallback_dev', 'spt_fallback_usg', 'spt_monitor', 'spt_values', 'MSreplication_options')
SELECT * FROM master.sys.procedures WHERE name NOT IN ('sp_MSrepl_startup', 'sp_MScleanupmergepublisher')
SELECT * FROM model.sys.tables
SELECT * FROM model.sys.procedures
/*
Alright, we're done with the server level! Let's check databases. We could
right-click on each database and click Properties, but it can be easier to
scan across the results of sys.databases. I look for any variations - are
there some databases that have different settings than others?
*/
SELECT * FROM sys.databases
/*
SQL 2008 & above only - are any databases encrypted? Transparent Data
Encryption is all too transparent. You won't notice that databases are
encrypted if you just glance at SSMS.
If this query returns any results, you need to start asking if the certificate
has been backed up and where the password is. If you don't have both the
cert and the password to unlock it, then the database can't be restored.
*/
SELECT d.name, k.*
FROM sys.dm_database_encryption_keys k
INNER JOIN sys.databases d ON k.database_id = d.database_id
ORDER BY d.name
/*
Are any of the databases using features that are Enterprise Edition only?
If a database is using something like partitioning, compression, or
Transparent Data Encryption, then I won't be able to restore it onto a
Standard Edition server.
*/
EXEC dbo.sp_MSforeachdb 'SELECT ''[?]'' AS DatabaseName, * FROM [?].sys.dm_db_persisted_sku_features'
/*
Data files - where are they? Are any on the C drive? We want to avoid that
because if they grow, they can fill up the OS drive, and that can lead to a
very nasty crash. Let's look at where the databases live. For tips on how to move databases off the C drive:
http://support.microsoft.com/kb/224071
In the results, also check the number of data and log files for all databases.
*/
select db_name(database_id), * from sys.master_files
/*
Check for triggers in any database. I can't change these right away, but I
want to know if they're present, because it'll help me troubleshoot faster.
If I didn't know the database had triggers, I probably wouldn't think to look.
*/
EXEC dbo.sp_MSforeachdb 'SELECT ''[?]'' AS database_name, o.name AS table_name, t.* FROM [?].sys.triggers t INNER JOIN [?].sys.objects o ON t.parent_id = o.object_id'
/*
We've hit the major pain points on reliability and security. Now let's do a
little poking around in performance. Let's query the server's wait stats,
which tell us what things the server has been waiting on since the last
restart. For more about wait stats, check out:
http://sqlserverpedia.com/wiki/Wait_Types
*/
SELECT *, (wait_time_ms - signal_wait_time_ms) AS real_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY (wait_time_ms - signal_wait_time_ms) DESC
/*
Do we have any duplicate indexes? This query from Adam Machanic checks for
indexes on exactly the same fields. You have to run it in each database
you want to check, and it's extremely fast.
*/
select
t.name as tableName,
p.*
from sys.tables as t
inner join sys.indexes as i1 on
i1.object_id = t.object_id
cross apply
(
select top 1
*
from sys.indexes as i
where
i.object_id = i1.object_id
and i.index_id > i1.index_id
and i.type_desc <> 'xml'
order by
i.index_id
) as i2
cross apply
(
select
min(a.index_id) as ind1,
min(b.index_id) as ind2
from
(
select ic.*
from sys.index_columns ic
where
ic.object_id = i1.object_id
and ic.index_id = i1.index_id
and ic.is_included_column = 0
) as a
full outer join
(
select *
from sys.index_columns as ic
where
ic.object_id = i2.object_id
and ic.index_id = i2.index_id
and ic.is_included_column = 0
) as b on
a.index_column_id = b.index_column_id
and a.column_id = b.column_id
and a.key_ordinal = b.key_ordinal
having
count(case when a.index_id is null then 1 end) = 0
and count(case when b.index_id is null then 1 end) = 0
and count(a.index_id) = count(b.index_id)
) as p
where
i1.type_desc <> 'xml'
/*
Index fragmentation is the leading cause of DBA heartburn. It's a lot like
file fragmentation, but it happens inside of the database. The below script
shows fragmented objects that might be a concern.
This is an IO-intensive operation, so start by running it in a small database.
To run it for all databases, go to the line with this:
INNER JOIN sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, N'Limited') ps
and replace it with:
INNER JOIN sys.dm_db_index_physical_stats (NULL, NULL, NULL , NULL, N'Limited') ps
But be careful - that can run VERY long on large database systems, like hours.
For more about index fragmentation, check out this page with a video:
http://sqlserverpedia.com/wiki/Index_Maintenance
*/
SELECT
db.name AS databaseName
, SCHEMA_NAME(obj.schema_id) AS schemaName
, OBJECT_NAME(ps.OBJECT_ID) AS tableName
, ps.OBJECT_ID AS objectID
, ps.index_id AS indexID
, ps.partition_number AS partitionNumber
, ps.avg_fragmentation_in_percent AS fragmentation
, ps.page_count
FROM sys.databases db
INNER JOIN sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, N'Limited') ps
ON db.database_id = ps.database_id
INNER JOIN sys.objects obj ON ps.object_id = obj.object_id
WHERE ps.index_id > 0
AND ps.page_count > 100
AND ps.avg_fragmentation_in_percent > 30
ORDER BY databaseName, schemaName, tableName
OPTION (MaxDop 1);
/*
Finally, let's go back to the users and ask questions:
- Can the business operate if this server is down?
- How many employees have to stop working if this server goes down?
- Who should I call when the server goes down?
- Is this server covered by any security or compliance regulations?
We can use their answers to build a good backup & recovery solution.
Is there anything you think should be added here? You can edit this script!
Go to the script's home page:
http://sqlserverpedia.com/wiki/Audit_a_SQL_Server_Configuration
Log into SQLServerPedia (it's free to create an account). After you've
logged in, you'll see an Edit button at the top of the page. Click that,
and you can edit this script right now to add your own contributions!
*/
Wednesday, August 3, 2011
PROCEDURE TO CHANGE PASSWORD
/****** Object: StoredProcedure [TestingSP].[prc_User_UPDATE_Password_ByID] Script Date: 08/03/2011 17:39:50 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/*Created By : Sushil Rout
* Created Date : 03 August 2011
* Purpose : Change Password
*/
ALTER PROCEDURE [TestingSP].[prc_User_UPDATE_Password_ByID]
@intClientID INT,
@inbUserID BIGINT,
@chvOldPassword AS VARCHAR (128),
@chvNewPassword AS VARCHAR (128),
@bitUserExists BIT OUTPUT
AS
DECLARE @chvExistingPassword varchar(128)
BEGIN
SET NOCOUNT ON;
SET @bitUserExists = 0;
IF @inbUserID IS NOT NULL
SELECT @chvExistingPassword = eup.EupPwd FROM dbo.UserPwd eup
WHERE eup.EuID IN (SELECT eu.EuID FROM dbo.Users eu WHERE eu.ClID = @intClientID AND eup.EuID = @inbUserID);
IF @chvExistingPassword = @chvOldPassword
BEGIN
UPDATE dbo.UserPwd
SET EupPwd = @chvNewPassword
WHERE EuID = @inbUserID
SET @bitUserExists = 1;
RETURN 1;
END
ELSE
BEGIN
SET @bitUserExists = 0; -- a BIT cannot hold -1 (it would silently convert to 1); 0 signals a password mismatch
END
END
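A quick usage sketch (the client ID, user ID, and passwords are placeholders):
DECLARE @bitExists BIT
EXEC [TestingSP].[prc_User_UPDATE_Password_ByID]
@intClientID = 1, -- placeholder client ID
@inbUserID = 1001, -- placeholder user ID
@chvOldPassword = 'OldP@ssw0rd',
@chvNewPassword = 'NewP@ssw0rd',
@bitUserExists = @bitExists OUTPUT
SELECT @bitExists AS UserExists -- 1 = password changed, 0 = old password did not match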
Saturday, July 30, 2011
Change Default Schema for all users
SELECT
'Alter user [' + u.name + '] with default_schema = dbo' as command,
u.name AS [Name],
'Server[@Name=' + quotename(CAST(serverproperty(N'Servername') AS sysname),'''') + ']' + '/Database[@Name=' + quotename(db_name(),'''') + ']' + '/User[@Name=' + quotename(u.name,'''') + ']' AS [Urn],
u.principal_id AS [ID],
CAST(CASE dp.state WHEN N'G' THEN 1 WHEN 'W' THEN 1 ELSE 0 END AS bit) AS [HasDBAccess],
u.create_date AS [CreateDate]
FROM
sys.database_principals AS u
LEFT OUTER JOIN sys.database_permissions AS dp ON dp.grantee_principal_id = u.principal_id and dp.type = N'CO'
WHERE
(u.type in ('U', 'S', 'G', 'C', 'K'))
and name not in ('sys','dbo','guest','INFORMATION_SCHEMA')
ORDER BY
[Name] ASC
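Each row of the generated command column comes out shaped like the statement below (the user name is a placeholder); review the list and run the ones you want:
ALTER USER [SomeUser] WITH DEFAULT_SCHEMA = dbo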
Tuesday, July 12, 2011
SQL Server Recovery State
When you restore a backup, you can choose from three restore options:
1. With Recovery – the database is ready to use; users can connect and change data.
2. No Recovery – the database is not ready; more backups still have to be applied to it. Users cannot connect because the database is in Restoring status. (Exception: database snapshots are not considered here.)
3. Standby / Read Only – the database is ready to use but is in read-only mode; users can connect but cannot change data.
If you have more backups to apply, restore each of them with the No Recovery option; when you apply the last backup, restore it with either the Recovery or the Standby option.
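A minimal T-SQL sketch of the three options (database and file names are placeholders):
-- Apply intermediate backups WITH NORECOVERY, leaving the database in Restoring status.
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log1.trn' WITH NORECOVERY;
-- Final backup: bring the database online...
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log2.trn' WITH RECOVERY;
-- ...or leave it read-only and still able to accept further log backups:
-- RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log2.trn'
-- WITH STANDBY = 'D:\Backups\MyDb_undo.dat';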
Wednesday, May 18, 2011
Index Optimization Tips
Consider creating indexes on column(s) frequently used in the WHERE, ORDER BY, and GROUP BY clauses.
These columns are the best candidates for indexing. Analyze your queries carefully to avoid creating indexes that are not useful.
*****
Keep your indexes as narrow as possible.
Because each index takes up disk space, try to minimize the index key's size to avoid wasting space. This reduces the number of reads required to read the index and boosts overall index performance.
*****
Drop indexes that are not used.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use the Index Tuning Wizard to identify indexes that are not used by your queries.
*****
Try to create indexes on columns that have integer values rather than character values.
Because integer values are usually smaller than character values (the int data type is 4 bytes, bigint is 8 bytes), you can reduce the number of index pages used to store the index keys. This reduces the number of reads required to read the index and boosts overall index performance.
*****
Limit the number of indexes, if your application updates data very frequently.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should create new indexes only after analyzing how the data is used, the types and frequencies of the queries performed, and how your queries will use the new indexes. In many cases, the speed advantages of creating new indexes outweigh the disadvantages of the additional space used and slower row modification. However, avoid redundant indexes; create them only when necessary. For a read-only table, the number of indexes can be increased.
*****
Check that the index you are about to create does not already exist.
Keep in mind that when you create a primary key or unique key constraint, SQL Server automatically creates an index on the column(s) participating in the constraint. If you specify another index name, you can end up creating indexes on the same column(s) again and again.
*****
Create a clustered index rather than a nonclustered one to increase performance of queries that return a range of values and queries that contain GROUP BY or ORDER BY clauses and return sorted results.
Because every table can have only one clustered index, you should choose the column(s) for this index very carefully. Try to analyze all your queries, choose the most frequently used ones, and include in the clustered index only the column(s) that provide the most performance benefit from its creation.
*****
Create nonclustered indexes to increase performance of the queries that return few rows and where the index has good selectivity.
In comparison with the clustered index, of which there can be only one per table, each table can have as many as 249 nonclustered indexes. However, you should consider nonclustered index creation as carefully as clustered index creation, because each index takes up disk space and is a drag on data modification.
*****
Create the clustered index on column(s) that are not updated very frequently.
Because the leaf node of a nonclustered index contains the clustered index key if the table has a clustered index, every time a column used for the clustered index is modified, all of the nonclustered indexes must also be modified.
*****
Create the clustered index on a single column that is as narrow as possible.
Because nonclustered indexes contain the clustered index key within their leaf nodes and use the clustered index to locate data rows, creating the clustered index on a single narrow column reduces not only the size of the clustered index, but of all nonclustered indexes on the table as well.
*****
Avoid creating a clustered index based on an incrementing key.
For example, if a table has a surrogate integer primary key declared as IDENTITY and the clustered index was created on this column, then every time data is inserted into the table, the rows are added at the end of the table. When many rows are added, a "hot spot" can occur. A "hot spot" occurs when many queries try to read or write data in the same area at the same time, and it results in an I/O bottleneck.
Note: by default, SQL Server creates a clustered index for the primary key constraint. So, in this case, you should explicitly specify the NONCLUSTERED keyword to indicate that a nonclustered index is to be created for the primary key constraint.
*****
Create a clustered index for each table.
If you create a table without a clustered index, the data rows are not stored in any particular order. This structure is called a heap. Every time data is inserted into the table, the row is added at the end of the table. When many rows are added, a "hot spot" can occur. To avoid hot spots and improve concurrency, you should create a clustered index for each table.
*****
Don't create an index on column(s) whose values have low selectivity.
For example, don't create an index on columns with many duplicate values, such as a "Sex" column (which has only "Male" and "Female" values), because in this case the disadvantages of the additional space used and slower row modification outweigh the speed advantages of creating the new index.
*****
If you create a composite (multi-column) index, try to order the columns in the key to enhance selectivity, with the most selective columns leftmost in the key.
The order of the columns in a composite (multi-column) index is very important. This can increase the chance the index will be used.
*****
If you create a composite (multi-column) index, try to order the columns in the key so that the WHERE clauses of the frequently used queries match the column(s) that are leftmost in the index.
The order of the columns in a composite (multi-column) index is very important. The index will be used to evaluate a query only if the leftmost index key column is specified in the WHERE clause of the query. For example, if you create a composite index such as "Name, Age", then a query with a WHERE clause such as "WHERE Name = 'Alex'" will use the index, but a query with a WHERE clause such as "WHERE Age = 28" will not.
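For example, with a hypothetical Person table:
CREATE INDEX IX_Person_Name_Age ON dbo.Person (Name, Age)
SELECT * FROM dbo.Person WHERE Name = 'Alex' -- can seek on the index (leftmost column)
SELECT * FROM dbo.Person WHERE Age = 28 -- cannot seek on the index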
*****
If you need to join several tables very frequently, consider creating indexes on the joined columns.
This can significantly improve performance of the queries against the joined tables.
*****
Consider creating a surrogate integer primary key (identity, for example).
Every table must have a primary key (a unique identifier for a row within a database table). A surrogate primary key is a field that has a unique value but has no actual meaning to the record itself, so users should never see or change a surrogate primary key. Some developers use surrogate primary keys, others use data fields themselves as the primary key. If a primary key consists of many data fields and has a big size, consider creating a surrogate integer primary key. This can improve performance of your queries.
*****
Consider creating indexes on all columns referenced in the WHERE clauses of your most frequently used queries when those clauses contain the OR operator.
If the WHERE clause in a query contains an OR operator and any of the columns referenced in the OR clause is not indexed, a table or clustered index scan will be made. In this case, creating indexes on all such columns can significantly improve query performance.
*****
If your application will perform the same query over and over on the same table, consider creating a covering index including columns from this query.
A covering index is an index that includes all of the columns referenced in the query. Creating a covering index can improve performance because all the data for the query is contained within the index itself: only the index pages, not the data pages, are used to retrieve the data. Covering indexes can bring a lot of performance to a query because they save a huge amount of I/O operations.
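As a sketch with a hypothetical Orders table (INCLUDE columns require SQL Server 2005 or later):
-- Frequent query: SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerID = @id
CREATE INDEX IX_Orders_Customer_Covering
ON dbo.Orders (CustomerID) INCLUDE (OrderDate, Total)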
*****
Use the DBCC DBREINDEX statement to rebuild all the indexes on all the tables in your database periodically (for example, one time per week at Sunday) to reduce fragmentation.
Because fragmented data can cause SQL Server to perform unnecessary data reads, and query performance against a heavily fragmented table can be very bad, you should periodically rebuild all indexes to reduce fragmentation. Try to schedule the DBCC DBREINDEX statement during CPU idle time and slow production periods.
*****
Use the DBCC INDEXDEFRAG statement to defragment clustered and secondary indexes of the specified table or view.
The DBCC INDEXDEFRAG statement is a new SQL Server 2000 command, which was not supported in the previous versions. Unlike DBCC DBREINDEX, DBCC INDEXDEFRAG does not hold locks long term and thus will not block running queries or updates. So, try to use the DBCC INDEXDEFRAG command instead of DBCC DBREINDEX, whenever possible.
*****
Consider using the SORT_IN_TEMPDB option when you create an index and when tempdb is on a different set of disks than the user database.
The SORT_IN_TEMPDB option is a new SQL Server 2000 feature, which was not supported in the previous versions. When you create an index with the SORT_IN_TEMPDB option, SQL Server uses the tempdb database, instead of the current database, to sort data during the index creation. Using this option can reduce the time it takes to create an index, but increases the amount of disk space used to create an index.
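For example, with a hypothetical Orders table (on SQL Server 2000 the option is written WITH SORT_IN_TEMPDB; newer versions use the parenthesized form):
CREATE INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (SORT_IN_TEMPDB = ON)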
*****
Use the SQL Server Profiler Create Trace Wizard with "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes.
This trace will show which tables are being scanned by queries instead of using an index.
*****
These column(s) are best candidates for index creating. You should analyze your queries very attentively to avoid creating not useful indexes.
*****
Keep your indexes as narrow as possible.
Because each index take up disk space try to minimize the index key's size to avoid using superfluous disk space. This reduces the number of reads required to read the index and boost overall index performance.
*****
Drop indexes that are not used.
Because each index take up disk space and slow the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use Index Wizard to identify indexes that are not used in your queries.
*****
Try to create indexes on columns that have integer values rather than character values.
Because the integer values usually have less size then the characters values size (the size of the int data type is 4 bytes, the size of the bigint data type is 8 bytes), you can reduce the number of index pages which are used to store the index keys. This reduces the number of reads required to read the index and boost overall index performance.
*****
Limit the number of indexes, if your application updates data very frequently.
Because each index take up disk space and slow the adding, deleting, and updating of rows, you should create new indexes only after analyze the uses of the data, the types and frequencies of queries performed, and how your queries will use the new indexes. In many cases, the speed advantages of creating the new indexes outweigh the disadvantages of additional space used and slowly rows modification. However, avoid using redundant indexes, create them only when it is necessary. For read-only table, the number of indexes can be increased.
*****
Check that index you tried to create does not already exist.
Keep in mind that when you create primary key constraint or unique key constraints SQL Server automatically creates index on the column(s) participate in these constraints. If you specify another index name, you can create the indexes on the same column(s) again and again.
*****
Create clustered index instead of nonclustered to increase performance of the queries that return a range of values and for the queries that contain the GROUP BY or ORDER BY clauses and return the sort results.
Because every table can have only one clustered index, you should choose the column(s) for this index very carefully. Try to analyze all your queries, choose most frequently used queries and include into the clustered index only those column(s), which provide the most performance benefits from the clustered index creation.
*****
Create nonclustered indexes to increase performance of the queries that return few rows and where the index has good selectivity.
In comparison with a clustered index, which can be only one for each table, each table can have as many as 249 nonclustered indexes. However, you should consider nonclustered index creation as carefully as the clustered index, because each index take up disk space and drag on data modification.
*****
Create clustered index on column(s) that is not updated very frequently.
Because the leaf node of a nonclustered index contains a clustered index key if the table has clustered index, then every time that a column used for a clustered index is modified, all of the nonclustered indexes must also be modified.
*****
Create clustered index based on a single column that is as narrow as possibly.
Because nonclustered indexes contain a clustered index key within their leaf nodes and nonclustered indexes use the clustered index to locate data rows, creating clustered index based on a single column that is as narrow as possibly will reduce not only the size of the clustered index, but all nonclustered indexes on the table also.
*****
Avoid creating a clustered index based on an incrementing key.
For example, if a table has surrogate integer primary key declared as IDENTITY and the clustered index was created on this column, then every time data is inserted into this table, the rows will be added to the end of the table. When many rows will be added a "hot spot" can occur. A "hot spot" occurs when many queries try to read or write data in the same area at the same time. A "hot spot" results in I/O bottleneck.
Note. By default, SQL Server creates clustered index for the primary key constraint. So, in this case, you should explicitly specify NONCLUSTERED keyword to indicate that a nonclustered index is created for the primary key constraint.
*****
Create a clustered index for each table.
If you create a table without clustered index, the data rows will not be stored in any particular order. This structure is called a heap. Every time data is inserted into this table, the row will be added to the end of the table. When many rows will be added a "hot spot" can occur. To avoid "hot spot" and improve concurrency, you should create a clustered index for each table.
*****
Don't create index on column(s) which values has low selectivity.
For example, don't create an index for columns with many duplicate values, such as "Sex" column (which has only "Male" and "Female" values), because in this case the disadvantages of additional space used and slowly rows modification outweigh the speed advantages of creating a new index.
*****
If you create a composite (multi-column) index, try to order the columns in the key as to enhance selectivity, with the most selective columns to the leftmost of the key.
The order of the columns in a composite (multi-column) index is very important. This can increase the chance the index will be used.
*****
If you create a composite (multi-column) index, try to order the columns in the key so that the WHERE clauses of the frequently used queries match the column(s) that are leftmost in the index.
The order of the columns in a composite (multi-column) index is very important. The index will be used to evaluate a query only if the leftmost index key's column are specified in the WHERE clause of the query. For example, if you create composite index such as "Name, Age", then the query with the WHERE clause such as "WHERE Name = 'Alex'" will use the index, but the query with the WHERE clause such as "WHERE Age = 28" will not use the index.
*****
If you need to join several tables very frequently, consider creating index on the joined columns.
This can significantly improve performance of the queries against the joined tables.
*****
Consider creating a surrogate integer primary key (identity, for example).
Every table must have a primary key (a unique identifier for a row within a database table). A surrogate primary key is a field that has a unique value but has no actual meaning to the record itself, so users should never see or change a surrogate primary key. Some developers use surrogate primary keys, others use data fields themselves as the primary key. If a primary key consists of many data fields and has a big size, consider creating a surrogate integer primary key. This can improve performance of your queries.
*****
Consider creating the indexes on all the columns, which referenced in most frequently used queries in the WHERE clause which contains the OR operator.
If the WHERE clause in the query contains an OR operator and if any of the referenced columns in the OR clause are not indexed, then the table or clustered index scan will be made. In this case, creating the indexes on all such columns can significantly improve your queries performance.
*****
If your application performs the same query over and over on the same table, consider creating a covering index that includes the columns from this query.
A covering index is an index that includes all of the columns referenced in the query. Creating a covering index can improve performance because all the data for the query is contained within the index itself; only the index pages, not the data pages, are used to retrieve the data. Covering indexes can make a query much faster because they can save a huge amount of I/O operations.
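A sketch using the SQL Server 2005+ INCLUDE syntax (names are hypothetical):
-- Covers: SELECT Email, Phone FROM dbo.Customers WHERE LastName = ...
CREATE NONCLUSTERED INDEX IX_Customers_LastName_Covering
ON dbo.Customers (LastName) INCLUDE (Email, Phone)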
*****
Use the DBCC DBREINDEX statement to rebuild all the indexes on all the tables in your database periodically (for example, once per week on Sunday) to reduce fragmentation.
Because fragmented data can cause SQL Server to perform unnecessary data reads, and query performance against heavily fragmented tables can be very poor, you should periodically rebuild all indexes to reduce fragmentation. Try to schedule DBCC DBREINDEX during CPU idle time and slow production periods.
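The per-table form, with a hypothetical table name and a fill factor of 90 (an empty index name rebuilds all indexes on the table):
DBCC DBREINDEX ('dbo.Orders', '', 90)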
*****
Use the DBCC INDEXDEFRAG statement to defragment the clustered and nonclustered indexes of a specified table or view.
The DBCC INDEXDEFRAG statement is a SQL Server 2000 command that was not supported in previous versions. Unlike DBCC DBREINDEX, DBCC INDEXDEFRAG does not hold locks long term and thus does not block running queries or updates. So, try to use DBCC INDEXDEFRAG instead of DBCC DBREINDEX whenever possible.
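For example, against a hypothetical database, table, and index:
DBCC INDEXDEFRAG (MyDB, 'dbo.Orders', 'CIX_Orders_CustomerCode')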
*****
Consider using the SORT_IN_TEMPDB option when you create an index and when tempdb is on a different set of disks than the user database.
The SORT_IN_TEMPDB option is a SQL Server 2000 feature that was not supported in previous versions. When you create an index with the SORT_IN_TEMPDB option, SQL Server uses the tempdb database, instead of the current database, to sort data during index creation. Using this option can reduce the time it takes to create an index, but it increases the amount of disk space used during the build.
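A sketch with hypothetical names; note that in SQL Server 2005 and later the option takes the parenthesized form:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate) WITH (SORT_IN_TEMPDB = ON)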
*****
Use the SQL Server Profiler Create Trace Wizard with "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes.
This trace will show which tables are being scanned by queries instead of using an index.
*****
Thursday, May 12, 2011
EXPORT
CREATE PROCEDURE ExportData_In_TextFile_UsingSQLCMD
@TableName Varchar(255), --To Get Table Name
@FilePath VARCHAR(1000), --To Get Valid FilePath
@Separator Varchar(1), --To Get Field Separator
@Result INT = 0 OUTPUT, --To get error no if any error occurred during execution
@ErrDesc VARCHAR(500) OUTPUT --To get error description if any error occurred during execution
AS
DECLARE
@SQLCmd Varchar(8000),
@Columns Varchar(4000),
@DBName Varchar(255)
SET NOCOUNT ON
BEGIN TRY
IF @TableName='' OR @TableName IS NULL
Begin
SET @Result = -1
SET @ErrDesc = 'Table Name can not be null'
RETURN
End
IF @FilePath='' OR @FilePath IS NULL
Begin
SET @Result = -1
SET @ErrDesc = 'File Path can not be null'
RETURN
End
IF @Separator IS Null
SET @Separator=','
Set @DBName=DB_NAME()
--To collect the columns information from the schema view
SELECT @Columns= COALESCE(@Columns + ',', '') + '''' + Column_Name + ''''
From Information_Schema.columns Where Table_Name=@TableName
--Export the data in text file
SET @SQLCmd ='sqlcmd -d ' + @DBName + ' -Q "set nocount on;Select ' + @Columns + ';Select * From ' + @TableName + '" -o "' + @FilePath + '" -W -s "' + @Separator + '" -h -1'
EXEC master..xp_cmdshell @SQLCmd
END TRY
BEGIN CATCH
SET @Result = Error_Number()
SET @ErrDesc = Error_Message()
RETURN
END CATCH
SET NOCOUNT OFF
SET @Result = 0
RETURN 0
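A usage sketch (the table name and path are hypothetical, and xp_cmdshell must be enabled on the server):
GO
DECLARE @rc INT, @err VARCHAR(500)
EXEC ExportData_In_TextFile_UsingSQLCMD 'Employees', 'C:\Export\Employees.txt', ',', @rc OUTPUT, @err OUTPUT
SELECT @rc AS Result, @err AS ErrorDescription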
INDEX DEFRAGMENTATION SCRIPT
Index defragmentation is one of the most important DBA tasks, and it can significantly improve query performance. Any DML operation (INSERT, UPDATE, or DELETE statements) can fragment a table. A predefined maintenance plan takes a lot of server resources and time, so here is a custom stored procedure instead.
If you want to determine the level of fragmentation, you can use SYS.DM_DB_INDEX_PHYSICAL_STATS. This DMV displays fragmentation information for the data and indexes of the specified object.
Here I use 3 conditions:
1. Fragmentation >= 30 AND pages > 1000: rebuild
2. Fragmentation between 15 and 29 AND pages > 1000: reorganize and update statistics
3. If the above two conditions are false: update the statistics
Before you run the procedure, create the tables provided below for history purposes.
Click the following URL for Index Architecture by Gail Shaw --> http://www.sqlservercentral.com/articles/Indexing/68439/
Note: This index defragmentation script works only on SQL Server 2005 and SQL Server 2008.
You can also get a defragmentation script for SQL Server 2000 here:
http://www.sqlserverblogforum.blogspot.com/2011/03/index-defragmentation-script-for-sql.html
---------------
-- For SQL-2005/2008
USE MSDB;
go
CREATE TABLE [dbo].[dba_defrag_maintenance_history]
(
[db_name] [SYSNAME] NOT NULL,
[table_name] [SYSNAME] NOT NULL,
[index_name] [SYSNAME] NOT NULL,
[frag] [FLOAT] NULL,
[page] [INT] NULL,
[action_taken] [VARCHAR](35) NULL,
[date] [DATETIME] NULL DEFAULT (GETDATE())
)
go
--Archive the data in the master DB
USE MASTER;
go
CREATE TABLE [dbo].[dba_defrag_maintenance_history]
(
[db_name] [SYSNAME] NOT NULL,
[table_name] [SYSNAME] NOT NULL,
[index_name] [SYSNAME] NOT NULL,
[frag] [FLOAT] NULL,
[page] [INT] NULL,
[action_taken] [VARCHAR](35) NULL,
[date] [DATETIME] NULL DEFAULT (GETDATE())
)
go
USE msdb
go
CREATE PROC [dbo].[indexdefragmentation] @p_dbname SYSNAME
/*
Summary: Remove the Index Fragmentation to improve the query performance
Contact: Muthukkumaran Kaliyamoorhty SQL DBA
Description: This Sproc will take the fragmentation details and do four kinds of work.
1. Check the fragmentation greater than 30% and pages greater than 1000 then rebuild
2. Check the fragmentation between 15% to 29% and pages greater than 1000 then reorganize
3. Check the fragmentation between 15% to 29% and pages greater than 1000 and page level lock disabled then rebuild
4. Update the statistics if the above three conditions are false
ChangeLog:
Date Coder Description
2011-11-23 Muthukkumaran Kaliyamoorhty created
*************************All the SQL keywords should be written in upper case*************************
*/
AS
BEGIN
SET NOCOUNT ON
DECLARE
@db_name SYSNAME,
@tab_name SYSNAME,
@ind_name VARCHAR(500),
@schema_name SYSNAME,
@frag FLOAT,
@pages INT,
@min_id INT,
@max_id INT
SET @db_name=@p_dbname
--------------------------------------------------------------------------------------------------------------------------------------
--inserting the Fragmentation details
--------------------------------------------------------------------------------------------------------------------------------------
CREATE TABLE #tempfrag
(
id INT IDENTITY,
table_name SYSNAME,
index_name VARCHAR(500),
frag FLOAT,
pages INT,
schema_name SYSNAME
)
EXEC ('USE ['+@db_name+'];
INSERT INTO #tempfrag (table_name,index_name,frag,pages,schema_name)
SELECT OBJECT_NAME(F.OBJECT_ID) obj,i.name ind,
f.avg_fragmentation_in_percent,
f.page_count,table_schema
FROM SYS.DM_DB_INDEX_PHYSICAL_STATS (DB_ID(),NULL,NULL,NULL,NULL) F
JOIN SYS.INDEXES I
ON(F.OBJECT_ID=I.OBJECT_ID)AND i.index_id=f.index_id
JOIN INFORMATION_SCHEMA.TABLES S
ON (s.table_name=OBJECT_NAME(F.OBJECT_ID))
--WHERE INDEX_ID<> 0
AND f.database_id=DB_ID()
AND OBJECTPROPERTY(I.OBJECT_ID,''ISSYSTEMTABLE'')=0'
)
SELECT @min_id=MIN(ID)FROM #tempfrag
SELECT @max_id=MAX(ID)FROM #tempfrag
TRUNCATE TABLE msdb.dbo.dba_defrag_maintenance_history
WHILE (@min_id<=@max_id)
BEGIN
SELECT
@tab_name=table_name,
@schema_name=schema_name,
@ind_name=index_name ,
@frag=frag ,
@pages=pages
FROM #tempfrag WHERE id = @min_id
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation greater than 30% and pages greater than 1000 then rebuild
--------------------------------------------------------------------------------------------------------------------------------------
IF (@ind_name IS NOT NULL)
BEGIN
IF (@frag>=30 AND @pages>1000)
BEGIN
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REBUILD ')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'REBUILD',GETDATE())
END
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation between 15% to 29% and pages greater than 1000 then reorganize
--------------------------------------------------------------------------------------------------------------------------------------
ELSE IF((@frag BETWEEN 15 AND 29) AND @pages>1000 )
BEGIN
BEGIN TRY
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REORGANIZE ')
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+'] (['+@ind_name+']) ' )
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'REORGANIZE & UPDATESTATS',GETDATE())
END TRY
BEGIN CATCH
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation between 15% to 29% and pages greater than 1000 and page level
--lock disabled then rebuild
--------------------------------------------------------------------------------------------------------------------------------------
IF ERROR_NUMBER()=2552
BEGIN
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REBUILD ')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'PLLD_REBUILD',GETDATE())
END
END CATCH
END
--------------------------------------------------------------------------------------------------------------------------------------
--Update the statistics for the index if the first three conditions are false
--------------------------------------------------------------------------------------------------------------------------------------
ELSE
BEGIN
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+'] (['+@ind_name+']) ' )
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'UPDATESTATS',GETDATE())
END
END
ELSE
BEGIN
--------------------------------------------------------------------------------------------------------------------------------------
--Update the statistics for the table (heap) if the first three conditions are false
--------------------------------------------------------------------------------------------------------------------------------------
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+']')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,'HEAP',@frag,@pages,'UPDATESTATS',GETDATE())
END
SET @min_id=@min_id+1
END
DROP TABLE #tempfrag
INSERT INTO master.dbo.dba_defrag_maintenance_history
SELECT * FROM msdb.dbo.dba_defrag_maintenance_history
END
Wednesday, May 11, 2011
How to get all MSSQL database column names, data types, and lengths
SELECT SysObjects.[Name] as TableName,
SysColumns.[Name] as ColumnName,
SysTypes.[Name] AS [Name],
SysColumns.[Length] AS [Length]
FROM
SysObjects INNER JOIN SysColumns
ON SysObjects.[Id] = SysColumns.[Id]
INNER JOIN SysTypes
ON SysTypes.[xtype] = SysColumns.[xtype]
WHERE SysObjects.[type] = 'U'
AND SysTypes.[Name] <> 'sysname'
ORDER BY SysObjects.[Name]
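For comparison, an equivalent sketch using the standard INFORMATION_SCHEMA views instead of the legacy sys* compatibility views:
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION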
Tuesday, May 10, 2011
SQL Server Best Practices for Tuning
Tuning is more of an art than a science and the best way to learn how is to do it. The following points are some recommended best practices for tuning your applications:
Tuning an application is an effort of multiple teams, including front-end programmers, database developers, network and database administrators, middle-tier developers and other professionals. Some improvements can also be accomplished by upgrading the server and client hardware. Ensure that your programs are the definite cause of the sub-standard performance before spending much time and effort optimizing code.
Determine what level of performance users consider optimal. Also try to identify the most and least critical queries.
Identify the bottleneck in your code. A single poorly written code module can ruin the performance of the entire application.
If possible, try to optimize your database design before modifying code. If the design is sub-optimal, issues will keep cropping up. Build a data model appropriate for your application and use the most efficient data types available. If the same application is used for data entry and reporting, try splitting it into two separate applications. Also consider partitioning your tables or using distributed partitioned views to reduce the number of rows affected by your queries.
Research the current indexing strategy and try to improve it. Try to place the clustered index on a column (or a combination of columns) that has a range of distinct values and is rarely updated. Avoid clustered indexes on monotonically increasing columns. Try to use non-clustered indexes to "cover" important queries. Try to use short index keys when possible, except when creating covering indexes.
If you have appropriate indexes but they don't seem to produce the expected results, examine their fragmentation level. Also ensure that statistics stay up to date. Use the index tuning wizard and study the textual and graphical query execution plans to determine the cause of inefficient queries.
Learn how the optimizer works. Examine and test various join types and table join order; use optimizer hints to override the query optimizer's behavior if necessary. Move all database maintenance tasks to the periods of limited user activity.
Try to avoid sub-queries in your programs; solutions with joins usually outperform sub-queries. Use the UPDATE ... FROM and DELETE ... FROM extensions of Transact-SQL to eliminate sub-queries when updating and deleting data across multiple tables, as sketched below.
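A sketch of the join-based forms, with hypothetical table and column names:
-- Update through a join instead of a correlated sub-query:
UPDATE o
SET o.Status = 'Inactive'
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE c.IsClosed = 1
-- Same idea for deletes:
DELETE o
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE c.IsClosed = 1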
Consider using UNION ALL instead of UNION when appropriate.
Evaluate the impact of triggers and constraints on your performance. At times your programs might be fully optimized but a poorly written trigger on one of the affected tables can bring the server to its knees. Constraints that cannot take advantage of helpful indexes might impact performance as well.
Try to avoid using SELECT … INTO unless you are the only user in the database or the operation takes a minimal amount of time. Consider replacing all references to SELECT … INTO with INSERT … SELECT.
Use SET NOCOUNT ON in all of your code modules to reduce the amount of information going from the server to clients and reduce the network load.
When built-in or user-defined functions are applied to an index key column, the optimizer cannot take advantage of the index. Try rewriting your queries so that the index key is not wrapped in a function, as in the sketch below.
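A sketch with a hypothetical Orders table:
-- Non-sargable: the function on the column prevents an index seek.
SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2010
-- Sargable rewrite: the index key is left untouched.
SELECT * FROM dbo.Orders WHERE OrderDate >= '20100101' AND OrderDate < '20110101'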
When possible, try to rewrite queries that use local variables as stored procedures with input parameters. The optimizer is more likely to choose the appropriate execution plan with parameters.
When appropriate use temporary tables to reduce the number of rows affected by the queries. If temporary tables are joined to other temporary or permanent tables consider building an index on the temporary table.
Optimize loops - move all tasks that do not need to be executed repeatedly outside of the loop.
Do not use cursors unless absolutely necessary. Transact-SQL is a set based language and is not optimized for processing one row at a time.
Be aware that updates in-place are more efficient than a DELETE followed by an INSERT. Try to design your database such that in-place updates are likely to occur.
Tables and indexes with the number of days since statistics were last updated
SELECT OBJECT_NAME(A.object_id) AS Object_Name, A.name AS index_name, STATS_DATE(A.OBJECT_ID, index_id) AS StatsUpdated ,
DATEDIFF(d,STATS_DATE(A.OBJECT_ID, index_id),getdate()) DaysOld
FROM sys.indexes A
INNER JOIN sys.tables B ON A.object_id = B.object_id
WHERE A.name IS NOT NULL
ORDER BY DATEDIFF(d,STATS_DATE(A.OBJECT_ID, index_id),getdate()) DESC
Friday, May 6, 2011
Problem
I had a production problem recently: the transaction log grew 1 GB every minute and filled up the entire drive. The error message I got was: "The transaction log for database 'MyDB' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases." My SQL Server is clustered and the issue happened after a node failover occurred. In this tip I cover the steps I took to resolve this problem.
Solution
The message I got was the transaction log drive is full and users are unable to access the application, DBAs never want to hear this! The usual DBA question is "what was changed"?
My first priority was to fix the problem immediately so users could access the application, and then do root cause analysis. For the database in question, I changed the recovery model to SIMPLE. Then I shrank the transaction log file to a reasonable size (note: you can't shrink it below its original size). After these changes, the log growth stopped, but this still did not tell me what the issue was. Note that changing the recovery model to SIMPLE is not possible if mirroring or log shipping is set up unless you remove these first; they can be reconfigured later, and the important thing is to stop the bleeding ASAP.
Other options would have been to create another transaction log file on a different drive that had plenty of space or to clean up files on the drive that was running out of space. But in my case the file was growing so rapidly that I needed to change the recovery model so the growth would stop until this issue was resolved.
Below are the steps that I took to check the size, change the recovery model and shrink the log file.
SELECT recovery_model_desc FROM sys.databases WHERE name = 'MyDB' --Before
ALTER DATABASE MyDB SET recovery simple
SELECT recovery_model_desc FROM sys.databases WHERE name = 'MyDB' --After
EXEC xp_fixeddrives --Check free drive space
EXEC sp_helpdb MyDB -- Note the size of the log before shrink
DBCC shrinkfile(MyDB_log, 1024) -- shrink log to 1 GB
EXEC sp_helpdb MyDB -- Note the size of the log after shrink
EXEC xp_fixeddrives -- Check free drive space
First aid is complete; the database is up and users are able to use the application again.
Now I need to find out what was causing the log to grow so fast, fix the issue, put the database back in FULL recovery mode, do a full backup and make sure my log backups are working correctly.
The drive now has enough space. I put the database in Full recovery mode again to see if I could catch the offending transaction and I monitored drive space and transaction log size. I immediately noticed the transaction log growing about 1 GB per minute. I then executed the block of code below to find out what may be causing the log growth.
DBCC opentran --get the spid and feed it into the next query
DECLARE @handle BINARY(20)
DECLARE @SPID INT
SET @SPID = 100 -- 100 was the spid from the dbcc opentran
SELECT @handle = sql_handle
FROM MASTER..sysprocesses
WHERE spid = @SPID
SELECT [text] FROM ::fn_get_sql(@handle)
GO
The culprit was a stored procedure that showed up every time I executed the above block of code, although the SPID changed each time. The procedure was not very complicated; it does a bunch of deletes and inserts based on some condition, and it was called by an automated process every minute. What happens when a cluster node failover occurs? The cache is flushed and all the query plans are gone. Since there had not been any issue with this procedure in the past, my first option was to recompile the procedure. Voila, it worked! The transaction log stopped growing so fast and I didn't see this procedure in the open transactions from the above code. So a simple stored procedure that had always worked fine suddenly caused a major production issue, and the fix was as simple as a recompile. Quite interesting.
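For reference, a recompile can be forced with sp_recompile; the procedure name below is hypothetical:
EXEC sp_recompile N'dbo.MyAutomatedProc'
-- The procedure is marked for recompilation and gets a fresh plan on its next execution.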
My next step was to start a full database backup ASAP. While the full database backup was running I kicked off transaction log backups every 5 minutes and they seemed to work fine. I changed the transaction log backup to its normal schedule. After the full backup was completed, I reviewed the sql log, drive space, transaction log size, open transactions and everything seemed fine. If it is a cluster, you could try failing back to the other node and see if that fixes the problem, which in my case was the need for a recompile. In a cluster, the storage is shared by the nodes, so there is no benefit from a storage point of view.
After this, I researched what I could have done better in this situation. The scripts I used were from SQL Server 2000; the SQL Server 2005/2008 DMVs would have helped me find the offending transaction faster (see the sketch below). Another eventful day in a DBA's life; it feels good to have resolved the issue.
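A sketch of the DMV route; the DMVs below are standard in 2005/2008, though this join shape is just one common pattern for spotting log-heavy open transactions:
SELECT st.session_id, dt.database_transaction_log_bytes_used, t.text
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_tran_database_transactions dt ON dt.transaction_id = st.transaction_id
JOIN sys.dm_exec_connections c ON c.session_id = st.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE dt.database_id = DB_ID('MyDB')
ORDER BY dt.database_transaction_log_bytes_used DESC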
Saturday, April 30, 2011
SQL SERVER – Find Statistics Update Date – Update Statistics
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('Sunlight')
GO
UPDATE STATISTICS HumanResources.Department
WITH FULLSCAN
GO
Monday, March 21, 2011
Move All MDF - LDF Files to a new location
--Move All MDF - LDF Files to a new location
--written by Sushil Kumar Rout
CREATE PROCEDURE MoveAllDatabaseFile
@FileType CHAR (3),
@fullpath VARCHAR(500)
AS
IF @FileType not in ('MDF','LDF')
BEGIN
SELECT '@FileType must be MDF or LDF and @fullpath should be like N''C:\DatabaseFiles\'' the script was terminated' as 'ERROR'
SELECT 'example of right syntax - exec MoveAllDatabaseFile ''MDF'',N''C:\DatabaseFiles\''' as 'Example Script'
return
END
declare @dbname as varchar (100)
declare @dbfilename as varchar (100)
declare @runalter as varchar(500)
--Case LDF
IF @FileType = 'LDF'
BEGIN
DECLARE CRS_db CURSOR LOCAL FAST_FORWARD
FOR
SELECT sysdb.name AS 'DBName',masterfiles.name as 'FileName'
FROM sys.master_files masterfiles,sys.sysdatabases sysdb
WHERE masterfiles.database_id = sysdb.dbid
AND masterfiles.type_desc = 'LOG' and masterfiles.database_id > 4 --not take system DB's
END
--Case MDF
ELSE
BEGIN
DECLARE CRS_db CURSOR LOCAL FAST_FORWARD
FOR
SELECT sysdb.name as 'DBName',masterfiles.name as 'FileName'
FROM sys.master_files masterfiles,sys.sysdatabases sysdb
WHERE masterfiles.database_id = sysdb.dbid
AND masterfiles.type_desc = 'ROWS' and masterfiles.database_id > 4 --not take system DB's
END
--Start execute MDF or LDF
OPEN CRS_db
FETCH NEXT FROM CRS_db INTO @dbname,@dbfilename
WHILE @@FETCH_STATUS = 0
BEGIN
SET @runalter = 'Alter database ' + @dbname + ' modify file (name = ' + @dbfilename + ' , filename = N'''+ @fullpath +
@dbfilename + '.' + @FileType + ''')'
select (@runalter) -- generates the ALTER DATABASE command; change to EXEC (@runalter) to apply it
FETCH NEXT FROM CRS_db INTO @dbname,@dbfilename
END
CLOSE CRS_db
DEALLOCATE CRS_db
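A usage sketch (the path is hypothetical). Note that as written the procedure only generates the ALTER DATABASE commands; the physical files must still be moved at the OS level while the databases are offline:
GO
EXEC MoveAllDatabaseFile 'MDF', N'D:\NewDataPath\'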
Monday, March 7, 2011
Find Used Index
SELECT o.name Object_Name,
SCHEMA_NAME(o.schema_id) Schema_name,
i.name Index_name,
i.Type_Desc,
s.user_seeks,
s.user_scans,
s.user_lookups,
s.user_updates
FROM sys.objects AS o
JOIN sys.indexes AS i
ON o.object_id = i.object_id
JOIN
sys.dm_db_index_usage_stats AS s
ON i.object_id = s.object_id
AND i.index_id = s.index_id
WHERE o.type = 'u'
-- Clustered and Non-Clustered indexes
AND i.type IN (1, 2)
-- Indexes that have been used by at least one seek, scan, or lookup
AND(s.user_seeks > 0 or s.user_scans > 0 or s.user_lookups > 0 );
Find UnUsed Index
SELECT o.name Object_Name,
i.name Index_name,
i.Type_Desc
FROM sys.objects AS o
JOIN sys.indexes AS i
ON o.object_id = i.object_id
LEFT OUTER JOIN
sys.dm_db_index_usage_stats AS s
ON i.object_id = s.object_id
AND i.index_id = s.index_id
WHERE o.type = 'u'
-- Clustered and Non-Clustered indexes
AND i.type IN (1, 2)
-- Indexes without usage stats
AND ((s.index_id IS NULL) OR
-- Indexes that have been updated but not used
(s.user_seeks = 0 AND s.user_scans = 0 AND s.user_lookups = 0 ));
Thursday, February 3, 2011
Retrieve Database Object permission scripts and Role members scripts
-- ROLE MEMBERS
SELECT mp.name AS MemberName, rp.name AS RoleName, 'EXEC sp_addrolemember N''' + rp.name + ''', N''' + mp.name + '''' AS AssignSQL,
'EXEC sp_droprolemember N''' + rp.name + ''', N''' + mp.name + '''' AS RemoveSQL
FROM sys.database_role_members a
INNER JOIN sys.database_principals rp ON rp.principal_id = a.role_principal_id
INNER JOIN sys.database_principals AS mp ON mp.principal_id = a.member_principal_id
-- PERMISSIONS
SELECT d.class, d.class_desc, p.name AS UserName, s.name AS GrantedObject,
StateDesc + ' ' + PermissionName + ' ON SCHEMA::[' + s.name + '] TO [' + p.name + '];' AS ApplySQL,
'REVOKE ' + PermissionName + ' ON SCHEMA::[' + s.name + '] TO [' + p.name + '];' AS RemoveSQL
FROM sys.database_permissions d
INNER JOIN sys.database_principals p ON d.grantee_principal_id = p.principal_id
INNER JOIN sys.schemas AS s ON s.schema_id = d.major_id
CROSS APPLY (SELECT state_desc COLLATE SQL_Latin1_General_CP1_CI_AS AS StateDesc,
permission_name COLLATE SQL_Latin1_General_CP1_CI_AS AS PermissionName) ca
WHERE d.class = 3 /*'SCHEMA'*/
UNION ALL
SELECT d.class, d.class_desc, p.name AS UserName, DB_NAME() AS GrantedObject, StateDesc + ' ' + PermissionName + ' TO [' + p.name + '];' AS ApplySQL,
'REVOKE ' + PermissionName + ' TO [' + p.name + '];' AS RemoveSQL
FROM sys.database_permissions d
INNER JOIN sys.database_principals p ON d.grantee_principal_id = p.principal_id
CROSS APPLY (SELECT state_desc COLLATE SQL_Latin1_General_CP1_CI_AS AS StateDesc,
permission_name COLLATE SQL_Latin1_General_CP1_CI_AS AS PermissionName) ca
WHERE d.class = 0 /*DATABASE*/
UNION ALL
SELECT d.class, d.class_desc, p.name AS UserName, '[' + s.name + '].[' + o.name + ']' + ISNULL(' ([' + co.NAME + ']) ', '') AS GrantedObject,
StateDesc + ' ' + PermissionName + ' ON [' + s.name + '].[' + o.name + '] ' + ISNULL('([' + co.NAME + ']) ', '') + 'TO [' + p.name + '];' AS ApplySQL,
'REVOKE ' + PermissionName + ' ON [' + s.name + '].[' + o.name + '] ' + ISNULL('([' + co.NAME + ']) ', '') + 'TO [' + p.name + '];' AS RemoveSQL
FROM sys.database_permissions d
INNER JOIN sys.database_principals p ON d.grantee_principal_id = p.principal_id
INNER JOIN sys.objects AS o ON o.object_id = d.major_id
INNER JOIN sys.schemas AS s ON s.schema_id = o.schema_id
LEFT OUTER JOIN sys.columns co ON co.object_id = o.object_id
AND co.column_id = d.minor_id
CROSS APPLY (SELECT state_desc COLLATE SQL_Latin1_General_CP1_CI_AS AS StateDesc,
permission_name COLLATE SQL_Latin1_General_CP1_CI_AS AS PermissionName) ca
WHERE d.class = 1 /*OBJECT OR COLUMN*/
Monday, January 24, 2011
SQL SERVER – Difference Between Index Rebuild and Index Reorganize Explained with T-SQL Script
Index Rebuild: This process drops the existing index and recreates it.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD
GO
Index Reorganize: This process physically reorganizes the leaf nodes of the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE
GO
Recommendation: An index should be rebuilt when its fragmentation is greater than 40%, and reorganized when fragmentation is between 10% and 40%. The rebuild process uses more CPU and locks database resources. The Developer and Enterprise editions of SQL Server have the ONLINE option, which can be turned on when an index is rebuilt; ONLINE keeps the index available during the rebuild, as shown below.
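A sketch of the ONLINE form (Enterprise/Developer editions only):
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD WITH (ONLINE = ON)
GO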
Monday, January 10, 2011
INDEX Fragmentation Report - More than 60%
SELECT db_name(ps.database_id) AS DBName, S.name AS Schemaname,
object_name(ps.OBJECT_ID) AS Tablename,
Index_Description = CASE
WHEN ps.index_id = 1 THEN 'Clustered Index'
WHEN ps.index_id <> 1 THEN 'Non-Clustered Index'
END,
b.name AS Indexname,
ROUND(ps.avg_fragmentation_in_percent,0,1) AS 'Fragmentation%',
SUM(page_count*8) AS 'IndexSizeKB',
ps.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(),NULL,NULL,NULL,NULL) AS ps
INNER JOIN sys.indexes AS b ON ps.object_id = b.object_id AND ps.index_id = b.index_id AND b.index_id <> 0 -- heap not required
INNER JOIN sys.objects AS O ON O.object_id=b.object_id AND O.type='U' AND O.is_ms_shipped=0 -- only user tables
INNER JOIN sys.schemas AS S ON S.schema_Id=O.schema_id
WHERE ps.database_id = DB_ID() AND ps.avg_fragmentation_in_percent > 60 -- Indexes having more than 60% fragmentation
GROUP BY db_name(ps.database_id),S.name,object_name(ps.OBJECT_ID),CASE WHEN ps.index_id = 1 THEN 'Clustered Index' WHEN ps.index_id <> 1 THEN 'Non-Clustered Index' END,b.name,ROUND(ps.avg_fragmentation_in_percent,0,1),ps.avg_fragmentation_in_percent,ps.page_count
ORDER BY ps.avg_fragmentation_in_percent DESC