Consider creating indexes on the column(s) frequently used in WHERE, ORDER BY, and GROUP BY clauses.
These columns are the best candidates for indexing. Analyze your queries carefully to avoid creating indexes that will not be used.
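For example, if a hypothetical Orders table is frequently filtered on CustomerID and sorted by OrderDate (table and column names here are assumptions for illustration), an index like the following might help:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
-- A query that can now seek on CustomerID and read OrderDate in sorted order:
SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
ORDER BY OrderDate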
*****
Keep your indexes as narrow as possible.
Because each index takes up disk space, try to minimize the size of the index key to avoid wasting space. A narrower key reduces the number of reads required to traverse the index and boosts overall index performance.
*****
Drop indexes that are not used.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use the Index Tuning Wizard to identify indexes that are not used by your queries.
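The Index Tuning Wizard applies to SQL Server 2000; on SQL Server 2005 and later you can also get a rough picture from the sys.dm_db_index_usage_stats DMV. A minimal sketch (keep in mind the usage counters are reset when the instance restarts):
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       i.[name] AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY (s.user_seeks + s.user_scans + s.user_lookups) ASC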
*****
Try to create indexes on columns that have integer values rather than character values.
Because integer values are usually smaller than character values (the int data type is 4 bytes, the bigint data type is 8 bytes), you can reduce the number of index pages used to store the index keys. This reduces the number of reads required to read the index and boosts overall index performance.
*****
Limit the number of indexes if your application updates data very frequently.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, create new indexes only after analyzing how the data is used, the types and frequencies of the queries performed, and how your queries will use the new indexes. In many cases the speed advantages of a new index outweigh the disadvantages of additional space and slower data modification. However, avoid redundant indexes and create them only when necessary. For a read-only table, the number of indexes can be increased.
*****
Check that the index you are about to create does not already exist.
Keep in mind that when you create a primary key or unique constraint, SQL Server automatically creates an index on the column(s) participating in the constraint. If you specify a different index name, you can end up creating duplicate indexes on the same column(s).
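A quick way to see the indexes that already exist on a table (the table name below is an assumption):
EXEC sp_helpindex 'dbo.Orders'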
*****
Create a clustered index rather than a nonclustered index to improve the performance of queries that return a range of values and of queries that contain GROUP BY or ORDER BY clauses and return sorted results.
Because every table can have only one clustered index, choose the column(s) for this index very carefully. Analyze all your queries, identify the most frequently used ones, and include in the clustered index only those column(s) that provide the greatest performance benefit.
*****
Create nonclustered indexes to improve the performance of queries that return few rows and where the index has good selectivity.
Unlike the clustered index, of which there can be only one per table, each table can have up to 249 nonclustered indexes. However, consider nonclustered index creation as carefully as clustered index creation, because each index takes up disk space and drags on data modification.
*****
Create the clustered index on column(s) that are not updated very frequently.
Because the leaf node of a nonclustered index contains the clustered index key when the table has a clustered index, every time a column used in the clustered index is modified, all of the nonclustered indexes must also be modified.
*****
Create the clustered index on a single column that is as narrow as possible.
Because nonclustered indexes contain the clustered index key within their leaf nodes and use it to locate data rows, a clustered index based on a single narrow column reduces not only the size of the clustered index but also the size of every nonclustered index on the table.
*****
Avoid creating a clustered index based on an incrementing key.
For example, if a table has a surrogate integer primary key declared as IDENTITY and the clustered index was created on this column, every row inserted into the table is added at the end of the table. When many rows are added, a "hot spot" can occur: many queries try to read or write data in the same area at the same time, which results in an I/O bottleneck.
Note: by default, SQL Server creates a clustered index for the primary key constraint. In this case you should explicitly specify the NONCLUSTERED keyword to indicate that a nonclustered index should be created for the primary key constraint.
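A minimal sketch of the note above, using a hypothetical Orders table: the IDENTITY surrogate key is still the primary key, but its index is declared nonclustered so the clustered index can be placed on a more useful column.
CREATE TABLE dbo.Orders
(
    OrderID INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,
    CustomerID INT NOT NULL,
    OrderDate DATETIME NOT NULL
)
-- The clustered index can then be created on another column, for example:
CREATE CLUSTERED INDEX CIX_Orders_CustomerID ON dbo.Orders (CustomerID)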
*****
Create a clustered index for each table.
If you create a table without a clustered index, the data rows are not stored in any particular order; this structure is called a heap. Every time data is inserted into such a table, the row is added to the end of the table, and when many rows are added a "hot spot" can occur. To avoid hot spots and improve concurrency, create a clustered index for each table.
*****
Don't create indexes on column(s) whose values have low selectivity.
For example, don't create an index on columns with many duplicate values, such as a "Sex" column (which has only "Male" and "Female" values), because in this case the disadvantages of additional space and slower data modification outweigh the speed advantages of the new index.
*****
If you create a composite (multi-column) index, order the columns in the key to maximize selectivity, with the most selective columns leftmost in the key.
The order of the columns in a composite (multi-column) index is very important; putting the most selective columns first increases the chance that the index will be used.
*****
If you create a composite (multi-column) index, order the columns in the key so that the WHERE clauses of your most frequently used queries match the column(s) that are leftmost in the index.
The order of the columns in a composite (multi-column) index is very important: the index will be used to evaluate a query only if the leftmost column of the index key is specified in the WHERE clause. For example, if you create a composite index on "Name, Age", a query with the WHERE clause "WHERE Name = 'Alex'" will use the index, but a query with the WHERE clause "WHERE Age = 28" will not.
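To make the example concrete (the Person table and its columns are hypothetical):
CREATE NONCLUSTERED INDEX IX_Person_Name_Age ON dbo.Person ([Name], Age)
-- Can use the index (the leftmost key column, Name, is referenced):
SELECT [Name], Age FROM dbo.Person WHERE [Name] = 'Alex'
-- Cannot seek on the index (the leftmost key column is not referenced):
SELECT [Name], Age FROM dbo.Person WHERE Age = 28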
*****
If you need to join several tables very frequently, consider creating indexes on the joined columns.
This can significantly improve the performance of queries against the joined tables.
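A minimal sketch, assuming hypothetical Orders and OrderDetails tables joined on OrderID:
CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID ON dbo.OrderDetails (OrderID)
-- The join below can now use the index on the child table:
SELECT o.OrderID, o.OrderDate, d.ProductID, d.Quantity
FROM dbo.Orders o
JOIN dbo.OrderDetails d ON d.OrderID = o.OrderID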
*****
Consider creating a surrogate integer primary key (an identity column, for example).
Every table must have a primary key (a unique identifier for a row within a database table). A surrogate primary key is a field that has a unique value but no actual meaning to the record itself, so users should never see or change it. Some developers use surrogate primary keys; others use the data fields themselves as the primary key. If a primary key consists of many data fields and is large, consider creating a surrogate integer primary key instead. This can improve the performance of your queries.
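A sketch of the idea, assuming an Employees table whose natural key would otherwise be wide (LastName + FirstName + BirthDate):
CREATE TABLE dbo.Employees
(
    EmployeeID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Employees PRIMARY KEY,
    LastName   VARCHAR(50) NOT NULL,
    FirstName  VARCHAR(50) NOT NULL,
    BirthDate  DATETIME NOT NULL
)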
*****
Consider creating indexes on all the columns referenced in the WHERE clauses of your most frequently used queries when those clauses contain the OR operator.
If the WHERE clause in the query contains an OR operator and any of the columns referenced in the OR clause is not indexed, a table or clustered index scan will be performed. In this case, creating indexes on all such columns can significantly improve query performance.
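Assuming a hypothetical Customers table queried with an OR across City and Region, indexing both columns lets the optimizer use an index for each side of the OR instead of scanning:
CREATE NONCLUSTERED INDEX IX_Customers_City   ON dbo.Customers (City)
CREATE NONCLUSTERED INDEX IX_Customers_Region ON dbo.Customers (Region)
SELECT CustomerID FROM dbo.Customers WHERE City = 'London' OR Region = 'WA'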
*****
If your application performs the same query over and over on the same table, consider creating a covering index that includes the columns from this query.
A covering index is an index that includes all of the columns referenced in the query. Creating a covering index can improve performance because all the data for the query is contained within the index itself, so only the index pages, not the data pages, are used to retrieve the data. Covering indexes can make a query much faster because they save a large number of I/O operations.
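A sketch on SQL Server 2005 and later, where the INCLUDE clause adds non-key columns to the index leaf level (on SQL Server 2000, which has no INCLUDE clause, the extra columns would simply be added to the index key). The table and column names are assumptions:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID) INCLUDE (OrderDate, TotalDue)
-- Answered entirely from the index pages:
SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = 42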
*****
Use the DBCC DBREINDEX statement to rebuild all the indexes on all the tables in your database periodically (for example, once a week on Sunday) to reduce fragmentation.
Because fragmented data can cause SQL Server to perform unnecessary data reads, and query performance against a heavily fragmented table can be very poor, you should periodically rebuild all indexes to reduce fragmentation. Try to schedule the DBCC DBREINDEX statement during CPU idle time and slow production periods.
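For example, to rebuild all indexes on one table with a fill factor of 90 (the table name is illustrative):
DBCC DBREINDEX ('dbo.Orders', '', 90)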
*****
Use the DBCC INDEXDEFRAG statement to defragment the clustered and secondary indexes of a specified table or view.
The DBCC INDEXDEFRAG statement is a SQL Server 2000 command that was not supported in previous versions. Unlike DBCC DBREINDEX, DBCC INDEXDEFRAG does not hold locks for long periods and thus will not block running queries or updates, so try to use DBCC INDEXDEFRAG instead of DBCC DBREINDEX whenever possible.
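For example (the database, table, and index names are illustrative):
DBCC INDEXDEFRAG (MyDB, 'dbo.Orders', IX_Orders_CustomerID_OrderDate)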
*****
Consider using the SORT_IN_TEMPDB option when you create an index and tempdb is on a different set of disks than the user database.
The SORT_IN_TEMPDB option is a SQL Server 2000 feature that was not supported in previous versions. When you create an index with this option, SQL Server uses the tempdb database, instead of the current database, to sort data during index creation. Using this option can reduce the time it takes to create an index, but it increases the amount of disk space used during the build.
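For example, using the SQL Server 2000 syntax (table and column names are assumptions):
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH SORT_IN_TEMPDB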
*****
Use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes.
This trace shows which tables are being scanned by queries instead of being accessed through an index.
*****
Thursday, May 12, 2011
EXPORT DATA TO A TEXT FILE USING SQLCMD
CREATE PROCEDURE ExportData_In_TextFile_UsingSQLCMD
    @TableName VARCHAR(255),        -- Table to export
    @FilePath  VARCHAR(1000),       -- Full path of the output file
    @Separator VARCHAR(1),          -- Field separator (defaults to a comma)
    @Result    INT = 0 OUTPUT,      -- Error number if any error occurred during execution
    @ErrDesc   VARCHAR(500) OUTPUT  -- Error description if any error occurred during execution
AS
DECLARE
    @SQLCmd  VARCHAR(8000),
    @Columns VARCHAR(4000),
    @DBName  VARCHAR(255)
SET NOCOUNT ON
BEGIN TRY
    IF @TableName = '' OR @TableName IS NULL
    BEGIN
        SET @Result = -1
        SET @ErrDesc = 'Table Name can not be null'
        RETURN
    END
    IF @FilePath = '' OR @FilePath IS NULL
    BEGIN
        SET @Result = -1
        SET @ErrDesc = 'File Path can not be null'
        RETURN
    END
    IF @Separator IS NULL
        SET @Separator = ','
    SET @DBName = DB_NAME()
    -- Collect the column names from the schema view, in the order they appear in the table
    SELECT @Columns = COALESCE(@Columns + ',', '') + '''' + Column_Name + ''''
    FROM Information_Schema.Columns
    WHERE Table_Name = @TableName
    ORDER BY Ordinal_Position
    -- Export the data to the text file via sqlcmd (requires xp_cmdshell to be enabled)
    SET @SQLCmd = 'sqlcmd -d ' + @DBName + ' -Q "set nocount on;Select ' + @Columns + ';Select * From ' + @TableName + '" -o "' + @FilePath + '" -W -s "' + @Separator + '" -h -1'
    EXEC master..xp_cmdshell @SQLCmd
END TRY
BEGIN CATCH
    SET @Result = ERROR_NUMBER()
    SET @ErrDesc = ERROR_MESSAGE()
    RETURN
END CATCH
SET NOCOUNT OFF
SET @Result = 0
RETURN 0
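A hedged usage sketch (the table name and file path are assumptions, and master..xp_cmdshell must be enabled on the server for the procedure to work):
DECLARE @Result INT, @ErrDesc VARCHAR(500)
EXEC ExportData_In_TextFile_UsingSQLCMD
     @TableName = 'Orders',
     @FilePath  = 'C:\Temp\Orders.txt',
     @Separator = '|',
     @Result    = @Result OUTPUT,
     @ErrDesc   = @ErrDesc OUTPUT
SELECT @Result AS Result, @ErrDesc AS ErrDesc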
INDEX DEFRAGMENTATION SCRIPT
Index defragmentation is one of the most important DBA tasks, and it can significantly improve query performance. Whenever you perform DML operations (INSERT, UPDATE, or DELETE statements), table fragmentation can occur. A predefined maintenance plan can consume a lot of server resources and time, so here is a custom stored procedure.
If you want to determine the level of fragmentation, you can use sys.dm_db_index_physical_stats. This DMV displays fragmentation information for the data and indexes of the specified object.
Here I use 3 conditions :
1. Fragmentation >=30 AND PAGES>1000 then rebuild
2. Fragmentation between 15 and 29 AND PAGES>1000 then reorganize and update statistics
3. If the above two conditions are false then update the statistics
Before you run the procedure, create the tables provided below for history purposes.
For more on index architecture, see Gail Shaw's article: http://www.sqlservercentral.com/articles/Indexing/68439/
Note: this index defragmentation script works only for SQL Server 2005 and SQL Server 2008.
You can also get a defragmentation script for SQL Server 2000 here:
http://www.sqlserverblogforum.blogspot.com/2011/03/index-defragmentation-script-for-sql.html
---------------
-- For SQL-2005/2008
USE MSDB;
go
CREATE TABLE [dbo].[dba_defrag_maintenance_history]
(
[db_name] [SYSNAME] NOT NULL,
[table_name] [SYSNAME] NOT NULL,
[index_name] [SYSNAME] NOT NULL,
[frag] [FLOAT] NULL,
[page] [INT] NULL,
[action_taken] [VARCHAR](35) NULL,
[date] [DATETIME] NULL DEFAULT (GETDATE())
)
go
--Archive the data in the master DB
USE MASTER;
go
CREATE TABLE [dbo].[dba_defrag_maintenance_history]
(
[db_name] [SYSNAME] NOT NULL,
[table_name] [SYSNAME] NOT NULL,
[index_name] [SYSNAME] NOT NULL,
[frag] [FLOAT] NULL,
[page] [INT] NULL,
[action_taken] [VARCHAR](35) NULL,
[date] [DATETIME] NULL DEFAULT (GETDATE())
)
go
USE msdb
go
CREATE PROC [dbo].[indexdefragmentation] @p_dbname SYSNAME
/*
Summary: Remove the Index Fragmentation to improve the query performance
Contact: Muthukkumaran Kaliyamoorhty SQL DBA
Description: This Sproc will take the fragmentation details and do four kinds of work.
1. Check the fragmentation greater than 30% and pages greater than 1000 then rebuild
2. Check the fragmentation between 15% to 29% and pages greater than 1000 then reorganize
3. Check the fragmentation between 15% to 29% and pages greater than 1000 and page level lock disabled then rebuild
4. Update the statistics if the first three conditions are false
ChangeLog:
Date Coder Description
2011-11-23 Muthukkumaran Kaliyamoorhty created
*************************All the SQL keywords should be written in upper case*************************
*/
AS
BEGIN
SET NOCOUNT ON
DECLARE
@db_name SYSNAME,
@tab_name SYSNAME,
@ind_name VARCHAR(500),
@schema_name SYSNAME,
@frag FLOAT,
@pages INT,
@min_id INT,
@max_id INT
SET @db_name=@p_dbname
--------------------------------------------------------------------------------------------------------------------------------------
--inserting the Fragmentation details
--------------------------------------------------------------------------------------------------------------------------------------
CREATE TABLE #tempfrag
(
id INT IDENTITY,
table_name SYSNAME,
index_name VARCHAR(500),
frag FLOAT,
pages INT,
schema_name SYSNAME
)
EXEC ('USE ['+@db_name+'];
INSERT INTO #tempfrag (table_name,index_name,frag,pages,schema_name)
SELECT OBJECT_NAME(F.OBJECT_ID) obj,i.name ind,
f.avg_fragmentation_in_percent,
f.page_count,table_schema
FROM SYS.DM_DB_INDEX_PHYSICAL_STATS (DB_ID(),NULL,NULL,NULL,NULL) F
JOIN SYS.INDEXES I
ON(F.OBJECT_ID=I.OBJECT_ID)AND i.index_id=f.index_id
JOIN INFORMATION_SCHEMA.TABLES S
ON (s.table_name=OBJECT_NAME(F.OBJECT_ID))
--WHERE INDEX_ID<> 0
AND f.database_id=DB_ID()
AND OBJECTPROPERTY(I.OBJECT_ID,''ISSYSTEMTABLE'')=0'
)
SELECT @min_id=MIN(ID)FROM #tempfrag
SELECT @max_id=MAX(ID)FROM #tempfrag
TRUNCATE TABLE msdb.dbo.dba_defrag_maintenance_history
WHILE (@min_id<=@max_id)
BEGIN
SELECT
@tab_name=table_name,
@schema_name=schema_name,
@ind_name=index_name ,
@frag=frag ,
@pages=pages
FROM #tempfrag WHERE id = @min_id
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation greater than 30% and pages greater than 1000 then rebuild
--------------------------------------------------------------------------------------------------------------------------------------
IF (@ind_name IS NOT NULL)
BEGIN
IF (@frag>=30 AND @pages>1000)
BEGIN
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REBUILD ')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'REBUILD',GETDATE())
END
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation between 15% to 29% and pages greater than 1000 then reorganize
--------------------------------------------------------------------------------------------------------------------------------------
ELSE IF((@frag BETWEEN 15 AND 29) AND @pages>1000 )
BEGIN
BEGIN TRY
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REORGANIZE ')
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+'] (['+@ind_name+']) ' )
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'REORGANIZE & UPDATESTATS',GETDATE())
END TRY
BEGIN CATCH
--------------------------------------------------------------------------------------------------------------------------------------
--Check the fragmentation between 15% to 29% and pages greater than 1000 and page level
--lock disabled then rebuild
--------------------------------------------------------------------------------------------------------------------------------------
IF ERROR_NUMBER()=2552
EXEC ('USE ['+@db_name+'];ALTER INDEX ['+@ind_name+'] ON ['+@schema_name+'].['+@tab_name +'] REBUILD ')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'PLLD_REBUILD',GETDATE())
END CATCH
END
--------------------------------------------------------------------------------------------------------------------------------------
--Update the statistics for all indexes if the first three conditions are false
--------------------------------------------------------------------------------------------------------------------------------------
ELSE
BEGIN
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+'] (['+@ind_name+']) ' )
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,@ind_name,@frag,@pages,'UPDATESTATS',GETDATE())
END
END
ELSE
BEGIN
--------------------------------------------------------------------------------------------------------------------------------------
--Update the statistics for all tables if the first three conditions are false
--------------------------------------------------------------------------------------------------------------------------------------
EXEC ('USE ['+@db_name+'];UPDATE STATISTICS ['+@schema_name+'].['+@tab_name+']')
INSERT INTO msdb.dbo.dba_defrag_maintenance_history
VALUES (@db_name,@tab_name,'HEAP',@frag,@pages,'UPDATESTATS',GETDATE())
END
SET @min_id=@min_id+1
END
DROP TABLE #tempfrag
INSERT INTO master.dbo.dba_defrag_maintenance_history
SELECT * FROM msdb.dbo.dba_defrag_maintenance_history
END
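A usage sketch once the history tables and the procedure have been created (the target database name is an assumption):
EXEC msdb.dbo.indexdefragmentation @p_dbname = 'MyDB'
-- Review what was done in this run:
SELECT * FROM msdb.dbo.dba_defrag_maintenance_history ORDER BY [date] DESC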
Wednesday, May 11, 2011
How to get all MSSQL database column names, data types, and lengths
SELECT SysObjects.[Name] as TableName,
SysColumns.[Name] as ColumnName,
SysTypes.[Name] AS [Name],
SysColumns.[Length] AS [Length]
FROM
SysObjects INNER JOIN SysColumns
ON SysObjects.[Id] = SysColumns.[Id]
INNER JOIN SysTypes
ON SysTypes.[xtype] = SysColumns.[xtype]
WHERE SysObjects.[type] = 'U'
AND SysTypes.[Name] <> 'sysname'
ORDER BY SysObjects.[Name]
Tuesday, May 10, 2011
SQL Server Best Practices for Tuning
Tuning is more of an art than a science, and the best way to learn how is to do it. The following points are some recommended best practices for tuning your applications:
Tuning an application is an effort of multiple teams, including front-end programmers, database developers, network and database administrators, middle-tier developers and other professionals. Some improvements can also be accomplished by upgrading the server and client hardware. Ensure that your programs are the definite cause of the sub-standard performance before spending much time and effort optimizing code.
Determine what level of performance users consider optimal. Also try to identify the most and least critical queries.
Identify the bottleneck in your code. Only a single poorly written code module can ruin the performance of the entire application.
If possible, try to optimize your database design before modifying code. If the design is sub-optimal, issues will keep cropping up. Build a data model appropriate for your application and use the most efficient data types available. If the same application is used for data entry and reporting, try splitting it into two separate applications. Also consider partitioning your tables or using distributed partitioned views to reduce the number of rows affected by your queries.
Research the current indexing strategy and try to improve it. Try to place the clustered index on a column (or a combination of columns) that has a range of distinct values and is rarely updated. Avoid clustered indexes on monotonically increasing columns. Try to use non-clustered indexes to "cover" important queries. Try to use short index keys when possible, except when creating covering indexes.
If you have appropriate indexes but they don't seem to produce the expected results, examine their fragmentation level. Also ensure that statistics stay up to date. Use the Index Tuning Wizard and study the textual and graphical query execution plans to determine the cause of inefficient queries.
Learn how the optimizer works. Examine and test various join types and table join order; use optimizer hints to override the query optimizer's behavior if necessary. Move all database maintenance tasks to the periods of limited user activity.
Try to avoid sub-queries in your programs; solutions with joins usually outperform sub-queries. Use the UPDATE ... FROM and DELETE ... FROM extensions of Transact-SQL to eliminate sub-queries when updating and deleting data using multiple tables.
Consider using UNION ALL instead of UNION when appropriate (a short sketch of this and the previous point appears after this list).
Evaluate the impact of triggers and constraints on your performance. At times your programs might be fully optimized but a poorly written trigger on one of the affected tables can bring the server to its knees. Constraints that cannot take advantage of helpful indexes might impact performance as well.
Try to avoid using SELECT … INTO unless you are the only user in the database or the operation takes a minimal amount of time. Consider replacing all references to SELECT … INTO with INSERT … SELECT.
Use SET NOCOUNT ON in all of your code modules to reduce the amount of information going from the server to clients and reduce the network load.
When you use built-in or user-defined functions, the optimizer cannot take advantage of an index if the index key column is referenced inside a function. Try rewriting your queries so that the index key is not affected.
When possible, try to rewrite queries that use local variables as stored procedures with input parameters. The optimizer is more likely to choose an appropriate execution plan when parameters are used.
When appropriate use temporary tables to reduce the number of rows affected by the queries. If temporary tables are joined to other temporary or permanent tables consider building an index on the temporary table.
Optimize loops - move all tasks that do not need to be executed repeatedly outside of the loop.
Do not use cursors unless absolutely necessary. Transact-SQL is a set based language and is not optimized for processing one row at a time.
Be aware that updates in-place are more efficient than a DELETE followed by an INSERT. Try to design your database such that in-place updates are likely to occur.
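As a small illustration of two of the points above (the table names are hypothetical), the sketch below shows the UPDATE ... FROM form that replaces a correlated sub-query, and a UNION ALL that skips the duplicate-removal step of UNION when the two sets are known to be disjoint:
-- UPDATE ... FROM instead of a correlated sub-query:
UPDATE o
SET o.CustomerName = c.CustomerName
FROM dbo.Orders o
INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
-- UNION ALL avoids the extra sort/duplicate-removal work of UNION:
SELECT OrderID FROM dbo.Orders2010
UNION ALL
SELECT OrderID FROM dbo.Orders2011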
Tables and indexes with the number of days since statistics were last updated
SELECT OBJECT_NAME(A.object_id) AS Object_Name, A.name AS index_name, STATS_DATE(A.OBJECT_ID, index_id) AS StatsUpdated ,
DATEDIFF(d,STATS_DATE(A.OBJECT_ID, index_id),getdate()) DaysOld
FROM sys.indexes A
INNER JOIN sys.tables B ON A.object_id = B.object_id
WHERE A.name IS NOT NULL
ORDER BY DATEDIFF(d,STATS_DATE(A.OBJECT_ID, index_id),getdate()) DESC
Friday, May 6, 2011
Problem
I had a production problem recently: the transaction log grew 1 GB every minute and filled up the entire drive. The error message I got was: "The transaction log for database 'MyDB' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases." My SQL Server is clustered and the issue happened after a node failover occurred. In this tip I cover the steps that I took to resolve this problem.
Solution
The message I got was that the transaction log drive was full and users were unable to access the application; DBAs never want to hear this! The usual DBA question is "what was changed?"
My first priority was to fix the problem immediately so users could access the application, and then do root cause analysis. For the database in question, I changed the database recovery model to SIMPLE. Then I shrank the transaction log file to a reasonable size (note: you can't shrink it below its original size). After these changes the log growth stopped, but this still did not tell me what the issue was. Something to note is that changing the recovery model to SIMPLE is not possible if Mirroring or Log Shipping is set up unless you remove these first, but they can be reconfigured later; the important thing is to stop the bleeding ASAP.
Other options would have been to create another transaction log file on a different drive that had plenty of space or to clean up files on the drive that was running out of space. But in my case the file was growing so rapidly that I needed to change the recovery model so the growth would stop until this issue was resolved.
Below are the steps that I took to check the size, change the recovery model and shrink the log file.
SELECT recovery_model_desc FROM sys.databases WHERE name = 'MyDB' --Before
ALTER DATABASE MyDB SET recovery simple
SELECT recovery_model_desc FROM sys.databases WHERE name = 'MyDB' --After
EXEC xp_fixeddrives --Check free drive space
EXEC sp_helpdb MyDB -- Note the size of the log before shrink
DBCC shrinkfile(MyDB_log, 1024) -- shrink log to 1 GB
EXEC sp_helpdb MyDB -- Note the size of the log after shrink
EXEC xp_fixeddrives -- Check free drive space
First aid is complete, database is up and users are able to use the application again.
Now I need to find out what was causing the log to grow so fast, fix the issue, put the database back in FULL recovery mode, do a full backup and make sure my log backups are working correctly.
The drive now has enough space. I put the database in Full recovery mode again to see if I could catch the offending transaction and I monitored drive space and transaction log size. I immediately noticed the transaction log growing about 1 GB per minute. I then executed the block of code below to find out what may be causing the log growth.
DBCC opentran --get the spid and feed it into the next query
DECLARE @handle BINARY(20)
DECLARE @SPID INT
SET @SPID = 100 -- 100 was the spid from the dbcc opentran
SELECT @handle = sql_handle
FROM MASTER..sysprocesses
WHERE spid = @SPID
SELECT [text] FROM ::fn_get_sql(@handle)
GO
The culprit was a stored procedure that was showing up every time I executed the above block of code, although the SPID changed each time. I looked into the stored procedure, which was not very complicated but does a bunch of deletes and inserts based on some condition. This procedure was called by an automated process every minute. What happens when a cluster node failover occurs? The cache is flushed and all the query plans are gone. Since there had not been any issue with this procedure in the past, my first option was to recompile the procedure. Voila, it worked! The transaction log stopped growing so fast and I didn't see this procedure in the open transactions from the above code. So this simple stored procedure that had worked fine suddenly caused a major production issue that shut down the application, and the fix was as simple as a recompile. Quite interesting.
My next step was to start a full database backup ASAP. While the full database backup was running I kicked off transaction log backups every 5 minutes and they seemed to work fine. I changed the transaction log backup to its normal schedule. After the full backup was completed, I reviewed the sql log, drive space, transaction log size, open transactions and everything seemed fine. If it is a cluster, you could try failing back to the other node and see if that fixes the problem, which in my case was the need for a recompile. In a cluster, the storage is shared by the nodes, so there is no benefit from a storage point of view.
After this, I researched what I could have done better in this situation. I knew that the scripts I used were from SQL Server 2000; the SQL Server 2005/2008 DMVs would have helped me find the offending transaction. Another eventful day in a DBA's life. Feels good to have resolved the issue.
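A hedged sketch of what that 2005/2008 DMV approach might look like, listing sessions with open transactions together with the statement currently running on each:
SELECT s.session_id,
       t.transaction_id,
       st.[text] AS running_statement
FROM sys.dm_tran_session_transactions t
JOIN sys.dm_exec_sessions s ON s.session_id = t.session_id
JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st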