Make sure that the database being restored is at the same product level (service pack) as the server you are restoring to. I once faced an issue when trying to restore a database from an SP4 server to a non-SP4 server.
1. Detach the corrupted MDF from Enterprise Manager.
2. Save the corrupted MDF to a safe location.
3. Now create a new database with the same name (the file locations must be the same as those of the corrupted MDF and LDF).
4. Stop the SQL Server service.
5. Now replace this MDF and LDF (new database) with the “corrupted MDF and LDF”.
6. Start SQL Server.
7. Launch Enterprise Manager. Now you should be able to see the database in suspect mode.
8. Launch Query Analyzer and select the master database. Issue the following commands in this order:
sp_configure 'allow updates', 1
go
reconfigure with override
go
9. You can now make changes to the system catalog tables.
10. Now check the status of the database by issuing the following command:
select status from sysdatabases where name='your database name' (replace with your database name)
11. Now execute the following command:
update sysdatabases
set status = 32768 where name='your database name'
12. Next restart SQL Server Service.
13. You will now be in Emergency Mode.
14. Now create a destination recovery database, called dbrecover and try to push data from the corrupt database to the new dbrecover database using DTS.
15. Now issue the following undocumented command:
DBCC rebuild_log ('your database name', 'new log filename with path')
16. After this executes successfully, you will need to get into your database, so issue the following commands:
use master
go
sp_dboption 'your database name', 'single user', 'true'
go
DBCC checkdb('your database name', repair_allow_data_loss)
go
17. Finally set your database back to a normal status by issuing the following command:
use master
go
update sysdatabases
set status=0 where name='your database name'
go
18. Now you can see that your database is back online!
19. As a precautionary measure, disable updates to the system catalogs immediately by issuing the following commands:
use master
go
sp_configure 'allow updates', 0
go
reconfigure with override
go
This is how you recover a corrupted MDF database and bring it back online.
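On SQL Server 2005 and later, direct updates to the system catalogs are no longer allowed, so the equivalent recovery uses ALTER DATABASE instead. A sketch, with a placeholder database name:

```sql
-- Sketch for SQL Server 2005+: emergency-mode repair without touching system tables.
-- 'YourDatabase' is a placeholder. REPAIR_ALLOW_DATA_LOSS can discard data,
-- so copy the MDF/LDF files somewhere safe first.
ALTER DATABASE YourDatabase SET EMERGENCY;
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('YourDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE YourDatabase SET MULTI_USER;
```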
Tuesday, July 27, 2010
Tuesday, July 20, 2010
Rename Database
ALTER DATABASE yafnet SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC sp_renamedb 'yafnet', 'BballersForum'
ALTER DATABASE BballersForum SET MULTI_USER
GO
Friday, July 16, 2010
If you are having difficulty tuning the performance of a poorly performing query that has one or more JOINs,
check to see if the query plan created by the query optimizer is using a hash join. When the query optimizer is asked to join two tables that don't have appropriate indexes, it will often perform a hash join.
A hash join is resource intensive (especially CPU and I/O) and can slow the performance of your join. If the query in question is run often, you should consider adding appropriate indexes. For example, if you are joining column1 in table1 to column5 in table2, then column1 in table1 and column5 in table2 need to have indexes.
Once indexes are added to the appropriate columns used in the joins in your query, the query optimizer will most likely be able to use these indexes, performing a nested-loop join instead of a hash join, and performance will improve.
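For the example above, the supporting indexes can be created like this (the table and column names are the illustrative ones from the text):

```sql
-- Indexing both sides of the join predicate lets the optimizer
-- choose a nested-loop (or merge) join instead of a hash join.
CREATE INDEX IX_table1_column1 ON table1 (column1);
CREATE INDEX IX_table2_column5 ON table2 (column5);

-- The join that benefits:
-- SELECT t1.*, t2.*
-- FROM table1 t1
-- INNER JOIN table2 t2 ON t1.column1 = t2.column5;
```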
Tips for Rebuilding Indexes
Periodically (daily, weekly, or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance. It will also update column statistics.
If you do a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.
Database reorganizations can be done using the Maintenance Wizard, or by running your own custom script via the SQL Server Agent (see below).
The DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can index all the tables in a database with ease:
--Script to automatically reindex all tables in a database
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@TableName, '', 90) -- '' rebuilds all indexes on the table
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and provide a fill factor of 90%. You can substitute any number appropriate for the fill factor in the above script.
When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a specific table are being rebuilt, that the table becomes unavailable for use by your users.
For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations to be performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don't need access to the tables being reorganized. [7.0, 2000, 2005] Updated 7-24-2006
*****
When you create or rebuild an index, you can specify a fill factor, which is how full the data pages in the index are. A fill factor of 100 means that each index page is 100% full; a fill factor of 50 means each index page is 50% full. If you create a clustered index that has a fill factor of 100, and it is not based on a monotonically increasing key, then each time a record is inserted (or perhaps updated), page splits may occur because there is no room for the data in the existing pages. Numerous page splits can slow down SQL Server's performance.
Here's an example: Assume that you have just created a new index on a table with the default fill factor. When SQL Server creates the index, it places the index on contiguous physical pages, which allows optimal I/O access because the data can be read sequentially. But as the table grows and changes with INSERTS, UPDATES, and DELETES, page splitting occurs. When pages split, SQL Server must allocate new pages elsewhere on the disk, and these new pages are not contiguous with the original physical pages. Because of this, random I/O, not sequential I/O access must be used to gather the data, which is much slower, to access the index pages.
So what is the ideal fill factor? It depends on the ratio of reads to writes that your application makes to your SQL Server tables. As a rule of thumb, follow these guidelines:
* Low Update Tables (100-1 read to write ratio): 100% fill factor
* High Update Tables (where writes exceed reads): 50%-70% fill factor
* Everything In-Between: 80%-90% fill factor.
You may have to experiment to find the optimum fill factor for your particular application. Don't assume that a low fill factor is always better than a high fill factor. While page splits will be reduced with a low fill factor, it also increases the number of pages that have to be read by SQL Server during queries, which reduces performance. And not only is I/O overhead increased with too low a fill factor, it also affects your buffer cache. As data pages are moved in from disk to the buffer, the entire page (including empty space) is moved to the buffer. So the lower the fill factor, the more pages have to be moved into SQL Server's buffer, which means there is less room for other important data pages to reside at the same time, which can reduce performance.
If you don't specify a fill factor, the default fill factor is 0, which means the same as a 100% fill factor, (the leaf pages of the index are filled 100%, but there is some room left on intermediate index pages). In most cases, this default value is not a good choice, especially for clustered indexes. [6.5, 7.0, 2000, 2005] Updated 7-24-2006
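The guidelines above translate directly into the FILLFACTOR option when creating an index (the table and index names here are illustrative):

```sql
-- A heavily updated table gets a lower fill factor to leave room for inserts.
CREATE INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
WITH FILLFACTOR = 70;

-- A mostly-read lookup table can be packed full.
CREATE INDEX IX_Countries_Name
ON Countries (Name)
WITH FILLFACTOR = 100;
```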
*****
If you find that your transaction log grows to an unacceptable size when you run DBCC DBREINDEX, you can minimize this growth by switching from the full recovery model to the bulk-logged model before you reindex, and when done, switch back. This will significantly reduce the size of the transaction log growth. [2000, 2005] Updated 7-24-2006
*****
If you have a table that has a clustered index on a monotonically increasing or decreasing primary key, and if the table is not subject to UPDATEs or if it has no VARCHAR columns, then the ideal fill factor for the table is 100. This is because such a table will normally not experience any page splits. Because of this, there is no point in leaving any room in the index for page splits. And because the fill factor is 100, SQL Server will require fewer I/Os to read the data in the table, and performance will be boosted. [7.0, 2000, 2005] Updated 7-24-2006
*****
If you are not sure what to make the fill factor for your indexes, your first step is to determine the ratio of disk writes to reads. The way to do this is to use these two counters: Physical Disk Object: % Disk Read Time and Physical Disk Object: % Disk Write Time. When you run both counters on an array, you should get a good feel for what percentage of your I/O is reads and what percentage is writes. You will want to run this over a period of time representative of your typical server load. If your percentage of writes greatly exceeds the percentage of reads, then a lower fill factor is called for. If your percentage of reads greatly exceeds the percentage of writes, then a higher fill factor is called for.
Another Performance Monitor counter you can use to help you select the ideal fill factor for your environment is the SQL Server Access Methods: Pages Splits/Sec. This counter measures the number of page splits that are occurring in SQL Server every second. For best performance, you will want this counter to be as low as possible, as page splits incur extra server overhead, hurting performance. If this number is relatively high, then you may need to lower the fill factor in order to prevent new page splits. If this counter is very low, then the fill factor you have is fine, or it could be a little too low. You won't know unless you increase the fill factor and watch the results.
Ideally, you want a fill factor that prevents excessive page splits, but not so low as to increase the size of the database, which in turn can reduce read performance because of all the extra data pages that need to be read.
Once you know the ratio of disk write to reads, you now have the information you need to help you determine an optimum fill factor for your indexes. [6.5, 7.0, 2000, 2005] Updated 7-24-2006
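On SQL Server 2005, the Page Splits/sec counter can also be read from T-SQL. A sketch (the counter value is cumulative since server start, so sample it twice and take the difference to get a rate):

```sql
-- Cumulative page-split count since the instance started (SQL Server 2005+).
SELECT cntr_value AS page_splits_since_startup
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec';
```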
*****
If you want to determine the level of fragmentation of your indexes due to page splitting, you can run the DBCC SHOWCONTIG command. Since this command requires you to know the ID of both the table and index being analyzed, you may want to run the following script:
--Script to identify table fragmentation
--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)
--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table
--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName
--Display the fragmentation
DBCC SHOWCONTIG (@id, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database, and also increase the fill factor if you are finding that the current fill factor you are using is not appropriate. [6.5, 7.0, 2000] Updated 7-24-2006
*****
Here's a script that is used to create DBCC SHOWCONTIG commands for all of the indexes in one or more tables. Once you run this script, it will produce for you a DBCC SHOWCONTIG statement for each index, which you can then run to find out about the level of fragmentation of your indexes. This script is especially handy if you don't know the names of the indexes in your tables (which is most of the time).
SELECT 'dbcc showcontig (' +
CONVERT(varchar(20),i.id) + ',' + -- table id
CONVERT(varchar(20),i.indid) + ') -- ' + -- index id
object_name(i.id) + '.' + -- table name
i.name -- index name
from sysobjects o
inner join sysindexes i
on (o.id = i.id)
where o.type = 'U'
and i.indid < 2
and
i.id = object_id(o.name)
ORDER BY
object_name(i.id), i.indid
Once you run this script, the output will be DBCC SHOWCONTIG statements for each of the table(s) and index(es). This output can then be cut and pasted into Query Analyzer or Management Studio and run, which produces a DBCC SHOWCONTIG result for every index of every table you specified. [7.0, 2000, 2005] Updated 11-1-2005 Tip contributed by Jeff M. Belina and Steven R. Morrow
Saturday, July 10, 2010
CREATE SQL SERVER SCHEMA
-------------------------------------------------------
CREATE SCHEMA Testing AUTHORIZATION dbo;
GO
CREATE SCHEMA TestingSP AUTHORIZATION dbo;
-------------------------------------------------------
CREATE LOGIN Testing
WITH PASSWORD = 'Testing@123';
USE MarketShiksha;
CREATE USER Testing FOR LOGIN Testing;
GO
---------------------------------------------------------
ALTER SCHEMA (the query below generates an ALTER SCHEMA ... TRANSFER statement for every user table)
SELECT 'ALTER SCHEMA Testing TRANSFER ' + SCHEMA_NAME(o.[schema_id]) + '.' + o.[name]
FROM sys.objects o WHERE o.[type] = 'U';
-------------------------------------------------------------------
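The generated statements can then be executed one by one; for a single table the transfer looks like this (the table name is illustrative):

```sql
-- Move dbo.Orders from the dbo schema into the Testing schema (illustrative name).
ALTER SCHEMA Testing TRANSFER dbo.Orders;
```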
SQL Server Paging Stored Procedure
CREATE PROCEDURE prc_MediaGallery_SELECT_All
@insPageNo AS INT,
@insPageSize AS INT,
@intTotalCount AS INT out
AS
BEGIN
SET NOCOUNT ON;
DECLARE @intPageCount INT
BEGIN TRY
SELECT @intTotalCount = COUNT(MgId)
FROM TheSikhTV.MediaGallery
SET @intPageCount = @intTotalCount/@insPageSize;
IF (@intTotalCount%@insPageSize<>0)
SET @intPageCount= @intPageCount+1;
IF (@insPageNo>@intPageCount)
SET @insPageNo=@intPageCount;
SELECT TOP(@insPageSize) *
FROM (
SELECT ROW_NUMBER() OVER ( ORDER BY CreatedDate DESC) AS RowNum,
MgId,MgImageName,MgImagePath,CONVERT (Varchar (20),CreatedDate,106)as CreatedDate1,ModifiedDate
FROM TheSikhTV.MediaGallery
) A
WHERE A.RowNum > (@insPageSize * (@insPageNo - 1));
RETURN 1;
END TRY
BEGIN CATCH
RETURN -1;
END CATCH
END
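A call to the procedure above might look like this (the parameter values are examples):

```sql
-- Fetch page 2 at 10 rows per page; @intTotalCount returns the total row count.
DECLARE @total INT;
EXEC prc_MediaGallery_SELECT_All
    @insPageNo = 2,
    @insPageSize = 10,
    @intTotalCount = @total OUTPUT;
SELECT @total AS TotalRows;
```

Note that the procedure assumes @insPageSize is greater than zero; passing zero would cause a divide-by-zero error.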