Wednesday, November 13, 2013

Linked Server setup with Oracle

Linked Server Setup with Oracle in SQL Server:

If you want to access data in Oracle (or another database product) with T-SQL through the SQL Server Database Engine, you need to configure a Linked Server.

Here is an example of setting up a linked server to Oracle.

First, install the Oracle client on the machine where you want to configure the linked server.
Go to [Oracle Client Installation Directory]\product\11.2.0\dbhome_1\NETWORK\ADMIN
Open the tnsnames.ora file. (It is the network configuration file for Oracle.)
Add a TNS entry to this file, as in the example here:

PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS =
        (PROTOCOL = TCP)
        (HOST = 192.168.10.10)
        (PORT = 1521)
      )
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
    )
  )

See the Oracle documentation for more details on the tnsnames.ora file.

Test the Oracle connection:

Open cmd.
Enter SQLPLUS.
Enter user-name: username@PROD
Enter password: password
Run any SELECT query to confirm the connection.

Now switch to SQL Server.
In Object Explorer, expand the Server Objects tree node and find Linked Servers.

Right-click Linked Servers, select New Linked Server, and enter the required information.

Select the Security page and enter credentials.

Click OK.
If everything is right, the linked server is now configured.
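If you prefer to script the linked server instead of using the GUI, something like the following should work. This is a sketch: the linked server name PRODNEW and the Oracle credentials are assumptions, and the OraOLEDB.Oracle provider must have been installed with the Oracle client.

```sql
-- Create a linked server named PRODNEW that points at the PROD TNS alias
EXEC master.dbo.sp_addlinkedserver
    @server = N'PRODNEW',            -- name used in OPENQUERY calls
    @srvproduct = N'Oracle',
    @provider = N'OraOLEDB.Oracle',  -- Oracle's OLE DB provider
    @datasrc = N'PROD';              -- TNS alias from tnsnames.ora

-- Map all local logins to a single Oracle account (assumed credentials)
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'PRODNEW',
    @useself = N'False',
    @locallogin = NULL,              -- NULL = applies to every local login
    @rmtuser = N'oracle_user',
    @rmtpassword = N'oracle_password';
```

This is equivalent to what the New Linked Server dialog and its Security page do behind the scenes.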

Now test your linked server.
Run a query such as:
SELECT a.*
FROM OPENQUERY(linkedservername, 'SELECT * FROM oracledatabasename.table_or_view') a;
Basic Syntax of OPENQUERY

OPENQUERY ( linked_server ,'query' )


Using OPENQUERY you can SELECT, INSERT, UPDATE, and DELETE.


Example SELECT Query:
SELECT a.*
FROM OPENQUERY(PRODNEW, 'SELECT * FROM Apps.customer_details') a;

Example INSERT Query:

INSERT OPENQUERY (PRODNEW, 'SELECT ID, name FROM Apps.customer_details')
VALUES ('C121', 'Uma Shankar');

Example UPDATE Query:

UPDATE OPENQUERY (PRODNEW, 'SELECT name FROM Apps.customer_details WHERE id = ''C121''')
SET name = 'Uma Shankar Patel';

Example DELETE Query:

DELETE OPENQUERY (PRODNEW, 'SELECT ID FROM Apps.customer_details WHERE ID = ''C121''');
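Besides OPENQUERY, a pass-through statement can also be sent with EXECUTE ... AT, which supports parameters via ? placeholders. This is a sketch using the same assumed PRODNEW linked server and table names as above; note that the RPC Out server option must be enabled first.

```sql
-- RPC Out must be allowed for EXECUTE ... AT to work
EXEC master.dbo.sp_serveroption N'PRODNEW', N'rpc out', N'true';

-- Parameterized pass-through query; ? is the placeholder filled by 'C121'
EXEC ('SELECT ID, name FROM Apps.customer_details WHERE ID = ?', 'C121') AT PRODNEW;
```

This is handy when the filter value is only known at runtime, since OPENQUERY requires a literal query string.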


These queries fetch data from Oracle via the SQL Server Database Engine.
That is a little about linked servers. I learned this when a client needed a web site that fetched data from SQL Server, but the client's customer data came from Oracle.

  • Insert data from Oracle to SQL Server over linked server 


INSERT INTO sqldatabasename.dbo.tablename

(
col1,
col2,
col3
)
SELECT a.*
FROM OPENQUERY(linkedservername,'SELECT
col1,
col2,
col3
from oracletablename') a;

  • Insert data from SQL server to Oracle over linked server 
INSERT INTO OPENQUERY(linkedservername,'select
col1,
col2,
col3                   
from oracletablename')

   SELECT
   col1,
   col2,
   col3
  FROM sqldatabase.dbo.tablename;

  • Delete data from Oracle over linked server
DELETE OPENQUERY (linkedservername, 'SELECT * FROM oracletablename');

In the same way, you can create linked servers to Oracle, MySQL, and other RDBMSs, and use the sample queries above as required.


Update oracle table from SQL Server


  • UPDATE u
    SET u.oraclefieldname = 'value'
    FROM
    OPENQUERY(linkedservername, 'SELECT * FROM oracletable') AS u
    WHERE u.hdr_id = @rid

Friday, October 4, 2013

Convert INT to Date Format


CREATE FUNCTION [dbo].[udf_Convert_INT_Date]
 (
 @Date_In INT
 )
RETURNS datetime
AS
BEGIN
   DECLARE @date_out datetime
   -- An INT such as 20131113 casts to the unseparated string '20131113',
   -- which CONVERT interprets as YYYYMMDD (style 112)
   SET @date_out = CONVERT(datetime, CAST(@Date_In AS CHAR(8)), 112)

   RETURN @date_out
END

 --Execute

 SELECT DateKey,
 CONVERT(VARCHAR(25),AdventureWorksDW2008R2.dbo.udf_convert_int_date
 (DateKey),103) AS [ConvertedDate]
FROM dbo.DimDate 

Thursday, September 26, 2013

Who is viewing reports in SQL Server 2012 Reporting Services

As in previous versions, SQL Server 2012 Reporting Services provides us with a number of tables and views that let us see who has accessed what report and when they accessed it.
The T-SQL query below selects from the ExecutionLog2 view in the ReportServer database.

USE ReportServer;
GO
SELECT el2.username,el2.InstanceName, 
el2.ReportPath,el2.TimeStart, 
el2.TimeEnd,el2.[Status],
isnull(el2.Parameters, 'N/A') as Parameters 
FROM ExecutionLog2 el2
GO
 
The first 13 rows of output from the T-SQL code show who viewed which report, when they viewed it, and which parameters, if any, were passed to the report. This query could itself be used within an SSRS report and then published to the report server.
Using a query similar to the one shown below, we can look at the TimeEnd column to determine when the reports are being viewed. This will help us gain insight into when our processes that are generating the data behind the reports should complete. We can also look at the results of this query to determine behavioral patterns of the report users.
 
USE ReportServer;
GO
SELECT username, convert(varchar(25),TimeEnd, 120) as AccessTime
FROM ExecutionLog2
WHERE status='rsSuccess'
AND username='STAFF\user01' 
AND ReportPath='/Sales/YTDSalesByProductCategory'
ORDER BY AccessTime desc
GO


The query shown below will return the report access counts per user and report for the current month.
 
USE ReportServer;
GO
SELECT username, ReportPath, count(*) as ViewCount
FROM ExecutionLog2
WHERE status='rsSuccess'
AND TimeEnd>=DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0)
GROUP BY username, ReportPath
ORDER BY username, ViewCount desc
GO
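Going the other way, the Catalog table can be joined to the execution log to find reports that have not been viewed at all recently. This is a sketch: the 90-day window is an arbitrary choice, and Type = 2 is the Catalog value that filters down to reports.

```sql
USE ReportServer;
GO
-- Reports with no successful execution in the last 90 days
SELECT c.Path, c.Name
FROM dbo.Catalog c
LEFT JOIN dbo.ExecutionLog2 el2
    ON el2.ReportPath = c.Path
    AND el2.Status = 'rsSuccess'
    AND el2.TimeEnd >= DATEADD(dd, -90, GETDATE())
WHERE c.Type = 2              -- 2 = report
  AND el2.ReportPath IS NULL  -- no matching execution found
ORDER BY c.Path;
GO
```

Reports that show up here are candidates for retirement or at least a conversation with their owners.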



Wednesday, September 25, 2013

Determine the Space Used by Each Table in a SQL Server Database

SET NOCOUNT ON
DECLARE @table_name VARCHAR(500)
DECLARE @SpaceUsed TABLE (    
    tablename sysname,
    row_count INT,
    reserved VARCHAR(50),
    data VARCHAR(50),
    index_size VARCHAR(50),
    unused VARCHAR(50)
)
DECLARE curTables CURSOR FOR 
SELECT s.name + '.' + t.name
FROM sys.tables t  INNER JOIN sys.schemas s
    ON t.schema_id = s.schema_id
OPEN curTables
FETCH NEXT FROM curTables INTO @table_name
WHILE @@FETCH_STATUS = 0 
BEGIN  
    SET @table_name = REPLACE(REPLACE(@table_name, '[', ''), ']', '')
    -- make sure the object exists before calling sp_spaceused
    IF EXISTS(SELECT OBJECT_ID FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(@table_name))
    BEGIN
        INSERT INTO @SpaceUsed EXEC sp_spaceused @table_name, 'false'
    END
    FETCH NEXT FROM curTables INTO @table_name
END
CLOSE curTables
DEALLOCATE curTables
-- Compute the total size of each table and add the schema name to the result set
SELECT    s.name as schemaname,
        su.tablename,
        su.row_count,
        su.data,
        su.index_size,
        su.unused,
        CONVERT(decimal, CONVERT(bigint, REPLACE(su.data, ' KB', '')) +
            CONVERT(bigint, REPLACE(su.index_size, ' KB', '')) +
            CONVERT(bigint, REPLACE(su.unused, ' KB', ''))) / 1024 AS [total (MB)]
FROM    @SpaceUsed su INNER JOIN sys.tables t
            ON t.name = su.tablename
        INNER JOIN sys.schemas s
            ON t.schema_id = s.schema_id
ORDER BY 
        [total (MB)] DESC,
        schemaname,
        tablename
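As an alternative to the cursor-and-sp_spaceused approach, the same numbers can be produced in a single set-based query from a DMV. This is a sketch; sizes are computed from 8 KB pages in sys.dm_db_partition_stats, so they may differ slightly from sp_spaceused's output.

```sql
-- Space used per table from sys.dm_db_partition_stats (8 KB pages)
SELECT  s.name AS schemaname,
        t.name AS tablename,
        -- row_count only counts heap/clustered-index partitions, not NC indexes
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
        SUM(ps.reserved_page_count) * 8 / 1024.0 AS [total (MB)]
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t  ON t.object_id = ps.object_id
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY [total (MB)] DESC, schemaname, tablename;
```

This avoids the cursor entirely, though it requires VIEW DATABASE STATE permission.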

Tuesday, August 27, 2013

What is Copy Only Backup?

The copy-only backup is actually a very important tool in every production DBA's arsenal. A copy-only backup is essentially a 'snapshot' of the database at that point in time that is completely independent of any sequencing of backups used for a restore.
Why is it important some might ask, well, let me explain. Almost all production SQL environments have some sort of back up strategy which includes some combination of full backups, differential backups and/or Transactional log backups. For instance, a full backup is taken on a database every weekend. This creates a starting reference point for a restore.  After the full backup, differential backups are taken nightly with additional transaction log (TLog) backups taken at specified intervals throughout the day to create what is called a log chain. All of these differential and TLog backups are tied directly to the preceding full backup. A Log Sequence Number (LSN) is created to keep all the backups in order. When a restore is required, the full backup must be restored with all the proper differential and TLog backups. If one is missing, the restore will fail. The important thing here is that after a full backup, all of the differential and TLog backups following it are tied directly to that Full backup.
So, assuming we are all very organized and keep all of our backups automated and documented properly, we should be able to perform a point-in-time restore without issue. But what happens if one of your co-workers takes an ad hoc full backup of a database to restore to a development environment without telling anyone, and later that day you need to restore the database to a point in time after that ad hoc backup was taken? The restore will fail, since the log chain was broken by the ad hoc full backup and the LSNs are no longer in sync with the original full backup. Basically, any TLog backup taken after the ad hoc backup will now be associated with that backup. In this case, had the person simply taken the ad hoc backup using the Copy-Only option, there wouldn't have been any issues with the restore, since the chain of backups wouldn't have been broken.
So how do we take a copy-only backup of a database? It is as simple as adding a WITH clause to the backup command, as seen below.
-- Create full backup with Copy-Only option
BACKUP DATABASE MyDataBase
TO DISK = 'D:\MyDatabase_CopyOnly.bak'
WITH COPY_ONLY
GO
The copy-only feature has been available since SQL 2005, but had to be written in the T-SQL backup statement. In SQL2008, the Copy-Only feature is available in the GUI for ease of use.
Keep in mind that the copy-only feature is not available for differential backups, and the option in the GUI will be greyed out. The copy-only feature should always be considered for any ad hoc backup that is not part of the normal backup schedule in your environment.
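You can confirm which past backups were taken with the Copy-Only option by checking the backup history in msdb (the is_copy_only column exists in SQL Server 2005 and later; the database name here is just the one assumed in the example above).

```sql
-- Backup history for MyDataBase, flagging copy-only backups
SELECT  bs.database_name,
        bs.backup_start_date,
        bs.type,          -- D = full, I = differential, L = log
        bs.is_copy_only   -- 1 when the backup was taken WITH COPY_ONLY
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'MyDataBase'
ORDER BY bs.backup_start_date DESC;
```

This is a quick way to diagnose a broken log chain after the fact: look for a full backup with is_copy_only = 0 that nobody expected.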

Thursday, August 22, 2013

SQL Server Isolation Levels By Example

Isolation levels in SQL Server control the way locking works between transactions.

SQL Server 2008 supports the following isolation levels
  • Read Uncommitted
  • Read Committed (The default)
  • Repeatable Read
  • Serializable
  • Snapshot
Before I run through each of these in detail, you may want to create a new database to run the examples in; run the following script on the new database to create the sample data. Note: you'll also want to drop the IsolationTests table and re-run this script before each example to reset the data.
CREATE TABLE IsolationTests
(
    Id INT IDENTITY,
    Col1 INT,
    Col2 INT,
    Col3 INT
)
INSERT INTO IsolationTests(Col1,Col2,Col3)
SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
Also, before we go any further, it is important to understand these two terms:
  1. Dirty Reads – This is when you read uncommitted data. There is no guarantee that the data read will ever be committed, meaning the data could well be bad.
  2. Phantom Reads – This is when data that you are working with has been changed by another transaction since you first read it. This means subsequent reads of this data in the same transaction could well be different.

Read Uncommitted

This is the lowest isolation level there is. Read uncommitted causes no shared locks to be requested which allows you to read data that is currently being modified in other transactions. It also allows other transactions to modify data that you are reading.
As you can probably imagine, this can cause some unexpected results in a variety of ways. For example, the data returned by a select could be in a halfway state if an update was running in another transaction, causing some of your rows to come back with the updated values and some not to.
To see read uncommitted in action, let's run Query1 in one tab of Management Studio and then quickly run Query2 in another tab before Query1 completes.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM IsolationTests
Notice that Query2 does not wait for Query1 to finish and, more importantly, Query2 returns dirty data. Remember, Query1 rolls back all its changes, yet Query2 returned the data anyway; this is because it didn't wait for the other transactions with exclusive locks on this data, it just returned what was there at the time.
There is a syntactic shortcut for querying data using the read uncommitted isolation level by using the NOLOCK table hint. You could change the above Query2 to look like this and it would do the exact same thing.
SELECT * FROM IsolationTests WITH(NOLOCK)

Read Committed

This is the default isolation level and means selects will only return committed data. Select statements issue shared lock requests against the data you're querying, which causes you to wait if another transaction already has an exclusive lock on that data. Once you have your shared lock, any other transactions trying to modify that data will request an exclusive lock and be made to wait until your Read Committed transaction finishes.
You can see an example of a read transaction waiting for a modify transaction to complete before returning the data by running the following Queries in separate tabs as you did with Read Uncommitted.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SELECT * FROM IsolationTests
Notice how Query2 waited for the first transaction to complete before returning and also how the data returned is the data we started off with as Query1 did a rollback. The reason no isolation level was specified is because Read Committed is the default isolation level for SQL Server. If you want to check what isolation level you are running under you can run “DBCC useroptions”. Remember isolation levels are Connection/Transaction specific so different queries on the same database are often run under different isolation levels.
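Besides DBCC USEROPTIONS, the current isolation level of any session can be read from the sys.dm_exec_sessions DMV. This is a sketch; the numeric codes map as commented.

```sql
-- Isolation level of the current session
SELECT  session_id,
        CASE transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'Read Uncommitted'
            WHEN 2 THEN 'Read Committed'
            WHEN 3 THEN 'Repeatable Read'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
        END AS isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```

Dropping the WHERE clause shows the isolation level of every connected session, which is useful when chasing down unexpected blocking.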

Repeatable Read

This is similar to Read Committed but with the additional guarantee that if you issue the same select twice in a transaction, you will get the same results both times. It does this by holding on to the shared locks it obtains on the records it reads until the end of the transaction. This means any transactions that try to modify these records are forced to wait for the read transaction to complete.
As before, run Query1, then while it's running run Query2.
Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
UPDATE IsolationTests SET Col1 = -1
Notice that Query1 returns the same data for both selects, even though you ran a query to modify the data before the second select ran. This is because the update query was forced to wait for Query1 to finish due to the shared locks that were held until the end of the transaction because you specified Repeatable Read.
If you rerun the above Queries but change Query1 to Read Committed you will notice the two selects return different data and that Query2 does not wait for Query1 to finish.
One last thing to know about Repeatable Read is that the data can change between two queries if more records are added. Repeatable Read guarantees that records queried by a previous select will not be changed or deleted; it does not stop new records from being inserted, so it is still very possible to get phantom reads at this isolation level.

Serializable

This isolation level takes Repeatable Read and adds the guarantee that no new data will be added eradicating the chance of getting Phantom Reads. It does this by placing range locks on the queried data. This causes any other transactions trying to modify or insert data touched on by this transaction to wait until it has finished.
You know the drill by now run these queries side by side…
Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
INSERT INTO IsolationTests(Col1,Col2,Col3)
VALUES (100,100,100)
You’ll see that the insert in Query2 waits for Query1 to complete before it runs, eradicating the chance of a phantom read. If you change the isolation level in Query1 to Repeatable Read, you’ll see the insert no longer gets blocked and the two select statements in Query1 return a different number of rows.

Snapshot

This provides the same guarantees as Serializable. So what’s the difference? It’s mostly in the way it works: using snapshot doesn’t block other queries from inserting or updating the data touched by the snapshot transaction. Instead, row versioning is used: when data is changed, the old version is kept in tempdb so existing transactions see the version without the change. When all transactions that started before the change are complete, the previous row version is removed from tempdb. This means that even if another transaction has made changes, you will always get the same results as you did the first time in that transaction.
So on the plus side you’re not blocking anyone else from modifying the data whilst you run your transaction, but you’re using extra resources on the SQL Server to hold multiple versions of your changes.
To use the snapshot isolation level you need to enable it on the database by running the following command
ALTER DATABASE IsolationTests
SET ALLOW_SNAPSHOT_ISOLATION ON
If you rerun the examples from serializable but change the isolation level to snapshot you will notice that you still get the same data returned but Query2 no longer waits for Query1 to complete.
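Concretely, rerunning the Serializable example under snapshot just means changing the SET statement in Query1:

```sql
-- Query1 from the Serializable example, switched to snapshot isolation
-- (requires ALLOW_SNAPSHOT_ISOLATION to be ON for the database)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
```

Both selects read the same row versions from tempdb, so Query2's insert proceeds immediately yet never appears inside this transaction.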

Summary

You should now have a good idea how each of the different isolation levels work. You can see how the higher the level you use the less concurrency you are offering and the more blocking you bring to the table. You should always try to use the lowest isolation level you can which is usually read committed.