Friday 18 December 2015

SQL Server Basic Performance Issue Part-1

SQL Server performance can be degraded by a number of factors. Here we will investigate some of the common problem areas, look at some of the tools that help identify issues, and review how to fix these performance problems.

The following topics will be covered:
1. Blocking
2. Deadlocks
3. I/O, CPU and Memory
4. Query Tuning Bookmark Lookups
5. Query Tuning Index Scans


1. Blocking
There are several ways you can identify blocking in SQL Server:
a) sp_who2
Execute sp_who2 in a query window; the BlkBy column shows which session is blocking each SPID.

b) DMVs
-- Blocked sessions and what they are waiting on
SELECT session_id, command, blocking_session_id, wait_type, wait_time, wait_resource, t.TEXT
FROM sys.dm_exec_requests 
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS t
WHERE session_id > 50 
AND blocking_session_id > 0
UNION
-- The sessions doing the blocking, with their most recent statement
SELECT session_id, '', '', '', '', '', t.TEXT
FROM sys.dm_exec_connections 
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS t
WHERE session_id IN (SELECT blocking_session_id 
                    FROM sys.dm_exec_requests 
                    WHERE blocking_session_id > 0)

c) Report - All Blocking Transactions
Another option is to use the built-in reports in SSMS. Right-click on the SQL Server instance name and select Reports > Standard Reports > Activity - All Blocking Transactions.

d) Activity Monitor
In SSMS, right-click on the SQL Server instance name and select Activity Monitor. In the Processes section you will see information similar to the output of sp_who2, but you can also see the Wait Time, Wait Type and the resource that the blocked session (SPID 60 in this example) is waiting for.
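
Once the head blocker is identified and you have confirmed that its work can safely be sacrificed, the session can be terminated. A minimal sketch, assuming SPID 60 from the example above is the head blocker:

-- Terminate the head blocker; its open transaction will be rolled back
KILL 60;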



Monday 30 November 2015

SQL Server Reporting Services Migration

Migrate Reporting Services Database to another Instance for 2008/2012

Let's suppose there are two SQL Server instances:
SQLServerA
SQLServerB
Here we are migrating Reports from
Source Server - SQLServerA to Target Server - SQLServerB

1). Back up the encryption key and the RS databases - ReportServer & ReportServerTempdb - from SQLServerA
2). Stop the Reporting Services service on SQLServerB
3). Restore these databases on SQLServerB with the target reporting database names (ReportServer & ReportServerTempdb)
4). Start Reporting Services on SQLServerB
5). Reset the database connection for Reporting Services on the target machine using the Reporting Services Configuration Manager
6). Restore the encryption key on SQLServerB
---Restore the encryption key from the backup which you took in step 1
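
Both key steps can also be scripted with the rskeymgmt.exe utility instead of the Configuration Manager. A minimal sketch (the file path and password below are placeholders of my own choosing):

rem Step 1 on SQLServerA - extract (back up) the encryption key
rskeymgmt -e -f D:\RSKey\rs_key.snk -p "Str0ngP@ss"

rem Step 6 on SQLServerB - apply (restore) the backed-up key
rskeymgmt -a -f D:\RSKey\rs_key.snk -p "Str0ngP@ss"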

Once you open the URL on the target server, you might get an error stating:
Scale-out deployment configuration error:
This happens because during step 6 the old server is added to the scale-out deployment on the target machine. If the source and target machines are using different editions of Reporting Services, you might find that some features are not supported when migrating to a lower-featured SQL Server edition:
The feature: "Scale-out deployment" is not supported in this edition of Reporting Services. (rsOperationNotSupported)
Normally that should not pose a problem, since you would simply remove the old server from the scale-out deployment list in the Reporting Services Configuration Manager.

Solution:
7). On SQLServerA
Run this command in a query window

For SQL2008/2008R2
SELECT * from ReportServer.dbo.Keys
For SQL2012
SELECT * from ReportServer2012.dbo.Keys
and make note of the InstallationId value for the non-null record
8). On SQLServerB
Run this command in a query window

For SQL2008/2008R2
SELECT * from ReportServer.dbo.Keys
For SQL2012
SELECT * from ReportServer2012.dbo.Keys
and you should see three or more records: one null record, and other records that have values in the MachineName field (these should be the old and new server names). The InstallationId value from the previous step should be there with the old server's name
9). On SQLServerB, delete the record that matches the old server's InstallationId, for example in this case:
Run this command in a query window
DELETE FROM [ReportServer].[dbo].[Keys]
WHERE MachineName = 'SQLServerA'

How to configure URL:
Default URL:
https://servername/Reports/Pages/Folder.aspx

For example, for the default instance on ServerA:
https://ServerA/Reports/Pages/Folder.aspx

For a named instance such as ServerA\Dev:
https://ServerA/Reports_Dev/Pages/Folder.aspx

Finally, restart Reporting Services and verify your reports.


Monday 23 November 2015

SQL Server Master DB log backup not allowed

SYMPTOMS

If you create a Database Maintenance Plan for all the system databases, or if you select the master database together with the Back up the transaction log as part of the maintenance plan option, the Backup transaction log step for the master database fails with this error message:
SQLMAINT.EXE Process Exit Code: 1 (Failed)
Msg 22029, Sev 16: sqlmaint.exe failed. [SQLSTATE 42000]

Backup can not be performed on database 'master'. This sub task is ignored.


CAUSE

Only full database backups of the master database are allowed; transaction log backups are not allowed even if the recovery model is set to Full.

The master database should be kept at its default of the Simple recovery model. You must create a separate maintenance plan for the master database that does not back up the transaction log, and create additional maintenance plans as needed for the other databases for which you want to back up the transaction log.
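
As a sketch, the separate full-backup step for master might look like this (the path is a placeholder):

-- A full database backup is the only backup type supported for master
BACKUP DATABASE master
TO DISK = N'D:\MSSQL\Backup\master_full.bak'
WITH INIT, STATS = 10;
GO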

If you want, you can set the msdb database to the Full recovery model and perform transaction log backups. If the recovery model for msdb is set to Simple, the backup transaction log step in the maintenance plan fails with the error message shown in the SYMPTOMS section above.


Monday 12 October 2015

SQL Server Lock Pages in Memory

How to enable "Lock pages in Memory" at the Windows level?

1. On the Start menu, click Run. In the Open box, type gpedit.msc. The Group Policy dialog box opens.
2. On the Group Policy console, expand Computer Configuration, and then expand Windows Settings.
3. Expand Security Settings, and then expand Local Policies.
4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
5. In the pane, double-click Lock pages in memory.
6. In the Local Security Policy Setting dialog box, click Add.
7. In the Select Users or Groups dialog box, add the account that runs the SQL Server Service.


8. Restart the machine for these changes to take effect.

If you are running an Enterprise or Developer Edition of SQL Server 2005 or 2008, please stop here. The rest of the steps are for the Standard Edition Only.

Ensure that the build of SQL Server 2008 is Cumulative Update 2 on Service Pack 1: run the "select @@version" command against the SQL Server instance and verify that the build is 10.00.2714.00. In the case of SQL Server 2005, verify that the build is 9.00.4226.
9. Now set trace flag 845 as a startup trace flag. This can be done by adding
-T845 to the startup parameters in SQL Server Configuration Manager.

10. Restart the SQL Server Service.
11. Verify that the following message is written in the SQL Server ErrorLog file at startup:
Using locked pages for buffer pool
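
You can also confirm it from a DMV (SQL Server 2008 and later); a minimal sketch:

-- A non-zero value means the buffer pool is using locked pages
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
GO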


Note: Enabling locked pages may have a negative impact on your system performance. Please consider all potential effects before you use this option. Enabling this trace flag on the Enterprise Edition or the Developer Edition has no effect.

Monday 5 October 2015

SQL Server Checkpoint Mystery

Checkpoints are one of the hidden mysteries in SQL Server. Thanks to Paul Randal for the very good explanation.

Whenever a checkpoint occurs (auto/manual/indirect/direct), the following set of operations is performed for the database:

1. All dirty data file pages for the database are written to disk (all pages that have changed in memory since they were read from disk or since the last checkpoint), regardless of the state of the transaction that made the change.
2. Before a page is written to disk, all log records up to and including the most recent log record describing a change to that page are written to disk (yes, log records can be cached in memory too). This guarantees recovery can work and is called write-ahead logging. Log records are written to the log sequentially, and log records from multiple transactions will be interspersed in the log. The log cannot be selectively written to disk, so writing a dirty page to disk that only has a single log record affecting it may mean writing many more previous log records to disk as well.
3. Log records describing the checkpoint are generated.
4. The LSN of the checkpoint is recorded in the database boot page in the dbi_checkptLSN field.

If in the SIMPLE recovery model, the VLFs in the log are checked to see whether they can be marked inactive (called clearing or truncating the log – both of which are terrible misnomers, as nothing is either physically cleared or truncated).

Checkpoints are not really tracked in the transaction log – it just serves as a useful repository for information about which transactions are active at the time of the checkpoint. The LSN of the last checkpoint is recorded in the database boot page. This is where recovery starts, and if this page is inaccessible, the database cannot be attached, opened, or processed in any way – partly because it’s the boot page that knows whether the database was cleanly shut down or not, and partly because it’s the only place that records the LSN of the last checkpoint record. You may say, well it’s recorded in the transaction log too, but what if the log is corrupt in some way?
One area of confusion I've seen is the idea that checkpoint log records are overwritten by subsequent checkpoints. Absolutely not - once written, a log record is NEVER updated or overwritten; it will only be overwritten when the log wraps and the VLFs are re-used. This has led to further confusion about when checkpoint information is retrievable from the log using commands such as fn_dblog.

Now, what happens if a checkpoint starts but doesn't finish before a crash occurs? Will that checkpoint be used for crash recovery?
The answer is no, it won’t. The purpose of a checkpoint is to bring the pages in the data files up-to-date with what’s in the transaction log. When a checkpoint ends, there’s a guarantee that as of the LSN of the LOP_BEGIN_CKPT log record, all changes from log records before that point are persisted in the data files on disk. There’s no guarantee about logged changes after that point, only before it. In other words, all the log records before the LSN of the LOP_BEGIN_CKPT log record are no longer required for crash recovery, unless there’s a long running transaction that started before that LSN.

When the checkpoint ends, the boot page of the database (page 9 in file 1) is updated with the beginning LSN of the checkpoint (and then if in the SIMPLE recovery mode, any log clearing/truncating can occur).

So if a checkpoint started but didn't end before a crash, its LSN would not be in the boot page, and so crash recovery would start from the previous checkpoint. This is good, because an incomplete checkpoint means there's no guarantee about which logged changes are persisted in the data files, so crash recovery wouldn't be able to work correctly if it started at the beginning of the incomplete checkpoint.

How does SQL Server guarantee that there’s always one complete checkpoint in the active portion of the log, in case a crash occurs?
The answer is that log clearing/truncation of a VLF containing an LOP_BEGIN_CKPT log record cannot happen until another complete checkpoint has occurred. In other words, a complete checkpoint has to occur since the last log clearing/truncation before the next one can happen. If a checkpoint hasn’t occurred, the log_reuse_wait_desc for the database in sys.databases will return CHECKPOINT. It’s not common to see this occur, but you might see it if there’s a very long running checkpoint (e.g. a very large update on a system with a slow I/O subsystem so the flushing of data file pages takes a long time) and very frequent log backups, so two log backups occur over the time taken by a single checkpoint operation. It could also happen if you’ve messed with the sp_configure recovery interval and set it higher than the default.
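
You can check whether a database is waiting on a checkpoint for log clearing with a quick query:

-- CHECKPOINT here means the log cannot clear until the next checkpoint completes
SELECT name, log_reuse_wait_desc
FROM sys.databases;
GO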

How does checkpoint work for the tempdb database?
A checkpoint is only done for tempdb when the tempdb log file reaches 70% full – this is to prevent the tempdb log from growing if at all possible (note that a long-running transaction can still essentially hold the log hostage and prevent it from clearing, just like in a user database).
As noted above, when a checkpoint occurs for a user database, all dirty pages for that database are flushed to disk (as well as other operations). This does not happen for tempdb. Tempdb is not recovered in the event of a crash, so there is no need to force dirty tempdb pages to disk, except in the case where the lazywriter process (part of the buffer pool) has to make space for pages from other databases. Of course, when you issue a *manual* CHECKPOINT, all the dirty pages are flushed, but for automatic checkpoints they're not. You can easily prove this to yourself with a large transaction inserting into a user table in tempdb: watch the Databases/Log File(s) Used Size (KB) and Databases/Log Truncations counters for tempdb, and when you see them go up, check whether the number of dirty pages in tempdb has reduced – it won't have.


Thursday 1 October 2015

SQL Server Page Type

Types of Data Pages

A SQL Server data file consists of a number of 8 KB pages, and there are several different page types.
A page is 8192 bytes; the first 96 bytes are used for the header and the rest of the space is for data.
A variable-length row offset array (or slot array) is located at the end of every page and grows backwards; the count of records is saved in the header.
The size of each element in the offset array is 2 bytes. Records in a page are not physically sorted, even on an index page.
If the data needs to be sorted, the offset array is sorted by the key of the index.

Here is the Structure of the Data Page:

There are around 17 page types in a SQL Server data file; the main ones are arranged as shown below:
     Type 1 – Data page
       - Data records in heap
       - Clustered index leaf-level
       - Location can be random

    Type 2 – Index page
       - Non-clustered index
       - Non-leaf-level clustered index
       - Location can be random

    Type 3 – Text Mixed Page
       - Small LOB value(s), multiple types and rows.
       - Location can be random

    Type 4 – Text Page
       - LOB value from a single column value
       - Location can be random

    Type 7 – Sort Page
       - Temporary page for sort operation.
       - Usually tempdb, but can be in user database for online operations.
       - Location can be random

    Type 8 – GAM Page
       - Global Allocation Map, tracks allocation of extents.
       - One bit for each extent; if the bit is 1 the extent is free, otherwise it is allocated.
       - The first GAM page in each file is page 2

    Type 9 – SGAM Page
       - Shared Global Allocation Map, tracks allocation of shared (mixed) extents
       - One bit for each extent; if the bit is 1 the extent has free space, otherwise it is full
       - The first SGAM page in each file is page 3

    Type 10 – IAM Page
       - Index Allocation Map. Extent allocation in a GAM interval for an index or heap table.
       - Location can be random.

    Type 11 – PFS Page
       - Page Free Space. A byte map that keeps track of the free space on pages
       - The first PFS is page 1 in each file.

    Type 13 – Boot Page
       - Information about the database
       - Only page 9 in file 1.
       - To review "What's on the boot page?" run the commands below (enable trace flag 3604 first so DBCC PAGE prints its output):

         DBCC TRACEON (3604);
         GO
         DBCC DBINFO ('DB_Name');
         GO
                OR
         DBCC PAGE ('DB_Name', 1, 9, 3);
         GO
       - More details click here to refer Paul's Article
    
    Type 14 – Server Configuration Page (Don't know the official name)
       - Part of information returned from sp_configure.
       - It only exists in master database, file 1, page 10

    Type 15 – File Header Page
       - Information about the file.
       - It's always page 0 of every data file.
       - To review this page, run the commands below:

         DBCC PAGE ('DB_name', 1, 0, 3);
         DBCC PAGE ('DB_name', 1, 0, 3) WITH TABLERESULTS;
         GO

    Type 16 – Differential Changed Map
       - Tracks which extents in a GAM interval have changed since the last full or differential backup
       - The first Differential Changed Map page is page 6 in each file

    Type 17 – Bulk Changed Map
       - Tracks which extents in a GAM interval have been modified by minimally logged (bulk) operations since the last log backup
       - The first Bulk Changed Map page is page 7 in each file.
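
On SQL Server 2012 and later you can see how many pages of each type a database actually contains using the undocumented sys.dm_db_database_page_allocations function. A minimal sketch, assuming a database named mydb1 (the name is a placeholder):

-- Count pages per page type (undocumented DMF, SQL 2012+)
SELECT page_type_desc, COUNT(*) AS page_count
FROM sys.dm_db_database_page_allocations(DB_ID(N'mydb1'), NULL, NULL, NULL, 'DETAILED')
WHERE page_type_desc IS NOT NULL
GROUP BY page_type_desc
ORDER BY page_count DESC;
GO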

Monday 28 September 2015

SQL Server Evolution from 2000 to 2016

The Evolution of the Microsoft SQL Server



SQL Server 2016


New Features/Enhancement

- Stretch Database : The idea behind this feature is certainly interesting. The upcoming stretch database feature will allow you to dynamically stretch your on-premise database to Azure. This would enable your frequently accessed or hot data to stay on-premise and your infrequently accessed cold data to be moved to the cloud.
- Always Encrypted : Always Encrypted is designed to protect data at rest or in motion. With Always Encrypted, SQL Server can perform operations on encrypted data and the encryption key can reside with the application. Encryption and decryption of data happens transparently inside the application.
- Enhancements to AlwaysOn : SQL Server 2016 will also continue to advance high availability and disaster recovery with several enhancements to AlwaysOn. The upcoming SQL Server 2016 release will enhance AlwaysOn with the ability to have up to three synchronous replicas. Additionally, it will include DTC (Distributed Transaction Coordinator) support as well as support for round-robin load balancing of the secondary replicas. There will also be support for automatic failover based on database health.
- Enhanced In-Memory OLTP : First introduced with SQL Server 2014, In-Memory OLTP will continue to mature in SQL Server 2016. Microsoft will enhance In-Memory OLTP by extending the functionality to more applications while also enhancing concurrency. This means they will be expanding the T-SQL surface area, increasing the total amount of memory supported into the terabyte range as well as supporting a greater number of parallel CPUs.

BI Features

- Revamped SQL Server Data Tools : Another welcome change in SQL Server 2016 is the re-consolidation of SQL Server Data Tools (SSDT). As Microsoft worked to supplant the popular and useful Business Intelligence Development Studio (BIDS) with SQL Server Data Tools, they wound up confusing almost everyone by creating not one but two versions of SQL Server Data Tools, both of which needed to be downloaded separately from the SQL Server installation itself. With the SQL Server 2016 release, Microsoft has indicated that they intend to re-consolidate SQL Server Data Tools.

SQL Server 2014


New Features/Enhancement

- In-Memory OLTP Engine : SQL Server 2014 enables memory optimization of selected tables and stored procedures. The In-Memory OLTP engine is designed for high concurrency and uses a new optimistic concurrency control mechanism to eliminate locking delays. Microsoft states that customers can expect performance to be up to 20 times better than with SQL Server 2012 when using this new feature.
- Clustered ColumnStore Index : When Microsoft introduced the columnstore index in SQL Server 2012, it provided improved performance for data warehousing queries. For some queries, the columnstore indexes provided a tenfold performance improvement. However, to utilize the columnstore index, the underlying table had to be read-only. SQL Server 2014 eliminates this restriction with the new updateable clustered Columnstore Index. The SQL Server 2014 clustered Columnstore Index must use all the columns in the table and can't be combined with other indexes.
- Backup to Windows Azure : SQL Server 2014’s native backup supports Windows Azure integration. Although I’m not entirely convinced that I would want to depend on an Internet connection to restore my backups, on-premises SQL Server 2014 and Windows Azure virtual machine (VM) instances support backing up to Windows Azure storage. The Windows Azure backup integration is also fully built into SQL Server Management Studio (SSMS).
- AlwaysOn Enhancements : Microsoft has enhanced AlwaysOn integration by expanding the maximum number of secondary replicas from four to eight. Readable secondary replicas are now also available for read workloads, even when the primary replica is unavailable. In addition, SQL Server 2014 provides the new Add Azure Replica Wizard, which helps you create asynchronous secondary replicas in Windows Azure.
- Buffer Pool Extension : SQL Server 2014 provides a new solid state disk (SSD) integration capability that lets you use SSDs to expand the SQL Server 2014 Buffer Pool as nonvolatile RAM (NvRAM). With the new Buffer Pool Extensions feature, you can use SSD drives to expand the buffer pool in systems that have maxed out their memory. Buffer Pool Extensions can provide performance gains for read-heavy OLTP workloads.
- Backup Encryption : One welcome addition to SQL Server 2014 is the ability to encrypt database backups for at-rest data protection. SQL Server 2014 supports several encryption algorithms, including Advanced Encryption Standard (AES) 128, AES 192, AES 256, and Triple DES. You must use a certificate or an asymmetric key to perform encryption for SQL Server 2014 backups.

BI Features

- Power View for Multidimensional Models : Power View used to be limited to tabular data. However, with SQL Server 2014, Power View can now be used with multidimensional models (OLAP cubes) and can create a variety of data visualizations including tables, matrices, bubble charts, and geographical maps. Power View multidimensional models also support queries using Data Analysis Expressions (DAX).
- SQL Server Data Tools : The new SQL Server Data Tools for BI (SSDT-BI) is used to create SQL Server Analysis Services (SSAS) models, SSRS reports, and SSIS packages. The new SSDT-BI supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS projects are limited to SQL Server 2014.

SQL Server 2012


New Features/Enhancement

- AlwaysOn : SQL Server AlwaysOn provides a high-availability and disaster-recovery solution for SQL Server 2012. Click here for more details
- Non-Clustered ColumnStore Index : non-updateable; click here for more details
- Contained database : This is a great feature for people who have to go through the pain of SQL Server database migration again and again. One of the biggest pains in migrating databases is user accounts. A SQL Server user resides either in Windows Active Directory or at the SQL Server level as a SQL Server login, so when we migrate a SQL Server database from one server to another these users have to be recreated again. A contained database stores its users inside the database itself, so they travel with it.
- User Defined Server Roles : In SQL Server 2008 R2 we had the ability to create roles at the database level, so you could create customized roles at the database level and then assign them to users. But at the server (instance) level we did not have the option of creating server roles; if you right-clicked on "Server Roles" you would not find any option for adding new ones. SQL Server 2012 adds the ability to create user-defined server roles.

BI Features

- Tabular Model (SSAS) :
- Data Quality Services (DQS) :
- Power View :
- Cloud :

SQL Server 2008/R2


New Features/Enhancement

- Master Data Services : Master Data Services might be the most underrated feature in SQL Server 2008 R2. It provides a platform that lets you create a master definition for all the disparate data sources in your organization. Almost all large businesses have a variety of databases that are used by different applications and business units. These databases have different schema and different data meanings for what’s often the same data.
- Multiserver Management : SQL Server Utility Control Point
- Backup Encryption : Executed at backup time to prevent tampering.
- Data Compression : Fact Table size reduction and improved performance.
- File Stream : New data type VarBinary(Max) FileStream for managing binary data.
- Full Text Search : Native Indexes, thesaurus as metadata, and backup ability.
- Change Data Capture : For requirements to save or monitor historical information on changed data. Using SQL 2008 we can implement a non-invasive detection of changed records.
- Resource Governor : Very cool. Throttle the resources of users based on Memory or Processor

BI Features

- SQL Server Integration Service : Improved multiprocessor support and faster lookups.
- PowerPivot :
- SharePoint Integration :

SQL Server 2005


New Features/Enhancement

- T-SQL (Transact-SQL) enhancements : new features including error handling via the TRY and CATCH paradigm, Common Table Expressions (CTEs), which return a record set in a statement, and the ability to shift columns to rows and vice versa with the PIVOT and UNPIVOT commands (see the short example after this list).
- Service Broker : The Service Broker handles messaging between a sender and receiver in a loosely coupled manner. A message is sent, processed and responded to, completing the transaction.
- Data encryption : SQL Server 2000 had no documented or publicly supported functions to encrypt data in a table natively. Organizations had to rely on third-party products to address this need. SQL Server 2005 has native capabilities to support encryption of data stored in user-defined databases.
- SMTP mail : Sending mail directly from SQL Server 2000 is possible, but challenging. With SQL Server 2005, Microsoft incorporates SMTP mail to improve the native mail capabilities.
- Multiple Active Result Sets (MARS) : MARS allows a persistent database connection from a single client to have more than one active request per connection. This should be a major performance improvement, allowing developers to give users new capabilities when working with SQL Server. For example, it allows multiple searches, or a search and data entry. The bottom line is that one client connection can have multiple active processes simultaneously.
- Dedicated administrator connection (DAC) : If all else fails, stop the SQL Server service or push the power button. That mentality is finished with the dedicated administrator connection. This functionality will allow a DBA to make a single diagnostic connection to SQL Server even if the server is having an issue.
- Database Mirroring : Database mirroring is an extension of the native high-availability capabilities.
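
As a quick illustration of two of these T-SQL enhancements, here is a minimal sketch assuming the AdventureWorks sample database:

-- TRY...CATCH error handling (new in SQL Server 2005)
BEGIN TRY
    SELECT 1 / 0;   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrNum, ERROR_MESSAGE() AS ErrMsg;
END CATCH;

-- A Common Table Expression (CTE)
WITH TopOrders AS
(
    SELECT TOP (10) SalesOrderID, TotalDue
    FROM Sales.SalesOrderHeader
    ORDER BY TotalDue DESC
)
SELECT * FROM TopOrders;
GO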

BI Features

- SQL Server Integration Services (SSIS) : SSIS has replaced DTS (Data Transformation Services) as the primary ETL (Extraction, Transformation and Loading) tool and ships with SQL Server free of charge.
- Analysis Services (SSAS) and Reporting Services (SSRS)

SQL Server 2000


Refer to the MSDN article for more details: Click here

New Features/Enhancement

- Log Shipping
- Replication
- Clustering

BI Features

- Data Transformation Services (DTS)



Wednesday 23 September 2015

SQL Server Database Mirroring Ques & Ans

Database Mirroring Interview Questions & Answers

Q. What is default port of Database Mirroring Endpoint?
A:
5022

Q. Database Mirroring comes with which edition?
A:
SQL Server 2005 SP1. Or SQL Server 2005 RTM with trace flag 1400

Q. How to enable mirroring by Script ?
A:
-- Run on the mirror server first, specifying the principal's endpoint
ALTER DATABASE [AdventureWorks] SET PARTNER = N'TCP://PrincipalServer.mycompany.com:5022';

-- Then run on the principal server, specifying the mirror's endpoint
ALTER DATABASE [AdventureWorks] SET PARTNER = N'TCP://MirrorServer.mycompany.com:5022';


Q. How to disable mirroring by script?
A:
ALTER DATABASE [AdventureWorks] SET PARTNER OFF

Q. How to do a manual failover to the mirror when the principal is working fine?
A:
-- Run on the principal server
ALTER DATABASE [AdventureWorks] SET PARTNER FAILOVER

Q. You are getting the below error message while configuring database mirroring: Msg 1416, Level 16, State 31, Line 3. Database is not configured for database mirroring.
A:
You need to restore the full backup from the principal server using the WITH NORECOVERY option, plus at least one transaction log backup from the principal server WITH NORECOVERY, and then start configuring mirroring.

Q. Can we configure mirroring between Standard Edition & Enterprise Edition, or vice versa?
A:
No, it's not possible; both principal and mirror must be the same edition.
Refer the KB article

Q. How to bring the mirror DB online if the principal is down?
A:
ALTER DATABASE [AdventureWorks] SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS
If you try to fail over as you would in the normal situation when the principal is online [ALTER DATABASE [AdventureWorks] SET PARTNER FAILOVER], you will receive the error below.
ALTER DATABASE [AdventureWorks] SET PARTNER FAILOVER
Msg 1404, Level 16, State 10, Line 1
The command failed because the database mirror is busy. Reissue the command later.

Q. Which system stored procedure can be used to monitor mirroring?
A:
msdb.sys.sp_dbmmonitorresults
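
A minimal sketch of calling it, assuming the mirroring monitor has been set up; the parameters are passed positionally (database name, rows to return where 0 means the most recent row, and whether to refresh the status first):

EXEC msdb.sys.sp_dbmmonitorresults N'AdventureWorks', 0, 0;
GO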

Q. In database mirroring, suppose the principal DB log file is not getting truncated even after taking log backups. What is the issue?
A:
In this scenario there is most likely a communication issue between the principal & mirror databases, due to reasons such as:
- A network issue between the principal & mirror servers.
- Endpoints not able to communicate, which could be a port issue.
- Mirroring paused, or interrupted by an unexpected reboot.


Q. In database mirroring, suppose the principal DB log file is not getting truncated even after taking log backups and the mirror DB is showing as Suspended. What action will you take to re-sync the mirroring?
A:
To fix this issue, identify the cause by checking the following:
- Check the network connectivity.
- Check that the endpoints on both DB servers are working successfully.
- Check the mirror DB status; if it is suspended, re-configure the mirroring.

Q. In database mirroring, if we add a new filegroup and create a new file in that filegroup, will the change also be reflected in the mirror database?
A:
Yes, it will be reflected in the mirror database as well. Just make sure that the path and drive of the new file being created are also available on the mirror DB server.



Monday 14 September 2015

SQL Server Saving Changes Is Not Permitted

Issue
A SQL Server issue when modifying a table through SQL Server Management Studio (SSMS):
When we alter the table definition in the table designer and try to save through SSMS, we get an error message saying "Saving changes is not permitted."


Solution
1. Click on the Tools menu >> Options

2. Click on Designers
Un-check the option "Prevent saving changes that require table re-creation."



That's it!!

Note: If you disable this option, you are not warned when you save the table that the changes that you made have changed the metadata structure of the table. In this case, data loss may occur when you save the table.


Thursday 10 September 2015

SQL Server : Data Compression

Today's Post is all about Data Compression.
SQL Server’s data compression doesn't automatically compress the data in the database; instead, data compression can only be used for these types of database objects:

- A table stored as a heap
- A table stored as a clustered index
- A non-clustered index.
- An indexed view.
- Partitioned tables and indexes.

There are two different types of Data Compressions:
Row‐Level Data Compression
Row-level compression is essentially turning fixed length data types into variable length data types, freeing up empty space. It also has the ability to ignore zero and null values, saving additional space. In turn, more rows can fit into a single data page.

Row-level compression is the simplest method of data compression; it works by:
- Reducing the amount of metadata used to store a row.
- Storing fixed-length numeric data types as if they were variable-length data types. For example, if you store the value 1 in a bigint column, storage will only take 1 byte, not the 8 bytes the bigint data type normally takes.
- Storing CHAR data types as variable-length data types. For example, if you have a CHAR(100) column and only store 10 characters in it, the blank characters are not stored, thus reducing the space needed to store the data.
- Not storing NULL or 0 values.
Row-level data compression offers less compression than page-level data compression, but it also incurs less overhead, reducing the amount of CPU resources required to implement it.

Page Level Data Compression
Page Level Compression includes three additional features:
- Row Level Compression.
- Prefix Compression.
- Dictionary Compression.
It starts with row-level data compression, then adds two additional compression features: prefix and dictionary compression, which we'll cover in more detail later in this article. As you can imagine, page-level compression offers better data compression compared to row-level compression.

Page-level data compression offers greater compression, but at the expense of greater CPU utilization. It works using these techniques:
- It starts out by using row-level data compression to get as many rows as it can on a single page.
- Next, prefix compression is run. Essentially, repeating patterns of data at the beginning of the values of a given column are removed and substituted with an abbreviated reference that is stored in the compression information (CI) structure that immediately follows the page header of a data page.
- And last, dictionary compression is used. Dictionary compression searches for repeated values anywhere on a page and stores them in the CI. One of the major differences between prefix and dictionary compression is that prefix compression is restricted to one column, while dictionary compression works anywhere on a data page.

The amount of compression provided by page-level data compression is highly dependent on the data stored in a table or index. If a lot of the data repeats itself, then compression is more efficient. If the data is more random, then little benefits can be gained using page-level compression.

Advantages
- Reduces the amount of physical disk space required to store data, resulting in fewer reads and writes.
- Reduces the amount of Buffer Cache memory needed to store data. This in turn allows more data to be stored in the Buffer Cache, reducing the need for SQL Server to access the disk to get data, as the data is now more likely to be in memory than on disk, further reducing disk I/O.

Disadvantages
- Using compression uses up additional CPU cycles. If your server has plenty to spare, then you have no problem. But if your server is already experiencing a CPU bottleneck, then perhaps compression is better left turned off.

For the steps to enable compression on a table or index, refer to the MSDN Article

Here is the T-SQL syntax with examples of compression:
To enable compression on a table:
USE AdventureWorks2012;
GO
EXEC sp_estimate_data_compression_savings 'Production', 'TransactionHistory', NULL, NULL, 'ROW' ;

ALTER TABLE Production.TransactionHistory REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = ROW); 
GO
To enable compression on an index:
USE AdventureWorks2012; 
GO
SELECT name, index_id
FROM sys.indexes
WHERE OBJECT_NAME (object_id) = N'TransactionHistory';

EXEC sp_estimate_data_compression_savings 
    @schema_name = 'Production', 
    @object_name = 'TransactionHistory',
    @index_id = 2, 
    @partition_number = NULL, 
    @data_compression = 'PAGE' ; 

ALTER INDEX IX_TransactionHistory_ProductID ON Production.TransactionHistory REBUILD PARTITION = ALL 
WITH (DATA_COMPRESSION = PAGE);
GO


A customized script to get the compression details of all the tables in a database, with the estimated saving size:
CREATE PROCEDURE usp_CompressTbl_Report (@compress_type char(4))
AS
SET NOCOUNT ON
BEGIN
    DECLARE @schema_name sysname, @table_name sysname

    -- Holds the output of sp_estimate_data_compression_savings for every table
    CREATE TABLE #CompressTbl_Report
    (ObjName sysname,
     schemaName sysname,
     indx_ID int,
     partit_number int,
     size_with_current_compression_setting bigint,
     size_with_requested_compression_setting bigint,
     sample_size_with_current_compression_setting bigint,
     sample_size_with_requested_compression_setting bigint)

    -- Cursor over all user (base) tables in the current database
    DECLARE c_sch_tb_crs CURSOR FOR
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE LIKE 'BASE%'
    AND TABLE_CATALOG = upper(db_name())

    OPEN c_sch_tb_crs
    FETCH NEXT FROM c_sch_tb_crs INTO @schema_name, @table_name
    WHILE @@Fetch_Status = 0
    BEGIN
        -- Estimate the savings for this table with the requested compression type
        INSERT INTO #CompressTbl_Report
        EXEC sp_estimate_data_compression_savings
             @schema_name = @schema_name,
             @object_name = @table_name,
             @index_id = NULL,
             @partition_number = NULL,
             @data_compression = @compress_type
        FETCH NEXT FROM c_sch_tb_crs INTO @schema_name, @table_name
    END
    CLOSE c_sch_tb_crs
    DEALLOCATE c_sch_tb_crs

    -- Average the estimates per table and report the expected saving
    SELECT schemaName AS [schema_name]
         , ObjName AS [table_name]
         , avg(size_with_current_compression_setting) AS avg_size_with_current_compression_setting
         , avg(size_with_requested_compression_setting) AS avg_size_with_requested_compression_setting
         , avg(size_with_current_compression_setting - size_with_requested_compression_setting) AS avg_size_saving
    FROM #CompressTbl_Report
    GROUP BY schemaName, ObjName
    ORDER BY schemaName ASC, avg_size_saving DESC

    DROP TABLE #CompressTbl_Report
END
SET NOCOUNT OFF
GO

Here is a sample execution and result set:

USE AdventureWorks2012 
GO
EXEC usp_CompressTbl_Report @compress_type = 'ROW'
GO


Monday 7 September 2015

SQL Server : Backup Compression

Today we will talk about the backup compression feature: how to take a compressed backup and how to enable backup compression in SQL Server.
Backup compression is a powerful feature offered by SQL Server (versions 2008 and above) that writes the backup to a compressed file, reducing the amount of storage space used by the backup files and typically speeding up both backup and restore, at the cost of additional CPU usage.

Recommendations
When you are creating an individual backup, configuring a log shipping configuration, or creating a maintenance plan, you can override the server-level default.
Also, backup compression is supported for both disk and tape backup devices.

Two ways to configure
1.) Through SSMS
1.a. In the Object Explorer window, right-click the database server for which you want compressed backups to be created and select Properties.
1.b. Click the Database Settings page in the left side menu
1.c. Under Backup and restore, the Compress backup checkbox shows the current setting for this option.

- If the Checkbox is unchecked, new backups are created uncompressed by default.
- If the Checkbox is checked, new backups are compressed by default.

Note: You need the sysadmin or serveradmin fixed server role to change the default setting by checking the checkbox.

2.) Using TSQL
2.a. Start by connecting to the Database Engine and pressing the New Query button to create the script file.
2.b. To configure the backup compression default option at server-level so that backups are created compressed by default, run the following query:
EXEC sp_configure 'backup compression default', 1;
        RECONFIGURE WITH OVERRIDE;
        GO
These settings take effect immediately so no server restart is required.
If you would just like to see the status of backup compression without using SSMS interface, you can run the following T-SQL query:
SELECT value
FROM sys.configurations
WHERE NAME = 'backup compression default';

There is also the option of overriding the server backup-compression default when creating a backup with T-SQL by using either WITH NO_COMPRESSION or WITH COMPRESSION option in your BACKUP statement.
An example statement of this can be seen below:
BACKUP DATABASE TestDB TO DISK = 'D:\MSSQL\Backup\TestDB.bak'
WITH FORMAT, COMPRESSION;
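
To see how much space compression actually saved, you can compare the raw and compressed sizes recorded in the msdb backup history; a minimal sketch:

-- Compare raw vs. compressed size for recent backups
SELECT database_name, backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb,
       backup_size / NULLIF(compressed_backup_size, 0) AS compression_ratio
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
GO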


Monday 31 August 2015

SQL Server Unattended Installation

This post is to Automate SQL Server installations using unattended installation concept through command prompt.

Automating the installation requires a SQL Server configuration file. SQL Setup generates a configuration file, named ConfigurationFile.ini, based on the DBA's inputs. The ConfigurationFile.ini file is a text file which contains a set of parameters in name/value pairs along with descriptive comments. Many of the parameter names correspond to the screens and options which you see while installing SQL Server through the wizard.
You can then use the configuration file to install SQL Server with the same configuration instead of going through each of the installation screens.

To create ConfigurationFile, follow the below steps:
1. Run SQL Server 2012 Setup as you normally would.
2. Set all of the configuration settings you want, clicking through all of the pages of the setup wizard.
3. When you reach the “Ready to Install” page, Setup will allow you to review all of your configuration settings. In addition, at the bottom of the screen it will show you the path of the configuration file that it has created itself. Make note of the configuration file path so you can grab the configuration file.



4. Cancel Setup.

Now, edit the ConfigurationFile as follows:
1. Set QUIET to "True". This specifies that Setup will run in quiet mode without any user interface

QUIET="True"

2. Set SQLSYSADMINACCOUNTS to "BUILTIN\Administrators". This will ensure that administrators on the machine are added as members of the sysadmin role. You can set its value based on your needs (Ex: SQLSYSADMINACCOUNTS="domain\YourUser"), but this is the more generic approach. I have added my user instead of BUILTIN\Administrators, to secure SQL Server from unwanted logins.

SQLSYSADMINACCOUNTS="BUILTIN\Administrators"

3. Add PID and set its value to your product license key. If your setup.exe already comes preloaded with the key, there is no need to add this option to the configuration file.

4. Add IACCEPTSQLSERVERLICENSETERMS and set its value to "True". This is required to acknowledge acceptance of the license terms in unattended installations.

IACCEPTSQLSERVERLICENSETERMS="True"

5. Remove the UIMODE parameter as it can't be used with the QUIET parameter.

6. Add the directories for the INSTALLSHAREDDIR, INSTALLSHAREDWOWDIR and INSTANCEDIR parameters, but if you want to install to the default installation directories then you can remove these parameters.

7. You can add or remove the features you want to install. Select FEATURES=SQLENGINE,SSMS,ADV_SSMS in the configuration file; you can change that based on your needs.

For more details on features refer Article
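
For reference, the edited file might contain lines like this minimal sketch (the instance name and directories are placeholders, and your generated file will contain many more parameters):

; ConfigurationFile.ini - relevant entries only (sketch)
ACTION="Install"
QUIET="True"
IACCEPTSQLSERVERLICENSETERMS="True"
FEATURES=SQLENGINE,SSMS,ADV_SSMS
INSTANCENAME="MSSQLSERVER"
SQLSYSADMINACCOUNTS="BUILTIN\Administrators"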

Now that ConfigurationFile.ini is ready, to execute it we will create a "SQLInstaller.bat" file which asks for the password and calls the ".ini".
Copy the below script and save it as "SQLInstaller.bat":
@echo off
echo Installing SQL Server 2012

echo.
set /p SSvcpwd= Enter SQL Service Account Password "ENTERPRISE\SQLServices":

echo.
pause

date/t
time /t

"D:\SQLEntEdtn_2012_English\setup.exe" /SQLSVCPASSWORD="%SSvcpwd%"/AGTSVCPASSWORD="%SSvcpwd%" 
/ISSVCPASSWORD="%SSvcpwd%" /RSSVCPASSWORD="%SSvcpwd%" 
/ConfigurationFile="C:\SQLInstaller\ConfigurationFile.ini"

date/t
time /t

pause

That's it!!
Run the ".bat" as administrator, Enter the password and wait for sometimes. Installation will get complete after that verify the SQL Services & Instance.


Monday 24 August 2015

SQL Server : Use of DBCC PAGE

Use of DBCC PAGE
One more undocumented command is DBCC PAGE. Use it to read the contents of database MDF and LDF files.

NAME: DBCC PAGE

FUNCTION:
   Prints out the contents of a SQL Server page.

SYNTAX:
   DBCC PAGE ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ])

PARAMETERS:
   Dbid or dbname - Enter either the dbid or the name of the database
                    in question.
   Filenum - Enter the file number (e.g. 1 = primary data file (mdf), 2 = log file (ldf))

   Pagenum - Enter the page number of the SQL Server page that is to
             be examined.

   Print option - (Optional) Print option can be 0, 1, 2, or 3.

                  0 - (Default) This option causes DBCC PAGE to print
                      out only the page header information.
                  1 - This option causes DBCC PAGE to print out the
                      page header information, each row of information
                      from the page, and the page's offset table. Each
                      of the rows printed out will be separated from
                      each other.
                  2 - This option is the same as option 1, except it
                      prints the page rows as a single block of
                      information rather than separating the
                      individual rows. The offset and header will also
                      be displayed.
                  3 - This option prints the page header plus a per-row interpretation of the data.

Before running this command, you should first turn on the trace flag by executing DBCC TRACEON(3604), and then run DBCC PAGE('dbname', filenum, pagenum, printopt).
To find the pages belonging to the table you would like to read, use DBCC IND (and, from SQL 2012 onwards, sys.dm_db_database_page_allocations).
Example as below:
DBCC IND('mydb1','EMPLOYEE',1)
GO
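
On SQL Server 2012 and later, the undocumented DMF mentioned above gives the same information; a minimal sketch for the same table:

-- List the pages allocated to mydb1.dbo.EMPLOYEE (undocumented DMF, SQL 2012+)
SELECT allocated_page_file_id, allocated_page_page_id, page_type_desc
FROM sys.dm_db_database_page_allocations(DB_ID(N'mydb1'), OBJECT_ID(N'EMPLOYEE'), 1, NULL, 'DETAILED');
GO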

Or, we can use the DBCC SEMETADATA('Objectname') command, syntax as below:
Use myDb1
Go
DBCC SEMETADATA(N'Employee')
GO


So now let's assume you would like to view page 165; syntax as below:
DBCC PAGE('mydb1', 1, 165, 3)
Go
For more details refer this Article


Friday 21 August 2015

SQL Server : Use of FN_DUMP_DBLOG

Use of FN_DUMP_DBLOG

fn_dump_dblog is another undocumented hero among SQL Server functions: it reads transaction log backups, either native or natively compressed.
Here is the syntax to execute the fn_dump_dblog function against a specific transaction log backup; the first five parameters identify the backup and the remaining 63 are passed as DEFAULT:
SELECT 
[Current LSN], [Operation], [Context], 
[Transaction ID], [transaction name], [Description]
FROM fn_dump_dblog (NULL,NULL,N'DISK',1,N'D:\Mydb1\mydb1_trn1.trn',
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, 
DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT, DEFAULT); 
Here is the output:



Using the fn_dump_dblog function you can read the transactions in a log backup, identify the relevant Current LSN,
and convert that LSN into the format used in the RESTORE command, exactly as shown in the previous post, i.e. use-of-fndblog.

Thursday 20 August 2015

SQL Server: Use of FN_DBLOG

Use of FN_DBLOG

fn_dblog (the hero of this post) is an undocumented SQL Server function that reads the active portion of the online transaction log.

Execute the below fn_dblog function to get the details:
Select * FROM sys.fn_dblog(NULL,NULL)

You can filter the result by adding a WHERE clause on [Operation], with values like LOP_SHRINK_NOOP, LOP_INSERT_ROWS, LOP_MODIFY_ROW, LOP_DELETE_ROWS.
LOP_SHRINK_NOOP - log backup taken and the file shrunk
LOP_INSERT_ROWS - rows inserted
LOP_MODIFY_ROW - rows modified
LOP_DELETE_ROWS - rows deleted

Let's now look at a situation where a user has deleted data: the techniques you can use to identify who deleted the data using the fn_dblog function, and how you can restore up to the point just prior to the data being deleted.

Let's see this through an example; assume someone has deleted a few rows from a table.
Finding any record of deleted data within the transaction log can be achieved using the undocumented function fn_dblog, passing in two NULL values as shown below:
SELECT [Transaction ID], [Operation],[Context], [AllocUnitName]
FROM sys.fn_dblog(NULL,NULL) WHERE OPERATION = 'LOP_DELETE_ROWS';
Using the NULL values means that I am not looking for a specific LSN start and end point. When using this function, a scan of the active log portion is carried out, starting at the oldest uncommitted transaction up to the most recent record.
Result as below:



Using "Transaction ID" from above information we can identify who deleted the records using below script:
In this case we are taking [Transaction ID] = '0000:00001d88'
Use mydb1
Go

SELECT
[Current LSN],[Operation], 
[Transaction ID], [Begin Time],
[Transaction Name],
SUSER_SNAME ([Transaction SID]) [USER]
FROM sys.fn_dblog(NULL, NULL)
WHERE 
[Transaction ID] = '0000:00001d88'
AND
[Operation] = 'LOP_BEGIN_XACT'


LOP_BEGIN_XACT is a log operation that includes information such as the SPID, transaction name and start time. Using the above script, you can find that the operation started at "2015/08/20 10:53:06:220" and was carried out by user "DOM\Amit_Bhardwaj".

With the above information it is now possible to restore the database to just prior to the deletion of the rows. Take the "Current LSN" of the LOP_BEGIN_XACT record for "Transaction ID" '0000:00001d88' and convert it into a decimal value to be used within the log restore.
The full "Current LSN" value is 00000045:00000109:0001; break it apart and convert it as follows:

Convert 1st Value- 00000045 from Hexadecimal to Decimal = 69
Convert 2nd Value- 00000109 from Hexadecimal to Decimal= 265
Convert 3rd Value- 0001 from Hexadecimal to Decimal = 1
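
If you prefer, the hex-to-decimal conversion can also be done in T-SQL itself; a minimal sketch:

-- Convert each hex part of the LSN to decimal
SELECT CONVERT(INT, 0x00000045) AS part1,  -- 69
       CONVERT(INT, 0x00000109) AS part2,  -- 265
       CONVERT(INT, 0x0001)     AS part3;  -- 1
GO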

Now using the above values we will create the LSN number, which will be used in the restore script with the STOPBEFOREMARK option.
To create the LSN value:

Pick the 1st decimal value - 69
Pick the 2nd decimal value - 265
Pick the 3rd decimal value - 1

Now, per the LSN number syntax (LSN no. - 00 0000000000 00000), the 1st part contains 2 decimal digits (so it's 69), the 2nd part contains 10 decimal digits (so it's 0000000265) and the 3rd part contains 5 decimal digits (so it's 00001).
So according to this, the LSN number should be 69000000026500001.

So first take the tail log backup.
Now start the restoration:

1. Restore the full backup WITH NORECOVERY
2. Restore the log backups WITH NORECOVERY; for the backup that covers the start time of "Transaction ID" '0000:00001d88' ("2015/08/20 10:53:06:220"), restore with the STOPBEFOREMARK option.

Full backup mydb1_full.bak - taken at 2015/08/20 10:00
Tlog backup mydb1_trn1.trn taken at 2015/08/20 10:30
Tlog backup mydb1_trn2.trn taken at 2015/08/20 11:00

Here are the scripts to restore:
RESTORE DATABASE [mydb1] FROM  DISK = N'D:\Mydb1\mydb1_full.bak' WITH NORECOVERY
GO
RESTORE LOG [mydb1] FROM  DISK = N'D:\Mydb1\mydb1_trn1.trn' WITH NORECOVERY
GO
RESTORE LOG [mydb1] FROM  DISK = N'D:\Mydb1\mydb1_trn2.trn'
WITH STOPBEFOREMARK = 'lsn:69000000026500001', RECOVERY