Thursday 29 December 2022

SQL Server : When was my SQL Server Database Last Accessed?


Your client comes up to you and says, “Hey, I want a list of all of our databases and when they were last accessed”.
You might be okay if you have auditing enabled, or if there is a trace or Extended Events session capturing this kind of data, but I'm guessing none of those exist. That's fine; I don't usually monitor things like that either.

Is there a way to obtain that information if you haven't been tracking it? SQL Server doesn't offer anything obvious; there is no magic "last_access_date" column in sys.databases or anywhere else, for that matter.

It's possible to work around this issue; have a look at sys.dm_db_index_usage_stats. For every index in a database, you will observe the following intriguing columns: last_user_seek, last_user_scan, last_user_lookup, and last_user_update.
You can find out when a database was last accessed by using the code below, which returns the most recent date that any index in the database was read (via a seek, scan, or lookup).
SELECT db_name(databases.database_id) AS DBName,
       (SELECT MAX(last_user_access)
        FROM (VALUES (MAX(last_user_seek)),
                     (MAX(last_user_scan)),
                     (MAX(last_user_lookup))) AS value(last_user_access)) AS last_user_access
FROM sys.dm_db_index_usage_stats indexstats
RIGHT OUTER JOIN sys.databases databases
    ON indexstats.database_id = databases.database_id
GROUP BY databases.database_id
ORDER BY db_name(databases.database_id)
Note: sys.dm_db_index_usage_stats, like many DMVs, is cleared out on a SQL Server restart.
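The query above only counts reads (seeks, scans, and lookups). If you want writes to count as "access" too, a small variant of the same query (same DMV, same restart caveat) folds in last_user_update as well:

SELECT db_name(databases.database_id) AS DBName,
       (SELECT MAX(last_user_access)
        FROM (VALUES (MAX(last_user_seek)),
                     (MAX(last_user_scan)),
                     (MAX(last_user_lookup)),
                     (MAX(last_user_update))) AS value(last_user_access)) AS last_user_access
FROM sys.dm_db_index_usage_stats indexstats
RIGHT OUTER JOIN sys.databases databases
    ON indexstats.database_id = databases.database_id
GROUP BY databases.database_id
ORDER BY db_name(databases.database_id)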

Thursday 22 December 2022

RDS SQL Server : Enable and Configure Database mail profile


In this article, we will discuss enabling and configuring Database Mail on Amazon RDS for SQL Server.

Before configuring Database Mail, you first have to enable it through a DB parameter group.
In Amazon RDS, parameter groups act as a container for engine configuration values that are applied to one or more DB instances. Each RDS instance comes with an associated default parameter group; however, we can’t modify it.

You can use either a new parameter group or an existing custom parameter group. If you choose an existing parameter group, it must support your SQL Server instance edition and version.
To enable Database Mail through a parameter group, complete the following steps:

- On the Amazon RDS console, choose Parameter groups.
- Choose the parameter group you want to use.
- In the search box, enter database mail xps.
- Choose Edit Parameters to modify the value.
- For Values, choose 1.
- Save your changes.
- On the Amazon RDS console, choose Databases.
- Choose the instance you want to use.
- Choose Modify.
- Under Database options, choose the parameter group that has database mail xps set to 1.
- Choose Continue.
- Under Scheduling of modifications, choose Immediately.
- Choose Modify DB Instance to apply the changes.

Before we can use Database Mail, we need to set up a mail configuration.
- Launch SQL Server Management Studio.
- Connect to the SQL Server engine of the RDS instance that Database Mail is enabled for.
- Open a new query.
Use the following stored procedures to create a simple Database Mail configuration.

- Create a Database Mail profile (a profile is a container used to store email accounts). See the following code:
use msdb
go

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'Notifications',
@description = 'Profile used for sending outgoing notifications using SES.' ;

- Add principals to the profile; use public so any user can access the profile:
use msdb
go

EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'Notifications',
@principal_name = 'public',
@is_default = 1 ;

We can grant the permissions on Database Mail objects as needed, but public is fine for now.
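For reference, if you later want to lock the profile down instead of leaving it public, membership in the msdb DatabaseMailUserRole controls who can send mail. A minimal sketch (the user name below is a placeholder and must already exist as a user in msdb):

use msdb
go

ALTER ROLE DatabaseMailUserRole ADD MEMBER [AppMailUser] ; -- placeholder msdb user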
- Create the Database Mail account (make sure to enter the correct SMTP credentials):

use msdb
go

EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'Acc1',
@description = 'Mail account for sending outgoing notifications.',
@email_address = 'example@example.com',
@display_name = 'Automated Mailer',
@mailserver_name = 'email-smtp.us-west-2.amazonaws.com',
@port = 587,
@enable_ssl = 1, @username = 'SMTP-username',
@password = 'SMTP-password' ;

- Add the Database Mail account to the Database Mail profile:
use msdb
go

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'Notifications',
@account_name = 'Acc1',
@sequence_number = 1 ;

- Send a test email:
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'success@simulator.amazonses.com',
@body = 'The database mail configuration was completed successfully.',
@subject = 'Automated Success Message';
GO

Then run this query to see all mail items:
SELECT * FROM msdb.dbo.sysmail_allitems
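If a message stays queued or never arrives, the standard Database Mail logging views in msdb are the first places to check (these are regular msdb views, not RDS-specific):

SELECT * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC; -- Database Mail log, including SMTP errors
SELECT * FROM msdb.dbo.sysmail_faileditems; -- messages that failed to send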

Please refer to the document below for more details: [+] Using Database Mail on Amazon RDS for SQL Server; https://aws.amazon.com/blogs/database/using-database-mail-on-amazon-rds-for-sql-server/

Tuesday 22 November 2022

RDS SQL Server : Script to automate native backup of all the individual databases using msdb.dbo.rds_backup_database Stored procedure


You can use the script below to automate native backups of all databases using the msdb.dbo.rds_backup_database stored procedure. You can also schedule it as a SQL Server Agent job to take a native backup (a .bak file) of all your databases.

Script to take a full backup without encryption


DECLARE @BackupFileName varchar(50)
DECLARE @DBName sysname
DECLARE @S3ARN_Prefix nvarchar(100)
DECLARE @S3ARN nvarchar(100)
SET @S3ARN_Prefix = 'arn:aws:s3:::awsbucket_name/' -- update your bucketname

DECLARE DBBackup CURSOR FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('rdsadmin', 'master', 'model', 'msdb', 'tempdb')
AND state_desc = 'ONLINE'
AND user_access_desc = 'MULTI_USER'
AND is_read_only = 0

OPEN DBBackup
FETCH NEXT FROM DBBackup INTO @DBName
WHILE (@@FETCH_STATUS = 0)

BEGIN
SET @BackupFileName = @DBName + '_' + REPLACE(REPLACE(REPLACE(CONVERT(varchar,GETDATE(),20),'-',''),':',''),' ','') + '.bak'
SET @S3ARN = @S3ARN_Prefix + @BackupFileName
EXEC msdb.dbo.rds_backup_database
@source_db_name = @DBName,
@S3_arn_to_backup_to = @S3ARN,
@overwrite_S3_backup_file = 0
FETCH NEXT FROM DBBackup INTO @DBName
END

CLOSE DBBackup
DEALLOCATE DBBackup


Script to take a full backup with encryption


DECLARE @BackupFileName varchar(50)
DECLARE @DBName sysname
DECLARE @S3ARN_Prefix nvarchar(100)
DECLARE @S3ARN nvarchar(100)
DECLARE @KMS_master_key_ARN nvarchar(100)
SET @S3ARN_Prefix = 'arn:aws:s3:::awsbucket_name/' -- update your bucketname
SET @KMS_master_key_ARN = 'arn:aws:kms:us-east-1:XXXXXXXXXX:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxx' -- update your kms key ARN

DECLARE DBBackup CURSOR FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('rdsadmin', 'master', 'model', 'msdb', 'tempdb')
AND state_desc = 'ONLINE'
AND user_access_desc = 'MULTI_USER'
AND is_read_only = 0

OPEN DBBackup
FETCH NEXT FROM DBBackup INTO @DBName
WHILE (@@FETCH_STATUS = 0)

BEGIN
SET @BackupFileName = @DBName + '_' + REPLACE(REPLACE(REPLACE(CONVERT(varchar,GETDATE(),20),'-',''),':',''),' ','') + '.bak'
SET @S3ARN = @S3ARN_Prefix + @BackupFileName
EXEC msdb.dbo.rds_backup_database
@source_db_name = @DBName,
@S3_arn_to_backup_to = @S3ARN,
@KMS_master_key_arn = @KMS_master_key_ARN,
@overwrite_S3_backup_file = 0
FETCH NEXT FROM DBBackup INTO @DBName
END

CLOSE DBBackup
DEALLOCATE DBBackup

To track the status of a backup task, use this SQL statement:


exec msdb..rds_task_status @task_id= 5 -- 5 as an example of your task id
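If you don't have the task ID handy, rds_task_status can also be called with just the database name, or with no parameters to list all recent tasks:

exec msdb.dbo.rds_task_status @db_name = 'your_database_name'
exec msdb.dbo.rds_task_status -- returns all recent tasks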

Saturday 22 October 2022

SQL Server : Performance degraded after upgrading the database compatibility level from SQL Server 2008 to 2014/2017/2019 (i.e. from compatibility level 100 to 120 or higher).


In this article, we will discuss the performance impact of changing the compatibility level of your databases.

Please allow me to shed some light on the concept of compatibility level. As you may know, prior to SQL Server 2014 the database compatibility level of your user databases was not typically an important property you had to be concerned with, at least from a performance perspective. Unlike the database file version (which is automatically upgraded when you restore or attach a down-level database to an instance running a newer version of SQL Server, and which can never go back to the lower level), the database compatibility level can be changed to any supported level with a simple ALTER DATABASE ... SET COMPATIBILITY_LEVEL = xxx command.

With SQL Server 2012 and older, the database compatibility level was mainly used to control whether new features introduced with a particular version of SQL Server were enabled or not and whether non-supported old features were disabled or not. The database compatibility level was also used as a method to maintain better backwards application compatibility with old versions of SQL Server. If you didn’t have time to do full regression testing with the newest compatibility level, you could just use the previous compatibility level until you could test and modify your applications if needed.

Now when you create a new user database in SQL Server, the database compatibility level will be set to the default compatibility level for that version of SQL Server. So for example, a new user database that is created in SQL Server 2017 will have a database compatibility level of 140. The exception to this is if you have changed the compatibility level of the model system database to a different supported database compatibility level, then a new user database will inherit its database compatibility level from the model database.

Database compatibility level 120 is where the “new” cardinality estimator (CE) was introduced. In many cases most of your queries ran faster with the new cardinality estimator, but it was fairly common to run into some queries that had major performance regressions with it. Using database compatibility level 120 or higher means that you will be using the “new” CE unless you use an instance-wide trace flag or a query-level hint to override it.
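For example, if you move a database to compatibility level 120 or higher but one query regresses badly under the new CE, a query-level hint can push just that query back to the legacy estimator. The table and column names below are placeholders; USE HINT requires SQL Server 2016 SP1 or later, while QUERYTRACEON 9481 is the older equivalent:

-- Placeholder query; only the OPTION clause is the point here
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
-- On older builds: OPTION (QUERYTRACEON 9481)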

You can refer to the article below from Microsoft on the differences between lower compatibility levels and level 120; https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-compatibility-level?view=sql-server-2017#differences-between-lower-compatibility-levels-and-level-120
There is also a classic whitepaper written by Joe Sack, “Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator”, from April 2014, that explains the background and behavior of this change; https://docs.microsoft.com/en-us/previous-versions/dn673537(v=msdn.10)?redirectedfrom=MSDN

Microsoft’s recommended upgrade process is :
[+] Change the Database Compatibility Level and use the Query Store; https://docs.microsoft.com/en-us/sql/database-engine/install-windows/change-the-database-compatibility-mode-and-use-the-query-store?view=sql-server-2017
- Upgrade to the latest SQL Server version and keep the source (legacy) database compatibility level.
- Enable Query Store, and let it collect a baseline of your workload.
- Change the database compatibility level to the native level for the new version of SQL Server.
- Use Query Store to fix performance regressions by forcing the last known good plan.
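A minimal T-SQL sketch of that sequence (the database name and target compatibility level are placeholders, and the query_id/plan_id values for sp_query_store_force_plan come from the Query Store Regressed Queries report):

-- 1. After upgrading the instance, keep the old compatibility level and enable Query Store
ALTER DATABASE [YourDb] SET QUERY_STORE = ON;
ALTER DATABASE [YourDb] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- 2. Let Query Store collect a baseline of the workload under the old compatibility level

-- 3. Move to the native compatibility level of the new version (150 = SQL Server 2019)
ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 150;

-- 4. If a query regresses, force its last known good plan
-- EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;  -- example IDs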

Please refer to the article below for more details:
[+] https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017

Microsoft has also introduced a new tool, the Query Tuning Assistant (QTA). Starting with SQL Server Management Studio v18, QTA guides users through the recommended workflow to keep performance stable during upgrades to newer SQL Server versions, as documented in the section "Keep performance stability during the upgrade to newer SQL Server" of Query Store Usage Scenarios. However, QTA doesn't roll back to a previously known good plan, as in the last step of the recommended workflow. Instead, QTA tracks any regressions found in the Query Store Regressed Queries view and iterates through possible permutations of applicable optimizer model variations so that a better new plan can be produced.

Please have a look at the article below for more details on the Query Tuning Assistant workflow; https://docs.microsoft.com/en-us/sql/relational-databases/performance/upgrade-dbcompat-using-qta?view=sql-server-2017#the-query-tuning-assistant-workflow

Sunday 4 September 2022

RDS SQL Server : How to reduce the size on disk of an RDS SQL Server database?


First of all, I would like to explain that SQL Server databases have two types of files: data files and log files.

1. Data files contain data and objects such as tables, indexes, stored procedures, and views.
Data files can be shrunk; however, it is not good practice to shrink data files unless necessary, as it can result in blocking, deadlocks, and index fragmentation. The documentation below provides more insight into why shrinking data files should be avoided.

[+] https://littlekendra.com/2016/11/08/shrinking-sql-server-data-files-best-practices-and-why-it-sucks/
[+] https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/
2. Log files contain the information that is required to recover all transactions in the database.

Log files can be shrunk, depending on the log space utilization and on whether a transaction has been left open and running.
Now, if you would like to know which file in the database is taking more space, you can use the command below to identify file utilization:
USE [your_database_name]
GO

SELECT DB_NAME() AS DbName, 
    name AS FileName, 
    type_desc,
    size/128.0 AS CurrentSizeMB,  
    size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files
WHERE type IN (0,1);
GO
Then perform the shrink accordingly. Here are the steps to shrink the database files:

For Log files:
First, check whether you have an open or active transaction. The following queries may help:

1. DBCC SQLPERF(LOGSPACE)
>> Provides the log space utilization per database
[+] How can I troubleshoot storage consumption in my Amazon RDS DB instance that is running SQL Server? -
https://aws.amazon.com/premiumsupport/knowledge-center/rds-sql-server-storage-optimization/
2. SELECT name, log_reuse_wait_desc FROM sys.databases
>> Shows why each database's log file is not clearing out
[+] sys.databases (Transact-SQL)
- https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-databases-transact-sql?view=sql-server-ver15
>> Includes the possible reasons[+]:
a. Log backup needs to be run
b. Active backup running - because the full backup needs the transaction log to be able to restore to a specific point in time
c. Active transaction - someone typed BEGIN TRAN and locked their workstation for the night

[+] My Favorite System Column: LOG_REUSE_WAIT_DESC
- https://www.brentozar.com/archive/2016/03/my-favorite-system-column-log_reuse_wait_desc/
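If log_reuse_wait_desc shows ACTIVE_TRANSACTION, a quick way to find the offending transaction is DBCC OPENTRAN (a standard command; run it in the database you are investigating):

USE [your_database_name]
GO
DBCC OPENTRAN; -- reports the oldest active transaction in the current database, if any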
As already mentioned above, you can shrink the transaction log using the commands below.

Step 1: Run the command below to get the used and free space of the transaction log in each database [+]
DBCC SQLPERF(LOGSPACE); --> review transaction log utilization per database; only perform Step 2 if a large portion of the log (say, more than 80%) is free
[+] https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-sqlperf-transact-sql?view=sql-server-2017
Step 2: Query to shrink transaction log file
USE [userdb-name]
GO
DBCC SHRINKFILE (logfilename, sizeinmb)

If this instance is vital to production, I would also recommend using CloudWatch alarms [+] to monitor your 'FreeStorageSpace' metric and prevent storage-full issues. The alarm will notify you when the defined threshold is breached, so you can take immediate action to increase storage.
[+] How can I create CloudWatch alarms to monitor the Amazon RDS free storage space and prevent storage full issues?
- https://aws.amazon.com/premiumsupport/knowledge-center/storage-full-rds-cloudwatch-alarm/
For Data Files:

You can use the same 'DBCC SHRINKFILE' command to shrink a data file to a specified target size.

The following example shrinks the size of a data file named 'DataFile1' in the user database to 1024 MB.
USE [userdb-name];
GO
DBCC SHRINKFILE (DataFile1, 1024);
GO
Or from the GUI: right-click the database > Tasks > Shrink > Files > select the data file from the drop-down.
I would also like you to know that data that is moved to shrink a file can be scattered to any available location in the file. This causes index fragmentation and can slow the performance of queries that search a range of the index. To eliminate the fragmentation, consider rebuilding the indexes on the file after shrinking.
To see fragmentation information for the data and indexes of the specified database 'DB_name':
SELECT OBJECT_NAME(object_id) AS ObjectName, index_id, index_type_desc, index_level,
    avg_fragmentation_in_percent, avg_page_space_used_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID('DB_name'), NULL, NULL, NULL, 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC

To rebuild all indexes on a table, here is a sample query:
	ALTER INDEX ALL ON SchemaName.TableName
	REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON);
[+] https://docs.microsoft.com/en-us/sql/relational-databases/indexes/reorganize-and-rebuild-indexes?view=sql-server-ver15
These suggestions are provided on a best-effort basis. To conclude, I highly recommend testing them thoroughly before implementing them in any production environment, so you can catch any unforeseen issues before the actual implementation.

Thursday 1 September 2022

SQL Server : Capture expensive queries using Server side trace


You can follow the steps below to capture expensive queries by creating a server-side trace, and also from the SQL plan cache DMVs:

1. Create a server-side trace by defining the trace, the events and columns to capture, and the filter conditions.

You can use the below Script to create the trace:

-- Create Server-side trace
-- Create a Queue
DECLARE @rc int
DECLARE @TraceID int
DECLARE @maxfilesize bigint
set @maxfilesize = 2048

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

DECLARE @MyFileName nvarchar(256)
SELECT @MyFileName='D:\RDSDBDATA\Log\RDSSQLTrace_' + CONVERT(nvarchar(10),GetDate(),121) + '_' + REPLACE(CONVERT(nvarchar(10),GetDate(),108),':','-')


exec @rc = sp_trace_create @TraceID output, 0, @MyFileName, @maxfilesize, NULL
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted
-- Set the events
DECLARE @on bit
set @on = 1 --sp_trace_setevent @traceid ,@eventid ,@columnid,@on

-- Capturing the RPC:Completed event

EXEC sp_trace_setevent @TraceID, 10, 16, @on
EXEC sp_trace_setevent @TraceID, 10, 1, @on
EXEC sp_trace_setevent @TraceID, 10, 17, @on
EXEC sp_trace_setevent @TraceID, 10, 14, @on
EXEC sp_trace_setevent @TraceID, 10, 18, @on
EXEC sp_trace_setevent @TraceID, 10, 12, @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 10, 8, @on
EXEC sp_trace_setevent @TraceID, 10, 10, @on
EXEC sp_trace_setevent @TraceID, 10, 11, @on
EXEC sp_trace_setevent @TraceID, 10, 35, @on

--Capturing the SQL:BatchCompleted event

EXEC sp_trace_setevent @TraceID, 12, 16, @on
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 17, @on
EXEC sp_trace_setevent @TraceID, 12, 14, @on
EXEC sp_trace_setevent @TraceID, 12, 18, @on
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 8, @on
EXEC sp_trace_setevent @TraceID, 12, 10, @on
EXEC sp_trace_setevent @TraceID, 12, 11, @on
EXEC sp_trace_setevent @TraceID, 12, 35, @on

-- Set the Filters
EXEC sp_trace_setfilter @TraceID, 11, 0, 7, N'%NT AUTHORITY%' -- column 11 = LoginName, 7 = NOT LIKE: exclude NT AUTHORITY logins
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'RdsAdminService' -- column 10 = ApplicationName, 7 = NOT LIKE: exclude the RDS admin service

-- Set the trace status to start
EXEC sp_trace_setstatus @TraceID, 1

-- display trace id and file name, Note these details for future references
SELECT @TraceID,@MyFileName

goto finish

error:
select ErrorCode=@rc

finish:
go

2. Start the trace using the command below:

-- To start the trace (update the trace ID with the one you received at creation time)
EXEC sp_trace_setstatus @traceid = 2, @status = 1 -- start

3. Start the application test run

4. Once the test run is complete, stop the trace using the command below:

-- To stop the trace (update the trace ID with the one you received at creation time)

EXEC sp_trace_setstatus @traceid = 2, @status = 0 -- stop
-- To delete the trace definition from the server once the application test is complete:
--EXEC sp_trace_setstatus @traceid = 2, @status = 2 -- delete definition from server
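To confirm which traces exist on the instance, their status, and where they are writing, you can also query sys.traces (a small helper, not part of the original steps):

SELECT id, status, path, max_size, event_count
FROM sys.traces;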

5. Insert the trace output into a staging table; this makes the data easier to read and query.

USE [DB_name]
GO
SELECT * INTO RDSSQLTrace_20210722_1215 FROM ::fn_trace_gettable ('D:\RDSDBDATA\Log\RDSSQLTrace_2021-07-22_12-15-19.trc', default)
6. Capture the output of the commands below and update it in the attached Excel template, in the TraceData sheet (file name: Capture_ExpensiveQueries_Template.xlsx)

--To collect queries based on max Duration 
SELECT TOP 10 StartTime,EndTime,Reads, Writes, 
Duration/1000000 DurationSec,CPU, TextData,RowCounts,sqlHandle,[DatabaseName],
SPID,LoginName,EventClass FROM [gptst].[dbo].[RDSSQLTrace_20210722] 
ORDER BY Duration DESC

--To collect queries based on max CPU utilization
SELECT TOP 10 StartTime,EndTime,Reads, Writes, 
Duration/1000000 DurationSec,CPU, TextData,RowCounts,sqlHandle,[DatabaseName],
SPID,LoginName,EventClass FROM [gptst].[dbo].[RDSSQLTrace_20210722] 
ORDER BY CPU DESC

--To collect queries based on max Reads,Writes
SELECT TOP 10 StartTime,EndTime,Reads, Writes, 
Duration/1000000 DurationSec,CPU, TextData,RowCounts,sqlHandle,[DatabaseName],
SPID,LoginName,EventClass FROM [gptst].[dbo].[RDSSQLTrace_20210722]
ORDER BY Reads DESC, Writes DESC
7. Also capture the output of the scripts below and update it in the attached Excel template, in the CacheData sheet (file name: Capture_ExpensiveQueries_Template.xlsx)
--TOP 10 Queries by Duration (Avg Sec) from system dmv's		
			
SELECT TOP 10 DB_NAME(qt.dbid) AS DBName,
o.name AS ObjectName,
qs.total_worker_time / 1000000 / qs.execution_count As Avg_CPU_time_sec,
qs.total_worker_time / 1000000 as Total_CPU_time_sec,
qs.total_elapsed_time / qs.execution_count / 1000000.0 AS Average_Seconds,
qs.total_elapsed_time / 1000000.0 AS Total_Seconds,
'',
qs.execution_count as Count,
qs.last_execution_time as Time,
SUBSTRING (qt.text,qs.statement_start_offset/2,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) AS Query
--,qt.text [MainQuenry]
--,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
where qs.execution_count>5 and DB_NAME(qt.dbid) not in ( 'master','msdb','model')
--and DB_NAME(qt.dbid) is not null
ORDER BY average_seconds DESC
--TOP 10 Queries by CPU (Avg CPU time) from system dmv's
SELECT TOP 10 DB_NAME(qt.dbid) as DBName,
	qs.total_worker_time / 1000000 / qs.execution_count AS Avg_CPU_time_sec,
	qs.total_worker_time / 1000000 As Total_CPU_time_sec,
	qs.total_elapsed_time / 1000000 / qs.execution_count As Average_Seconds,
	qs.total_elapsed_time / 1000000 As Total_Seconds,
    (total_logical_reads + total_logical_writes) / qs.execution_count as Average_IO,
	total_logical_reads + total_logical_writes as Total_IO,	
    qs.execution_count as Count,
    qs.last_execution_time as Time,
	SUBSTRING(qt.[text], (qs.statement_start_offset / 2) + 1,
		(
			(
				CASE qs.statement_end_offset
					WHEN -1 THEN DATALENGTH(qt.[text])
					ELSE qs.statement_end_offset
				END - qs.statement_start_offset
			) / 2
		) + 1
	) as Query
   ,qt.text
	,qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
where 
DB_NAME(qt.dbid) not in ( 'master','msdb','model')
ORDER BY Avg_CPU_time_sec DESC

-- Top 10 query by IO (Avg IO) from system dmv's

SELECT TOP 10 DB_NAME(qt.dbid) AS DBName,
o.name AS ObjectName,
qs.total_elapsed_time / 1000000 / qs.execution_count As Average_Seconds,
qs.total_elapsed_time / 1000000 As Total_Seconds,
(total_logical_reads + total_logical_writes ) / qs.execution_count AS Average_IO,
(total_logical_reads + total_logical_writes ) AS Total_IO,
'',
qs.execution_count AS Count,
last_execution_time As Time,
SUBSTRING (qt.text,qs.statement_start_offset/2,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) AS Query
--,qt.text
--,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
where last_execution_time > getdate()-1 
and DB_NAME(qt.dbid) not in ( 'master','msdb','model')
--and DB_NAME(qt.dbid) is not null
and qs.execution_count > 5 
ORDER BY average_IO DESC
Note: If you run the test multiple times, capture the output after each run so the result sets can be compared.

Monday 22 August 2022

RDS SQL Server : Unable to restore and getting an error message : "There is not enough space on the disk to perform restore database operation."


If you are getting an error message as "There is not enough space on the disk to perform restore database operation."

This issue appears because, when an RDS restore starts, RDS makes an estimate of how much disk space the database being restored will require. If that estimate exceeds the amount of free space left on the disk, you will see this error.

Kindly note that the restore process looks at the data and log file sizes as configured on the source; if that amount of space is not available on disk up front, the restore fails before it even starts. Also, the SQL Server restore process will not trigger Storage Autoscaling.

For example, let's say the source database (an on-premises, EC2, or RDS SQL Server instance) is configured with a 30 GB data file and a 30 GB log file, but only 10 GB is used inside the data file and 10 GB inside the log file. The backup of this database would then be only about 20 GB (with compression the backup file can be even smaller), versus the 60 GB actually configured on the source. When you restore this 20 GB backup file on an RDS SQL Server instance, the process looks for 60 GB of space on the underlying storage, per the source data and log file configuration. If the target RDS instance has only 50 GB of underlying storage, the restore fails with the space error because it requires 60 GB to proceed.

Now I would suggest checking the actual database size on the source using the script below:
--Useful script to know the current size of database files and the free space.
USE [DBNAME] -- update your db name
GO
SELECT DB_NAME() AS DbName,
name AS FileName,
size/128.0 AS CurrentSizeMB,
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files;

If the log file is using most of the storage space, you can shrink the log file on the source, take the backup again, and retry the restore.
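A rough sketch of that sequence on a non-RDS source (the database name, log file name, target size, and backup path are all placeholders; on an RDS source you would use rds_backup_database instead of BACKUP DATABASE):

USE [SourceDb];
GO
DBCC SHRINKFILE (SourceDb_log, 1024); -- shrink the log file to ~1 GB; adjust to your needs
GO
BACKUP DATABASE [SourceDb]
TO DISK = N'D:\Backups\SourceDb.bak'
WITH COMPRESSION, INIT; -- compressed backup, ready to upload to S3 and restore on RDS
GO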

Sunday 22 May 2022

SQL Server : Unable to connect to the database, error message Logon Login failed for user 'USER01'. Reason: Password did not match that for the login provided. [CLIENT: 10.xxx.xxx.xxx]


In this article, you will be able to see how to troubleshoot when you are unable to connect to the database and getting an error message "Logon Login failed for user 'USER01'. Reason: Password did not match that for the login provided. [CLIENT: 10.xxx.xxx.xxx]."

As the error message indicates, the password you are providing for the user "USER01" does not match the actual one.

You can follow the steps below to troubleshoot:
1. If you have SQL Server Management Studio, retry and test connectivity with the login and the correct password.

2. If you are checking from an application server, check the user/password settings with a UDL file from that server; just follow the steps below:

2.1) Create a new empty text file such as "test.txt" (for example, right-click an empty area of your desktop and select "New" > "Text Document")
2.2) Rename the file to "test.udl"
As soon as you rename the file, the text file icon should change to a UDL icon
2.3) Double-click the test.udl file and a window titled "Data Link Properties" will open
2.4) Provide the correct username and password, then test connectivity from the application server

3. You can check whether there is an orphaned user for the same login using the command below:
-- command to check for orphaned users; this returns the user name and SID:
Use [YourDB_Name]
GO
EXEC sp_change_users_login 'Report'
GO

If that is an orphaned user, fix it using the commands below:

-- First drop the existing login, then re-create it with the orphaned user's SID using the script below.
-- Get the SID for the corresponding user with the command above.
use master
go
CREATE LOGIN [login_name] WITH PASSWORD = 'login_password', SID = 0x... -- use the same SID returned by the script above
GO
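Alternatively, on SQL Server 2005 SP2 and later you can re-map the orphaned database user to an existing login without dropping anything (the user and login names below are placeholders):

Use [YourDB_Name]
GO
ALTER USER [USER01] WITH LOGIN = [USER01];
GO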