First of all, I would like to explain that SQL Server databases have two types of files: data files and log files.
1. Data files contain data and objects such as tables, indexes, stored procedures, and views.
Data files can be shrunk; however, it is not good practice to shrink a data file unless necessary, as doing so can result in blocking, deadlocks, and index fragmentation. The documentation below provides more insight into why shrinking data files should be avoided.
[+] https://littlekendra.com/2016/11/08/shrinking-sql-server-data-files-best-practices-and-why-it-sucks/
[+] https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/
2. Log files contain the information that is required to recover all transactions in the database.
Log files can be shrunk, depending on log space utilization and on whether any transaction has been left open and running.
Now, if you would like to know which file in the database is taking up more space, you can use the command below to identify file utilization:
USE [your_database_name]
GO
SELECT DB_NAME() AS DbName,
       name AS FileName,
       type_desc,
       size/128.0 AS CurrentSizeMB,
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files
WHERE type IN (0,1);
GO
Then perform the shrink accordingly. Here are the steps to shrink the database files:
For Log files:
Firstly, I would request you to check whether you have an open or active transaction. The following queries may help:
1. DBCC SQLPERF(LOGSPACE)
>> Provides the log space utilization
[+] How can I troubleshoot storage consumption in my Amazon RDS DB instance that is running SQL Server? -
https://aws.amazon.com/premiumsupport/knowledge-center/rds-sql-server-storage-optimization/
2. SELECT name, log_reuse_wait_desc FROM sys.databases
>> Explains why each database's log file is not clearing out
[+] sys.databases (Transact-SQL)
- https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-databases-transact-sql?view=sql-server-ver15
>> Possible reasons include [+]:
a. Log backup needs to be run
b. Active backup running - because the full backup needs the transaction log to be able to restore to a specific point in time
c. Active transaction - someone typed BEGIN TRAN and locked their workstation for the night
[+] My Favorite System Column: LOG_REUSE_WAIT_DESC
- https://www.brentozar.com/archive/2016/03/my-favorite-system-column-log_reuse_wait_desc/
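In addition to the queries above, you can check for open transactions directly. Below is a minimal sketch; 'your_database_name' is a placeholder for your actual database name:

```sql
-- Report the oldest active transaction in the current database, if any.
USE [your_database_name];
GO
DBCC OPENTRAN;
GO

-- Alternatively, list sessions that currently hold an open transaction,
-- along with when each transaction began:
SELECT s.session_id,
       s.login_name,
       st.transaction_id,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = st.session_id
JOIN sys.dm_tran_active_transactions AS at
  ON at.transaction_id = st.transaction_id;
```

If DBCC OPENTRAN reports a long-running transaction, that transaction must be committed or rolled back before the log space behind it can be reused.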
As mentioned above, you can shrink the transaction log using the commands below.
Step 1: Please run the command below to get the used and free space of the transaction log in the database [+]
DBCC SQLPERF(LOGSPACE); --> review transaction log utilization per database; proceed to Step 2 only if the log file is more than 80% free
[+] https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-sqlperf-transact-sql?view=sql-server-2017
Step 2: Query to shrink transaction log file
USE [userdb-name]
GO
DBCC SHRINKFILE (logfilename, sizeinmb)
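For example, a sketch of shrinking the log file to a 1024 MB target. The logical name 'userdb_log' is a placeholder; confirm the actual logical name from sys.database_files first:

```sql
USE [userdb-name];
GO
-- Find the logical name of the log file (type 1 = log file):
SELECT name FROM sys.database_files WHERE type = 1;
GO
-- Shrink the log file to 1024 MB. Substitute the logical name
-- returned by the query above for 'userdb_log':
DBCC SHRINKFILE (userdb_log, 1024);
GO
```

Note that DBCC SHRINKFILE takes the logical file name (or file ID), not the physical file path, and the target size is specified in megabytes.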
If this instance is vital to production, I would also recommend using CloudWatch alarms [+] to monitor the 'FreeStorageSpace' metric and prevent storage-full issues. The alarm will notify you when the defined threshold is breached, so you can take immediate action to increase storage.
[+] How can I create CloudWatch alarms to monitor the Amazon RDS free storage space and prevent storage full issues?
- https://aws.amazon.com/premiumsupport/knowledge-center/storage-full-rds-cloudwatch-alarm/
For Data Files:
You can use the same 'DBCC SHRINKFILE' command to shrink a data file to a specified target size.
The following example shrinks the size of a data file named 'DataFile1' in the user database to 1024 MB.
USE [userdb-name];
GO
DBCC SHRINKFILE (DataFile1, 1024);
GO
Or, from the GUI: right-click the database > Tasks > Shrink > Files > select the data file from the drop-down.
I would also like you to know that data moved when a file is shrunk can be scattered to any available location in the file. This causes index fragmentation and can slow the performance of queries that search a range of the index. To eliminate the fragmentation, consider rebuilding the indexes in the file after shrinking.
To view fragmentation information for the data and indexes of a specified database 'DB_name', run:
SELECT OBJECT_NAME(object_id) AS TableName,
       index_id,
       index_type_desc,
       index_level,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats (DB_ID('DB_name'), NULL, NULL, NULL, 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC;
To rebuild all indexes on a table, here is a sample query (note that ALTER INDEX ALL targets a table, i.e. SchemaName.TableName, not a database):
ALTER INDEX ALL ON SchemaName.TableName
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON);
[+] https://docs.microsoft.com/en-us/sql/relational-databases/indexes/reorganize-and-rebuild-indexes?view=sql-server-ver15
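Rebuilding every index can be expensive. A commonly cited guideline (not a hard rule) is to REORGANIZE indexes with 5-30% fragmentation and REBUILD those above 30%. Below is a sketch that generates the appropriate statement per index from the fragmentation data; review the generated statements before executing any of them:

```sql
-- Sketch: generate a REORGANIZE or REBUILD statement per index based on
-- the common 5-30% / >30% fragmentation guideline. Run in the target
-- database; this only produces statements, it does not execute them.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
       + CASE WHEN ps.avg_fragmentation_in_percent > 30
              THEN ' REBUILD;'
              ELSE ' REORGANIZE;'
         END AS MaintenanceStatement,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o
  ON o.object_id = ps.object_id
WHERE ps.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;  -- skip heaps, which have no index name
```

REORGANIZE is an online, interruptible operation, whereas REBUILD is more thorough but heavier; on editions without online rebuilds it can block access to the table.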
These suggestions are provided on a best-effort basis. To conclude, I would highly recommend testing them thoroughly before implementing them in any production environment; proactive testing helps avoid unforeseen issues during the actual implementation.