Wednesday 14 October 2020

Azure PowerShell : Script to create Azure App Service

This PowerShell script creates an Azure App Service hosting a .NET Core web application:
#Provide following variables:

$subscriptionname = ""
$Environment = ""
$webappName=""              #Provide Web App Name
$location="westus"                         #Provide Location
$resourceGroupName = ""     #Provide Resource Group
$AppServicePlan = ""   #Provide Service Plan
$BusinessOwner = ""
$Owner =""

$SubsId = Get-AzSubscription -SubscriptionName $subscriptionname

Select-AzSubscription -SubscriptionId $SubsId.Id

#Creating WebApp
New-AzWebApp -ResourceGroupName $resourceGroupName -Name $webappName `
-Location $location -AppServicePlan $AppServicePlan

Set-AzWebApp -ResourceGroupName $resourceGroupName `
-Name $webappName -AlwaysOn $true


#Setting .NET CORE
$dotnetResourceName = $webappName + "/metadata"

$PropertiesObject = @{
    "CURRENT_STACK" =  "dotnetcore"
}
New-AzResource -PropertyObject $PropertiesObject -ResourceGroupName $resourceGroupName `
-ResourceType Microsoft.Web/sites/config -ResourceName $dotnetResourceName -Force



#Creating App Insight
New-AzApplicationInsights -ResourceGroupName $resourceGroupName `
-Name $webappName -Location $location

# Collect the created Application Insights resource (the whole resource object, not just the key)
$appInsightsKey = Get-AzApplicationInsights -Name $webappName `
-ResourceGroupName $resourceGroupName


$errorpage = "https://"+ $webappName + 
".scm.azurewebsites.net/detectors?type=tools&name=eventviewer"

$appconnectionstring = "InstrumentationKey=" + $appInsightsKey.InstrumentationKey `
+";IngestionEndpoint=https://"+ $location + "-0.in.applicationinsights.azure.com/"


$app = Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $webappName -ErrorAction Stop
$newAppSettings = @{} # case-insensitive hash map
# preserve non Application Insights application settings.
$app.SiteConfig.AppSettings | %{$newAppSettings[$_.Name] = $_.Value} 
$newAppSettings["ANCM_ADDITIONAL_ERROR_PAGE_LINK"] = $errorpage;
 # set the Application Insights instrumentation key
$newAppSettings["APPINSIGHTS_INSTRUMENTATIONKEY"] = $appInsightsKey.InstrumentationKey;
 # set the Application Insights connection string
$newAppSettings["APPLICATIONINSIGHTS_CONNECTION_STRING"] = $appconnectionstring;
 # enable the ApplicationInsightsAgent
$newAppSettings["ApplicationInsightsAgent_EXTENSION_VERSION"] = "~2";
$newAppSettings["XDT_MicrosoftApplicationInsights_Mode"] = "default";

$app = Set-AzWebApp -AppSettings $newAppSettings `
-ResourceGroupName $app.ResourceGroup -Name $app.Name -ErrorAction Stop

$Tags = @{App="Apps";BusinessOwner=$BusinessOwner;Owner=$Owner;Environment =$Environment};

Set-AzResource -ResourceGroupName $resourceGroupName `
-Name $webappName -ResourceType Microsoft.Web/sites -Tag $Tags -Force
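
A quick, hedged verification sketch (reusing the variable names above) to confirm the web app was created and the tags applied:

```powershell
# Sketch: confirm the app exists and inspect its state and tags
$check = Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $webappName
$check.State      # expected to be "Running" once provisioning completes
(Get-AzResource -ResourceGroupName $resourceGroupName `
 -Name $webappName -ResourceType Microsoft.Web/sites).Tags
```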
    
  

Tuesday 21 July 2020

MS Azure : Use PowerShell ISE to Run Azure Automation Runbooks

We can manage and run Azure Automation runbooks using PowerShell ISE. This makes it easy to test a PowerShell script in the local console first and then publish it to Azure Automation runbooks.
To do this, install the Azure Automation authoring toolkit module on the local machine.


You can install the ISE add-on module from the PowerShell Gallery with:
Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser
Then, if you want PowerShell ISE to always automatically load the add-on, run:
Install-AzureAutomationIseAddOn
Otherwise, whenever you want to load the add-on, just run the following in the PowerShell ISE:
Import-Module AzureAutomationAuthoringToolkit


(Screenshot: the Azure Automation ISE add-on pane as it appears in PowerShell ISE after importing the module and signing in.)

Wednesday 15 July 2020

MS Azure : Run Login-AzureRmAccount to Login Silently in Azure Automation Runbooks

How to run Login-AzureRmAccount silently when a PowerShell script in an Azure Automation runbook needs to get data from the Azure subscription.

Issue: This blogpost is to fix the below error message:
Get-AzureRmResourceGroup : Run Login-AzureRmAccount to login.
At line:35 char:19
+ $ResourceGroups = Get-AzureRmResourceGroup
+                   ~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:)PSInvalidOperationException
    + FullyQualifiedErrorId : InvalidOperation,Microsoft.Azure.Commands.Resources

Executing any AzureRM (Azure Resource Manager) PowerShell command first requires authentication at the Azure subscription level.
So executing an AzureRM command without Login-AzureRMAccount produces the error message above.

Solution:
Use an Azure AD service principal to resolve this issue. So what is a service principal?
A service principal lets an application run as an identity of its own, with access granted to it, so it can perform actions at the Azure subscription level. Because an Azure subscription always lives in an Azure Active Directory tenant, the application's information must be registered in that Azure AD tenant, and that registration is the service principal.
So how do we create a service principal? You don't have to create one: it already exists once an Azure ARM Automation account is created.
Open Assets -> Connections -> AzureRunAsConnection
This shows the type as Azure service principal, along with several IDs.

The Application Id is how your Automation account is identified as a service principal in Azure AD. The Tenant Id is the Azure AD tenant under which your subscription exists, and the Subscription Id is the actual Azure subscription Id.
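
The same IDs can be read from within a runbook. A minimal sketch (the connection name is the default one created with the Automation account):

```powershell
# Sketch: inspect the Run As connection fields from inside a runbook
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
$conn.ApplicationId          # app registration in Azure AD
$conn.TenantId               # Azure AD tenant hosting the subscription
$conn.SubscriptionId         # the Azure subscription itself
```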

Let's verify this exists in your Azure AD as well. In the Azure portal, open
Azure Active Directory -> App Registrations
You will see an Application Id matching the one observed under the Automation account connection.

This means the AzureRunAsConnection of the Automation account acts as a service principal, so it can be used to authenticate against the subscription and to perform operations on it. With that, let's write some PowerShell code to authenticate using the service principal.

Authenticating using Service principal

The code to authenticate an Azure Automation runbook using the Automation connection as a service principal is shown below. Add it to any runbook in your Automation account and you will never again see the "Run Login-AzureRMAccount to login" error.
That's it!! :-)
$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection "
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName      
    "Logging in to Azure..."
    $account = Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else{
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}
Write-Output $account

Friday 1 May 2020

MS Azure : PowerShell script to Map Azure Storage Fileshare as a Drive Letter

PowerShell script to map fileshare named folder "DatabaseFiles" from Azure fileshare named "filesdbprod01":

# Input Variables : Provide input to following Variables

$filesharename="filesdbprod01"
$accesskey = "asnjanskdaodnajksndjknaskdnand&^&T^ASDASasaksjdkasjsdajsod"
$root = "DatabaseFiles"
$drive = "X"


# Script Variables
$fullFSName= $filesharename + ".file.core.windows.net"
$FSroot = "\\" + $fullFSName + "\" +$root
$user = "Azure\" + $filesharename

$connectTestResult = Test-NetConnection -ComputerName $fullFSName -Port 445

if ($connectTestResult.TcpTestSucceeded) {
# Save the password so the drive will persist on reboot
cmd.exe /C "cmdkey /add:$fullFSName /user:$user /pass:$accesskey"
# Mount the drive
New-PSDrive -Name $drive -PSProvider FileSystem -Root $FSroot -Persist
} else {
Write-Error -Message "Unable to reach the Azure storage account via port 445.
Check to make sure your organization or ISP is not blocking port 445,
or use Azure P2S VPN, Azure S2S VPN, or ExpressRoute to tunnel 
SMB traffic over a different port."
} 
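
To undo the mapping later, a minimal sketch (reusing the same variable names as above):

```powershell
# Sketch: remove the mapped drive and the stored credential
Remove-PSDrive -Name $drive
cmd.exe /C "cmdkey /delete:$fullFSName"
```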



Monday 6 April 2020

SQL AlwaysON : TSQL Script to automatically add a new step to TestForPrimary

TSQL Script to automatically add a new step to TestForPrimary for SQL AlwaysOn Solution:


USE [DBA]
GO
SET NOCOUNT ON
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

--Job

DECLARE @jobID nvarchar(40),@stepID nvarchar(40),@stepID_nxt nvarchar(40),@procName varchar (255),
@jobName varchar (255), @command_job varchar(8000),@command_step varchar(8000),
@command_end varchar(8000),@command_sched varchar(8000),
@CommandString NVARCHAR(1000),@command_main nvarchar(max)

DECLARE @job_name as varchar(255),@owner_login_name as varchar(255),@description as varchar(255),
@category_name as varchar(255),@enabled as nvarchar(4)

DECLARE @notify_level_email as nvarchar(4),@notify_level_page as nvarchar(4),
@notify_level_netsend as nvarchar(4)

DECLARE @notify_level_eventlog as nvarchar(4),@delete_level as nvarchar(4),
@start_step_id nvarchar(4)

--Job Steps

DECLARE @step_name varchar(255),@command varchar(8000),@database_name varchar(255),
@database_user_name varchar(255), @subsystem nvarchar(40)

DECLARE @cmdexec_success_code nvarchar(2),@flags nvarchar(2),@retry_attempts nvarchar(2),
@retry_interval nvarchar(2),@output_file_name varchar (255)

DECLARE @on_success_step_id nvarchar(3),@on_success_action nvarchar(2),
@on_fail_step_id nvarchar(2),@on_fail_action nvarchar(2)

     
SET @CommandString = 'use master
GO
Declare @AGName varchar(10);
SELECT TOP(1) @AGName= name FROM sys.availability_groups;

if (SELECT ars.role_desc from sys.dm_hadr_availability_replica_states
ars inner join sys.availability_groups ag 
on ars.group_id = ag.group_id where ag.name = @AGName and ars.is_local = 1) = ''PRIMARY''
BEGIN
PRINT ''This is Primary Replica - execute job''
END

ELSE

BEGIN
-- we''re not in the Primary - exit gracefully: Deliberately cause a Failure
SELECT 1/0
PRINT ''This is Secondary replica - exiting with success'';
END'


if EXISTS(select * from DBA.INFORMATION_SCHEMA.TABLES where table_name='server_agent_jobs')

BEGIN

DROP TABLE DBA..server_agent_jobs

End

--ignore multi-server jobs at this time

select sj.job_id, name, '0' as executed,category_id,'1' as spcreated,
'1999-12-31 00:00:00.000' as createddate,
('EXECUTE [dbo].[sp_create_server_agent_job_' + name +']') as excmd,
('sp_create_server_agent_job_' + name) as spname into DBA..server_agent_jobs 
from msdb..sysjobs sj inner join msdb..sysjobservers ss on ss.job_id = sj.job_id 
and ss.server_id = '0' order by sj.job_id

WHILE EXISTS (select job_id from DBA..server_agent_jobs where executed = '0')

BEGIN

--get working job_id and stored proc name

SELECT top 1 @jobID = job_id, @procName = 'sp_create_server_agent_job_' + name, @jobName = name
FROM DBA..server_agent_jobs WHERE executed = '0' order by job_id


--get job info from sysjobs

select @job_name = a.name,@owner_login_name = b.name,@description = a.description,
@category_name = c.name,@enabled = a.enabled,
@notify_level_email = a.notify_level_email,@notify_level_page = a.notify_level_page,
@notify_level_netsend = a.notify_level_netsend,@notify_level_eventlog = a.notify_level_eventlog,
@delete_level = a.delete_level, @start_step_id = a.start_step_id
FROM msdb..sysjobs a, master..syslogins b, msdb..syscategories c
WHERE a.owner_sid = b.sid and a.category_id = c.category_id and job_id = @jobID

--get job info from sysjobsteps

if EXISTS(select * from DBA.INFORMATION_SCHEMA.TABLES where table_name='server_agent_job_steps')

BEGIN
DROP TABLE DBA..server_agent_job_steps
End

select *, '0' as executed into DBA..server_agent_job_steps
from msdb..sysjobsteps sjs where job_id = @jobID order by sjs.step_id

if EXISTS(select * from DBA.INFORMATION_SCHEMA.TABLES
where table_name='server_agent_job_schedules')

BEGIN
DROP TABLE DBA..server_agent_job_schedules
End

select *, '0' as executed into DBA..server_agent_job_schedules from msdb..sysjobschedules sjs
where job_id = @jobID order by sjs.schedule_id

--drop proc if it exists before it's re-created

if EXISTS(select * from DBA.INFORMATION_SCHEMA.ROUTINES where routine_name = @procName)
BEGIN
exec('DROP PROCEDURE [' + @procName + ']')
End

SET CONCAT_NULL_YIELDS_NULL OFF

select @command_job = 'CREATE Procedure [dbo].[' + @procName + '](
@operator varchar(255)=NULL
)

AS

BEGIN
SET NOCOUNT OFF
DECLARE @ReturnCode nvarchar (40)--, @jobID nvarchar (40)

Begin Transaction

--delete job if it already exists (by job name -> @jobName)

if EXISTS(select * from msdb..sysjobs where name = ''' + @jobName + ''')
begin
Print ''Job Updated''
end

--add job steps

'
WHILE EXISTS (select * from DBA..server_agent_job_steps where executed = '0')

BEGIN
select top 1 @stepID = step_id from DBA..server_agent_job_steps
where executed = '0' order by step_id desc

SELECT @step_name = a.step_name,@command = a.command, @database_name = a.database_name,
@database_user_name = a.database_user_name,
@subsystem = a.subsystem, @cmdexec_success_code = a.cmdexec_success_code, @flags = a.flags,
@retry_attempts = a.retry_attempts,
@retry_interval = a.retry_interval, @output_file_name = a.output_file_name,
@on_success_step_id = a.on_success_step_id,
@on_success_action = a.on_success_action, @on_fail_step_id = a.on_fail_step_id,
@on_fail_action = a.on_fail_action
FROM DBA..server_agent_job_steps a where a.step_id = @stepID

select @command_job = @command_job +'
EXECUTE msdb.dbo.sp_delete_jobstep
@job_id = ''' + @JobID + ''',
@step_id = '+ @stepID +'

'
update DBA..server_agent_job_steps set executed = '1' where step_id = @stepID
END

update DBA..server_agent_job_steps set executed = '0'

SELECT @command_step = 'EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep 
@job_id=''' + @JobID + ''',
   @step_name =''TestForPrimary'',
                  @step_id=1,
                  @cmdexec_success_code=0,
                  @on_success_action=3,
                  @on_success_step_id=0,
                  @on_fail_action=1,
                  @on_fail_step_id=0,
                  @retry_attempts=0,
                  @retry_interval=0,
                  @os_run_priority=0,@subsystem=''TSQL'',
                  @command = ''' + REPLACE (@CommandString,'''','''''') + ''',
                  @database_name=''master'',
                  @flags=0

'

WHILE EXISTS (select * from DBA..server_agent_job_steps where executed = '0')

BEGIN
select top 1 @stepID = step_id from DBA..server_agent_job_steps
where executed = '0' order by step_id

SELECT @step_name = a.step_name,@command = a.command,
@database_name = a.database_name,@database_user_name = a.database_user_name,
@subsystem = a.subsystem, @cmdexec_success_code = a.cmdexec_success_code,
@flags = a.flags, @retry_attempts = a.retry_attempts,
@retry_interval = a.retry_interval, @output_file_name = a.output_file_name,
@on_success_step_id = a.on_success_step_id,
@on_success_action = a.on_success_action, @on_fail_step_id = a.on_fail_step_id,
@on_fail_action = a.on_fail_action
FROM DBA..server_agent_job_steps a where a.step_id = @stepID

SET @stepID_nxt=@stepID+1

select @command_step = @command_step + 'EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep

@job_id = ''' + @JobID + ''',
@step_id = '''+ @stepID_nxt +''',
@step_name = ''' + @step_name + ''',
@command = ''' + REPLACE (@command,'''','''''') + ''',
@database_name = ''' + @database_name + ''',
@server = ''' + '' + ''',
@database_user_name = ''' + @database_user_name + ''',
@subsystem = ''' + @subsystem + ''',
@cmdexec_success_code = ''' + @cmdexec_success_code + ''',
@flags = ''' + @flags + ''',
@retry_attempts = ''' + @retry_attempts + ''',
@retry_interval = ''' + @retry_interval + ''',
@output_file_name = ''' + @output_file_name + ''',
@on_success_step_id = ''' + @on_success_step_id + ''',
@on_success_action = ''' + @on_success_action + ''',
@on_fail_step_id = ''' + @on_fail_step_id + ''',
@on_fail_action = ''' + @on_fail_action + '''

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

'

update DBA..server_agent_job_steps set executed = '1' where step_id = @stepID
END

--update start step id from sysjobs

select @command_step = @command_step + 'EXECUTE @ReturnCode = msdb.dbo.sp_update_job
@job_id = ''' + @JobID + ''',
@start_step_id = ''' + @start_step_id + '''

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

'

select @command_end = '

COMMIT TRANSACTION
GOTO THE_END

QuitWithRollback:
print ''job failed''

IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION

THE_END:
END'

--print (@command_job + @command_step + @command_sched + @command_end)

SET CONCAT_NULL_YIELDS_NULL ON

--@command_sched is never assigned, so guard it with ISNULL or the whole concatenated string becomes NULL
execute (@command_job + @command_step + ISNULL(@command_sched,'') + @command_end)

update DBA..server_agent_jobs set executed = '1',createddate=getdate()
where DBA..server_agent_jobs.job_id = @jobID

select @command_step = ''

END

update DBA..server_agent_jobs set spcreated = 0,excmd='--NA',spname='--NA'
where spname NOT IN (SELECT SPECIFIC_Name FROM DBA.INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE' and SPECIFIC_NAME like 'sp_create_server_agent_job_%')


Monday 20 January 2020

PowerShell to Send Mail in HTML format using SMTP

PowerShell script to send mail in HTML format using an SMTP server:

PS script to send mail with an optional attachment.
Information required:
1. Create an ".html" page, as used in the script ("content.html").
2. Make a note of the file path of any attachment for the email.
3. SMTP server details.
4. Save the script below with a ".ps1" extension. That's it!!

############################################

###########Define Variables########

$fromaddress = "donotreply@labtest.com"
$toaddress = "A@labtest.com"
$bccaddress = "A@labtest.com"
$CCaddress = "A@labtest.com"
$Subject = "Action Required"
$body = get-content C:\dba_path\content.html
$attachment = "C:\dba_path\test.txt"
$smtpserver = "smtp.lab.com"

####################################

$message = new-object System.Net.Mail.MailMessage
$message.From = $fromaddress
$message.To.Add($toaddress)
$message.CC.Add($CCaddress)
$message.Bcc.Add($bccaddress)
$message.IsBodyHtml = $True
$message.Subject = $Subject
$attach = new-object Net.Mail.Attachment($attachment)
$message.Attachments.Add($attach)
$message.body = $body
$smtp = new-object Net.Mail.SmtpClient($smtpserver)
$smtp.Send($message)

###############################################
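
The same result can be sketched with the built-in Send-MailMessage cmdlet, reusing the variables above (no MailMessage object needed; the attachment is optional here too):

```powershell
# Sketch: equivalent send using Send-MailMessage
Send-MailMessage -From $fromaddress -To $toaddress -Cc $CCaddress -Bcc $bccaddress `
    -Subject $Subject -Body ($body | Out-String) -BodyAsHtml `
    -Attachments $attachment -SmtpServer $smtpserver
```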

Wednesday 15 January 2020

PowerShell Script to execute command on multiple Servers

PowerShell Script to execute same command and get some information from multiple servers:

PS Script to run remotely on multiple servers:

#path of the text file with the list of all the servers

$path = "C:\PSScripts\serverslist.txt"
$computers = Get-Content -Path $path

$software = "SQL Server*";
# Note: this checks services on the local machine where the script runs
$installed = ($null -ne (Get-Service | Where-Object {$_.DisplayName -like $software}))

#Another way is to use like below
#$computers = @("Server1","Server2","Server3")

#Can also ask\save for credential who have access on all servers before execution
#$Cred = Get-Credential # Add Credentials for all Servers (Domain or non-Domain)
  

# Run Command 

foreach($computer in $computers){
 Write-Host "Running process on $computer" -ForegroundColor green

    
if (Test-Connection -ComputerName $computer -Count 1 -ErrorAction SilentlyContinue)
    {

#With $cred option
#Invoke-Command -ComputerName $computer -ScriptBlock `
#{Get-WmiObject -Class Win32_Volume -EA silentlyContinue | `
#Select-Object $env:computername,Name,Label,BlockSize | Format-Table -AutoSize} -Credential $cred

 #Without $cred option
         Invoke-Command -ComputerName $computer -ScriptBlock `
         {Get-WmiObject -Class Win32_Volume -EA silentlyContinue | `
          Select-Object $env:computername,Name,Label,BlockSize | Format-Table -AutoSize}
                    

#additional check on running particular script on SQL Server
             If(-Not $installed) { Write-Host "'$software' is not installed." }
             else { Invoke-Sqlcmd -InputFile `
                    "C:\PSScripts\TempDBValidation.sql" -ServerInstance $computer}


    }
else{ Write-Host "$computer Server is not accessible" -ForegroundColor red }

#For loop end
}
Save the script as ".ps1" extension.

Ways to execute the script and capture output in a text/CSV file:
1. Open PowerShell as admin mode
2. Go to path where ".ps1" file is saved.
3. Two ways to execute-
Simple way to execute & output on PowerShell console:
.\Remote_PS-Script.ps1

Capture Output in csv:
.\Remote_PS-Script.ps1 | `
Out-File -FilePath C:\output\PS_Script-$(Get-Date -format yyyyMMdd_hhmmsstt).csv -Append

Friday 10 January 2020

Useful PowerShell scripts -1

Here are a few very useful short PowerShell commands for basic checks:

Disk info:
Get-WmiObject Win32_volume -ComputerName .|Format-Table Name, Label,
@{Name="SizeGB";Expression={($_.capacity/1gb -as [int])}},
@{Name="FreeSpaceGB";Expression={([math]::Round($_.Freespace/1GB,0))}},
@{Name="UsedGB";Expression={($_.capacity/1gb -as [int])-([math]::Round($_.Freespace/1GB,0))}}


Disk block size format info:
Script-1:
Get-CimInstance -ClassName Win32_Volume | Select-Object Name, Label, BlockSize | Format-Table -AutoSize
Script-2:
Get-WmiObject -Class Win32_Volume -ComputerName . | Select-Object Name,Label,BlockSize | Format-Table -AutoSize

Disk Lun ID\Physical location info:
Get-WmiObject Win32_DiskDrive | sort scsibus,scsilogicalunit | 
ft @{Label="ScsiBus";Expression={$_.scsibus}},@{Label="LUN";Expression={$_.scsilogicalunit}}, 
@{Label="Disk Number";Expression={$_.index}}, 
@{Label="Size (GB)";Expression={[Math]::Round($_.size / 1GB)}} -autosize


Disk partition style (GPT\MBR) info:

Get-Disk | Select-Object  DiskNumber,FriendlyName,SerialNumber,HealthStatus,OperationalStatus,
@{Name="SizeGB";Expression={($_.Size/1gb -as [int])}},PartitionStyle | Format-Table -AutoSize


Add User at server level remotely\locally :
Invoke-Command -ComputerName . 
-ScriptBlock {Add-LocalGroupMember -Group Administrators -Member "domain\account"}


Sample code to check if software is installed, do something & if not, do something else :
$software = "SQL Server*";
$installed = ($null -ne (Get-Service | Where-Object {$_.DisplayName -like $software}))

If(-Not $installed)
{
    Write-Host "Copy '$software'..."

    Copy-Item -Path "Source_Path" -Destination "C:\dba\xxxx.msi"

    Get-Date

    Write-Host "Ready to install '$software'..."
    Read-Host -Prompt "Press Enter to continue"

    Invoke-Command -ComputerName . -ScriptBlock `
    {Start-Process msiexec.exe -Wait -ArgumentList '/I C:\dba\xxx.msi /quiet'}
    Write-Host "driver Installed!!"
}
else { Write-Host "'$software' is installed." }


Tuesday 7 January 2020

Azure Useful PowerShell Scripts -1

Here are a few very useful Azure PowerShell/CLI short commands for basic checks:

Select Subscription:
Get-AzureRmSubscription
Select-AzureRmSubscription -SubscriptionId "12y127128y172kajsnkdaj"
List RG:
Get-AzureRmResourceGroup | SELECT ResourceGroupName,Location
List of Storage Account:
Get-AzureRmStorageAccount | Select StorageAccountName, Location
Get Storage Account Key:
Get-AzureRmStorageAccountKey -ResourceGroupName "RGName" -AccountName "Storage_AccName"
Create a storage context (needed to list containers and blobs):
$context = New-AzureStorageContext -StorageAccountName "Storage_AccName" `
-StorageAccountKey "XXXXXXXXXXXX"
List of all Blobs\Files:

Get-AzureStorageBlob -Container 'Container_Name' -Context $context |
 Select-Object @{name="Name"; 
expression={"https://xyz.blob.core.windows.net/sqldbbackups1/"+$_.Name}} |
 Where-Object { $_.Name -like '*.bak*'}

OR to use in restore command:

Get-AzureStorageBlob -Container 'Container_Name' -Context $context |
 Select-Object @{name="Name"; 
expression={"URL = 'https://xyz.blob.core.windows.net/sqldbbackups1/"+$_.Name + "',"}} |
 Where-Object { $_.Name -like '*.bak*'} | Format-List *

Change/replace $ sign to %24 to convert into URL:

Get-AzureStorageBlob -Container 'Container_Name' -Context $context |
 Select-Object @{name="Name"; 
expression={"URL = 'https://xyz.blob.core.windows.net/sqldbbackups1/"+$_.Name.replace('$','%24')
 + "',"}} |
 Where-Object { $_.Name -like '*.bak*'} | Format-List *
Sample scripts used in real scenarios:
Get the list of containers with their last-modified date:
Get-AzureStorageContainer -Context $context | SELECT Name, Lastmodified

Another example: check whether the files on a storage blob container are older than 7 days:

$lastdate = (Get-AzureStorageBlob -Container "Cont_Name" -Context $context |
 Sort-Object LastModified | Select-Object -First 1).LastModified.DateTime
$checkdate = (Get-Date).AddDays(-7)

if($lastdate -le $checkdate) {
$fromaddress = "donotreply@xyz.com"
$toaddress = "amit@xyz.com"
$Subject = "Action Required : No Truncation "
$body = "Backup files are not getting truncated!!"
$smtpserver = "smtpmail.xyz.com"
$message = new-object System.Net.Mail.MailMessage
$message.From = $fromaddress
$message.To.Add($toaddress)
$message.Subject = $Subject
$message.body = $body
$smtp = new-object Net.Mail.SmtpClient($smtpserver)
$smtp.Send($message)
Write-Output "Alert triggered!!"}
else{
     Write-Output "All Good!!"
}

Count total files and total size in an Azure blob container: 

$resourceGroup = "RGName"
$storageAccountName = "Storage_AccName"
$containerName = "containerName"

# get a reference to the storage account and the context
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName `
$resourceGroup -Name $storageAccountName
$context = $storageAccount.Context

# get a list of all of the blobs in the container 
$listOfBLobs = Get-AzureStorageBlob -Container $ContainerName -Context $context 

# zero out our total
$length = 0

# this loops through the list of blobs and retrieves the length for each blob
#   and adds it to the total
$listOfBlobs | ForEach-Object {$length = $length + $_.Length}

$count=$listOfBlobs | select Name, Length
$length= (($length)/1073741824)

# output the blobs and their sizes and the total 
Write-Host "List of Blobs and their size (length)"
Write-Host " " 
Write-Host " "
Write-Host "Total Files Count = " $count.Count
Write-Host "Total Size in GiB = " $length 

Wednesday 1 January 2020

SQL Server Database Partitioning

Today we will walk through an example of SQL Server database table partitioning.
First, identify the column on which to partition the rows. Rows are most often partitioned on a DateTime column, which is what we will implement today.
Here, we are creating a table partition based on a DateTime column in an existing, running database. Follow the steps below:
1. Add new filegroups to the existing database. You can add files in multiple filegroups on the same disk or on different disks. If you plan on using multiple aligned tables, and data-retrieval statements will almost always affect a small subset of rows in each table that can be grouped into partitions, you should consider using multiple disk arrays. However, if you anticipate many SELECT statements retrieving a large majority of rows from each table, performing I/O against multiple disk arrays could actually worsen performance. In that case you can still benefit from partitioning, but should create multiple filegroups on the same disk array.
The following statements create filegroups in DBADB001 database:
ALTER DATABASE DBADB001 ADD FILEGROUP [FG_2012]  
GO  
ALTER DATABASE DBADB001 ADD FILEGROUP [FG_2013]  
GO  
ALTER DATABASE DBADB001 ADD FILEGROUP [FG_2014]  
GO  
ALTER DATABASE DBADB001 ADD FILEGROUP [FG_2015]  
GO
Each filegroup can have one or multiple files associated with it.
2. Add one file to each filegroup so that you can store partition data in each filegroup:
ALTER DATABASE DBADB001
  ADD FILE
  (NAME = N'DataFG_2012',
  FILENAME = N'D:\mssql\mssqlserver\mdf\DataFG_2012.ndf',
  SIZE = 50MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 5MB)
  TO FILEGROUP [FG_2012]  
GO  
ALTER DATABASE DBADB001
  ADD FILE
  (NAME = N'DataFG_2013',
  FILENAME = N'D:\mssql\mssqlserver\mdf\DataFG_2013.ndf',
  SIZE = 50MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 5MB)
  TO FILEGROUP [FG_2013]  
GO 
ALTER DATABASE DBADB001
  ADD FILE
  (NAME = N'DataFG_2014',
  FILENAME = N'D:\mssql\mssqlserver\mdf\DataFG_2014.ndf',
  SIZE = 50MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 5MB)
  TO FILEGROUP [FG_2014]  
GO 
ALTER DATABASE DBADB001
  ADD FILE
  (NAME = N'DataFG_2015',
  FILENAME = N'D:\mssql\mssqlserver\mdf\DataFG_2015.ndf',
  SIZE = 50MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 5MB)
  TO FILEGROUP [FG_2015]  
GO 
The CREATE TABLE statement normally specifies a particular filegroup on which the table is built. However, with SQL Server 2005 and later you can reference a partition scheme instead of a filegroup, because the table can be spread across multiple filegroups.

Partition functions commonly reference a column with the DATETIME data type. This makes sense if you want to spread the data based on its creation timeframe. Data warehouse fact tables typically don't contain a DATETIME column; instead, they normally include a foreign key referencing the date and time value in the time dimension (or, more accurately, the date dimension).

An important point to keep in mind is that you are not limited to DATETIME columns for partitioning keys. You could use an INTEGER key referencing the date dimension. However, if you use integer values for partitioning and want to partition by date, your application must ensure that records for a given week, month, quarter, or year (depending on your implementation) fall into certain ranges of integers.
3. Create the new table and will insert random data about 5000 rows.
use DBADB001
GO

CREATE TABLE [dbo].[TestTable] 
([pkcol] [int] NOT NULL,
 [Int1] [int] NULL,
 [Int2] [int] NULL,
 [TestName] [varchar](50) NULL,
 [partitioncol] DateTime)
GO

ALTER TABLE dbo.TestTable ADD CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED (pkcol) 
GO
CREATE NONCLUSTERED INDEX IX_TABLE1 ON dbo.TestTable (Int1,Int2)
  WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, 
        ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) 
  ON [PRIMARY]
GO
-- Populate table data
DECLARE @val INT
SELECT @val=1
WHILE @val < 5001
BEGIN  
   INSERT INTO dbo.TestTable(pkcol, Int1, Int2, TestName, partitioncol) 
      VALUES (@val,@val,@val,'TEST',getdate()-@val)
   SELECT @val=@val+1
END
GO

--To get table rows count
SELECT o.name objectname,i.name indexname, partition_id, partition_number, [rows]
FROM sys.partitions p
INNER JOIN sys.objects o ON o.object_id=p.object_id
INNER JOIN sys.indexes i ON i.object_id=p.object_id and p.index_id=i.index_id
WHERE o.name LIKE '%TestTable%'
4. Now Create a Partition Function:
CREATE PARTITION FUNCTION DateRangePFunc (DATETIME) AS
RANGE LEFT FOR VALUES 
('20121231 23:59:59.997', 
'20131231 23:59:59.997',
'20141231 23:59:59.997',
'20151231 23:59:59.997'
)
GO
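Once the function exists, you can check which partition a given date maps to using the $PARTITION function (a quick sanity check; partition numbers follow the boundary order above):

-- A date in 2014 falls above the 2013 boundary and at/below the 2014 boundary,
-- so it maps to partition 3 with this RANGE LEFT function
SELECT $PARTITION.DateRangePFunc('20140615') AS partition_number
GO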
5. Now Create Partition Scheme, which takes advantage of the partition function and maps each data range to different filegroups:
CREATE PARTITION SCHEME DateRangePScheme  AS
  PARTITION DateRangePFunc  TO 
  ([FG_2012],
  [FG_2013],
  [FG_2014],
  [FG_2015],
  [PRIMARY]  )
Using this partition scheme, all records with a partitioncol value on or before December 31, 2012 will be placed on the [FG_2012] filegroup; values between January 1, 2013 and December 31, 2013 will be placed on [FG_2013], and so forth. Any records for which the partitioncol value falls after December 31, 2015 will be placed on the PRIMARY filegroup.
Note that a catch-all filegroup has to be specified; in this case the PRIMARY filegroup holds records that do not fall into any explicitly defined date range. As a best practice, you should reserve the PRIMARY filegroup for system objects and create a separate catch-all filegroup for user data.
Now that there is a partition function and scheme, you can create a partitioned table. The syntax is very similar to any other CREATE TABLE statement, except that it references the partition scheme instead of referencing a filegroup:
CREATE TABLE [dbo].[TestTable_PT] 
([pkcol] [int] NOT NULL,
 [Int1] [int] NULL,
 [Int2] [int] NULL,
 [TestName] [varchar](50) NULL,
 [partitioncol] DateTime)
  ON DateRangePScheme  (partitioncol)
GO
Now that the table has been created on a partition scheme, populate it based on the existing table "TestTable" and subsequently examine the row count in each partition:
/* normally you should avoid using "SELECT *" construct.
   But since this is an example of populating a sample table
   "SELECT *" won't cause any performance or usability issues.
   In production environments you will typically use BULK INSERT
   statement to populate partitioned tables based on flat files.  
*/
INSERT TestTable_PT
SELECT *
FROM   TestTable
       
/* next we can query the catalog views to retrieve the row count,
   filegroup placement and boundary value for each partition:
*/
DECLARE @TableName NVARCHAR(200) = N'dbo.TestTable_PT'
 
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
     , p.partition_number AS [p#]
     , fg.name AS [filegroup]
     , p.rows
     , au.total_pages AS pages
     , CASE boundary_value_on_right
       WHEN 1 THEN 'less than'
       ELSE 'less than or equal to' END as comparison
     , rv.value
     , CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
       SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
       CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
       SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
       SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i
     ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o
     ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au
     ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps
     ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f
     ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds
     ON dds.partition_scheme_id = ps.data_space_id
     AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
     ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv
     ON f.function_id = rv.function_id
     AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
     AND o.object_id = OBJECT_ID(@TableName);
Results (abbreviated to the partition number and row count columns; the counts depend on your data):
p# rows
1 1013000
2 2677000
3 24443000
4 32265000

Notice that partition identifiers start at 1. The "catchall" partition located on the PRIMARY filegroup has partition id 5; it isn't retrieved by the above statement because it doesn't contain any records. You can retrieve all records for a particular partition identifier using the $PARTITION function:
SELECT * FROM dbo.TestTable_PT
  WHERE $PARTITION.DateRangePFunc(partitioncol) = 2
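Applied to the partitioned table created earlier, $PARTITION also gives a quick per-partition row count without touching the catalog views - a simpler sketch of the same information as the large query above:
SELECT $PARTITION.DateRangePFunc(partitioncol) AS partition_number,
       COUNT(*) AS row_count
FROM dbo.TestTable_PT
GROUP BY $PARTITION.DateRangePFunc(partitioncol)
ORDER BY partition_number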
Now if you create an index on this table, by default the index will be partitioned using the same partition scheme as the table:
CREATE INDEX ix_TestTable_PT_Int1_partitioncol
  ON TestTable_PT (Int1, partitioncol)
  ON DateRangePScheme (partitioncol)
It is possible to omit the partition scheme specification in this statement if you want the index to be aligned with the table. Note that an index doesn't have to use the same partition scheme and partition function to be aligned with the table. As long as an index is partitioned based on the same data type, has the same number of partitions as the table and each partition has the same data boundaries as the table, the index will be aligned.
The partitioning key of an index doesn't have to be part of the index key, so you could use the same partition scheme to create an index that does not reference partitioncol as its index key. For example, the following statement is valid:
CREATE INDEX ix_TestTable_PT_Int2
  ON TestTable_PT (Int2)
  ON DateRangePScheme (partitioncol)
You can also use a partition strategy for an index that is completely different from the underlying table's partition scheme and partition function. Some examples of where this makes sense are:
The table isn't partitioned, but you wish to partition the index.
The table is partitioned but you would like to collocate the index data with other tables' indexes because these tables will be frequently joined.
You have a unique index you wish to partition and the index key isn't part of the table's partition function.
Table and index partition metadata for the TestTable_PT table can be retrieved using the following statement:
SELECT OBJECT_NAME([object_id]) AS table_name, *
  FROM sys.partitions
  WHERE [object_id] = OBJECT_ID('dbo.TestTable_PT')
  ORDER BY index_id, partition_number
Results (the identifiers and row counts will differ in your database):
table_name partition_id object_id index_id partition_number hobt_id rows
TestTable_PT 72057594052411300 1294627655 0 1 72057594052411300 2026000
TestTable_PT 72057594052476900 1294627655 0 2 72057594052476900 5354000
TestTable_PT 72057594052542400 1294627655 0 3 72057594052542400 48886000
TestTable_PT 72057594052608000 1294627655 0 4 72057594052608000 64530000
TestTable_PT 72057594052673500 1294627655 0 5 72057594052673500 0
TestTable_PT 72057594052739000 1294627655 3 1 72057594052739000 2026000
TestTable_PT 72057594052804600 1294627655 3 2 72057594052804600 5354000
TestTable_PT 72057594052870100 1294627655 3 3 72057594052870100 48886000
TestTable_PT 72057594052935600 1294627655 3 4 72057594052935600 64530000
TestTable_PT 72057594053001200 1294627655 3 5 72057594053001200 0
It's important to note that in order to enable partition switching, all indexes on the table must be aligned. Partitioned indexes are normally implemented for query performance, whereas partition switching yields great benefits in a large table's manageability. So whether you align indexes with the underlying table depends on whether you are implementing table and index partitioning primarily for performance tuning or for manageability.
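Partition switching itself is worth a quick illustration: a whole partition can be switched out to an empty staging table almost instantly, because it is a metadata-only operation - no rows are physically moved. A minimal sketch, assuming the hypothetical staging table below matches TestTable_PT's structure (and any indexes on it) and resides on the source partition's filegroup:
CREATE TABLE [dbo].[TestTable_PT_Staging]
([pkcol] [int] NOT NULL,
 [Int1] [int] NULL,
 [Int2] [int] NULL,
 [TestName] [varchar](50) NULL,
 [partitioncol] DateTime)
  ON [FG_2012]
GO
-- Switch partition 1 (rows dated on or before December 31st, 2012) out of the partitioned table
ALTER TABLE dbo.TestTable_PT SWITCH PARTITION 1 TO dbo.TestTable_PT_Staging
GO
-- The staging table can now be archived, bulk-exported or dropped without touching TestTable_PT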