THE SQL Server Blog Spot on the Web


SQLOS Team

A blog for members of the SQL Server SQLOS team to share information and get your feedback.

  • MSA accounts used with SQL

    Here's a note from Anne Labbe, a Premier Field Engineer in Charlotte who's been looking into managed service accounts.

    After painstaking research and tracking this down to the product team, here is what I found:

    1. Managed Service Accounts (MSA) are different from Group Managed Service Accounts (gMSA). The reason I'm calling this out is that many of the references floating around are for gMSA, not MSA.

    2. gMSA are not yet supported for SQL Server. gMSA exist and are available and supported in Windows Server 2012 and higher, so from an OS perspective they are fully supported; SQL Server simply does not support them yet.

    3. MSA are supported from Windows Server 2008 R2 onward for specific applications and specific purposes.

    4. MSA are supported from SQL Server 2012 onward for running SQL Server service accounts (all SQL Server services), where they are confined to a SINGLE machine. This means the account can NOT be used across multiple machines.

    The confusion came (for me at least) because the installation code was changed to check for the use of MSA. Since that check was not in the setup for SQL Server 2008, I was able to install an instance using an MSA. The product team has reiterated that although I could do this, it is NOT supported and has NOT BEEN TESTED.

    - Anne

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this post is to demonstrate how to automate, through PowerShell, connecting to multiple SQL Server deployments in Windows Azure Virtual Machines. We will open (and close) a TCP port in the Windows firewall from a remote PowerShell session to the virtual machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.
     
    Scenario 1: VMs connected through the same Cloud Service

    Two virtual machines are configured in the same cloud service, each running its own SQL Server instance.

    Both VMs are configured with remote PowerShell enabled so that commands can be run against them remotely in order to reconfigure them to allow incoming SQL Server connections from a remote VM or on-premises machine(s).

    Note: RDP (Remote Desktop Protocol) remains enabled on both VMs by default so we can connect to them and verify the connections to the SQL instances; this is for demo purposes only and is not actually required.

    Step 1 – Provision VMs and Configure Ports

     

    Provision VM1, named DemoVM1, as follows (see example screenshots below if using the portal):

     

    Provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal):

    After provisioning the two VMs above, here are the default port configurations:

    Step 2 – Verify / Confirm the TCP port used by the Database Engine

    By default, the port is 1433; this can be changed to a different port number if desired.

     

    1. RDP to each of the VMs created above; this also ensures the VMs complete Sysprep and finish their configuration

    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP - > IP Addresses

     

    3. Confirm the port number used by SQL Server Engine; in this case 1433

    4. Update the authentication mode from Windows Authentication to Mixed Mode

     

    5. Restart the SQL Server service for the change to take effect

    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2

    Step 3 – Remote Powershell to DemoVM1

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open 1433 port in the Windows firewall

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

    netsh advfirewall firewall show rule name=DemoVM1Port

    Rule Name:                            DemoVM1Port

    ----------------------------------------------------------------------

    Enabled:                              Yes

    Direction:                            In

    Profiles:                             Domain,Private,Public

    Grouping:                            

    LocalIP:                              Any

    RemoteIP:                             Any

    Protocol:                             TCP

    LocalPort:                            1433

    RemotePort:                           Any

    Edge traversal:                       No

    Action:                               Allow

    Ok.
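If you script these checks rather than eyeballing them, the netsh output above is plain key/value text that is easy to parse. Here is a minimal sketch in Python (illustrative only; the field names come from the sample output above, and parse_netsh_rule is a hypothetical helper, not part of this post):

```python
def parse_netsh_rule(output):
    """Parse the key/value lines of 'netsh advfirewall firewall show rule' output."""
    rule = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip():  # skip separators and empty fields
            rule[key.strip()] = value.strip()
    return rule

# Abbreviated sample taken from the output shown above.
sample = """Rule Name:   DemoVM1Port
Enabled:     Yes
Direction:   In
Protocol:    TCP
LocalPort:   1433
Action:      Allow"""

rule = parse_netsh_rule(sample)
print(rule["LocalPort"])  # 1433
```

A monitoring script could use this to confirm the rule is Enabled and scoped to TCP 1433 before attempting a SQL connection.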

    Step 5 – Now connect from DemoVM2 to DB instance in DemoVM1
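To verify this step without installing SQL Server client tools, a raw TCP probe of port 1433 is enough to confirm the firewall rule is in effect. A hedged sketch in Python (a reachability check only; it does not authenticate to SQL Server, and the host name below is just the example VM from this post):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After step 4 this should report True; after the firewall rule is
# deleted in step 6 it should report False again.
# print(is_port_open("demovm1", 1433))
```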

    Step 6 – Close port 1433 in the Windows firewall

    netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

    Deleted 1 rule(s).

    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    No rules match the specified criteria.

     

    Step 7 – Try to connect from DemoVM2 to DB Instance in DemoVM1
     

    Because port 1433 was closed in the Windows Firewall on the VM1 machine (in step 6), we can no longer connect remotely from VM2 to VM1.

    Scenario 2: VMs provisioned in different Cloud Services

    Two virtual machines are configured in different cloud services, each running its own SQL Server instance. Both VMs are configured with remote PowerShell enabled so that commands can be run against them remotely in order to reconfigure them to allow incoming SQL Server connections from a remote VM or on-premises machine(s).

    Note: RDP (Remote Desktop Protocol) remains enabled on both VMs by default so we can connect to them and verify the connections to the SQL instances; this is for demo purposes only and is not actually needed.

    Step 1 – Provision new VM3

    Provision VM3, named DemoVM3, as follows (see example screenshots below if using the portal):

    After provisioning is complete, here are the default port configurations:

    Step 2 – Add a public port to VM1 so its DB instance can be reached from VM3

    Since VM3 and VM1 are not in the same cloud service, we need to specify the full DNS address, including the public port, when connecting between the machines. In this case we add a public port, 57000, linked to private port 1433; it will be used later to connect to the DB instance.
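SQL Server client tools expect a non-default port to be appended to the server name with a comma, so the address for this setup is the cloud service DNS name plus the public port. A trivial illustration (the helper name is invented for this sketch):

```python
def sql_server_address(dns_name, public_port):
    """Build the 'server,port' string SQL Server clients expect for a non-default port."""
    return "{0},{1}".format(dns_name, public_port)

print(sql_server_address("condemo.cloudapp.net", 57000))
# condemo.cloudapp.net,57000
```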

    Step 3 – Remote Powershell to DemoVM1

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

     

    Step 4 – Open 1433 port in the Windows firewall

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    Rule Name:                            DemoVM1Port

    ----------------------------------------------------------------------

    Enabled:                              Yes

    Direction:                            In

    Profiles:                             Domain,Private,Public

    Grouping:                            

    LocalIP:                              Any

    RemoteIP:                             Any

    Protocol:                             TCP

    LocalPort:                            1433

    RemotePort:                           Any

    Edge traversal:                       No

    Action:                               Allow

    Ok.

     

    Step 5 – Now connect from DemoVM3 to DB instance in DemoVM1

    RDP into VM3, launch SSMS (SQL Server Management Studio), and connect to VM1's DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows firewall

    netsh advfirewall firewall delete rule name=DemoVM1Port

     

    Output:

    Deleted 1 rule(s).

    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    No rules match the specified criteria. 

    Step 7 – Try to connect from DemoVM3 to DB Instance in DemoVM1
     

    Because port 1433 was closed in the Windows Firewall on the VM1 machine (in step 6), we can no longer connect remotely from VM3 to VM1.

    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many virtual machine and SQL Server management tasks. In this blog, we demonstrated how to start a remote PowerShell session and reconfigure a virtual machine's firewall to allow (or disallow) SQL Server connections.

    References

    SQL Server in Windows Azure Virtual Machines

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • SQL Server 2014 CTP1 now available for download as well as in Windows Azure Image Gallery

    Exciting news - At TechEd Europe 2013 keynote today, we announced that SQL Server 2014 CTP1 is now available for download as well as in Windows Azure Image Gallery. Try it out now and give us feedback.

    http://www.microsoft.com/en-us/sqlserver/sql-server-2014.aspx

    http://europe.msteched.com/#fbid=bdRdsIPwIgn - Watch the Keynote again

     

    thanks,

    Madhan

     

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • SQL Server in Windows Azure Infrastructure Services: Performance Guidance Whitepaper and other TechEd NA 2013 updates

    Greetings,

    We are excited to announce the release of our new whitepaper, Performance Guidance for SQL Server in Windows Azure Virtual Machines. This paper provides guidance for optimizing your SQL Server workloads running in Windows Azure Infrastructure Services. Additionally, if you are attending TechEd North America 2013 in New Orleans this week, be sure to stop by DBI-B311 on Day 4 (details below) to learn these strategies in person!

    We also have a number of other sessions at TechEd North America 2013 where you can learn more about running SQL Server in Windows Azure Virtual Machines. Please see the full list of breakout sessions below.

     

    Day 1 - Monday June 3rd

    4:45 PM

    DBI-B201: Lap Around Windows Azure SQL Database and Microsoft SQL Server in Windows
      Azure Virtual Machines

     

    Day 2 - Tuesday June 4th

    3:15 PM

    MDC-B406: Achieve High Availability with Microsoft SQL Server on Windows Azure Virtual
      Machines

    5:00 PM

    DBI-B206: Private Cloud – Better Together: Microsoft SQL Server 2012, Windows Server
      2012 and System Center 2012

     

    Day 3 - Wednesday June 5th

    8:30 AM

    DBI-B316: Running BI Workloads on Windows Azure Virtual Machines

    10:15 AM

    DBI-B306: Microsoft SQL Server High Availability and Disaster Recovery on Windows Azure

    5:00 PM

    DBI-B329: Overview, Best Practices and Lessons Learned on Microsoft SQL Server in
      Windows Azure Virtual Machines

     

    Day 4 - Thursday June 6th

    10:15 AM

    DBI-B311: Performance Tuning Microsoft SQL Server in Windows Azure Virtual Machines

    1:00 PM

    DBI-B310: Windows Azure Virtual Machines and SQL Unplugged

    2:45 PM

    MDC-B361: Best Practices from Real Customers: Deploying to Windows Azure Infrastructure
      Services (IaaS)

     

    Hope to see you there!

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Enhancements to SQL Server Backup to Cloud in SQL Server 2012 SP1 CU4

    Cumulative Update 4 for SQL Server 2012 Service Pack 1 includes enhancements that improve the SQL Server Backup and Restore to Cloud user experience. The enhancements include performance improvements, cleanup of invalid blobs when backups fail or are interrupted, and PowerShell support.

    To request this update, see this knowledgebase article.

    Performance Improvements:

    Performance improvements include faster return of restore metadata operations.  The affected operations include:

    • RESTORE HEADERONLY
    • RESTORE FILELISTONLY
    • RESTORE LABELONLY

    Cleaning up invalid blobs after an interrupted or failed backup:

    If a backup operation fails, it may result in an incomplete or invalid blob. The SQL Server Backup to URL process attempts to clean up blobs that result from a failed backup. However, if the backup fails due to a prolonged or sustained network connectivity failure, the Backup to URL process may not be able to gain access to the blob, and the blob may remain orphaned. In such cases, you have to manually release the lease and delete the blob. This topic has the details on how.

     PowerShell Support:

    This cumulative update also includes PowerShell support for the backup to cloud feature, with changes to existing cmdlets and four new cmdlets. Following is a brief overview of the changes, with sample syntax.

    For more information and code examples, see  the following topics:

     Use PowerShell to Backup Multiple Databases to Windows Azure Blob Storage Service

     SQL Server Backup and Restore to Windows Azure Blob Storage Service

     

    Existing cmdlet changes:

    The Backup-SqlDatabase and Restore-SqlDatabase cmdlets have new parameters to support backing up to, and restoring from, a Windows Azure storage container. The new parameters are:

    BackupContainer:

    Using this parameter, you can specify the URL of the container as the location for your backup files and let the Backup to URL process generate the file names. Alternatively, you can use the BackupFile parameter to specify both the location and the file name. You can also point this parameter at a folder on a disk backup device. This parameter is especially useful when backing up multiple databases on a given instance.

    SqlCredential:

    This parameter allows you to specify the name of the SQL Server credential object that stores the Windows Azure storage authentication information.

    Sample Syntax:

    • This command backs up all databases on the server instance 'Computer\Instance' to the Windows Azure Blob storage container using the BackupContainer parameter.

    Get-ChildItem SQLSERVER:\SQL\Computer\Instance\Databases | Backup-SqlDatabase -BackupContainer "https://storageaccountname.blob.core.windows.net/containername" -SqlCredential "SQLCredentialName"

     

    • This command creates a full backup of the database 'MyDB'. It uses the BackupFile parameter to specify the location (URL) and the backup file name. The SqlCredential parameter is used to specify the name of the SQL Server credential.

    Backup-SqlDatabase -ServerInstance "Computer\Instance" -Database "MyDB" -BackupFile "https://storageaccountname.blob.core.windows.net/containername/MyDB.bak" -SqlCredential "SQLCredentialName"

     

    • This command restores the full database 'MyDB' from the file on the Windows Azure Blob storage service to a SQL Server instance.

     Restore-SqlDatabase -ServerInstance Computer\Instance -Database MyDB -BackupFile "https://mystorageaccountname.blob.core.windows.net/container/MyDB.bak"  -SqlCredential "mySqlCredential"
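To make the BackupContainer/BackupFile distinction concrete: with BackupFile you name the blob yourself, while with BackupContainer the Backup to URL process generates file names inside the container. The sketch below (Python, purely illustrative; the real cmdlet chooses its own naming scheme, and the timestamped name shown is invented) contrasts the kind of URLs the two options produce:

```python
from datetime import datetime

CONTAINER = "https://storageaccountname.blob.core.windows.net/containername"

def backup_file_url(container_url, file_name):
    """BackupFile style: you supply the blob name yourself."""
    return container_url.rstrip("/") + "/" + file_name

def generated_backup_url(container_url, database, when):
    """BackupContainer style: a name is generated per database.
    (Hypothetical scheme; the actual generated names may differ.)"""
    return "{0}/{1}_{2:%Y%m%d%H%M%S}.bak".format(container_url.rstrip("/"), database, when)

print(backup_file_url(CONTAINER, "MyDB.bak"))
print(generated_backup_url(CONTAINER, "MyDB", datetime(2013, 6, 1, 12, 0, 0)))
```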

    New cmdlets:

    New-SqlCredential

    The New-SqlCredential cmdlet creates a new SQL Server Credential object. The SQL Server credential is required when backing up to or restoring from the Windows Azure storage service, and is used to store the Windows Azure storage account name and access key information.

    Sample Syntax:

    • This command creates SQL Credential “mysqlcredential” on the current instance of SQL Server.

    New-SqlCredential -Name "mysqlcredential" -Identity "storageAccount" -Secret "storageAccessKey"

     

    Set-SqlCredential

    You can set the Identity and password properties of a SQL credential object using this cmdlet. This cmdlet supports the following two modes of operation:

    By specifying the path or location of the credential, including the credential name, using the -Path parameter.

    By passing a Smo.Credential object using the -InputObject parameter.

    Sample Syntax:

    • This command sets the identity of MySqlCredential to ‘mystorageaccount’.

    Set-SqlCredential -Path "SQLSERVER:\SQL\Computer\Instance\Credentials\MySqlCredential" -Identity "mystorageaccount"

    • This command retrieves the credential object using the Get-SqlCredential cmdlet and then pipes it to the Set-SqlCredential cmdlet to set the identity of MySqlCredential to 'mystorageaccount'.

    $cred = Get-SqlCredential -Name MySqlCredential

    $cred | Set-SqlCredential -Identity "mystorageaccount"

    Get-SqlCredential

    The Get-SqlCredential cmdlet returns the SQL credential object. This cmdlet supports the following modes of operation:

    By specifying the name of the SQL credential and the path of the instance.

    By specifying the name of the SQL Credential and the server object.

     

    Sample Syntax:

    • This command returns the credential object.

    Get-SqlCredential -Name "mycredential"

     

    Remove-SqlCredential

    The Remove-SqlCredential cmdlet deletes a SQL Server credential object. This cmdlet supports the following two modes of operation:

    By specifying the path or location of the credential, including the credential name, using the -Path parameter.

    By passing a Smo.Credential object using the -InputObject parameter.

     

    Sample Syntax:

    • This command retrieves the credential object using the Get-SqlCredential cmdlet and then pipes it to the Remove-SqlCredential cmdlet to delete the SQL credential 'MySqlCredential'.

    $cred = Get-SqlCredential -Name "MySqlCredential"

    $cred | Remove-SqlCredential

     

    This blog post is about the changes that were made in the Cumulative Update 4 for SQL Server 2012 SP1 release and assumes some amount of familiarity with SQL Server native backup to cloud functionality released in SQL Server 2012 SP1 CU2.  For information on  SQL Server Backup and Restore to Cloud, see the following articles:

     

    SQL Server Backup and Restore to Cloud Simplified

     

    Getting Started Tutorial

     


    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • SQL Server in Windows Azure Infrastructure Services – Updated Documentation and Best Practices for GA, Upcoming Blogs

    It’s been just over a week since Windows Azure announced the GA of Infrastructure Services, marking the beginning of a fully supported Infrastructure as a Service in Windows Azure, with SQL Server as a major component.

    Pre-installed SQL Server VMs are available for pay-per-hour usage in the Windows Azure gallery. Currently Enterprise, Standard and Web edition VMs running on Windows Server 2008 R2 SP1 are available, with more SQL Server editions coming soon. SQL Server editions running on Windows Server 2012 images are also on the way. For more details on the scenarios and benefits of running SQL Server workloads on Windows Azure Virtual Machines, please visit the SQL Server blog post here.

    We are very happy to announce that updated technical documentation for deploying and running SQL Server in Windows Azure Infrastructure Services is now available online. When deploying SQL Server in Windows Azure Virtual Machines, we recommend that you follow the detailed guidance given in the new SQL Server in Windows Azure Virtual Machines documentation in the library. This documentation includes a series of articles and tutorials that provide detailed guidance.

    Over the next few weeks we are planning a series of blog posts to provide more detailed information on specific SQL Server topics. Subjects in the pipeline include: high availability, disaster recovery, performance, application migration and security. Let us know what topics you would like to see covered in this series by adding comments to this post.

    SQL Server Team 

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • SQL Server Backup to Cloud – Managing Interrupted backups

    Writer: Karthika Raman

    Technical Reviewers: Guy Bowerman, Pat Schaefer, Andrew Cherry

     

    On January 24, 2013, the SQL Server engineering team released new functionality in SQL Server 2012 SP1 CU2, enabling SQL Server native backup to cloud. The steps for creating a SQL Server backup to Windows Azure Blob storage are simple, but if a backup is interrupted it can leave behind blob files that require additional steps to delete. Some of this information, along with other best practices and troubleshooting tips, is addressed in the Best Practices topic in SQL Server Books Online. In this blog post, we focus on the interrupted backup scenario: identifying locked blobs that might be partial or corrupt as a result of interrupted backups, and deleting them.

    The Blobs, Blob Leases, and SQL Server section provides background information about blobs and leases in relation to SQL Server backup to cloud. The Managing Interrupted Backups section discusses managing interrupted backups and includes sample PowerShell code you can use to get started.

    Blobs, Blob Leases, and SQL Server

    In order to get exclusive write access to a blob, a lease is acquired. Acquiring a lease helps avoid accidental overwrites or deletes when another process is already accessing the blob. There are two types of blob leases. One type has a set duration, anywhere from 15 to 60 seconds; the lease is active for that duration, but can be reacquired before it expires (by specifying the existing lease ID) to complete the backup or restore process. The other type, with no set duration, is an infinite lease. Infinite leases remain active until explicitly released or broken. In effect, an infinite lease on a blob acts like a lock.

    SQL Server acquires an infinite lease for backup and restore processes. For backup, a unique lease ID is used; for restore, the well-known lease ID 'BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2' is used. Once the backup or restore process completes successfully, the lease is released.
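The locking behavior described above can be sketched as a toy model (a simulation for illustration only; real leases are managed by the Azure Blob service, and the class below is invented for this sketch, with only the well-known restore lease ID taken from the post):

```python
class BlobLease:
    """Toy model of the infinite-lease locking described above."""
    RESTORE_LEASE_ID = "BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2"

    def __init__(self):
        self.lease_id = None  # None means the blob is unleased

    def acquire(self, lease_id):
        """An infinite lease: succeeds only if no other lease is active."""
        if self.lease_id is None or self.lease_id == lease_id:
            self.lease_id = lease_id
            return True
        return False  # conflict, like an HTTP 409

    def release(self):
        self.lease_id = None

    def is_locked(self):
        return self.lease_id is not None

# A backup takes a unique lease; until it is released or broken,
# a restore cannot acquire the well-known restore lease ID.
lease = BlobLease()
assert lease.acquire("unique-backup-lease")
assert not lease.acquire(BlobLease.RESTORE_LEASE_ID)  # blocked, 409-style conflict
lease.release()                                       # or break the lease
assert lease.acquire(BlobLease.RESTORE_LEASE_ID)      # restore can now proceed
```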

    What happens when a backup or restore is interrupted?

    During a restore, the restore process always attempts to release the lease unless the network is interrupted. If the lease remains active, overwrites or deletes cannot be performed until the lease is broken or released through a subsequent restore, so an interrupted restore rarely requires close monitoring. Interrupted backups, however, present a different scenario.

    If the backup process is interrupted, it can leave a partial or corrupt blob with an active lease that blocks any overwrites or deletes. It also prevents any restores from this file, since the restore process needs to acquire a lease with the well-known lease ID but the existing active backup lease prevents this; that is probably a desirable result, as the file could be partial or corrupt. With an active lease, the blob remains locked until the lease is explicitly broken.

    Managing Interrupted Backups

    As stated in the section above, interrupted backups need to be monitored and managed.  This section walks through the process of monitoring interrupted backups, identifying locked blobs, and breaking the lease using PowerShell scripts.

    Backups can be interrupted for several reasons, such as network failures, cancellation by the user, or a power outage. Because an interrupted backup can leave a partial blob with an active lease, you must first identify such blobs and break the lease before you can overwrite or delete them.

    One way to identify such blobs is to actively monitor backup return codes/errors for interrupted backups. Below is a list of errors returned by the SQL Server backup and restore process that you can use to monitor for blob files with active leases.

    Msg 3202, Level 16, State 1, Line 1

    Write on "https://mystorage.blob.core.windows.net/mycontainer/test.bak" failed: Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (412) There is currently a lease on the blob and no lease ID was specified in the request.

    BackupIoRequest::ReportIoError: read failure on backup device 'https://mystorage.blob.core.windows.net/mycontainer/test.bak'. Operating system error Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (409) Conflict..

    This method won’t catch cases where the backup process is aborted or the machine goes down.

    Another way is to run a script periodically to look for blobs with active leases in the storage account. The PowerShell script included here is a good starting point: it establishes an authenticated connection to the storage account, looks for all files in a specified container that are in a 'locked' state, and filters out restore leases using the well-known lease ID.

    Below is a PowerShell example that identifies locked blobs and breaks their leases. A downloadable version of the code is available in the TechNet code gallery.

    Important Note:

    This example is intended to show how to break a lease. Running it while a backup is in progress will cause the backup to fail, because it will break the lease that SQL Server acquired to do the backup. Before running this script or scheduling it, ensure that no backup is running at the same time.

    param(
        [Parameter(Mandatory=$true)]
        [string]$storageAccount,
        [Parameter(Mandatory=$true)]
        [string]$storageKey,
        [Parameter(Mandatory=$true)]
        [string]$blobContainer,
        [Parameter(Mandatory=$true)]
        [string]$storageAssemblyPath
    )

    # Well-known restore lease ID
    $restoreLeaseId = "BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2"

    # Load the storage assembly without locking the file for the duration of the PowerShell session
    $bytes = [System.IO.File]::ReadAllBytes($storageAssemblyPath)
    [System.Reflection.Assembly]::Load($bytes)

    $cred = New-Object 'Microsoft.WindowsAzure.Storage.Auth.StorageCredentials' $storageAccount, $storageKey
    $client = New-Object 'Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient' "https://$storageAccount.blob.core.windows.net", $cred
    $container = $client.GetContainerReference($blobContainer)

    # List all the blobs
    $allBlobs = $container.ListBlobs()

    # Filter blobs whose lease status is "Locked"
    $lockedBlobs = @()
    foreach($blob in $allBlobs)
    {
        $blobProperties = $blob.Properties
        if($blobProperties.LeaseStatus -eq "Locked")
        {
            $lockedBlobs += $blob
        }
    }

    if ($lockedBlobs.Count -eq 0)
    {
        Write-Host "There are no blobs with locked lease status"
    }

    if($lockedBlobs.Count -gt 0)
    {
        Write-Host "Breaking leases"
        foreach($blob in $lockedBlobs)
        {
            try
            {
                $blob.AcquireLease($null, $restoreLeaseId, $null, $null, $null)
                Write-Host "The lease on $($blob.Uri) is a restore lease"
            }
            catch [Microsoft.WindowsAzure.Storage.StorageException]
            {
                if($_.Exception.RequestInformation.HttpStatusCode -eq 409)
                {
                    Write-Host "The lease on $($blob.Uri) is not a restore lease"
                }
            }

            Write-Host "Breaking lease on $($blob.Uri)"
            $blob.BreakLease($(New-TimeSpan), $null, $null, $null) | Out-Null
        }
    }

    How to test the interrupted backup scenario using the script.

    The following is a self-guided walkthrough of creating a locked blob by interrupting a backup process, and using the script to release the lease to successfully delete the blob file that resulted from the interrupted backup.

    • The script prompts for the storage account name, storage access key, container name, and the path and file name of the Windows Azure Storage assembly. You can provide these interactively, or replace the param() section like this:

    #provide values for parameters

    $storageAccount = "mycloudstorage"

    $storageKey = "<primary/secondary access key value of the storage account>"

    $blobContainer = "sqlbackup"

    $storageAssemblyPath = "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Microsoft.WindowsAzure.Storage.dll"

    Note:  The storage assembly is stored in the Binn folder of the SQL Server 2012 SP1 CU2 instance. In the above example, it is the default SQL Server instance.

    • Create a SQL Server Backup to Windows Azure blob storage which completes successfully.

    • Run the PowerShell script.  It returns a message that says that there are no locked blobs.

    • Now run another SQL Server backup, but stop it before it completes to simulate the interrupted backup scenario.

    • Check to see if the file was created on the storage account.

    • Try to overwrite the file – see error:

    • Try to delete – see error:

    • Run the PowerShell script again. This time you should see that the locked blob is identified and, after the script checks that it is not a restore lease, the lease is broken.

    • Try overwriting or deleting the file.  It should work!

     

    Here are some ways you can enhance this script to automate this process:

    • Output the list of all locked blobs to a report (e.g., an Excel file).
    • Break the active leases and report the list.
    • Break the active leases and report the list, but remove the filter for restore leases.
    • Delete the blobs with active backup leases and report the list.

     

    NOTE: If you are running PowerShell 2.0, you may have problems loading the Microsoft.WindowsAzure.Storage.dll assembly. We recommend that you upgrade to PowerShell 3.0 to solve the issue. You may also use the following workaround for PowerShell 2.0:

    Create or modify the powershell.exe.config file to load .NET 2.0 and .NET 4.0 assemblies at runtime with the following:

    <?xml version="1.0"?>

    <configuration>

     <startup useLegacyV2RuntimeActivationPolicy="true">

     <supportedRuntime version="v4.0.30319"/>

     <supportedRuntime version="v2.0.50727"/>

     </startup>

    </configuration>

    For more information on best practices, see SQL Server Backup and Restore Best Practices (Windows Azure Blob Storage Service)

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • When will my SQL Server Evaluation Edition expire?

    The SQL Server Evaluation Edition is a great way to get a fully functional, free instance of SQL Server for learning and developing solutions. The edition has a built-in expiry of 6 months from the time you install it. I often install an Evaluation edition on a machine to set up a test or demo, and then 6 months later it comes as a complete surprise when it suddenly stops working.

     

    Needless to say, this can be mildly inconvenient, as it typically expires on the day you plan to demo it. It would be nice to get some kind of warning.

    This problem is exacerbated by the SQL Server virtual machine images in the Windows Azure gallery. While Azure VMs are in preview, the SQL Server platform images are created with Evaluation edition, but the 6-month counter starts ticking when the platform image is created rather than when you deploy your VM. This came to a head recently when a few months went by between platform image refreshes, so people deploying SQL VM images in Windows Azure were seeing them expire a few days later. See this thread for more details. Since then the SQL Server platform image has been refreshed, so it won't be expiring for several months.


    The problem with Azure VMs will go away as fully licensed pay-per-hour images and free trial offers become available.

    What happens when your SQL instance does expire? What are the options? Particularly if you've developed a working solution and want to put it into production?

    One option is to purchase a license and enter the license key in SQL Server Installation Center->Maintenance->Upgrade License.

    If you're using an Azure VM another option would be to migrate your application to a new VM.

    One thing we've learned from this is that as long as time-bombed images are being used we need to be better at communicating and setting expectations about their expiration.

    If you're running an Evaluation edition now, how can you tell when it's due to expire? One way is to run this query...  

      

    sp_configure 'show advanced options', 1;
    RECONFIGURE;
    GO
    sp_configure 'Agent XPs', 1;
    RECONFIGURE;
    GO
    DECLARE @daysleft int;
    DECLARE @instancename sysname;
    SELECT @instancename = CONVERT(sysname, SERVERPROPERTY('InstanceName'));
    EXEC @daysleft = xp_qv '2715127595', @instancename;
    SELECT @daysleft AS 'Number of days left';
    GO


    That will give you an answer in days until expiry. As of today, the SQL Server gallery image I launched a few days ago has 122 days left, which gives me a few months to do my demos and then forget that it's going to expire. Hopefully by that time I'll have either upgraded it or started with a refreshed or pay per hour image.
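As a rough cross-check (an addition to the original post, not from it), you can also estimate when the evaluation clock started: the NT AUTHORITY\SYSTEM login is created during setup, so its create_date approximates the installation date. On an Azure gallery image this reflects when the image was built, not when you deployed the VM. The 180-day arithmetic below is an assumption based on the 6-month evaluation period.

```sql
-- Rough cross-check: NT AUTHORITY\SYSTEM is created during setup,
-- so its create_date approximates the instance installation date.
SELECT SERVERPROPERTY('Edition') AS edition,
       create_date AS approximate_install_date,
       DATEADD(day, 180, create_date) AS approximate_expiry_date
FROM sys.server_principals
WHERE name = 'NT AUTHORITY\SYSTEM';
```

If the edition column doesn't say Evaluation, there's nothing to expire.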

    - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Backup and Restore to Cloud Simplified in SQL Server 2012 SP1 CU2

    SQL Server 2012 SP1 Cumulative Update 2 includes enhancements to SQL Server native backup that simplify backup and restore of a SQL Server database to Windows Azure storage. The SQL Server database can be either on an on-premises instance of SQL Server or on an instance of SQL Server running in a hosted environment such as a Windows Azure Virtual Machine. Read the information below to get a brief introduction to the new functionality, and follow the links for more in-depth information.

    To download the update, go to the SQL Release Services Blog or to Download SQL Server 2012 SP1 CU2.

    Overview:

    In addition to disk and tape, you can now use SQL Server native backup functionality to back up your SQL Server database to the Windows Azure Blob storage service. In this release, backup to Windows Azure Blob storage is supported using T-SQL and SMO. SQL Server databases on an on-premises instance of SQL Server, or in a hosted environment such as an instance of SQL Server running in a Windows Azure VM, can take advantage of this functionality.

    Benefits:

    • Flexible, reliable, and limitless off-site storage for improved disaster recovery: Storing your backups on Windows Azure Blob service can be a convenient, flexible and easy to access off-site option. Creating off-site storage for your SQL Server backups can be as easy as modifying your existing scripts/jobs. Off-site storage should typically be far enough from the production database location to prevent a single disaster that might impact both the off-site and production database locations. You can also restore the backup to a SQL Server Instance running in a Windows Azure
      Virtual Machine for disaster recovery of your on-premises database.  By choosing to geo-replicate the Blob storage you have an extra layer of protection in the event of a disaster that could affect the whole region. In addition, backups are available from anywhere and at any time and can easily be accessed for restores.
    • Backup Archive: The Windows Azure Blob Storage service offers a better alternative to the often used tape option to archive backups. Tape storage might require physical transportation to an off-site facility and measures to protect the media. Storing your backups in Windows Azure Blob Storage provides an instant, highly available and durable archiving option.
    • No overhead of hardware management: There is no overhead of hardware management with the Windows Azure storage service. Windows Azure services manage the hardware and provide geo-replication for redundancy and protection against hardware failures.
    • Currently, for instances of SQL Server running in a Windows Azure Virtual Machine, backing up to the Windows Azure Blob storage service can be done by backing up to attached disks (which are themselves stored in Blob storage). However, there is a limit to the number of disks you can attach to a Windows Azure Virtual Machine: 16 disks for an extra-large instance and fewer for smaller instances. By enabling direct backup to Windows Azure Blob storage, you can bypass the 16-disk limit.
    • In addition, the backup file which now is stored in the Windows Azure Blob storage service is directly available to either an on-premises SQL Server or another SQL Server running in a Windows Azure Virtual Machine, without the need for database attach/detach or downloading and attaching the VHD.
    • Cost Benefits: Pay only for the service that is used. Can be cost-effective as an off-site and backup archive option.

    The Windows Azure pricing calculator can help estimate your costs.

    Storage: Charges are based on the space used and are calculated on a graduated scale and the level of redundancy. For more details, and up-to-date information, see the Data Management section of the Pricing Details article.

    Data Transfers: Inbound data transfers to Windows Azure are free. Outbound transfers are charged for the bandwidth use and calculated based on a graduated region-specific scale. For more details, see the Data Transfers section of the Pricing Details article.

    How it works:

    Backup to Windows Azure Storage is engineered to behave much like a backup device (disk/tape). Using the Microsoft Virtual Backup Device Interface (VDI), Windows Azure Blob storage is coded like a “virtual backup device”, and the URL format used to access the Blob storage is treated as a device. The main reason for supporting Azure storage as a destination device is to provide a consistent and seamless backup and restore experience, similar to what we have today with disk and tape. When the backup or restore process is invoked and Windows Azure Blob storage is specified using the URL “device type”, the engine invokes a VDI client process that is part of this feature. The backup data is sent to the VDI client process, which sends it on to Windows Azure Blob storage.

    As previously mentioned, the URL is much like a backup device used today, but it is not a physical device, so there are some limitations.  For a full list of the supported options, see SQL Server Backup and Restore with Windows Azure Blob Storage Service.

    How to use it

    To write a backup to Windows Azure Blob storage you must first create a Windows Azure storage account, and then create a SQL Server credential to store the storage account authentication information. You can then issue backup and restore commands using Transact-SQL or SMO.

    The following Transact-SQL examples illustrate creating a credential, doing a full database backup and restoring the database from the full database backup. For a complete walkthrough of creating a storage account and performing a simple restore, see Tutorial: Getting Started with SQL Server Backup and Restore to Windows Azure Blob Storage Service.

     

    Create a Credential

    The following example creates a credential that stores the Windows Azure Storage authentication information.

     

    IF NOT EXISTS
    (SELECT * FROM sys.credentials WHERE credential_identity = 'mycredential')
    CREATE CREDENTIAL mycredential
    WITH IDENTITY = 'mystorageaccount'
    ,SECRET = '<storage access key>';

     

    Backing up a complete database

    The following example backs up the AdventureWorks2012 database to the Windows Azure Blob storage service.

     

    BACKUP DATABASE AdventureWorks2012
    TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
    WITH CREDENTIAL = 'mycredential'
    ,STATS = 5;

     

     

    Restoring a database

    To restore a full database backup, use the following command.

     

    RESTORE DATABASE AdventureWorks2012
    FROM URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
    WITH CREDENTIAL = 'mycredential', STATS = 5;

    -- Note that options like STATS, COMPRESSION, FORMAT, etc. can be used or omitted, as for any T-SQL BACKUP command.
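If you restore the cloud backup onto a different machine (for example a new Azure VM), the original file paths may not exist there. A sketch using the standard WITH MOVE option follows; the logical file names and target paths below are assumptions, not values from the original post.

```sql
-- Sketch: restoring the same backup onto another instance, relocating files.
-- The logical names ('AdventureWorks2012_Data'/'_Log') and target paths are
-- assumed; verify them first with:
--   RESTORE FILELISTONLY FROM URL = '<backup URL>' WITH CREDENTIAL = 'mycredential';
RESTORE DATABASE AdventureWorks2012
FROM URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'mycredential',
     MOVE 'AdventureWorks2012_Data' TO 'F:\Data\AdventureWorks2012.mdf',
     MOVE 'AdventureWorks2012_Log'  TO 'F:\Log\AdventureWorks2012_log.ldf',
     STATS = 5;
```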

     

    Resources:

    Concepts/Script Examples

    Best Practices /Troubleshooting

    Getting Started Tutorial

    Please send your feedback on the feature and/or documentation to karaman@microsoft.com or guybo@microsoft.com.

     


    - Karthika Raman

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at mapfdbk@microsoft.com or through one of our surveys.

    Now, let’s walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

      • Perform an inventory
      • View the Windows Azure VM Readiness results and report
      • Collect performance data to determine VM sizing
      • View the Windows Azure Capacity results and report

    Perform an inventory:

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.


    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t need to completely inventory a machine, just select the SQL Server scenario. Click Next to continue.


    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue.


    Description of Discovery Methods:

      • Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
      • Windows networking protocols --  This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
      • System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
      • Scan an IP address range -- This method allows you to specify the starting and ending addresses of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses within the range aren’t in use.
      • Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
      • Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. These do not need to be domain accounts, but they do need local administrator rights. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added.

    NOTE:

    The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.


    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to try the credentials when connecting to the machine and SQL Server. Generally, just accept the defaults and click Next.


    6. On the Enter Computers Manually page, click Create to open a dialog where you can enter one or more computer names.


    7. On the Summary page confirm your settings and then click Finish.


    After clicking Finish, the inventory process will start.


    Windows Azure Readiness results and report

    After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes.


    If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report.


    After the report is generated, select View | Saved Reports and Proposals to view the location of the report.


    Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has columns for the operating system and SQL Server assessments, and provides a recommendation on how to resolve the issue if a component is not supported.


    Collect Performance Data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.


    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed.


    You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.


     

    MAP Toolkit 8.0 Beta is available for download here

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at mapfdbk@microsoft.com or through one of our surveys.

    Useful References:

     

     

    Peter Saddow

    Senior Program Manager – MAP Toolkit Team

  • SQLOS and Cloud Infrastructure sessions at PASS Summit 2012

    The PASS Summit 2012, the largest yet, is in full swing. Here's a summary of this week's sessions on cloud infrastructure and SQLOS topics. Some of these were today, and you can catch the recordings. One more session takes place on Friday covering SQL Server solution patterns in Windows Azure VMs...

    Also, catch Thursday's keynote with Quentin Clark which will feature a cool IaaS demo!

     

    SQL Server in Windows Azure VM Sessions

    CLD-309-A SQLCAT: Best Practices and Lessons Learned on SQL Server in an Azure VM

    Steve Howard, Arvind Ranasaria - Wednesday 11/6 10:15

    This session looked at some best practices to optimize Networking, Memory, Disk IO and high availability based on lessons learned during SQLCat work with customer deployments. Well worth catching the recording.

     

    SQL Server in Azure VM patterns: Hybrid Disaster Recovery, data movement and BI

    Guy Bowerman, Peter Saddow, Michael Washam, Ross LoForte - Friday 11/9 9:45 Rm 613

    [Note: In the guides this has an outdated title.]

    This session has a focus on SQL Server Azure VM solutions. Starting with the basics and then going deeper into:

    - New features in the Microsoft Assessment and Planning Toolkit 8.0 to help plan and size SQL VM migrations.
    - A Look at a Windows Azure VM SQL Server app making use of load balancing and SQL Server high availability features.
    - A BI case study running SQL BI components in Azure VMs and making use of Windows 8 tiles.
    - A training class in a VM case study.

     

    SQLOS Sessions

    DBA-500-HD Inside SQLOS 2012 (half-day session)

    Bob Ward - Wednesday 11/6 1:30pm

    Bob Ward from CSS applies his wealth of experience to look at the internals of SQLOS and what's changed in the various SQL 2012 components, including memory, resource governor, scheduler.

     

    DBA-403-M: SQLCAT: Memory Manager Changes in SQL Server 2012

    Gus Apostol, Jerome Halmans - 1:30pm

    Covers the redesigned SQLOS memory manager in SQL Server 2012, including the new any-size page allocator (and all that implies), DMVs, and demos. Not sure why this was placed at the same time as the SQLOS half-day session, but since it's recorded it's available for catch-up.

     

    - Guy

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Resource Governor in SQL Server 2012 technical article published

    The SQL Server Technical article entitled Resource Governor in SQL Server 2012 has just been published.

    The article focuses on the SQL Server 2012 enhancements to the Resource Governor which reflect a growing need for centrally managed database services to support isolated workloads in multitenant environments.

    This document describes the enhancements and why they were added. It includes a self-contained walk through that you can try for yourself in order to gain familiarity with the new features.

    If you're not familiar with the SQL Server Resource Governor (which was introduced in SQL Server 2008), start with a comprehensive white paper called Using the Resource Governor, and then come back to Resource Governor in SQL Server 2012 to see what's new.

    - Guy

     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Memory Manager Configuration changes in SQL Server 2012

    Continuing from yesterday's article about Memory Manager surface area changes in SQL Server 2012, this post looks at corresponding configuration changes for the new Memory Manager. The surface area change article mentioned that the Memory Manager redesign made it possible to more accurately size and govern all SQL Server memory consumption. Let's look at what this means in terms of sizing the total memory usage.

    Max server memory

    In SQL Server 2008, the max server memory configuration setting governed only single 8K page allocations. CLR allocations, multi-page allocations, direct Windows allocations (DWA), and memory required by thread stacks were not included, so on 32-bit systems you'd have to set the -g memory_to_reserve startup option (for example, if you're using CLR) to reserve sufficient address space.

    Setting max server memory becomes more straightforward with SQL Server 2012. The redesigned Memory Manager takes a central role in providing page allocations for the other components, and the max server memory setting governs all memory manager allocations.

    With all page allocations governed, including CLR page allocations, instance sizing is more predictable, which can help a lot in memory-constrained and multi-instance scenarios. Direct Windows allocations (i.e., calls to VirtualAlloc()) still remain outside of Memory Manager control.
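To illustrate, a single cap now covers the buffer pool, CLR, and other page allocations. A minimal sketch of setting that cap follows; the 8192 MB value is just an illustration, not a recommendation from the post.

```sql
-- Cap total Memory Manager allocations at 8 GB (the value is in MB).
-- 'show advanced options' must be on to change max server memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```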

    Here's a summary of how the memory configuration settings changed in 2012:

    SQL Server 2008 R2

    | sp_configure option | Default setting | Lowest possible value | Highest possible value |
    | --- | --- | --- | --- |
    | min server memory | 0 | 16 (MB) | Value less than max server memory setting |
    | max server memory | 2147483647 (available memory in the system) | 16 (MB) | 2147483647 (available memory in the system) |

    SQL Server 2012

    | sp_configure option | Default setting | Lowest possible value | Highest possible value |
    | --- | --- | --- | --- |
    | min server memory | 0 | 16 (MB) | Value less than max server memory setting |
    | max server memory | 2147483647 (available memory in the system) | 32-bit: 64 (MB); 64-bit: 128 (MB) | 2147483647 (available memory in the system) |

    sp_configure awe_enabled

    SQL Server 2008 R2 was the last release to support the awe_enabled option, which allowed the use of memory above 4 GB on 32-bit systems. In SQL Server 2012, 32-bit instances can no longer use more than 4 GB. You can still set this value on 32-bit systems to allow the use of locked pages, but it's not needed on 64-bit systems. Note that this setting does not affect the way Address Windowing Extensions are used to implement locked pages in memory on 64-bit systems.

    - Guy


     

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • Memory Manager surface area changes in SQL Server 2012

    There were various changes to memory related DMVs, DBCC memory status, and Perfmon counters in SQL Server 2012 as part of the redesign of the Memory Manager component of SQLOS. The Memory Manager redesign resulted in being able to more accurately size and govern all SQL Server memory consumption, making page allocations more efficient, and better handling and tracking of NUMA node memory allocations. This post aims to document these surface area changes in one place to serve as a quick reference.

    Some of these changes standardize memory counters as "kb" rather than bytes or pages, continuing a process begun earlier but now more relevant, since the single- and multi-page allocators were replaced by an any-size page allocator, so counting allocated pages would no longer give a meaningful view of memory use. This consolidation of allocators also resulted in consolidated counters for page allocation.

    Other changes reflect the central role of the redesigned Memory Manager in being the interface to the OS, providing page allocations for the other components. A modular summary of the architectural changes is shown below:

    DMV Changes

    sys.dm_os_memory_nodes

    The consolidation of the single- and multi-page allocators is reflected in the pages_kb column in sys.dm_os_memory_nodes. The new foreign_committed_kb counter is a result of the improved NUMA management and reporting in SQL 2012 and identifies memory on a node that belongs to a remote node, which can be useful if you're troubleshooting slower memory access. SQL Server will always try to use local memory first, at least until the target memory is reached.

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | pages_kb | single_pages_kb + multi_pages_kb |
    | foreign_committed_kb | - |
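For instance, a quick look at per-node allocations and foreign memory using the new columns:

```sql
-- Per-NUMA-node allocations in SQL Server 2012; a non-zero foreign_committed_kb
-- indicates memory on this node that is committed on behalf of a remote node.
SELECT memory_node_id, pages_kb, foreign_committed_kb
FROM sys.dm_os_memory_nodes;
```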

    sys.dm_os_sys_info

    The main memory manager changes here are to standardize memory counters.

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | physical_memory_kb | physical_memory_in_bytes |
    | virtual_memory_kb | virtual_memory_in_bytes |
    | committed_kb | bpool_committed |
    | committed_target_kb | bpool_commit_target |
    | visible_target_kb | bpool_visible |

    sys.dm_os_memory_cache_counters

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | pages_kb | single_pages_kb + multi_pages_kb |
    | pages_in_use_kb | single_pages_in_use_kb + multi_pages_in_use_kb |

     

    sys.dm_os_memory_cache_entries

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | pages_kb | pages_allocated_count |

     

    sys.dm_os_memory_clerks

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | pages_kb | single_pages_kb + multi_pages_kb |
    | page_size_in_bytes | page_size_bytes |

     

    sys.dm_os_memory_objects

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | pages_in_bytes | pages_allocated_count |
    | max_pages_in_bytes | max_pages_allocated_count |

     

    DBCC Memory Status changes

     Many of the DBCC memory counters that were previously under Buffer Pool have now been consolidated under Memory Manager to reflect the revised architecture. There is also some additional information to reflect OS errors and NUMA awareness. NUMA growth phase shows memory growth before the target is reached. The “Pages In Use” counter shows the actual number of pages in use, including the stolen pages, database pages, and the reserved memory. The Away Committed counter indicates how much memory from this node is committed from other nodes.
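These counters appear in the output of the (undocumented but widely used) DBCC MEMORYSTATUS command:

```sql
-- Dump the current memory status; look for the 'Memory Manager' section
-- and the per-node 'Memory node Id = n' sections in the output.
DBCC MEMORYSTATUS;
```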

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | Memory Manager \ Large Pages Allocated | - |
    | Memory Manager \ Emergency Memory | Memory Manager \ Reserved Memory |
    | Memory Manager \ Emergency Memory In Use | Memory Manager \ Reserved Memory In Use |
    | Memory Manager \ Target Committed (KB) | Buffer Pool \ Target (Pages) |
    | Memory Manager \ Current Committed (KB) | Buffer Pool \ Committed (Pages) |
    | Memory Manager \ Pages Allocated (KB) | Buffer Pool \ Stolen (Pages) + Buffer Pool \ Database (Pages) |
    | Memory Manager \ Pages Reserved (KB) | Buffer Pool \ Reserved (Pages) |
    | Memory Manager \ Pages Free (KB) | Buffer Pool \ Free (Pages) |
    | Memory Manager \ Pages In Use | - |
    | Memory Manager \ Page Alloc Potential (KB) | Buffer Pool \ Stolen Potential (Pages) |
    | Memory Manager \ NUMA Growth Phase | - |
    | Memory Manager \ Last OOM Factor | Buffer Pool \ Last OOM Factor |
    | Memory Manager \ Last OS Error | - |

     


    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | Memory node Id = n \ Pages Allocated | Memory node Id = n \ MultiPage Allocator + Memory node Id = n \ SinglePage Allocator |
    | Memory node Id = n \ Target Committed | - |
    | Memory node Id = n \ Current Committed | - |
    | Memory node Id = n \ Foreign Committed | - |
    | Memory node Id = n \ Away Committed | - |
    | Memory Clerks \ Pages Allocated | Memory Clerks \ SinglePage Allocator + Memory Clerks \ MultiPage Allocator |

    Perfmon counters

     The Perfmon counter changes follow the same consolidation of counters under the Memory Manager object, and the standardization of counters to KB. Several deprecated AWE counters are now removed.

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | Object - SQLServer:Memory Manager | Object - SQLServer:Buffer Manager |
    | Database Cache Memory (KB) | Database pages |
    | Free Memory (KB) | Free pages |
    | Reserved Server Memory (KB) | Reserved pages |
    | Stolen Server Memory (KB) | Stolen pages |
    | Target Server Memory (KB) | Target pages |
    | Total Server Memory (KB) | Total pages |
    | - | AWE lookup maps / sec |
    | - | AWE stolen maps / sec |
    | - | AWE write maps / sec |
    | - | AWE unmap calls / sec |
    | - | AWE unmap pages / sec |

    | SQL Server 2012 | SQL Server 2008 R2 |
    | --- | --- |
    | Object - SQLServer:Memory Node | Object - SQLServer:Buffer Node |
    | Database Node Memory (KB) | Database pages |
    | Free Node Memory (KB) | Free pages |
    | Foreign Node Memory (KB) | Foreign pages |
    | Stolen Node Memory (KB) | Stolen pages |
    | Target Node Memory (KB) | Target pages |
    | Total Node Memory (KB) | Total pages |
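The renamed counters can also be read from inside the server via sys.dm_os_performance_counters, for example:

```sql
-- Reading the SQL Server 2012 Memory Manager perfmon counters with T-SQL.
-- The object_name prefix varies for named instances (e.g. 'MSSQL$INST1:').
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)',
                       'Stolen Server Memory (KB)', 'Free Memory (KB)');
```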

     

    I'll follow up this article with some additional surface area changes for configuration and new memory manager messages that appear in the errorlog. 

    The goal of these changes is to make the new Memory Manager activity easier and more intuitive to track, and to clean up some of the values that no longer make sense with the any size page allocator and deprecation of AWE. Anything you used to use but can't find any more? Let me know.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
  • SQL Server 2012 Memory Manager KB articles

    Since the release of SQL Server 2012 with its redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as on fixes that have been published.

     

    How has memory sizing changed in SQL 2012?

    2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912  

     

    Setting "locked pages" to prevent SQL Server memory from being paged out has been simplified, particularly for Standard Edition. The details can be found here:

    2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143

     

    Note the following deprecation (particularly relevant for 32-bit installations):

    2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592

     

    Note the following fixes available:

    2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US

    2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US

    Originally posted at http://blogs.msdn.com/b/sqlosteam/