
Louis Davidson

  • Preparing for my Freecon Session

    Deep breath. There are two weeks and two days before the PASS Summit. And like the past few years, I didn't submit any sessions, so those three days will be nonstop learning and chilling with all of the SQL family who make it to Seattle.

    But two weeks and one day from now is the "Seattle SQL Pro Workshop 2017", AKA the Seattle Freecon 2017. It is a one-day event, with a slate of speakers that I would be honored to be listed amongst, even if it were just a list of people attending the PASS Summit. They are: Andy Leonard, Jimmy May, Gail Shaw, Wayne Sheffield and Jason Brimhall. Nice.

    There are sessions on SSIS, SQL Injection, Parameter Sniffing, Monitoring IO, and then my session, which is very much T-SQL coding oriented.

    The abstract is here:

    ---------------------

    Implementing a Hierarchy in SQL Server

    One of the more common data structures you will come across in the real world is a hierarchy, sometimes a single-parent "tree" or, more commonly, a multi-parent "graph". Many systems will implement the obvious examples, such as a corporate managerial structure or a bill of materials; however, it turns out that almost any many-to-many relationship can be treated as a hierarchy (for example, a customer to the products they purchased, or the relationship of actors to the movies they’ve been in). In this session, we’ll discuss several implementations of trees and graphs using interesting algorithms and built-in T-SQL features (CTEs and SQL Graph) that you can use to optimize your hierarchy implementations and put into practice immediately (along with ready-made T-SQL example code).

    ---------------------
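    To give a flavor of the CTE-based processing the abstract mentions, here is a minimal sketch (my illustration, not the session's download code) of walking a single-parent tree stored as an adjacency list; the @Employee table variable and its data are made up:

    DECLARE @Employee table (EmployeeId int NOT NULL, ManagerId int NULL);
    INSERT INTO @Employee (EmployeeId, ManagerId)
    VALUES (1, NULL), (2, 1), (3, 1), (4, 2); --employee 1 is the root of the tree

    WITH Org AS
    (
        --anchor member: the root row(s) of the hierarchy
        SELECT EmployeeId, ManagerId, 1 AS Level
        FROM   @Employee
        WHERE  ManagerId IS NULL
        UNION ALL
        --recursive member: add each row's direct reports, one level at a time
        SELECT Emp.EmployeeId, Emp.ManagerId, Org.Level + 1
        FROM   @Employee AS Emp
               JOIN Org
                   ON Emp.ManagerId = Org.EmployeeId
    )
    SELECT EmployeeId, ManagerId, Level
    FROM   Org;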

    I have been interested in hierarchies since I attended Dr. Rozenshtein's T-SQL training class back late last century. His book, The Essence of SQL: A Guide to Learning Most of SQL in the Least Amount of Time, was one of my earliest influences on writing excellent SQL, favoring set-based processing and limiting the number of passes you need to take through a set of data for the best performance, particularly in reporting-type queries where you are processing a large amount of data. During the class, he gave us the start of a book he was working on, named "Tree and Graph Processing in SQL", and while I didn't do much with it back then, it started to foster my interest in the subject.

    Then a few years back, when I was writing the 2012 edition of my book, I wanted to expand my introduction to hierarchies, so I read a lot of current material on the subject, notably "Joe Celko's Trees and Hierarchies in SQL for Smarties", some of Paul Nielsen's material from his last involvement in the SQL Server Bible series, and a lot of online articles. From this, I was able to put down some conceptual material on several interesting ways to implement a hierarchy. Not long after, I created a presentation and a lot of sample code to implement and test different methods of implementing tree hierarchies. Most of this presentation will be the same as it was back then (assuming the code works and I can still remember how it works, naturally :)).

    What was a very minor part of the presentation last time was graphs. I implemented a simple graph using relational tables, using a recursive CTE for processing. I will extend that example using the new graph tables feature in SQL Server 2017, showing the capabilities that graph tables give you.
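    For anyone who has not seen the SQL Server 2017 graph syntax yet, here is a minimal sketch of a node table, an edge table, and a MATCH query (the table names and data model are mine, not the presentation's):

    CREATE TABLE dbo.Person
    (
        PersonId int NOT NULL CONSTRAINT PKPerson PRIMARY KEY,
        Name varchar(100) NOT NULL
    ) AS NODE;

    CREATE TABLE dbo.FriendOf AS EDGE;
    GO
    --find the people a given person is connected to via the FriendOf edge
    SELECT Person2.Name
    FROM   dbo.Person AS Person1, dbo.FriendOf, dbo.Person AS Person2
    WHERE  MATCH(Person1-(FriendOf)->Person2)
      AND  Person1.Name = 'Louis';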

    Honestly, it is way too much to do in an hour, even if I didn't cover graphs, so I will economize as much as possible, but a load of code will accompany the presentation, which you can download from my presentation page even if you don't make it to the Freecon: http://www.drsql.org/presentations. Of course, the code will be a lot easier to follow if you attend!

    If you failed to click the link earlier, go to the Seattle Freecon 2017 Eventbrite link now. (And before you cry "I thought you said free!", the training and snackage are free; there is a charge for lunch, which is not required but highly recommended…)

  • How the rowversion datatype works when adding and deleting columns

    For years, I had thought (and was probably taught in SQL.AlongTimeAgoInAPlaceFarFarAway) that the timestamp column (well before rowversion was a thing) was not guaranteed to be an ever-increasing value. But this is not the case.

    In BOL (https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql)  it states:      

    "The rowversion data type is just an incrementing number..."

    This makes it useful for determining rows that have changed, because it is automatic and the user cannot override the value in the column. However, there is a major concern when you use rowversions: what happens when you change the structure of the table and expect the consumer to see that change? The problem is that when you change the structure of the table, the rowversion will not be set (except when adding a new rowversion column).

    So let's create a new database and a set of tables:     

    CREATE DATABASE TestRowVersion;
    GO
    USE TestRowVersion;
    GO      
           
    Now, let's check the two functions that we can use to see the status of the rowversion values:

    SELECT @@DBTS AS DBTS, --the last rowversion that has been used
           --the next rowversion that will be used
           MIN_ACTIVE_ROWVERSION() AS MIN_ACTIVE_ROWVERSION;

    This returns the following, and always will on a new database; at least it always has every time I have ever built a new db and checked (which, over the years, is more often than I care to remember):

         
    DBTS               MIN_ACTIVE_ROWVERSION
    ------------------ ---------------------
    0x00000000000007D0 0x00000000000007D1

    Next, let us create a few new tables, two with rowversion columns:

    CREATE TABLE TestRowversion1
    (
        TestRowversion1 int CONSTRAINT PKTestRowversion1 PRIMARY KEY,
        RowVersion rowversion NOT NULL
    );
    CREATE TABLE TestRowversion2
    (
        TestRowversion2 int CONSTRAINT PKTestRowversion2 PRIMARY KEY,
        RowVersion rowversion NOT NULL
    );      
    CREATE TABLE TestNoRowversionYet
    (
        TestNoRowversionYet int CONSTRAINT PKTestNoRowversionYet PRIMARY KEY
    );     
    GO      
          
    The rowversion values have not changed (I am only using @@DBTS from now on because it is usually what we want to see):
     
    SELECT @@DBTS AS DBTS;

    You will see no change from the first execution.

    Now, let's add a few rows to the tables with rowversion columns.

    INSERT INTO dbo.TestRowversion1(TestRowversion1)
    VALUES(1),(2),(3);
    GO
    INSERT INTO dbo.TestRowversion2(TestRowversion2)
    VALUES(10),(20),(30);     
    GO      
     
    And now check the values:      
     
    SELECT *
    FROM    dbo.TestRowversion1;
    SELECT *
    FROM    dbo.TestRowversion2;

    SELECT @@DBTS AS DBTS;

    This will return:
          
    TestRowversion1 RowVersion
    --------------- ------------------
    1               0x00000000000007D1
    2               0x00000000000007D2
    3               0x00000000000007D3

    TestRowversion2 RowVersion
    --------------- ------------------
    10              0x00000000000007D4
    20              0x00000000000007D5
    30              0x00000000000007D6

    DBTS              
    ------------------
    0x00000000000007D6

    Next step, let's add a new column to both tables to see what the effect is. In the first table, the column will be NOT NULL with a default, and in the second it will be NULL.

    ALTER TABLE dbo.TestRowversion1
        ADD NewNotNullColumn varchar(20) NOT NULL
            CONSTRAINT DFLTTestRowversion1_NewNotNullColumn DEFAULT ('Not Null'); 
        
    ALTER TABLE dbo.TestRowversion2
        ADD NewNullColumn varchar(20) NULL;

        
    Now, let's look at the data:

    SELECT *
    FROM    dbo.TestRowversion1;
    SELECT *
    FROM    dbo.TestRowversion2;

    SELECT @@DBTS AS DBTS; 

    You will see the following:      

    TestRowversion1 RowVersion         NewNotNullColumn
    --------------- ------------------ --------------------
    1               0x00000000000007D1 Not Null
    2               0x00000000000007D2 Not Null
    3               0x00000000000007D3 Not Null

    TestRowversion2 RowVersion         NewNullColumn
    --------------- ------------------ --------------------
    10              0x00000000000007D4 NULL
    20              0x00000000000007D5 NULL
    30              0x00000000000007D6 NULL


    DBTS
    ------------------
    0x00000000000007D6


    Note that, from a RowVersion standpoint, the values have not changed.

    So, if you are using this value in some form of ETL, this is something you will need to be aware of, particularly for columns that are declared as NOT NULL. You may need to tweak the rowversion values, using a query such as:      
          
    UPDATE dbo.TestRowversion1
    SET    NewNotNullColumn = NewNotNullColumn;            
          
    Note that if your software keeps the last rowversion per table, and not at the full database level, you might just set that value back to the start of time (0x0000000000000000), which will perform a lot better! In fact, for a very large table, you may need to do a "chunked" update, updating a batch of rows at a time (for example, rows where a new nullable column is still NULL) rather than the whole table in one transaction.
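    A minimal sketch of such a chunked update, using the nullable column added to TestRowversion2 as the marker for rows not yet touched (the batch size and the 'Backfilled' value are arbitrary choices of mine):

    WHILE 1 = 1
    BEGIN
        --update 1,000 rows at a time so each transaction (and its log usage) stays small
        UPDATE TOP (1000) dbo.TestRowversion2
        SET    NewNullColumn = 'Backfilled'
        WHERE  NewNullColumn IS NULL;

        IF @@ROWCOUNT = 0 BREAK; --no untouched rows remain
    END;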

    Checking the data after the simple UPDATE of TestRowversion1:

    SELECT *
    FROM    dbo.TestRowversion1;      
           
    You will see the rowversion values have now been incremented:

    TestRowversion1 RowVersion         NewNotNullColumn
    --------------- ------------------ --------------------
    1               0x00000000000007D7 Not Null
    2               0x00000000000007D8 Not Null
    3               0x00000000000007D9 Not Null

    Now, let's drop the new column, and see if there is any change:     
     
    ALTER TABLE dbo.TestRowversion1
        DROP CONSTRAINT DFLTTestRowversion1_NewNotNullColumn;

    ALTER TABLE dbo.TestRowversion1
         DROP COLUMN NewNotNullColumn;      
           
    Now, check the table:      
     
    SELECT *
    FROM   dbo.TestRowversion1;       
           
    And you will see that the rowversions still match:

    TestRowversion1 RowVersion
    --------------- ------------------
    1               0x00000000000007D7
    2               0x00000000000007D8
    3               0x00000000000007D9

    This is, from a performance standpoint, understandable. You would not expect SQL Server to change every single row in the table when adding or deleting a column, and especially for a nullable column I was not surprised that the rowversion stayed the same. It is, however, just something you need to realize when using rowversions for ETL (it would be the same if you maintained your own time-based ETL datetime value as well).

    Finally, what about when you add the rowversion column to a table? This one is a bit more obvious than the previous case, since SQL Server clearly needs to grab a value for each row as it adds the column, but it never hurts to check it out.
          
    SELECT @@DBTS;      
          
    INSERT INTO dbo.TestNoRowversionYet(TestNoRowversionYet)
    VALUES(1),(2),(3);     

    SELECT @@DBTS;      

    Which returns the same value twice, since there is no change:

         
    ------------------
    0x00000000000007D9

    ------------------
    0x00000000000007D9

    Now we add the rowversion:     
     
    ALTER TABLE dbo.TestNoRowversionYet
        ADD RowVersion rowversion NOT NULL; --you can declare NULL, but it still behaves the same      

    And the value has changed for the database, and the rows now have rowversion values:

    SELECT @@DBTS;

    SELECT *
    FROM   dbo.TestNoRowversionYet;     

     

    ------------------
    0x00000000000007DC

    TestNoRowversionYet RowVersion
    ------------------- ------------------
    1                   0x00000000000007DA
    2                   0x00000000000007DB
    3                   0x00000000000007DC

    Using rowversion is a very easy way to build a fast and reliable method of change detection, but it is not without its caveats, which could bite you hard if you do not realize what is occurring.
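    To tie this together, here is a minimal sketch of the typical watermark pattern for rowversion-based change detection, run against the dbo.TestRowversion1 table from earlier (in a real ETL process, @LastSyncVersion would be persisted between runs rather than being a local variable):

    DECLARE @LastSyncVersion binary(8) = 0x0000000000000000; --start of time on the very first run
    DECLARE @CurrentVersion binary(8) = @@DBTS; --capture the high-water mark before reading

    --fetch only the rows changed since the last run, up to the captured high-water mark
    SELECT TestRowversion1, RowVersion
    FROM   dbo.TestRowversion1
    WHERE  RowVersion > @LastSyncVersion
      AND  RowVersion <= @CurrentVersion;

    --after a successful load, save @CurrentVersion as the next run's @LastSyncVersion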

  • Doing a book signing at the DataCore booth at Microsoft Ignite!

    If you are in the intersection of these two groups:

    • You don't currently have a copy of my current database design book (or you want to give one to a friend who architects poor databases)
    • You will be attending the Microsoft Ignite Conference

    Then I have a great deal for you. For the middle four days of Microsoft's Ignite conference in Orlando, I will be signing 300 books at the DataCore booth (#2213), where they are launching a product called MaxParallel for SQL Server.

    There is no requirement to purchase anything from DataCore to get your free, signed copy of my book (or even to talk to them at all, if you don't want to!) Just line up, say hi, and I will put marks in the books that will reduce them from mint to near-mint condition. Obviously, they hope you hang around and check out the product, and at least let them scan your badge so they can send you a bit of information. They are nice people and their product is very interesting, so I hope you do.

    You can read more about MaxParallel here, and I am certain that you are going to be as skeptical as I was. After I had a demo from them (similar to what they will do on the show floor), I personally tried it in a lab setting (an Azure VM, 4 CPUs, with 8 spinning disks, using HammerDB to run a TPC test), and I saw rather promising results. I know they stand behind the product, because if you are interested you can try it yourself on the Azure Marketplace for 30 days without spending a dime, and you will also be able to try it on your on-premises edition before buying. (I know I personally don't like paying 3 bucks for an iPhone app if I can't first see whether it works as expected!)

    The book they are giving away is my latest: Pro SQL Server Relational Database Design and Implementation and I will be at their booth at least for the following times:

    Monday: 1 PM - 3 PM
    Tuesday: 2 PM - 4 PM
    Wednesday: 2 PM - 4 PM
    Thursday: 10 AM - 12 PM

    Why my book? The topic of how to build and implement a database properly goes hand in hand with performance tuning. The better you build your database, the more data you can work with in a smaller amount of time with fewer resources. Once your database is built, properly or not, and code has been written, changing the code is hard, and changing the design is exponentially harder. That is where the iterative nature of SQL Server comes in, getting better almost yearly now. Add to that hardware improvements, and a database platform optimization tool like MaxParallel, and you can help bad code run faster, and great code run fastest.

    Hope to see you there!

    Disclaimers

    Some of my expenses are being reimbursed by DataCore (conference entrance and lodging), but I am not being reimbursed for my time or other travel expenses. The books they are giving away were purchased directly from my publisher. Book inscription is not guaranteed to be legible or even make sense until you read the book.

  • Temporal Tables - Part 6 - Start and End Times in UTC Time Zone

    In my first 5 blogs on temporal tables, I failed to note something pretty important about their usage. The start and end times for the row (and, it follows, the historical rows) are likely not in your local time zone. They are stored in the UTC time zone. This is obviously good because of that pesky daylight saving time issue where 1:30 AM occurs twice each year (here in the US, that happens on the first Sunday of November).

    Unless you live in London, England, or someplace else where the offset from UTC is 0, this can make working with these tables confusing, because most people rarely work in UTC time, and rarer still do they think in UTC time when most of their data is in their local time zone. So you write your query using local time... and then, most likely, data is returned… but not necessarily the data you actually desired.

    The place you will probably notice the issue is right after you create a table, especially if your server is in a time zone that has a negative offset from UTC. What will happen is that you insert a row, and immediately try to fetch it using FOR SYSTEM_TIME AS OF the current local time in your query, and nothing will be returned. What you will have done is ask for rows that existed before your table even existed. For example, consider the following structure (basically the same table structure used in part 5):

    CREATE DATABASE TemporalExample;
    GO
    USE TemporalExample;
    GO
    CREATE SCHEMA Sales;
    GO
    CREATE SCHEMA SalesHistory; --Unlike earlier examples, I put the history in its own schema for granting security purposes.
    GO
    CREATE SEQUENCE Sales.SalesOrderData_SEQUENCE MINVALUE 1 START WITH 1;
    GO
    CREATE TABLE Sales.SalesOrder
    (
        SalesOrderId int NOT NULL CONSTRAINT PKSalesOrder PRIMARY KEY,
                                 --default to a text hex value, so we can see changes...
        Data varchar(30) NOT NULL DEFAULT (SUBSTRING(CONVERT(varchar(20),
                         CAST(NEXT VALUE FOR Sales.SalesOrderData_SEQUENCE as varbinary(5)), 1),3,12)),
        ValidStartTime datetime2(0) GENERATED ALWAYS AS ROW START, --(0) to keep the output short. Typically (7)
        ValidEndTime datetime2(0) GENERATED ALWAYS AS ROW END,
        PERIOD FOR SYSTEM_TIME (ValidStartTime, ValidEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = SalesHistory.SalesOrder));

    Now, I will create a couple of new rows. Nothing too interesting; they just need to have a start and end time, and 2 rows seem cleaner than 1:

    INSERT INTO Sales.SalesOrder(SalesOrderId, Data)
    VALUES (1, DEFAULT), (2, DEFAULT);
    GO

    Check the rows of the table:

    SELECT *
    FROM Sales.SalesOrder;

    This returns data just as expected, except that I am not writing this blog after midnight in the Eastern Time Zone; it was actually just after 8:00 PM local time:

    SalesOrderId Data                           ValidStartTime              ValidEndTime
    ------------ ------------------------------ --------------------------- ---------------------------
    1            0000000001                     2017-09-18 00:07:25         9999-12-31 23:59:59
    2            0000000002                     2017-09-18 00:07:25         9999-12-31 23:59:59

    So if I write my query to get the data as of now, as I did when I was building my new temporal table, using SYSDATETIME():

    DECLARE @AsOf datetime2(0) = SYSDATETIME(); --This time will be just after 8:00 PM unless I waited 4+ hours.

    SELECT *
    FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
    GO

    Nothing at all:

    SalesOrderId Data                           ValidStartTime              ValidEndTime
    ------------ ------------------------------ --------------------------- ---------------------------

    Change SYSDATETIME() to SYSUTCDATETIME(), and you will get back the same data as you would without the FOR SYSTEM_TIME AS OF clause. Of course, it is a VERY unlikely usage of the FOR SYSTEM_TIME clause to only get back the current data in the table, when you can just omit the clause, unless you are building a tool and you do something like:

    SET @asOfParameterValue = ISNULL(@asOfParameterValue, SYSUTCDATETIME());

    As most people are going to want to work in their specific time zone, we can change our queries to convert the time zone on the parameter, using the AT TIME ZONE syntax I covered in my previous blog on that subject:

    DECLARE @TimeLocal datetime2(0) = SYSDATETIME(); --AT TIME ZONE will not work with a literal
    DECLARE @AsOf datetime2(0) = @TimeLocal AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC';

    SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
           SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS ValidStartTime,
           SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS ValidEndTime
    FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
    GO

    This returns:

    SalesOrderId Data                           ValidStartTime                     ValidEndTime
    ------------ ------------------------------ ---------------------------------- ----------------------------------
    1            0000000001                     2017-09-17 20:07:25 -04:00         9999-12-31 18:59:59 -05:00
    2            0000000002                     2017-09-17 20:07:25 -04:00         9999-12-31 18:59:59 -05:00

    Then you can enter any time in place of SYSDATETIME() in Eastern Standard Time, and it will work as desired. Note that the end time actually ends up with a different time zone offset than the start time (the maximum date falls during standard time, not daylight saving time). It is an impossibly distant time in any case.

    If you are building an application that needs to run in any time zone, you could change this to parameterize the time zones:

    DECLARE @TimeZone sysname = 'Eastern Standard Time'; --This could even be a parameter to a stored procedure, or a value in a table
    DECLARE @TimeLocal datetime2(0) = SYSDATETIME(); --AT TIME ZONE will not work with a literal
    DECLARE @AsOf datetime2(0) = @TimeLocal AT TIME ZONE @TimeZone AT TIME ZONE 'UTC';

    SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
           SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidStartTime,
           SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidEndTime
    FROM  Sales.SalesOrder FOR SYSTEM_TIME AS OF @AsOf;
    GO

    This works great, and interestingly, if you are working with a time zone that has a positive offset from UTC, it also works. The end time is always 9999-12-31 23:59:59 (plus a fractional part corresponding to the precision you set on the datetime2 datatype of the start and end times). Removing the SYSTEM_TIME AS OF clause, so we get back data (note I also tested this with datetime2(7) datatypes for the start and end times):

    DECLARE @TimeZone sysname = 'Romance Standard Time'; --This could even be a parameter to a stored procedure

    SELECT SalesOrder.SalesOrderId, SalesOrder.Data,
           SalesOrder.ValidStartTime,
           SalesOrder.ValidStartTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidStartTimeLocal,
           SalesOrder.ValidEndTime,
           SalesOrder.ValidEndTime AT TIME ZONE 'UTC' AT TIME ZONE @TimeZone AS ValidEndTimeLocal
    FROM  Sales.SalesOrder;
    GO

    This returns:

    SalesOrderId Data        ValidStartTime       ValidStartTimeLocal         ValidEndTime         ValidEndTimeLocal
    ------------ ----------- -------------------- --------------------------- -------------------- ---------------------------
    1            0000000001  2017-09-18 00:07:25  2017-09-18 02:07:25 +02:00  9999-12-31 23:59:59  9999-12-31 23:59:59 +00:00
    2            0000000002  2017-09-18 00:07:25  2017-09-18 02:07:25 +02:00  9999-12-31 23:59:59  9999-12-31 23:59:59 +00:00

    The end time comes back with a zero offset, so it is technically the same point in time as it was before the AT TIME ZONE functions were applied, while the start time converts as expected.

  • Using AT TIME ZONE to manipulate and convert time zones

    I have a series of blogs about temporal tables in SQL Server 2016, which you can see here. However, one thing I never mentioned was that the temporal row start and end times are always stored in UTC (which is generally inconvenient to work with unless you are from the Great Britain area). When you are looking at your data and want to see what it looked like at, say, 'yesterday at 12:00' at the computer you are working from, this can be a bit more work.

    In SQL Server 2016, a new feature was added that totally passed me by: AT TIME ZONE. It is used to manipulate the time zone offset of a point-in-time value, either to append the time zone to a value, or to shift the time zone of a DATETIMEOFFSET value. The most common practical use, which I will show, is converting a value in your local time to UTC (and vice versa).

    Appending the time zone offset to a time value

    The first use of AT TIME ZONE doesn't seem overly interesting for normal day to day use on the face of things. It adds time zone information to a variable. For example, consider the following point in time value:

    SELECT  SYSDATETIME() AS CurrentTime;

    CurrentTime
    ---------------------------
    2017-09-01 20:40:58.5931246

    Now, if you want to state that the time zone offset of a time value is for a given time zone, you can use AT TIME ZONE to append the offset (with the output type in this case being DATETIMEOFFSET(7)):

    SELECT  SYSDATETIME() AT TIME ZONE 'Central Standard Time' AS CentralTime;
    SELECT  SYSDATETIME() AT TIME ZONE 'Eastern Standard Time' AS EasternTime;

    CentralTime
    ----------------------------------
    2017-09-01 20:41:28.9633676 -05:00

    EasternTime
    ----------------------------------
    2017-09-01 20:41:28.9893637 -04:00

    The time zone can not only be a literal, but can be an expression, so the following will work too:

    DECLARE @TimeZone1 nvarchar(200) = 'Central Standard Time';
    SELECT  SYSDATETIME() AT TIME ZONE @TimeZone1;
     

    This in and of itself is pretty cool. However, most of the time I don't see many people using columns of DATETIMEOFFSET, just the typical point-in-time types like DATETIME2. Just adding the time zone is cool, but when you cast the values to a DATETIME2, the time zone is lost:

    DECLARE @timeValue datetime2 = SYSDATETIME(); --fetch a point in time

           --cast the time in the Central Time Zone to datetime2 
    SELECT CAST(@timeValue AT TIME ZONE 'Central Standard Time' AS datetime2) AS CurrentTime,

           --cast the time in the Pacific Time Zone to datetime2
           CAST(@timeValue AT TIME ZONE 'Pacific Standard Time' AS datetime2) AS StillCurrentTime,

           --compare the two
           CASE WHEN CAST(@timeValue AT TIME ZONE 'Central Standard Time' AS datetime2) =
                   CAST(@timeValue AT TIME ZONE 'Pacific Standard Time' AS datetime2) THEN 'Match'

           ELSE 'No Match' END AS Match;

    Which returns:

    CurrentTime                 StillCurrentTime            Match
    --------------------------- --------------------------- --------
    2017-09-01 21:16:20.1161830 2017-09-01 21:16:20.1161830 Match

    The offset times, before you cast them to datetime2, would not match if you compared them. For example:

    DECLARE @datetimeoffset1 datetimeoffset = GETDATE() AT TIME ZONE 'Eastern Standard Time';
    DECLARE @datetimeoffset2 datetimeoffset = GETDATE() AT TIME ZONE 'Central Standard Time';

    SELECT CASE WHEN @datetimeoffset1 < @datetimeoffset2 THEN 1 ELSE 0 END;

    will return 1, because the same clock time in the Eastern Time Zone is earlier than the same clock time in the Central Time Zone.

    Converting a time from one time zone to another

    The second, very practical, thing the feature does is to change the time zone of a value that already has a time zone offset. So consider the time:

    DECLARE @datetimeoffsetValue datetimeoffset(7) =
                    '2017-09-01 20:41:28.9633676 -05:00'

    This is one of the values used earlier, and this offset corresponds to the offset in the Central Time Zone.  To convert this to the Eastern Time Zone, you can use AT TIME ZONE again:

    SELECT @datetimeoffsetValue AT TIME ZONE 'Eastern Standard Time'

    ----------------------------------
    2017-09-01 21:41:28.9633676 -04:00

    Which is one clock hour later in the Eastern Time Zone (and the corresponding offset is one hour less, too). Casting the value to datetime2 strips off the time zone offset, and then it would look one hour different.
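    A quick sketch of that, reusing the @datetimeoffsetValue variable from above (the output shown is what the arithmetic implies, not a captured result):

    --the cast strips the -04:00 offset, leaving only the Eastern clock time
    SELECT CAST(@datetimeoffsetValue AT TIME ZONE 'Eastern Standard Time'
                AS datetime2(7)) AS EasternClockTime;

    EasternClockTime
    ---------------------------
    2017-09-01 21:41:28.9633676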

    As I was doing some research trying to find examples of AT TIME ZONE, I found the following thread on Stack Overflow: (https://stackoverflow.com/questions/36393742/using-at-time-zone-to-get-current-time-in-specified-time-zone )

    Since the value in the Eastern Time Zone is a DATETIMEOFFSET value, you can simply use another AT TIME ZONE clause to convert it to a different time zone. For example, at 10:00 AM in the Eastern Time Zone, what time is it in the Pacific? (Note that you cannot use AT TIME ZONE on a string literal…)

    SELECT CAST('2017-01-01 10:00' AS datetime2) AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'Pacific Standard Time'

    It is 3 hours earlier:

    ----------------------------------
    2017-01-01 07:00:00.0000000 -08:00

    What makes this a very practical feature is that many programmers need to translate a time from their local time zone to the UTC time zone. For example:

    DECLARE @LocalTime datetime2 = '20170101 12:00';
    SELECT @LocalTime, @LocalTime AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC';

    This returns:

    --------------------------- ----------------------------------
    2017-01-01 12:00:00.0000000 2017-01-01 17:00:00.0000000 +00:00

    Because the value can be an expression, consider that you could build the following table, storing the time zone by name (or perhaps calculating the time zone based on the geography of the customer):

    CREATE TABLE dbo.CustomerActivity
    (
        CustomerId    int CONSTRAINT PKCustomerActivity PRIMARY KEY,
        ActivityUtcTime datetime2,
        TimeZoneName sysname
    );
    GO
    INSERT INTO dbo.CustomerActivity(CustomerId, ActivityUtcTime,TimeZoneName)
    VALUES(1,SYSUTCDATETIME(),'Eastern Standard Time'),(2,SYSUTCDATETIME(),'Pacific Standard Time'),
          (3,SYSUTCDATETIME(),'UTC'),(4,SYSUTCDATETIME(),'AUS Eastern Standard Time'),
          (5,SYSUTCDATETIME(),'Not A Time Zone'); --To force a failure

    Query the data, casting the data first to DATETIMEOFFSET (which is equivalent in this case to AT TIME ZONE 'UTC'):

    SELECT CustomerActivity.CustomerId,
           CAST(CAST(ActivityUtcTime AS datetimeoffset(7)) AT TIME ZONE TimeZoneName AS datetime2(7)) AS ActivityTimeLocal,
           CustomerActivity.TimeZoneName
    FROM   dbo.CustomerActivity;

    Then you can get back the times in the local time for the customer when you need it:

    CustomerId  ActivityTimeLocal           TimeZoneName
    ----------- --------------------------- ------------------------------
    1           2017-09-01 22:12:07.1144281 Eastern Standard Time
    2           2017-09-01 19:12:07.1144281 Pacific Standard Time
    3           2017-09-02 02:12:07.1144281 UTC
    4           2017-09-02 12:12:07.1144281 AUS Eastern Standard Time

    This lets you store the times as UTC, which is the typical desired way to store data when dealing with multiple localities, particularly events where the overlapping hour during the "fall back" part of Daylight Saving Time would be inconvenient. Alternatively, you might store data in your local time zone and convert not from UTC, but from that literal local value, to the customer's (variable) time zone.
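    A minimal sketch of that alternative, assuming a hypothetical ActivityLocalTime column stored in Eastern Standard Time instead of the ActivityUtcTime column used above:

    SELECT CustomerActivity.CustomerId,
           CAST(ActivityLocalTime AT TIME ZONE 'Eastern Standard Time' --append the known local offset
                    AT TIME ZONE TimeZoneName AS datetime2(7)) AS ActivityTimeCustomer, --then shift to the customer's zone
           CustomerActivity.TimeZoneName
    FROM   dbo.CustomerActivity;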

    Note that my row #5 will cause the query to fail because of its invalid time zone:

    Msg 9820, Level 16, State 1, Line 52
    The time zone parameter 'Not A Time Zone' provided to AT TIME ZONE clause is invalid.

    You can see the valid ones in the following catalog view, which will return 135 rows...

    SELECT * FROM sys.time_zone_info;

    One thing that is really interesting is that most of the time zone names are suffixed "Standard Time", and the name stays the same during Daylight Saving Time, when the colloquial name would include "Daylight", as in "Eastern Daylight Time", rather than "Eastern Standard Time."
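    If you want to see how a given name behaves during DST, the catalog view exposes that directly; a quick check (the values you get back will depend on when you run it):

    SELECT name, current_utc_offset, is_currently_dst
    FROM   sys.time_zone_info
    WHERE  name = 'Eastern Standard Time';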

  • Row Level Security and Indexed Views

    I was giving my Row Level Security session last weekend for the Richmond User Group (@RichmondSQL, http://rva.pass.org/) and a question came up (ok, so technically I asked the question): how does the feature work with indexed views?

    Generally speaking, the answer turned out to be obvious: you can apply an RLS security policy to a view that has an index on it, but not to a table that is referenced by an indexed view.

    Example code:

    Create a table

    CREATE SCHEMA Demo;
    GO
    CREATE SCHEMA rowLevelSecurity; --holds the predicate function and security policy (assumed from the presentation code)
    GO
    CREATE TABLE Demo.SaleItem
    (
        SaleItemId    int CONSTRAINT PKSaleItem PRIMARY KEY,
        ManagedByRole nvarchar(15), --more typically would be sysname, but nvarchar(15) is easier to format for testing
        SaleItemType  varchar(10)
    );
    GO


    And a very simple predicate function

    CREATE OR ALTER FUNCTION rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView (@ManagedByRole AS sysname)
        RETURNS TABLE
    WITH SCHEMABINDING --if schemabound, users needn't have rights to the function
    AS
        RETURN (SELECT 1 AS ManagedByRole$SecurityPredicate); --works no matter what, for simplicity's sake
    GO
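    The demo predicate lets every row through to keep the example simple; a more realistic predicate (my sketch, with a hypothetical name, not part of the session code) would filter on role membership, something like:

    CREATE OR ALTER FUNCTION rowLevelSecurity.ManagedByRole$SecurityPredicate_Realistic (@ManagedByRole AS sysname)
        RETURNS TABLE
    WITH SCHEMABINDING
    AS
        --only return a row (allowing access) when the current user is a member of the named role
        RETURN (SELECT 1 AS ManagedByRole$SecurityPredicate
                WHERE IS_ROLEMEMBER(@ManagedByRole) = 1);
    GO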


    Next, create a view. It must be schema-bound, both for RLS and to be indexed.

    CREATE OR ALTER VIEW Demo.SaleItem_RLSView
    WITH SCHEMABINDING
    AS
        SELECT SaleItemId, ManagedByRole, SaleItemType
        FROM   Demo.SaleItem
    WITH CHECK OPTION;
    GO

    Then add an index to the view:

    CREATE UNIQUE CLUSTERED INDEX IndexedView ON Demo.SaleItem_RLSView (SaleItemId);

    Now, an attempt to add the policy to the table will fail:

    CREATE SECURITY POLICY rowLevelSecurity.Demo_SaleItem_SecurityPolicy
        ADD FILTER PREDICATE rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView(ManagedByRole)
                ON Demo.SaleItem
        WITH (STATE = ON); --go ahead and make it apply
    GO

    Msg 33265, Level 16, State 1, Line 42
    The security policy 'rowLevelSecurity.Demo_SaleItem_SecurityPolicy' cannot have a predicate on table 'Demo.SaleItem' because this table is referenced by the indexed view 'Demo.SaleItem_RLSView'.


    But you can still add it to the view:

    CREATE SECURITY POLICY rowLevelSecurity.Demo_SaleItem_SecurityPolicy
        ADD FILTER PREDICATE rowLevelSecurity.ManagedByRole$SecurityPredicate_RLSView(ManagedByRole)
                 ON Demo.SaleItem_RLSView
        WITH (STATE = ON); --go ahead and make it apply
    GO

    Now the filter predicate will be applied to usage of the view, exactly as it is for a table (you cannot have a BLOCK predicate on any kind of view). This and a few more tweaks have been made to the latest version of the presentation code, which can be accessed from the Dropbox folder link on my webpage: http://www.drsql.org/presentations
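    As an extra check (not part of the demo script), you can confirm the policy exists and is enabled via the catalog views:

    SELECT name, is_enabled
    FROM   sys.security_policies
    WHERE  name = 'Demo_SaleItem_SecurityPolicy';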

  • Scenic City Summit is this Friday

    Scenic City Summit is a one-day conference in Chattanooga, TN, and a new experience for me, although in a familiar, Devlink-esque setting: the Chattanooga Convention Center.

    I am looking forward to attending, and to seeing a couple of fellow Microsoft MVPs do keynotes: Cory House and David Neal. I have never seen/met Cory House, and as he is doing the opening keynote, there is only a 30% chance I will actually make it there. David Neal, on the other hand, I have seen a few times at Music City Code and Devlink, and he and I also share a devotion to the culinary masterpiece known as Nashville Hot Chicken (my taste buds just twanged a bit thinking about it).

    But I digress.

    Scenic City Summit isn't a large conference, and is even smaller when it comes to database content. Under the topic of database, there are 2 sessions: one by Eric Cobb (he is from Nashville, so I may know him already) called Building Better SQL Server Databases, which I plan to attend.

    Then my session:

    Implementing Row Level Security in SQL Server

    Since the early days of SQL Server, it has been possible to use basic security measures to allow (and disallow) user access to any object in a database, and even limit access to a given column. A more difficult task has been to limit access to the individual rows in a table. This has made application development more complex because whereas most every user may have access to a table like HR.Employee, not every employee will have access to every employee's data. In this session I will cover several techniques to limit access to rows, from methods that have existed since early versions of SQL Server, to the new Row Level Security feature in SQL Server 2016.

    What is interesting about this session for me is the time frame. At some conferences you get an hour and 15 minutes and have to stretch your topic to fit (or, to be fair, not hurry to get to the end because you have too much material to fill the typical hour). This presentation will be the other way: it is scheduled for 45 minutes. And since this will be more of a coder/developer audience, I really have no idea whether they will want to see 2016 topics, or how to do it with pre-2016 technologies. So I will have my 2 hours of material (all code in SSMS) ready, and see where things go.

    There are a few other sessions I am keen to attend, both from Gaines Kergosien (great guy, who runs Music City Code), the first of which conflicts with Eric's session; but how can you not want to see a session on "7 Habits of Highly Paid Developers"? I work for a non-profit, but I am not dead inside. The one I do plan to make it to is called "From Developer to Data Scientist", as I am interested in the whole data scientist path, if not necessarily to become one, then to build better databases that support them.

  • 50ths… SQL Saturday and Life

    This week I will experience a couple of events associated with the number 50.

    SQL Saturday Atlanta

    The second will be my 50th SQL Saturday event! (49 speaking, 1 attended in casual mode… my 50th speaking Saturday will be in August at SQL Saturday Louisville.) This weekend is SQL Saturday Atlanta. It is one of my favorite events, not just for the great people I have known for over 10 years, not just for the mid-major conference feel, not even because of the awesome theming. Well, maybe it is the theming, because I can say the rest of those things about all of the SQL Saturdays I have attended!

    And awesomely, for my 50th event I will be doing my good old standard database design session:

    Database Design Fundamentals

    Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.

    This is by far the session I have done the most times through the years at SQL Saturdays and PASS Summits, and it is the topic that got me into the whole speaking, writing, and SQL-for-a-hobby/living situation I have lived for the past 20 years.

    Life

    But before I reach 50 SQL Saturdays, I have another milestone to scoot past: 50 years of life, as of July 12, 2017. I have whined, er, written about my travails over the past 20 years that make me feel old: gray hair, grandkids, hip replacements, knee problems, just to name a few. Reaching the decade of life in which my father passed away is mildly scary to me also (even if he died of an illness). Don't get me wrong, I am not afraid to pass away some day, but I am not looking forward to it either. (If I were, I wouldn't be consistently going to the gym for 50 minutes a day, and I would eat a lot more fried chicken from Chef Big Shakes!)

    While I certainly do not feel like I did when I was 20, I don't exactly feel old. Sure, I can't stand up without making that middle-aged groaning/grunting noise, but who can, really? Sometimes I feel like just a big kid, and the SQL community is a big part of that, and the reasons are really twofold. First, in having a community of people who share SQL Server with me, my world has stretched a lot since I first attended a conference and didn't leave the hotel where it was held. I have also learned that all of the seemingly "kiddie" stuff I like to do is not abnormal. Some of my favorite things to do are:

    • Legos - I can see 7 sets of Legos from my desk without turning my head (and if I did turn my head, there are several more sprinkled around my home office).
    • Sci-Fi/Fantasy - I love Star Wars (movies, comics, TV shows), Marvel (movies and TV shows), Planet of the Apes (going to see the new trilogy on my birthday!), and Lord of the Rings (movies), to name just a few of my favorites.
    • Kids movies - Way more than 1/2 of the movies I go to see are animated, and the same can be said of those I stream.
    • Theme Parks - My wife and I plan our vacations like two pre-teenagers would. Resort time? Nah, that is taking up ride time.
    • Roller Coasters - I have some challenges with them, but if I can ride it, I will ride the heck out of it. I currently hold the record for the number of times riding the Tennessee Tornado at Dollywood in a day (52 times).

    Every one of these is something that I have in common with some of the members of the SQL community, and I often discuss them on Twitter. Then add on the stuff I do in my "hobby" SQL community (learning, writing, blogging, tech editing books), and I don't exactly sleep 15 hours a day anymore.

    If I just had my father, mother, grandparents, and other (what were at the time) old people that I have known as my guide, I have no idea what I would do with my time. I love sitting down and watching copious amounts of TV, but something that makes watching TV better is that it is a break from doing way too much work. Looking back at my childhood, I have a much greater appreciation for just how hard my father had to work for a living compared to the amount of work I have to do as a data architect.

    What is kind of cool is that even the things that technically make me feel old help me feel young sometimes. Grandchildren make you sound old, but kids are generally fun. My oldest likes roller coasters as much as, if not more than, I do. And while your own children getting older sounds bad, watching them grow up to be something awesome is worth getting older for. And hey, in 20-30 years she will get to become my parent, hopefully :)

    Future

    What does the future hold? Who knows? I still like working with SQL, and I love the SQL community. Travelling to 5+ events every year? Most likely, if they will still have me, I will continue to put in to speak. When I am tired of that, I will probably still show up. I do at our local user group, even when the topic doesn't seem like one I am initially keen on.

    Age is just a number, though as the saying goes, 50 is a big number. The only people who will tell you otherwise are either too young to know or too old to be trusted…

    Note: this is the second in my trilogy of personal blogs; the next will come later this year. Technical stuff will resume when I am finished with some editing work that is kicking my rear end. I don't want people to think I am just getting old and reflective; some events just kind of lead you to bring these things up and discuss them. The SQL Community has been a part of my life now for 20 years, and for the past 10 or so, it has been not only growing, but becoming more tightly knit due to social media and blogs. I just want to say thanks :)

  • SQL Saturday Chattanooga is THIS WEEK!

    My goodness, how time does fly. It seems like just a few months ago I was going to my hometown SQL Saturday in Nashville, and now here I go again to my NEW hometown SQL Saturday in Chattanooga, TN. Much like the Nashville event, I didn't really do anything in terms of organizing the event, but I will be speaking at the event this year (last year, Chattanooga was the event I just attended… I remember liking that feeling in the days leading up to the event, as a matter of fact).

    I will be doing my Database Design Fundamentals session. Here is the abstract:

    Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.

    This session is totally my "chestnut" session (the derivation of that expression is not what I thought it was!), as I have been doing a variation of it for 15 years or so, and some of the concepts have been around since the mid-eighties (when the music was REAL!)

    Larry Ortega (@SpoChatt) and team put on a great conference, and I have enjoyed it every year; he is the lead of our local user group as well. I hope to see you there… Convinced? Click here: https://www.sqlsaturday.com/624/RegisterNow.aspx

    Still not convinced?

    How can you resist attending SQL Saturday #624 on 6/24? It will be many 1000s of events before the number and the date can possibly line up like that again!

    If my session, and the 6/24 thing, isn't enough, check out the full schedule for, frankly, a heck of a group of speakers! http://www.sqlsaturday.com/624/Sessions/Schedule.aspx

    Still not convinced? Well then you are just too hard to please for me. Stay home and enjoy a rainy Saturday afternoon without SQL.

  • Father, Son, Daughter, and SQL

    I don't love Father's Day, mostly because I spoke to my father for the last time on Father's Day back in 1996. He wasn't feeling well, we exchanged a few pleasantries, and the words I remember were: "I don't feel well, let me give you to your mother." Better than the actual last words that Marshall got: "that fungus thing is acting up again;" but not as good as the words just before those: "I love you" (don't get the reference? Check here.)

    As a son I was by no means the worst, but still I was not the best. My father loved me, much in the same way that a drill instructor loves a new set of recruits. He wanted the best for me, and while his discipline was very (very) strong (yes, he hit me with belts and switches, and I more than deserved it), the care part was there, wanting the best for me. He taught me to treat everyone equally. We tore down the fence of our neighbor, who was the only non-Caucasian person in the neighborhood. I still remember the gist of his words, along with the geographic location of that belt utilization: "he has enough trouble just being different." Times have changed, but it is a lesson I still carry today, and not just in regard to a person's ethnicity. Love your neighbor.

    He was actually the one who got me into SQL, indirectly. When I was in high school, he had me take a class that I insisted was just for girls (I think there may have been a few football players in there too): typing. I was never going to type, much less for a living. Ha! (I also took home economics, which was a good thing too, though apparently it is now called family and consumer sciences.)

    Then, he helped me get my first computer, a Commodore 64, with the most horrible off-brand TV for a monitor. We were relatively poor at the time, him being between careers (after losing his management position at a car dealer to the owner's son, he quit, took about a year off, and ended up with a great position with the USDA Forest Service, managing their fleet), and his example was a great inspiration over the years, once my brain realized all that had occurred. My love of computers started there, though I was still working on an engineering degree, which I failed at. He had always questioned whether that was what I should do, and to be realistic, my head was not where it should have been (you can guess where it was at). It took me time to get back into school, and finally to work on a Computer Science degree. The rest is more than I am going to write about here :)

    Though we were never close in a buddy-buddy manner, I feel his influence even today. We were similar as teachers: if you don't want to learn, we are terrible. He was a master mechanic, but could never teach me a thing about cars. He was, by all accounts, a great driving instructor in his job, and won many awards for the job he did. I love teaching people about SQL, because everyone who attends my sessions does so because they love the subject.

    The other part of Father's Day I do like is being a father, though it is difficult too. Am/Was I a great father? I don't know. I never felt like a bad father (embarrassing at times, for sure), but never "great". I don't know how some of the SQL community does it. Speaking, writing, consulting on the road... I applaud you if you can balance all that with being a parent. I remember sitting at many volleyball games, basketball games, and school musicals, writing/editing/planning my first book on my Palm Pilot.

    My daughter was 10, 11, and/or 12 then. She is not now, as one of my grandkids is 8! If we measured SQL programmers by whether their children became nerds too, then I failed big time. I never did a great job teaching her many technical skills, and she never really cared about technical things. But my son-in-law currently works in IT, and my 8-year-old granddaughter loves math.

    Happy Father's Day, and thanks to all of the dads out there…

  • Utility to temporarily drop FOREIGN KEY constraints on a set of tables

    I was working on a database a few weeks back (maybe longer, I am kind of busy with several projects right now), and I wanted to truncate a set of tables. There was one central table, and 8 tables related by FOREIGN KEY constraints. The central table had 6 million rows, and a few of the other tables had 20+ million. TRUNCATE is amazingly fast, and when I tried to just use DELETE, it took almost as long as it took me to load the tables. (Exaggeration? A little bit, but not as much as you might imagine, since I use SSIS to BULK LOAD these tables, and the source data is very simple.)

    I could just get rid of the constraints, but as a relational db lover, I love constraints. When enforced, they protect me. When not enforced (like when SSIS disables them for a BULK LOAD operation), they are still good documentation. So, as any good nerd programmer type would, I started coding a tool to deal with the constraints for me. And as a decent blogger, as soon as it started to get interesting, I realized I could blog about it and upload the code to my website. This in the end makes the code better, because I have to test more, and I learn things from readers reminding me of stuff (like that an FK script needs to honor CASCADE and NOT FOR REPLICATION, oops).

    I am not going to go over, or even paste, the code in this blog. What I will do is show the interface, and demonstrate how the code works. The interface for the tool is two stored procedures. The first is used to drop the foreign key constraints, storing the scripts for the constraints in a table that it creates named Utility.foreign_key$batch_drop_toRestore:

    utility.foreign_key$batch_drop
        @table_schema sysname = '%', --lets you control what schema to drop FKs from
        @table_name sysname = '%',  --lets you control the tables to drop FKs from
        @add_to_history_flag BIT = 0, --by default the procedure creates a table to hold history, this parameter tells it to add to history, if you need to do things incrementally
        @force_replace_status  VARCHAR(20) = 'AS_WAS' --Using the following values: ENABLED, UNTRUSTED, DISABLED, lets you force the status of the constraints, like to quickly turn on the constraint as UNTRUSTED

    Then, to recreate the foreign key constraints after you have done your business with the tables:

    utility.foreign_key$batch_recreate --if the utility.foreign_key$batch_drop_toRestore table exists, do what is in the table.

    Note: if your structures become out of sync with the data in utility.foreign_key$batch_drop_toRestore, you may have to manually apply scripts from the table and/or drop the table. The point of the tool is to drop the constraints, do some action, and add the FKs back. The more time that passes without applying the script, the more likely things are to get out of sync.

    To demonstrate their use, here are a few samples usages. The utility uses the code from the previous blog showing how to script a foreign key:  http://sqlblog.com/blogs/louis_davidson/archive/2017/05/24/utility-to-script-a-foreign-key-constraint.aspx and you can download the code for this utility here: https://www.dropbox.com/s/3m9lghtfrnhhxgh/Utility-ForeignKey%24BatchDropRecreate.sql?dl=0

    USE TestRebuildConstraints
    GO

    --Recreate the tables, so we have a known state that can be compared to a script 
    DROP TABLE IF EXISTS Demo.ChildTable;
    DROP TABLE IF EXISTS Demo.ParentTable;
    DROP TABLE IF EXISTS Demo.GrandParentTable;

    CREATE TABLE Demo.GrandParentTable
    (  
        GrandParentTableId INT NOT NULL
            CONSTRAINT PKGrandParentTable PRIMARY KEY
    );

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId INT NOT NULL
            CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT NULL,
        CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
            FOREIGN KEY (GrandParentTableId)
            REFERENCES Demo.GrandParentTable (GrandParentTableId)
    );

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId INT NOT NULL
            CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT NULL
    );

    --an untrusted constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
      ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
           FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
    --disabled constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
        ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId)
            ON DELETE CASCADE
            ON UPDATE SET NULL;

    ALTER TABLE Demo.ChildTable
        NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
    GO

    This is the set of data to compare against in order to make sure that nothing has changed that we did not want to change.

    SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
    ORDER BY name;

    Which returns:

    is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
    -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
    1              1           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
    1              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
    0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

    The basic goal of the procedure is something like this: I want to run the following set of statements on the tables I have created:

    TRUNCATE TABLE Demo.ChildTable;
    TRUNCATE TABLE Demo.ParentTable;
    TRUNCATE TABLE Demo.GrandParentTable;

    Which will fail (even if the constraint is disabled!)

    Msg 4712, Level 16, State 1, Line 52
    Cannot truncate table 'Demo.ParentTable' because it is being referenced by a FOREIGN KEY constraint.
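    As an aside: if the goal were only to empty the tables, DELETE statements issued in child-to-parent order would succeed without dropping anything, at the cost of fully logged, row-by-row deletes:

    DELETE FROM Demo.ChildTable;
    DELETE FROM Demo.ParentTable;
    DELETE FROM Demo.GrandParentTable;

    TRUNCATE, however, is blocked by the mere existence of a referencing FOREIGN KEY constraint, so the constraints have to be dropped.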

    Now, run the following utility to drop the constraints:

    EXEC utility.foreign_key$batch_drop @table_schema = 'Demo';
    GO

    This creates a table in the utility schema that holds the scripts and object names:

    SELECT *
    FROM   Utility.foreign_key$batch_drop_toRestore

    Now, the TRUNCATE statements will execute.

    TRUNCATE TABLE Demo.ChildTable;
    TRUNCATE TABLE Demo.ParentTable;
    TRUNCATE TABLE Demo.GrandParentTable;
    GO

    Command(s) completed successfully.

    Execute the batch recreate procedure and it will restore the constraints.

    EXEC utility.foreign_key$batch_recreate;
    GO

    Check the foreign key metadata; it should match what you got from the first query of sys.foreign_keys. Next, I will demonstrate two other cases. To run multiple tests without resetting the structures, you can wrap the work in a TRANSACTION; a minimal sketch (assuming the utility procedures do not commit internally):
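    BEGIN TRANSACTION;

    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo';
    --...run whatever test statements are needed here...

    --this undoes the constraint drops and the rows written to the history table
    ROLLBACK TRANSACTION;

    In this next example, I will show removing constraints one table at a time: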

    --just remove constraints from ChildTable
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ChildTable';

    --To add more foreign keys to the table, you have to specify the @add_to_history_flag
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable';

    This causes the following error:

    Msg 50000, Level 16, State 1, Procedure foreign_key$batch_drop, Line 32 [Batch Start Line 164]
    Parameter @add_to_history_flag set to only allow initialize case

    Add the @add_to_history_flag = 1 parameter value, and it will work:

    --This works, adding an additional table
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable',
        @add_to_history_flag = 1;

    --Now put the constraints back
    EXEC utility.foreign_key$batch_recreate;
    GO

    Finally, you can also change the constraints' status as they are recreated, using the @force_replace_status parameter:

    --the script that is saved off will be for enabled constraints.
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo',@force_replace_status = 'ENABLED';
    GO
    EXEC utility.foreign_key$batch_recreate;
    GO

    Now check the metadata, and you will see the constraints are all trusted:

    SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
    ORDER BY name;

    Which should now return:

    is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
    -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
    0              0           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
    0              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
    0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

  • Utility to script a FOREIGN KEY Constraint

    Note: Amended to include the word include in the first Note (I am a terrible editor of my own writing!) and to fix a bug in the code with a misplaced parenthesis
    Note: Amended to include cascading and NOT FOR REPLICATION.

    As noted in my previous post, I am in the middle of building a utility (for work, and for my next SQLBLOG post) that will help when you need to drop the foreign key constraints on a table, and then replace them without having to save off the script manually. In that post, I showed how to manually create a foreign key constraint in three possible states.

    Next, in order to create the utility to script the FOREIGN KEY constraints before dropping them, I need a way to script the constraint. In this post, I present a user-defined function that generates the script for a foreign key constraint. The code is largely based on some code from Aaron Bertrand in his blog (with his permission, naturally!), with a few edits to script constraints as enabled, disabled, or untrusted if the source constraint was in that condition (or you can force the constraints to a certain state if you so desire).

    In this blog entry, I present the code for this function. The code has some comments to illuminate most of what is going on, but I am not going to cover it in much depth: just the code, and some test cases.

    utility.foreign_key$script (download the code here)

    CREATE DATABASE TestRebuildConstraints;
    GO
    USE TestRebuildConstraints
    GO

    IF DB_ID() = DB_ID('TestRebuildConstraints') --helps me not create stuff in master
        EXEC('CREATE SCHEMA utility');
    GO

    CREATE OR ALTER FUNCTION utility.foreign_key$script(
        @schema_name sysname,
        @foreign_key_name sysname,
        @constraint_status VARCHAR(20) = 'AS_WAS' --ENABLED, UNTRUSTED, DISABLED
                                                   --ANY OTHER VALUES RETURN NULL
    )
    --------------------------------------------------
    -- Use to script a foreign key constraint
    --
    -- 2017  Louis Davidson  drsql.org
    --   Thanks to Aaron Bertrand and John Paul Cook's code
    --------------------------------------------------
    RETURNS NVARCHAR(MAX)
    AS
    BEGIN
        --based on code to generate a list of FK constraints from this article by Aaron Bertrand:
        --https://www.mssqltips.com/sqlservertip/3347/drop-and-recreate-all-foreign-key-constraints-in-sql-server/

        --and code from John Paul Cook:
        --https://social.technet.microsoft.com/wiki/contents/articles/2958.script-to-create-all-foreign-keys.aspx

        DECLARE @script NVARCHAR(MAX);

        IF @constraint_status NOT IN ('AS_WAS','ENABLED','UNTRUSTED','DISABLED')
            RETURN NULL;

        SELECT @script
            =  N'ALTER TABLE ' + QUOTENAME(cs.name) + '.' + QUOTENAME(ct.name) + CHAR(13) + CHAR(10) + '   '
                --code added to set the constraint's status if it is not to be checked (and
                --in the case of disabled, you create it not trusted and disable it
              + CASE
                    WHEN(is_not_trusted = 1
                         OR fk.is_disabled = 1
                          OR @constraint_status IN ( 'UNTRUSTED', 'DISABLED' ))
                        --not forcing it to be enabled
                         AND @constraint_status <> 'ENABLED' THEN
                         'WITH NOCHECK '
                    ELSE
                         ''
                END
              + 'ADD CONSTRAINT ' + QUOTENAME(fk.name) + CHAR(13) + CHAR(10) +
              '      FOREIGN KEY ('
              + STUFF((SELECT   ',' + QUOTENAME(c.name)
                        -- get all the columns in the constraint table
                        FROM     sys.columns c
                                INNER JOIN sys.foreign_key_columns fkc
                                     ON fkc.parent_column_id = c.column_id
                                        AND fkc.parent_object_id = c.object_id
                        WHERE    fkc.constraint_object_id = fk.object_id
                        ORDER BY fkc.constraint_column_id
            FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1,N'')
               + ')' + CHAR(13) + CHAR(10) + '         REFERENCES ' + QUOTENAME(rs.name) + '.' + QUOTENAME(rt.name)
              + '('
               + STUFF((SELECT   ',' + QUOTENAME(c.name)
                        -- get all the referenced columns
                       FROM     sys.columns c
                                INNER JOIN sys.foreign_key_columns fkc
                                    ON fkc.referenced_column_id = c.column_id
                                       AND fkc.referenced_object_id = c.object_id
                       WHERE    fkc.constraint_object_id = fk.object_id
                       ORDER BY fkc.constraint_column_id
            FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1, N'') + ')'
             + CASE fk.update_referential_action
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON UPDATE CASCADE '
                    WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET NULL '
                     WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET DEFAULT '
                    ELSE '' --could also say "no action" which is the default
               END
              + CASE fk.delete_referential_action
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON DELETE CASCADE '
                    WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET NULL '
                     WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET DEFAULT '
                    ELSE '' --could also say "no action" which is the default
                END
              + CASE fk.is_not_for_replication
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         NOT FOR REPLICATION '
                    ELSE ''
                 END
              + ';'
              + CASE
                    WHEN(fk.is_disabled = 1 AND @constraint_status IN ( 'DISABLED', 'AS_WAS' ))
                         OR @constraint_status = 'DISABLED'
                         THEN CHAR(13) + CHAR(10)+  CHAR(13) + CHAR(10)+   'ALTER TABLE ' + QUOTENAME(cs.name) + '.'
                              + QUOTENAME(ct.name) + CHAR(13) + CHAR(10)
                               + '   NOCHECK CONSTRAINT ' + QUOTENAME(fk.name) + ';'
                     ELSE
                        ''
                END
        FROM   sys.foreign_keys fk
               INNER JOIN sys.tables rt
                    -- referenced table
                   ON fk.referenced_object_id = rt.object_id
               INNER JOIN sys.schemas rs
                    ON rt.schema_id = rs.schema_id
               INNER JOIN sys.tables ct
                   -- constraint table
                   ON fk.parent_object_id = ct.object_id
               INNER JOIN sys.schemas cs
                   ON ct.schema_id = cs.schema_id
        WHERE  OBJECT_SCHEMA_NAME(fk.object_id) = @schema_name
               AND fk.name = @foreign_key_name;
        RETURN @script;
    END;

    Now, to test the code, I will create a few tables:

    --To test, I will create three tables (which will anchor the tests of the
    --drop and recreate utilities as well).
    CREATE SCHEMA Demo;
    GO
    CREATE TABLE Demo.GrandParentTable
    (  
        GrandParentTableId INT NOT NULL
            CONSTRAINT PKGrandParentTable PRIMARY KEY
    );

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId INT NOT NULL
            CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT NULL,
        CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
            FOREIGN KEY (GrandParentTableId)
            REFERENCES Demo.GrandParentTable (GrandParentTableId)
    );

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId INT NOT NULL
            CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT NULL
    );

    --an untrusted constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
      ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
          FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
    --disabled constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
        ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    ALTER TABLE Demo.ChildTable
        NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
    GO

    Now, check that the constraints are as expected:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- ----------------------------------------------
    0              0           ParentTable$ref$GrandParentTable_Enabled
    1              0           ChildTable$ref$ParentTable_NotTrusted
    1              1           ChildTable$ref$ParentTable_Disabled

    Next, I will test the constraints in several different ways, sometimes using AS_WAS, and other times forcing the different configurations:

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS Original;

    Original
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);


    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_NotTrusted','AS_WAS') AS Untrusted;

    Untrusted
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ChildTable]
       WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
          FOREIGN KEY ([ParentTableId])
             REFERENCES [Demo].[ParentTable]([ParentTableId]);

    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WAS') AS Original;


    Original
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ChildTable]
       WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
          FOREIGN KEY ([ParentTableId])
             REFERENCES [Demo].[ParentTable]([ParentTableId]);

    ALTER TABLE [Demo].[ChildTable]
       NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];

    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WERS') AS Fails;

    Fails
    ------------------------------------------------------------------------
    NULL

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','UNTRUSTED') AS Untrusted;

    Untrusted
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','DISABLED') AS Disabled;

    Disabled
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

    ALTER TABLE [Demo].[ParentTable]
       NOCHECK CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled];

    --Now we test cascade and not for replication

    --First UPDATE and DELETE cascading

    ALTER TABLE [Demo].[ParentTable]
       DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
              REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
              ON UPDATE CASCADE
             ON DELETE SET DEFAULT;

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeTest;

    CascadeTest
    --------------------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT;


    --Next Add Not For Replication
    ALTER TABLE [Demo].[ParentTable]
       DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

    ALTER TABLE [Demo].[ParentTable]
        ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
           FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT
             NOT FOR REPLICATION;

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeNotForRepTest;

    Note that NOT FOR REPLICATION makes the constraint not trusted:

    CascadeNotForRepTest
    ------------------------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT
             NOT FOR REPLICATION;
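
    You can confirm this with the metadata query used throughout; is_not_trusted will be 1 for this constraint:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE name = 'ParentTable$ref$GrandParentTable_Enabled';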

    Finally, to make sure that the code deals with composite key references (which I didn't really doubt, but hey, you really need to test this stuff, right?), here are two tables with a composite key:

    CREATE TABLE Demo.MultiKey
    (
        Column1 INT NOT NULL,
        Column2 INT NOT NULL,
        PRIMARY KEY (Column1, Column2)
    )
    CREATE TABLE Demo.MultiKeyRef
    (
        Column1 INT NOT NULL,
        Column2 INT NOT NULL,
        FOREIGN KEY (Column1, Column2) REFERENCES Demo.MultiKey (Column1, Column2) --no name for key here
    )
       
    First, we need to look up the system-generated name of the constraint:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    Now, execute the three different ways we can script it (the system-generated name will differ on your instance):

    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','AS_WAS') AS Original;
    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','DISABLED') AS Disabled;
    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','UNTRUSTED') AS Untrusted;

    Original
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

    Disabled
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

    ALTER TABLE [Demo].[MultiKeyRef]
       NOCHECK CONSTRAINT [FK__MultiKeyRef__31EC6D26];

    Untrusted
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);
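
    Since the function takes just the schema and constraint name, you can also drive it from the catalog to script every FOREIGN KEY in a schema at once; a quick sketch of that pattern (essentially what the batch utility in the next post will do):

    SELECT utility.foreign_key$script(OBJECT_SCHEMA_NAME(fk.object_id), fk.name, 'AS_WAS') AS fk_script
    FROM   sys.foreign_keys AS fk
    WHERE  OBJECT_SCHEMA_NAME(fk.object_id) = 'Demo';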

    Hopefully this script can be of some use to you. I will use it in the next blog entry, where I build the utility to drop and recreate FOREIGN KEY constraints.

    (Note: This works with memory-optimized tables as well, since the syntax is the same.)

  • Creating FOREIGN KEY constraints as enabled, trusted, non-trusted, and/or disabled

    I am in the middle of building a utility (for work, and for my next SQLBLOG post) that will help when you need to drop the foreign key constraints on a table, and then replace them without having to save off the script manually. Part of the process is to generate the script for the foreign key constraint, so the first thing I need to do is to make sure I have all of the scripting possibilities understood.

    When I started hunting around to remember how to create a disabled constraint, I couldn't easily find anything, so I figured I would make this a two-parter. (My blogging rule is: if I look for something and find a good article about it, I reference it, then tweet the article out. If it is too hard to find, I blog about it!) So today I will review how to create a FOREIGN KEY constraint in three ways:

    • Enabled, and Trusted - Just as you would normally create one
    • Enabled, Not Trusted - The "quick" way, not checking data to see if any wrong data already exists, but not allowing new, bad data in
    • Disabled, Not Trusted - The constraint is basically documentation of the relationship, but you are on your own to make sure the data matches the constraint

    First, create a couple of tables that can be used for the demonstration, and then show how to create the constraints as trusted, non-trusted, and disabled. (Note: CHECK constraints can be set to the same three states in very similar ways, if you need to do the same for CHECKs; a quick sketch follows.)
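    For illustration, here is the same pattern applied to a CHECK constraint; the table and column names are made up for this sketch:

    --Enabled but not trusted: existing rows are not validated
    ALTER TABLE Demo.AnyTable WITH NOCHECK
        ADD CONSTRAINT CHKAnyTable$ValuePositive CHECK (Value > 0);

    --Disabled: new rows are not validated either
    ALTER TABLE Demo.AnyTable
        NOCHECK CONSTRAINT CHKAnyTable$ValuePositive;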

    --Here is the script that will build the database, and then recreate the objects

    CREATE DATABASE TestRebuildConstraints;
    GO
    USE TestRebuildConstraints;
    GO
    --a safeguard against building stuff in master is to never use dbo schema, and
    --then create the schema only in the db you are expecting to
    IF DB_ID() = DB_ID('TestRebuildConstraints')
        EXEC ('CREATE SCHEMA Demo');
    GO

    Then I will create a set of tables (which can be dropped and recreated over and over as you test different configurations):

    --From here, you can rerun if you want to try the code:
    DROP TABLE IF EXISTS demo.ChildTable, demo.ParentTable, demo.GrandParentTable;

    CREATE TABLE Demo.GrandParentTable
    (
        GrandParentTableId    INT CONSTRAINT PKGrandParentTable PRIMARY KEY
    )

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId    INT CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT
    )

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId    INT CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT
    )
    GO

    Enabled, and Trusted

    To create your constraint enabled and trusted, use the base, simple ALTER TABLE command, naming the column in the child table that references the parent (this could also be part of the CREATE TABLE statement):

    ALTER TABLE Demo.ChildTable
        ADD CONSTRAINT ChildTable$ref$ParentTable
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable(ParentTableId);


    ALTER TABLE Demo.ParentTable
        ADD CONSTRAINT ParentTable$ref$GrandParentTable
            FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    GO

    The data in the table will be checked, and as such the constraints will be trusted and enabled. It never hurts to check your constraints' status using the system catalog views:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    0              0           ChildTable$ref$ParentTable
    0              0           ParentTable$ref$GrandParentTable

    Enabled, Not Trusted

    Now, if you want to create the constraints as not trusted, not checking any existing data, use the WITH NOCHECK option on the ALTER TABLE statement:

    --Drop the existing constraint for the demo
    ALTER TABLE Demo.ChildTable
        DROP CONSTRAINT ChildTable$ref$ParentTable;

    --Add the constraint:
    ALTER TABLE Demo.ChildTable
        WITH NOCHECK --<<< Added this
            ADD CONSTRAINT ChildTable$ref$ParentTable
                FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable(ParentTableId);

    Now, check the system catalog, and the ChildTable$ref$ParentTable constraint is not trusted, but it is enabled:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    1              0           ChildTable$ref$ParentTable
    0              0           ParentTable$ref$GrandParentTable


    Disabled, Not Trusted

    This is where the syntax gets kind of confusing. Disabling a constraint is pretty straightforward (though slightly confusing in that NOCHECK shows up again with a slightly different meaning):

    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 

    You can see in the metadata that it is now neither trusted nor enabled:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    1              0           ChildTable$ref$ParentTable
    1              1           ParentTable$ref$GrandParentTable

    You can re-enable the constraint using the CHECK argument on the ALTER TABLE statement:

    --For not trusted, simply use CHECK:
    ALTER TABLE Demo.ParentTable
        CHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 

    --To enable and make trusted add WITH CHECK, leading to my FAVORITE bit of DDL syntax, WITH CHECK CHECK:
    ALTER TABLE Demo.ParentTable
        WITH CHECK CHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 
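
    If you query the metadata after the WITH CHECK CHECK version, both flags should be back to 0 for ParentTable$ref$GrandParentTable:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';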

    But how do you create it disabled? The answer: you can't, at least not in one statement, based on how the SQL Server tools script a disabled constraint. I guessed the answer, but that didn't make it seem any less weird to me. It is a two-step process: first, create the constraint WITH NOCHECK, and then disable it:

    --DROP the existing constraint on the Demo.ParentTable:
    ALTER TABLE Demo.ParentTable
        DROP CONSTRAINT ParentTable$ref$GrandParentTable;

    --Then create the disabled constraint by adding the non-trusted version, then disabling it
    ALTER TABLE Demo.ParentTable
        WITH NOCHECK
            ADD CONSTRAINT ParentTable$ref$GrandParentTable
                FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable;
    GO 

    Of course, you don't have to create it as non-trusted in order to disable it, but creating a constraint and checking all of the existing data just to turn it off immediately is probably not the process you want.
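    For completeness, a sketch of that alternative, using the same demo tables (WITH CHECK is the default, so the constraint starts trusted):

    ALTER TABLE Demo.ParentTable
        DROP CONSTRAINT ParentTable$ref$GrandParentTable;

    --the data is validated, so the constraint starts trusted...
    ALTER TABLE Demo.ParentTable
        WITH CHECK
            ADD CONSTRAINT ParentTable$ref$GrandParentTable
                FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    --...but disabling it marks it not trusted again anyway
    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable;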

    Note: changed first sentence to be a bit more clear…

  • Speaking at the Nashville SQL Server User's Group on April 28th.

    And they say you can't go home again... On the 28th of April I will be returning to my old home of Nashville to speak at the Nashville SQL Server User's Group. (And where I moved to was very near the home I lived in when I was in high school.)

    The session I will be doing is a one-off session of demos of some of the new T-SQL features that are in SQL Server 2016 SP1, using code that I have culled from the blogs I have done here on SQLBlog.com: Temporal, Row Level Security, Dynamic Data Masking, and a few queries to demonstrate JSON and some of the other new features like CREATE OR ALTER, DROP … IF EXISTS, etc.

    Here is the abstract:

    A Survey of New T-SQL Features in SQL Server 2016 (SP1)

    Last year, Robert Verell gave a presentation to the group on 2016 features where he gave examples of the new features in SQL Server 2016 at the product level. Our promise then was to give deeper dive sessions on the features, so this presentation is demo filled on the T-SQL features that are new for 2016. The features covered will include:

    Temporal Tables - The new table structures and syntax that let you capture history of all changes to a table, allowing you to query the table as it existed at a certain point in time in a very straightforward manner.

    Row Level Security - A mechanism to filter a user's access to data at the row level, in a manner that your code (stored procedures and views included) will honor with no changes to your existing code.

    Data Masking - Lets you define a pattern to mask a column's output in your T-SQL statements to obfuscate sensitive data.

    JSON - Allows you to format the output of a query as a JSON document (much like the FOR XML clause does for XML).

    New Syntax and Commands - Even if you don't use any of these new features in your coding, I will also show a selection of new functions and syntax constructs that you will definitely want to know about.

  • SQL Saturday Birmingham #593 is this weekend!

    It has been a while since I was last in Birmingham (Alabama) for a SQL Saturday. They didn't have one last year, and the event they did have was on the same weekend as Music City Code, so I wasn't able to attend. This year, I can't make it to SQL Saturday in Richmond due to this event, so I am fairly sure things have balanced themselves out.

    I will be doing two sessions (my two current sessions that I have been doing recently). The first is my favorite session ever to give: lots of demos that work, and material I actually know by heart. The second has no prep whatsoever, so I don't hate it either.

    Let Me Finish... Isolating Write Operations

    OLTP databases can be constantly written to and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency.  How this isolation is handled is based on the isolation level, whether the classic lock based or the newer optimistic scheme of the in-memory OLTP engine is used, or even if both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations to explore how this may affect your application through error messages and performance hits.

    Then I am doing a second professional development session with Robert Verell, and any other career nerd types that are there for the conference:

    Discuss Your Career (Panel)

    As a database professional, you have plenty of choice in your career path. Should you stay at the same company for 20 years, or change every few years? Much depends on what matters most to you. Do you want higher income, regular work hours, training, vacation time, group insurance, job security, lots of travel, telecommuting, or the possibility to advance to management?  The panel will give their observations and experience over decades of experience. Discussion will include at least one person who has spent over 19 years with the same employer, one person who has changed careers very recently, and everyone in attendance who is willing to give an opinion.

    The goal of this session will be to have a discussion about career matters. If attendees have questions about how to manage their career, we will discuss them; if no one has questions, there will be slides with different priorities to talk through. I would note that this session is not specifically about whether or not to change companies, positions, etc. The goal is to discuss the pros and cons of staying in one place, changing frequently, and so on. There are so many factors to consider that it is maddening. If you don't learn anything, maybe you will teach someone else something about their career. And if you neither learn nor teach, the prize giveaways start right after this session, so it is a no-lose situation.
