Audit the SQL server logins

Here is a trigger that will enable you to log ALTER LOGIN activity on a SQL Server instance. The trigger inserts a record for each ALTER LOGIN event, logging which user changed the login and when. For this to work, the Audit table needs to be created beforehand.

Script to create Audit table:

USE [Enter DB Name here]
GO

CREATE TABLE [dbo].[Audit](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[QueryText] [varchar](260) NULL,
	[EventDescription] [nvarchar](260) NULL,
	[eventtime] [datetime] NULL
)
GO

Script to create the trigger:

USE master
GO

IF EXISTS (SELECT * FROM sys.server_triggers WHERE [name] = 'DDL_Trigger_Login_Changed')
    DROP TRIGGER [DDL_Trigger_Login_Changed] ON ALL SERVER
GO

CREATE TRIGGER [DDL_Trigger_Login_Changed]
ON ALL SERVER
FOR ALTER_LOGIN
AS
BEGIN
    DECLARE @results varchar(max)
    DECLARE @subjectText varchar(max)
    SET @subjectText = '*ALERT* DATABASE LOGIN changed on ' + @@SERVERNAME + ' by ' + SUSER_SNAME()
    SET @results = (SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'))

    /* Insert the change event into the Audit table */
    INSERT INTO [Enter DB Name here].[dbo].[Audit] (QueryText, EventDescription, eventtime)
    VALUES (@results, @subjectText, GETDATE())

    /* Send an email when a login is altered */
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'Email profile',
        @recipients = 'Enter Email Address here',
        @body = @results,
        @subject = @subjectText,
        @exclude_query_output = 1 -- Suppress 'Mail Queued' message
END
GO

Perform a Silent Install of SQL Server Management Studio (SSMS)

Here is how to silently install SQL Server Management Studio:

  1. Download the installer from the official Microsoft download page.
  2. Run the silent installer using the following command line:
SSMS-Setup-ENU.exe /install /quiet /norestart

How did you figure out what switches to use? By running

SSMS-Setup-ENU.exe /?

you will be presented with a helpful dialog documenting the supported switches.

[Screenshot: SSMS installer help showing the supported command-line switches]

Other supported modes are uninstall, repair, and layout. It is also possible to set the location of the log file created by the installer.
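For example, combining the install switches with a log path might look like the line below; the path is just an example, and it is worth confirming the switch against the /? output of your installer version:

SSMS-Setup-ENU.exe /install /quiet /norestart /log "C:\Temp\SSMS-Install.log"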

Copy data between SQL server tables

For ad hoc data copy requests, I had always used the SSMS GUI to copy data from one instance to another. That is, until I recently thought to use dbatools to replace my workflow. I am glad I came across the Copy-DbaDbTableData command, which replaces my workflow with a one-line command. The more I use dbatools, the more I love its simplicity and the constant improvements that go into this open source tool.

Copy-DbaDbTableData -SqlInstance SQLSERVER01 -Destination SQLSERVER02 -Database DB1 -Table dbo.test_data -DestinationTable dbo.test_data1 -AutoCreateTable

AutoCreateTable – Creates the table on the destination database if it does not already exist

DestinationTable – Lets you give the destination table a different name if you don't want to use the same name as the source table

Copy-DbaDbTableData -SqlInstance SQLSERVER01 -Destination SQLSERVER02 -Database DB1 -Table dbo.test_data -DestinationTable dbo.test_data1 -Truncate -KeepIdentity

Use the Truncate option if you would like to truncate the destination table before copying the rows.

Use the KeepIdentity option if you would like to retain the identity column values in the destination table.
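If you only need a subset of the source rows, Copy-DbaDbTableData also accepts a -Query parameter; the sketch below is from memory, so double-check the parameter against the dbatools documentation before relying on it:

Copy-DbaDbTableData -SqlInstance SQLSERVER01 -Destination SQLSERVER02 -Database DB1 -Table dbo.test_data -DestinationTable dbo.test_data1 -Query "SELECT * FROM dbo.test_data WHERE ID > 100" -AutoCreateTable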

Isolation levels in SQL Server

An isolation level specifies how much a transaction must be protected from resource or data modifications made by other transactions.

For example, Paul is trying to read a list of products from a table; meanwhile Jennifer comes along and changes some of the product information in the table. As part of the same transaction, Paul comes back to read the product table again, but it has changed. Do you want Paul to read the new information or not?

Isolation levels allow you to decide what would happen in scenarios like the one I’ve just described and so it should come as no surprise that they are described in terms of which concurrency side effects they allow. The following is a brief description of each of the possible unwanted concurrency side effects:

Dirty reads:

In this scenario, Paul reads an uncommitted set of data that Jennifer is working on. This can be problematic if Jennifer’s transaction fails or is rolled back.

Non-repeatable reads:

This is the situation where a piece of data, which is read twice inside the same transaction, cannot be guaranteed to contain the same information. In our example, this would mean that the second read picked up the data that Jennifer had changed in the product table.

Phantom reads:

This is the case where Jennifer inserts or deletes a row in a set of data that Paul is currently reading, so a repeated read within Paul's transaction returns a different set of rows.

Missing and double reads:

Paul can be performing a range scan on a table, and Jennifer can come along and move a row so that Paul reads it twice or misses it entirely.

Lost Updates:

This can happen when two processes read the same data and then both try to update it at the same time, but with different values. Only one of the updates will succeed and the other will be lost.

Halloween effect:

A situation where data moves position in the result set and thus could be updated multiple times.

Many people do not fully understand isolation levels and how they control locking and protect data within their environment. Because of this you will often see code full of hints, where the required locks are explicitly declared. Often this involves throwing in the NOLOCK hint for that query speed boost. Using that hint is the same as using the read uncommitted isolation level for that table. Understanding the isolation levels and how they use locking will help you understand their suitable use cases and the side effects of using them.

That's not to say that using hints is always bad; there are many use cases where they work. However, it is useful to understand what is happening behind the scenes so that you don't run into any unexpected consequences.
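As a quick illustration, the following two reads behave the same way (dbo.test_data is just a placeholder table name):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM dbo.test_data

-- Equivalent per-table hint form:
SELECT * FROM dbo.test_data WITH (NOLOCK)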

What do the isolation levels control and how do they prevent some of the unintended consequences?

The way isolation levels vary is in the locks they decide to take. Each level uses different approaches in deciding which locks are taken when data is read and how long the locks are held for. The lower isolation levels increase the ability of multiple users to access the same data, but they also increase the number of concurrency effects. Isolation levels are focused on the locks used when reading and do not interfere with the locks acquired to protect data modification.

Isolation levels can be set at the server, database, or transaction level, depending on the one you pick. Changing the isolation level in a query is as simple as adding SET TRANSACTION ISOLATION LEVEL <isolation level> at the start of your query. Please note that you can also change your isolation level during the transaction if required; however, you should carefully consider the impact before doing so.
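As a quick sketch, this sets the level for the current session and re-reads a table inside one transaction (dbo.test_data is just a placeholder table name):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION
    -- Shared locks taken by this read are held until COMMIT,
    -- so the second read is guaranteed to see the same rows
    SELECT * FROM dbo.test_data
    -- ... other work ...
    SELECT * FROM dbo.test_data
COMMIT TRANSACTION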

What are the different isolation levels?

The different isolation levels are normally split into two groups: those described as pessimistic and those that fall under optimistic. The main difference is that optimistic levels try to reduce the number of locks needed, but as a consequence they incur other overhead, such as increased tempdb usage. Optimistic levels use row versioning to prevent reads from being blocked by data modification locks. We'll start with the four pessimistic ones:

Read Uncommitted:

This isolation level specifies that statements can read rows that have been modified by other transactions but not yet committed. This is the lowest isolation level and, consequently, many side effects are present. Reads are not blocked by exclusive locks and do not need to take shared locks. This means that it allows a lot of concurrency, but you sacrifice the reliability of the data.

Read Committed:

This is the default isolation level for SQL Server. It stops statements from reading data that has been modified but not yet committed by other transactions. This prevents dirty reads from taking place, but not phantom or non-repeatable reads. It does this by using shared locks for reads.

Repeatable Read:

This isolation level stops statements from reading data that has been modified but not yet committed by other transactions. It also prevents other transactions from modifying data that has been read by the current transaction until it has completed. It does this by generating shared locks on all data that is read and holding these locks until the transaction is finished.


Serializable:

Statements are prevented from reading data that has been modified but not yet committed by other transactions. Transactions cannot modify data that has been read by the current transaction until the current transaction completes. Other transactions aren't allowed to insert new rows into a table read by the current transaction if their key values fall in the range of keys read by any statements in the current transaction; they are blocked until the transaction completes. Range locks are placed on the range of key values that match the search conditions of each statement executed in the transaction.

Due to this the serializable isolation level allows for the lowest level of concurrency.

We will now move on to the two optimistic levels. It is important to note that these settings need to be enabled at the database level.


Snapshot:

There is no locking since it uses row versioning in tempdb. It will read a transactionally consistent version of the data that existed at the start of the transaction. Therefore, it can only recognize data modifications that were committed before the start of the transaction.

Read Committed Snapshot:

Similar to read committed, but uses row versioning instead. Each statement uses a transactionally consistent snapshot of the data showing how it existed at the time the transaction started. This means that locks are not used or needed to protect the data from updates by other transactions.
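Both optimistic levels are switched on per database with ALTER DATABASE; a sketch, where DB1 is just a placeholder database name:

-- Allow sessions to request the Snapshot isolation level explicitly
ALTER DATABASE [DB1] SET ALLOW_SNAPSHOT_ISOLATION ON

-- Make the default read committed behaviour use row versioning
ALTER DATABASE [DB1] SET READ_COMMITTED_SNAPSHOT ON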

Below is a summary of isolation levels and their side effects in a tabular format:

| Level                   | Dirty reads | Non-repeatable | Phantom | Missing or double reads | Lost updates | Halloween |
|-------------------------|-------------|----------------|---------|-------------------------|--------------|-----------|
| Read Uncommitted        | Yes         | Yes            | Yes     | Yes                     | Yes          | No        |
| Read Committed          | No          | Yes            | Yes     | Yes                     | Yes          | No        |
| Repeatable Read         | No          | No             | Yes     | No                      | No           | No        |
| Read Committed Snapshot | No          | Yes            | Yes     | Yes                     | Yes          | No        |