Run the same command on all SQL Server databases

There are times when I find myself needing to run a SQL command against each database on one of my SQL Server instances. There is a handy undocumented stored procedure that lets you do this with a single statement, without resorting to cursors.

Syntax for this undocumented procedure is:

EXEC sp_MSforeachdb @command

Example: Query information from all databases on a SQL instance

--This query returns a listing of all user tables in every database on the instance:
DECLARE @command varchar(1000)
SELECT @command = 'USE [?] SELECT name FROM sysobjects WHERE xtype = ''U'' ORDER BY name'
EXEC sp_MSforeachdb @command

You can also skip the @command variable and pass the command string directly. The statement below behaves the same as the one above:

--This query returns a listing of all user tables in every database on the instance:
EXEC sp_MSforeachdb 'USE [?] SELECT name FROM sysobjects WHERE xtype = ''U'' ORDER BY name'

When I inherit a new SQL Server to manage, one of the first things I look at is the recovery model of its databases. I have received calls from sysadmin teams saying the disk is full, and a few times the culprit was a recovery model left at the default of FULL, which means every single transaction is being logged.

Unless you are running regular transaction log backups, it is recommended to switch the recovery model to SIMPLE. Under SIMPLE recovery, log space is reclaimed automatically at each checkpoint, so the transaction log does not grow unbounded.
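Before changing anything, you can check the current recovery model of every database on the instance by querying the sys.databases catalog view:

```sql
-- List each database and its current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```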

The following command will set the database recovery model on all user databases to SIMPLE.

exec sp_MSforeachdb 'IF ''?'' NOT IN (''master'',''tempdb'',''model'',''msdb'') BEGIN ALTER DATABASE [?] SET RECOVERY SIMPLE END'

You are welcome to take the above one-line command and modify the filter to add or remove databases per your needs.
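For example, the same pattern can exclude additional databases; ReportServer below is just a placeholder for any database you want to leave untouched:

```sql
-- Skip the system databases plus any others you name in the list
EXEC sp_MSforeachdb 'IF ''?'' NOT IN (''master'',''tempdb'',''model'',''msdb'',''ReportServer'')
BEGIN
    ALTER DATABASE [?] SET RECOVERY SIMPLE
END'
```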

Fixing Problematic SQL Server Indexes

Indexes can reduce (though not always) the performance of INSERT, UPDATE, DELETE, and even SELECT statements.

As a general rule, aim for around 5 indexes per table and 5 columns per index. This is not a hard and fast rule, just a good general starting point.
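One way to check the current database against this guideline is to count indexes per table with the sys.indexes catalog view, for example:

```sql
-- Tables with more than 5 indexes in the current database
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id) AS table_name,
       COUNT(*) AS index_count
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
WHERE i.type > 0          -- ignore heaps
GROUP BY i.object_id
HAVING COUNT(*) > 5
ORDER BY index_count DESC;
```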

Indexes can reduce the performance of a SELECT statement even when the index is not actually used by the query.

How to download attachment from Salesforce using DBAmp

SF_DownloadBlobs downloads the binary content of a Salesforce object (Attachment, Knowledge Article, etc.) into a directory on the SQL Server machine. SF_DownloadBlobs uses the Salesforce Bulk API and consumes 1 API call per file downloaded. Make sure you have enough daily API calls available in your Salesforce org before running SF_DownloadBlobs.

SF_DownloadBlobs takes as input a local SQL Server table that is designated as the "input" table:
Input Table
• Holds the record Ids of a Salesforce object that contains binary content on Salesforce.
• The input table name prefix must be the name of a valid Salesforce object. (e.g. Attachment and Attachment_Test are valid; AttachmentTest is not)
• The input table must contain the Id field of the Salesforce object; all other fields are ignored.
• The input table must contain at least one record.
• The input table can be the table created by SF_Replicate or a table you create manually.
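If you only need a subset of records, the input table can also be built by hand. A minimal sketch, assuming a local Attachment table created by SF_Replicate; Attachment_ToDownload is a hypothetical name, but its prefix (Attachment) must still match a valid Salesforce object:

```sql
-- Manually built input table: only the Id column is required
SELECT Id
INTO Attachment_ToDownload
FROM Attachment
WHERE CreatedDate >= '2020-01-01';   -- example filter, adjust as needed
```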


SF_DownloadBlobs is a stored procedure that creates files in a local directory with the contents of the binary field(s) of a Salesforce object.

File Name
The file name is based on the following template:
Id_fieldName.File
For example, consider the following file name: 00P6000000BR8e1EAD_body.File
This file belongs to the attachment with id 00P6000000BR8e1EAD and contains the binary contents of the Body field.

Syntax
exec SF_DownloadBlobs 'table_server','input_table'
where table_server is the name of your linked server and input_table is the name of a valid input table.

The following example downloads the binary file(s) of the Attachment table into a local directory on the server called the "Blob Directory". This example uses SF_Replicate to create the input table.

Instructions

  1. Create the input table using SF_Replicate. Normally, the Body column of the local Attachment table is null because SF_Replicate does not download the binary content.
    1. exec SF_Replicate 'SALESFORCE', 'Attachment'
  2. Create the Blob Directory:
    1. Run the DBAmp Configuration Program
    2. Navigate to Configuration/Options Dialog
    3. Create a Blob Directory using the browse button
  3. Run the SF_DownloadBlobs stored procedure to create files containing the binary field(s) of the Attachment object in the Blob Directory: exec SF_DownloadBlobs 'SALESFORCE', 'Attachment'
  4. After execution, the Blob directory contains the individual Attachment files.

Note
The Base64 Maximum Field Size registry setting in the Registry Settings dialog on the DBAmp Configuration Program must be set to 0.

How does an index fill factor affect fragmentation?

When you set fill factor below 100%, you are effectively configuring fragmentation in advance.

You’re purposely leaving empty space on every page in an index, which:

  1. Makes your database larger
  2. Makes scans take longer
  3. Makes backups and CHECKDB take longer
  4. Makes index maintenance take longer
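The effect is easy to observe on a single index: rebuild it with a lower fill factor, then inspect page fullness with sys.dm_db_index_physical_stats. The index and table names below are placeholders:

```sql
-- Rebuild a hypothetical index leaving 20% free space on each page
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);

-- Check how full the pages actually are (expect roughly 80% used)
SELECT avg_page_space_used_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
       DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'DETAILED');
```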

SQL Server IO is not fixed size

A peek under the hood

Operation                    IO Block Size
-------------------------    -----------------
Transaction log write        512 bytes – 60 KB
Checkpoint/Lazywriter        8 KB – 1 MB
Read-Ahead Scans             128 KB – 512 KB
Bulk Loads                   256 KB
Backup/Restore               1 MB
ColumnStore Read-Ahead       8 MB
File Initialization          8 MB
In-Memory OLTP Checkpoint    1 MB