Fixing Problematic SQL Server Indexes

Indexes can reduce (though not always) the performance of INSERT, UPDATE, DELETE, and even SELECT statements, because every index on a table must be maintained on each write.

As a general rule, try to have around 5 indexes per table and 5 columns per index. This is not a hard and fast rule, just a reasonable starting point.
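
If you want to see how a database measures up against the index-count half of that guideline, the system catalog views make it easy to check. A minimal sketch (the > 5 threshold is just the guideline above):

-- Count indexes per user table; flag tables above the ~5 guideline.
SELECT o.name AS table_name, COUNT(*) AS index_count
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND o.type = 'U'   -- user tables only
  AND i.type > 0     -- skip heaps
GROUP BY o.name
HAVING COUNT(*) > 5
ORDER BY index_count DESC;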

Indexes can reduce the performance of a SELECT statement even when the index is not being used, in part because the optimizer still has to consider it when compiling the plan.
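
Unused indexes are the first candidates to fix. One way to spot them is sys.dm_db_index_usage_stats, which tracks reads and writes per index since the last SQL Server restart. A minimal sketch that lists nonclustered indexes that are written to but never read:

-- Indexes with writes but zero reads since the last restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_updates AS writes,
       s.user_seeks + s.user_scans + s.user_lookups AS reads
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC;

Treat the results with care: the stats reset on restart, so a low read count shortly after a reboot proves nothing.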

How to download attachments from Salesforce using DBAmp

SF_DownloadBlobs downloads the binary content of a Salesforce object (Attachment, Knowledge Article, etc.) into a directory on the SQL Server machine. SF_DownloadBlobs uses the Salesforce Bulk API and consumes one API call per file downloaded. Make sure you have enough daily API calls available in your Salesforce org before running SF_DownloadBlobs.
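
Since the procedure costs one API call per file, you can estimate the spend up front by counting the rows in your input table. A minimal sketch, assuming a local Attachment table replicated as in the example later in this post:

-- One Bulk API call per row: this count is the API spend.
SELECT COUNT(*) AS api_calls_needed FROM Attachment;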

SF_DownloadBlobs takes as input a local SQL Server table that is designated as the “input” table:
Input Table
• Holds the record Ids of a Salesforce object that contains binary content on Salesforce.
• The input table name prefix must be the name of a valid Salesforce object (e.g., Attachment and Attachment_Test are valid; AttachmentTest is not).
• The input table must contain the Id field of the Salesforce object; all other fields are ignored.
• The input table must contain at least one record.
• The input table can be the table created by SF_Replicate or a table you create manually (see the sketch below).
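
A minimal sketch of a manually created input table, assuming a replicated Attachment table already exists locally; the table name and ParentId filter value are hypothetical:

-- Build an input table holding only the Ids to download.
-- The "Attachment" prefix must match a valid Salesforce object name.
SELECT Id
INTO Attachment_ToDownload
FROM Attachment
WHERE ParentId = '0016000000AAAAA';  -- hypothetical filter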


SF_DownloadBlobs is a stored procedure that creates files in a local directory with the contents of the binary field(s) of a Salesforce object.

File Name
The file name is based on the following template:
Id_fieldName.File
For example, consider the following file name: 00P6000000BR8e1EAD_body.File
This file belongs to the attachment with Id 00P6000000BR8e1EAD and contains the binary contents of the Body field.

Syntax
exec SF_DownloadBlobs 'table_server', 'input_table'
where table_server is the name of your linked server and input_table is the name of a valid input table.

The following example downloads the binary file(s) of the Attachment table into a local directory on the server called the "Blob Directory". This example uses SF_Replicate to create the input table.

Instructions

  1. Create the input table using SF_Replicate. Normally, the Body column of the local Attachment table is null because SF_Replicate does not download the binary content.
    1. exec SF_Replicate 'SALESFORCE', 'Attachment'
  2. Create the Blob Directory:
    1. Run the DBAmp Configuration Program
    2. Navigate to Configuration/Options Dialog
    3. Create a Blob Directory using the browse button
  3. Run the SF_DownloadBlobs stored procedure to create files containing the binary field(s) of the Attachment object in the Blob Directory: exec SF_DownloadBlobs 'SALESFORCE', 'Attachment'
  4. After execution, the Blob Directory contains the individual Attachment files.

Note
The Base64 Maximum Field Size registry setting in the Registry Settings dialog on the DBAmp Configuration Program must be set to 0.

How does an index fill factor affect fragmentation?

When you set fill factor below 100%, you are effectively creating fragmentation up front.

You’re purposely leaving empty space on every page in an index, which:

  1. Makes your database larger
  2. Makes scans take longer
  3. Makes backups, checkdb take longer
  4. Makes index maintenance take longer
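
As a worked example, a fill factor of 70 leaves 30% of every page empty, so the affected index occupies roughly 100/70 ≈ 1.43 times the space it strictly needs. Fill factor is applied per index at rebuild time; a minimal sketch with illustrative object names:

-- Rebuild with pages packed full (100); lower values reserve free space per page.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 100);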

How often should you do corruption checks if your databases are large?

Usually we skip corruption checks until the weekend, since a full check takes a really long time, especially when your databases are larger than 400GB.

For example, a server with 8 cores and 64GB of RAM would take about 15 minutes to run a complete DBCC CHECKDB with logical checks against a 200GB database.

Instead, I would suggest the following:

Weekdays: DBCC CHECKDB (physical_only)
Weekends: DBCC CHECKDB (extended checks)

This way, the physical_only check takes about 5 minutes instead of the 15 minutes a full check needs, and you are still covered in some way rather than running no checks whatsoever.
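
For reference, here is what the two levels look like as commands, assuming a database named YourDb:

-- Weekdays: fast page-level checks only.
DBCC CHECKDB (YourDb) WITH PHYSICAL_ONLY, NO_INFOMSGS;
-- Weekends: full check including extended logical checks.
DBCC CHECKDB (YourDb) WITH EXTENDED_LOGICAL_CHECKS, NO_INFOMSGS;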
