MySQL DDL Diff

#!/bin/bash
# Expects: host schema
# If no previous *.sql.1 copy exists, create one.
# If one does, diff against it and report.

# Set this manually. Everything else will branch off this.
BASEDIR="/data/backups/ddl/"
DBUSER="svc_dba_adhoc"
DBPASS="PETRDKZLbgdvcB24nvPu"
TZ="America/Los_Angeles"
EMAILTO="vare@example.com"

if [ -z "$2" ]
then
    echo " "
    echo "USAGE: ddldiff <host> <schema>"
    echo " "
    exit 1
fi

HOST=${1}
SCHEMA=${2}
TMPFILE="/tmp/${HOST}-${SCHEMA}.tmp"
/bin/rm ${TMPFILE} 2>/dev/null

echo " " > ${TMPFILE}

BACKUPDIR="${BASEDIR}${HOST}/${SCHEMA}/"

mkdir -p "${BACKUPDIR}" 2> /dev/null

# Rename older .sql files

cd "${BACKUPDIR}" || exit 1

for i in $(seq 19 -1 1)
do
    for file in *."${i}" ; do mv -f "${file}" "${file%.${i}}.$((i+1))" 2> /dev/null ; done
done
for file in *.sql ; do mv -f "${file}" "${file}.1" 2> /dev/null ; done

#test -d "$BACKUPDIR" || mkdir -p "$BACKUPDIR"
# Get the table list for this schema from information_schema

for table in $(mysql -B -s -h ${HOST} -u $DBUSER --password=$DBPASS -e "SELECT TABLE_NAME FROM information_schema.TABLES where TABLE_SCHEMA='${SCHEMA}' ORDER BY TABLE_NAME")
do
#echo "${SCHEMA} - ${table}"
#touch "${BACKUPDIR}/${table}.sql"
# dump each table's DDL to a separate file
/bin/rm "${table}.sql" 2> /dev/null
tmpfile="${table}.tmp"
/usr/bin/mysqldump -u $DBUSER --password=$DBPASS --no-data -h ${HOST} ${SCHEMA} ${table} > ${tmpfile}

# Strip the volatile AUTO_INCREMENT=nnn token so counter churn doesn't register as a DDL change
while IFS= read -r line
do
    if grep -q "AUTO_INCREMENT=" <<<"${line}"; then
        echo "${line}" | cut -d" " -f1-2,4- >> ${table}.sql
    else
        echo "${line}" >> ${table}.sql
    fi
done < "$tmpfile"
/bin/rm "$tmpfile" 2> /dev/null

/usr/bin/diff ${table}.sql ${table}.sql.1 | egrep -v "Dump completed|^---" | sed '1d;$d' > ${table}.diff
# First run for this table: nothing to diff against yet
if [ ! -f "${table}.sql.1" ]; then continue; fi

lastchangedsec=$(stat -c%Y "${table}.sql.1")
lastchanged=$(TZ="${TZ}" date -d "@${lastchangedsec}")

diffsize=$(stat -c%s "${table}.diff")

if (( diffsize > 10 )); then
    echo " " >> ${TMPFILE}
    echo "------------------------------------------------------------------------------------------" >> ${TMPFILE}
    echo "${SCHEMA}.${table} has changed since ${lastchanged}:" >> ${TMPFILE}
    echo "------------------------------------------------------------------------------------------" >> ${TMPFILE}
    # Relabel diff's "<" (old) and ">" (new) markers so they survive the HTML mail
    sed -e 's/^</Old: /' -e 's/^>/New: /' "${table}.diff" >> ${TMPFILE}
fi

done

# Now, send email, if needed

tmpsize=$(stat -c%s "${TMPFILE}")

if (( tmpsize > 5 )); then
    echo "Sending email"
    # Wrap the report in bare-bones HTML so the text/html mail keeps its line breaks
    echo "<html><body><pre>" > ${TMPFILE}.mail
    cat ${TMPFILE} >> ${TMPFILE}.mail
    echo "</pre></body></html>" >> ${TMPFILE}.mail
    mail -s "${HOST} ${SCHEMA} DDL audit: Changes detected" -a "Content-Type: text/html" ${EMAILTO} < ${TMPFILE}.mail
    # /usr/bin/mail ${EMAILTO} -s "${HOST} ${SCHEMA} DDL audit: Changes detected" < ${TMPFILE}
fi
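The natural way to run this is from cron. A sketch of a crontab entry; the install path, host, and schema here are examples, not taken from the script above:

```shell
# Hypothetical crontab entry: audit db01's "sales" schema nightly at
# 02:15 and mail any DDL drift. Adjust path, host, and schema to taste.
15 2 * * * /usr/local/bin/ddldiff db01.example.com sales
```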

MySQL Fragmentation Calculator

-- SQL snippet to calculate table fragmentation (sizes in MB).
SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  ROUND(DATA_LENGTH / 1024 / 1024) AS data_length_mb,
  ROUND(INDEX_LENGTH / 1024 / 1024) AS index_length_mb,
  ROUND(DATA_FREE / 1024 / 1024) AS data_free_mb,
  CONCAT(
    ROUND(
      (
        DATA_FREE / (INDEX_LENGTH + DATA_LENGTH)
      ) * 100
    ),
    '%'
  ) AS frag_ratio
FROM
  information_schema.TABLES
WHERE DATA_FREE > 0
  AND TABLE_SCHEMA = '%%SCHEMA%%'
ORDER BY DATA_FREE / (INDEX_LENGTH + DATA_LENGTH) DESC;
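To make the frag_ratio arithmetic concrete, here is the same calculation in Python with made-up numbers (these figures are illustrative, not from any real table):

```python
# Sketch of the frag_ratio math: a hypothetical table with 900 MB of
# data, 100 MB of indexes, and 250 MB of free (reclaimable) space.
data_length = 900
index_length = 100
data_free = 250

# Free space as a percentage of used space, rounded like the SQL above.
frag_ratio = round(data_free / (index_length + data_length) * 100)
print(f"{frag_ratio}%")  # 250 / 1000 * 100 -> 25%
```

A table showing 25% here is carrying a quarter of its footprint as reclaimable space, which is usually worth an OPTIMIZE or a rebuild.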

MySQL Lockhunter Query

This requires that metadata lock instrumentation be enabled.

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';
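Note that the UPDATE only lasts until the next server restart. To enable the instrument permanently, you can set it at startup in my.cnf; a sketch (in MySQL 8.0 this instrument is enabled by default, so this mostly matters on 5.7):

```ini
[mysqld]
performance-schema-instrument='wait/lock/metadata/sql/mdl=ON'
```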

Once that’s in place, you can do this:

SELECT
  ml.object_schema,
  ml.object_name,
  p.id,
  p.user,
  p.state,
  ml.lock_type,
  ml.lock_duration,
  ml.lock_status,
  p.time,
  LEFT(p.info, 100) AS query_snippet
FROM
  performance_schema.metadata_locks ml
  JOIN performance_schema.threads t
    ON t.thread_id = ml.owner_thread_id
  JOIN information_schema.processlist p
    ON p.id = t.processlist_id
WHERE ml.object_name = 'dim_member'
ORDER BY p.time DESC;

Screen Fix

The screen utility is not included on some server builds. As a DBA you *have* to have screen or tmux, or long-running scripts and DML will die when the ssh daemon times out and boots you off the server.

As a workaround, I copy the "screen" binary from /usr/bin/screen on a compatible server that does have it installed and put it in my ~/bin directory. Ordinarily, screen also requires write access to /var/run/screen, but there's a workaround for that, too.

Fortunately, screen reads an environment variable, SCREENDIR, that points it at an alternate socket directory.

So to work around it, you can create a directory, such as ~/.screen:

mkdir ~/.screen && chmod 700 ~/.screen

and export the SCREENDIR to point to that directory:

export SCREENDIR=$HOME/.screen

You can also put that export into your ~/.bashrc or ~/.profile so it takes effect in all future sessions.
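Putting the steps above together, the whole workaround looks like this (the ~/bin/screen path is hypothetical; adjust to wherever you copied the binary):

```shell
# One-time setup for running a privately copied screen binary
# without write access to /var/run/screen.
mkdir -p "$HOME/.screen"
chmod 700 "$HOME/.screen"
export SCREENDIR="$HOME/.screen"

# screen will now keep its sockets in $SCREENDIR, e.g.:
#   "$HOME/bin/screen" -S longjob
```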

Setting up DBeaver for use with Redshift

Step by step 

Setting up DBeaver for use with Redshift is not the most intuitive thing you’ll ever do. A common misconception is that since Redshift is (sorta) built on Postgres, then a Postgres driver is the correct choice. Alas, nope.

Here is a quick how-to for setting up DBeaver as correctly as possible for Redshift.

Here’s the standard DBeaver opening screen.

Right-click on your Redshift connection and choose “Edit Connection (F4)”

That will present you with the Connection Settings dialog:

Where it says “Driver name,” it’s gotta be AWS / Redshift. If it doesn’t say that, click the “Edit Driver Settings” button. You’ll get this:

Choose the AWS category and the ID as shown. If you do not have the driver installed, or you do but want to upgrade it, click the website link and DBeaver will fetch the most recent stable version for your OS and install it. Then you can continue setting the host, port, database, user, etc. in the previous screen.

And as Steve used to say, “oh, and one more thing.”

Back on the first Edit Connection screen, there’s a menu choice called “Initialization.” Go back after your driver is configured and click THAT.

SET. THAT. KEEP ALIVE.

I’d recommend something relatively LOW, say, 55 secs. Try that for a while and, if it helps, gradually increase the value until it’s around 5-10 minutes: long enough an interval to still keep your connection alive, but not so low that it’s needlessly chatty.