MySQL Lockhunter Query

This requires that metadata lock instrumentation be enabled.

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';
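Note that the UPDATE above only lasts until the server restarts. To enable the instrument permanently, you can also set it in my.cnf (a sketch; the exact config file location varies by distro):

[mysqld]
performance-schema-instrument='wait/lock/metadata/sql/mdl=ON'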

Once that’s in place, you can do this:

SELECT
  ml.object_schema,
  ml.object_name,
  p.id,
  p.user,
  p.state,
  ml.lock_type,
  ml.lock_duration,
  ml.lock_status,
  p.time,
  LEFT(p.info, 100) AS query_snippet
FROM
  performance_schema.metadata_locks ml
  JOIN performance_schema.threads t
    ON t.thread_id = ml.owner_thread_id
  JOIN information_schema.processlist p
    ON p.id = t.processlist_id
WHERE ml.object_name = 'dim_member'  -- swap in the table you're hunting locks on
ORDER BY p.time DESC;
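If you also want to see who is blocking whom, here is a small variation of the same query (my tweak, same assumptions, still filtering on dim_member) that groups sessions by lock_status: PENDING rows are the sessions stuck waiting, GRANTED rows are the culprits holding the lock.

SELECT
  ml.lock_status,
  p.id,
  p.user,
  p.time,
  LEFT(p.info, 100) AS query_snippet
FROM
  performance_schema.metadata_locks ml
  JOIN performance_schema.threads t
    ON t.thread_id = ml.owner_thread_id
  JOIN information_schema.processlist p
    ON p.id = t.processlist_id
WHERE ml.object_name = 'dim_member'
ORDER BY ml.lock_status, p.time DESC;

Once you have the offending connection ID, KILL <id> (used judiciously) releases the lock.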

Screen Fix

The screen utility is not included on some server builds. As a DBA you *have* to have screen or tmux: otherwise, when the SSH daemon times out and boots you off the server, your long-running scripts and DML die with the session.

As a workaround, I copy the screen binary from /usr/bin/screen on a compatible (same OS and architecture) server that does have it installed and put it in my ~/bin directory.
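For example (a sketch; the donor hostname is a placeholder):

mkdir -p ~/bin
scp donor-host:/usr/bin/screen ~/bin/
chmod 755 ~/bin/screen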

Ordinarily, screen requires write access to /var/run/screen, but there's a workaround for that too.

Fortunately, screen reads an environment variable, SCREENDIR, that points it at an alternative socket directory.

So to work around it, you can create a directory, such as ~/.screen:

mkdir ~/.screen && chmod 700 ~/.screen

and export SCREENDIR to point to that directory:

export SCREENDIR=$HOME/.screen

You can also put that export line into your ~/.bashrc or ~/.profile so it takes effect in every future session.
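Or, in one shot (a trivial sketch that creates the directory and persists the export):

mkdir -p ~/.screen && chmod 700 ~/.screen
echo 'export SCREENDIR=$HOME/.screen' >> ~/.bashrc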

Setting up DBeaver for use with Redshift

Step by step 

Setting up DBeaver for use with Redshift is not the most intuitive thing you'll ever do. A common misconception is that since Redshift is (sorta) built on Postgres, a Postgres driver is the correct choice. Alas, nope.

Here is a quick how-to for setting up DBeaver as correctly as possible for Redshift.

Here's the standard DBeaver opening screen.

Right-click on your Redshift connection and choose “Edit Connection” (F4).

That will present you with the Connection Settings dialog:

Where it says “Driver name,” it's gotta be AWS / Redshift. If it isn't, click the “Edit Driver Settings” button. You'll get this:

Choose the AWS category and the ID as shown. If you don't have the driver installed, or you do but want to upgrade it, click the website link and DBeaver will fetch the most recent stable version for your OS and install it. Then you can continue setting the host, port, database, user, etc. on the previous screen.
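For reference, a Redshift JDBC URL generally takes this shape (the cluster endpoint and database name below are placeholders):

jdbc:redshift://example-cluster.abc123xyz0.us-east-1.redshift.amazonaws.com:5439/dev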

And as Steve used to say, “oh, and one more thing.”

Back on the first Edit Connection screen, there’s a menu choice called “Initialization.” Go back after your driver is configured and click THAT.

SET. THAT. KEEP ALIVE.

I'd recommend something relatively LOW to start, say, 55 secs. Try that for a while, and if it helps, gradually increase the value until it's around 5-10 minutes: still short enough to keep your connection alive, but long enough that you're not spamming the server with pings.

Redshift encryption migration

Using the AWS console:

  1. Login to the AWS Management Console.
  2. Navigate to Redshift dashboard at https://console.aws.amazon.com/redshift/.
  3. In the left navigation panel, under Redshift Dashboard, click Clusters.
  4. Click the Launch Cluster button from the dashboard top menu to start the cluster setup process.
  5. On the Cluster Details configuration page, enter a unique name for your new cluster in the Cluster Identifier field and fill out the rest of the fields available on this page with the information taken from your existing (unencrypted) cluster.
  6. Click the Continue button to continue the setup process.
  7. On the Node Configuration page, select the appropriate node type for the new cluster from the Node Type dropdown list and configure the number of nodes used to match the existing (unencrypted) cluster configuration.
  8. Click Continue to carry on with the process.
  9. On the Additional Configuration page, in the first configuration section, select KMS next to Encrypt Database. Choose the (default) aws/redshift option from the Master Key dropdown list to encrypt the new cluster using the default master key provided and managed by AWS. The default master key is an AWS-managed key that is created automatically for Redshift clusters within your AWS account. Configure the rest of the options available on the page, such as availability zone, VPC security group(s) and associated IAM role, to reflect the unencrypted cluster configuration.
  10. Click Continue to load the next page.
  11. On the Review page, review the cluster properties, its database details, the security and encryption configuration, then click Launch Cluster to build the new (encrypted) AWS Redshift cluster.
  12. On the confirmation page click Close to return to the Redshift dashboard. Once the Cluster Status value changes to available and the DB Health status changes to healthy, the new cluster can be used to load the existing data.
  13. Now unload your data from the unencrypted Redshift cluster and reload it into the newly created cluster using the Amazon Redshift Unload/Copy utility. With this utility you can unload (export) your data from the unencrypted cluster (source) to an AWS S3 bucket, encrypt it, then import the data into your new cluster (destination) and clean up the S3 bucket used. All the necessary instructions to install, configure and use the Amazon Redshift Unload/Copy utility can be found in its GitHub repository (awslabs/amazon-redshift-utils); see also the UNLOAD/COPY sketch after this list.
  14. As soon as the migration process is completed and all the data is loaded into your new (encrypted) Redshift cluster, you can update your application configuration to refer to the new cluster endpoint.
  15. Once the Redshift cluster endpoint is changed within your application configuration, you can remove the unencrypted cluster from your AWS account by performing the following actions:
    1. In the navigation panel, under Redshift Dashboard, click Clusters.
    2. Choose the Redshift cluster that you want to remove, then click its identifier link listed in the Cluster column.
    3. On the selected cluster Configuration tab, click the Cluster dropdown button from the dashboard main menu, then select Delete from the dropdown list.
    4. Inside the Delete Cluster dialog box, enter a unique name for the final snapshot in the Snapshot name box then click Delete to confirm the action. Once the snapshot is created the unencrypted cluster removal process begins.
  16. Repeat steps 4-15 to enable data-at-rest encryption for other Redshift clusters launched in the current region.
  17. Change the AWS region from the navigation bar and repeat the entire process for other regions.    
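For step 13, if you would rather move the data by hand than use the utility, the underlying commands are Redshift's UNLOAD and COPY. A minimal sketch per table; the table name, S3 bucket, and IAM role ARN are all placeholders:

-- On the old (unencrypted) cluster: export the table to S3
UNLOAD ('SELECT * FROM my_table')
TO 's3://my-migration-bucket/my_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
ALLOWOVERWRITE;

-- On the new (encrypted) cluster: import it back
COPY my_table
FROM 's3://my-migration-bucket/my_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';

The Unload/Copy utility automates exactly this loop across tables, plus the S3 cleanup.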

Dump MySQL Triggers And Routines, No Data

mysqldump will back up all the triggers by default, but NOT the stored procedures/functions. There are two mysqldump parameters that control this behavior:

   * --routines – FALSE by default
   * --triggers – TRUE by default

This means that if you want an existing backup script to also include the triggers and stored procedures, you only need to add the --routines command-line parameter (db_name below is a placeholder for your database):

mysqldump --routines db_name > outputfile.sql

Let's assume we want to back up ONLY the stored procedures and triggers, and not the tables and data (this can be useful for importing them into another db/server that already has the data but not the stored procedures and/or triggers). Then we should run something like:

mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt db_name > outputfile.sql

and this will save only the procedures/functions/triggers of the given database. If you need to import them into another db/server, you will have to run something like:

mysql db_name < outputfile.sql
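To confirm the routines and triggers landed on the destination (db_name is the same placeholder as above):

SELECT ROUTINE_TYPE, ROUTINE_NAME
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA = 'db_name';

SELECT TRIGGER_NAME
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'db_name';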