PowerShell to mirror the folder structure of an entire drive to another drive

To mirror the folder structure of an entire drive to another drive using PowerShell, you can iterate through every folder on the source drive and recreate it on the destination. Here’s the script:

# Define source and destination drives (include the trailing backslash so the drive root is targeted)
$sourceDrive = "E:\"
$destinationDrive = "S:\"

# Function to recursively mirror directory structure
function Mirror-DriveFolders {
    param (
        [string]$sourceDrive,
        [string]$destinationDrive
    )

    # Get all folders on the source drive
    $sourceFolders = Get-ChildItem -Path $sourceDrive -Directory -Recurse

    foreach ($folder in $sourceFolders) {
        # Construct the corresponding destination path
        $destinationPath = $folder.FullName.Replace($sourceDrive, $destinationDrive)

        # Create the directory in the destination if it doesn't exist
        if (!(Test-Path -Path $destinationPath)) {
            New-Item -ItemType Directory -Path $destinationPath -Force | Out-Null
        }
    }
}

# Mirror the directory structure of the entire drive
Mirror-DriveFolders -sourceDrive $sourceDrive -destinationDrive $destinationDrive

Write-Host "Folder structure of drive mirrored successfully."

How do you check if logical replication is running on a postgres instance?

  1. Check if logical replication is enabled in the PostgreSQL configuration: Open the PostgreSQL configuration file (postgresql.conf), usually located in the PostgreSQL data directory, and look for the wal_level parameter. Logical replication requires wal_level to be set to logical (replica is not sufficient on the publishing side). You can also check the running value with SHOW wal_level; and find the data directory by querying the pg_settings system catalog: SELECT name, setting FROM pg_settings WHERE name = 'data_directory';
  2. Check if logical replication slots are in use: Logical replication uses replication slots to track the progress of each subscriber. You can check whether any replication slots exist by querying the pg_replication_slots system view: SELECT slot_name, plugin, slot_type, database FROM pg_replication_slots; If there are slots with slot_type of logical, then logical replication is being used.
  3. Check if there are any subscriptions: Logical replication involves publishers and subscribers. Publishers publish changes to the replication stream, and subscribers consume them. To see the subscriptions defined on the instance, query the pg_subscription system catalog: SELECT * FROM pg_subscription; If any subscriptions are listed, the instance is acting as a subscriber. On the publishing side, the pg_publication catalog lists the publications that have been created.
  4. Check if there are any replication connections: Active replication involves walsender connections from the publisher to each subscriber. You can check for active replication connections by querying the pg_stat_replication system view: SELECT * FROM pg_stat_replication; If there are entries listed here, then replication connections are active.

By following these steps, you can determine if logical replication is running on your PostgreSQL instance.
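For convenience, here is a minimal SQL sketch that runs the checks above in one pass (the catalog and view names are standard PostgreSQL objects; the pg_stat_replication column names assume PostgreSQL 10 or later):

-- 1. WAL level must be 'logical' for logical replication
SHOW wal_level;

-- 2. Logical replication slots
SELECT slot_name, plugin, slot_type, database, active
FROM pg_replication_slots
WHERE slot_type = 'logical';

-- 3. Subscriptions defined on this instance (subscriber side)
SELECT subname, subenabled FROM pg_subscription;

-- 4. Publications defined on this instance (publisher side)
SELECT pubname FROM pg_publication;

-- 5. Active replication (walsender) connections
SELECT pid, application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;

Empty results for the slot, subscription, publication, and connection queries indicate that logical replication is not currently in use on the instance.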

Shell script to create a new user and add their SSH public key to an Ubuntu VM

Below is a shell script that creates a new user and adds their SSH public key to an Ubuntu VM:

#!/bin/bash

# Function to create a new user and add SSH public key
create_user_and_add_ssh_key() {
    local username=$1
    local ssh_key_path=$2

    # Create the user without a password prompt
    sudo adduser --disabled-password --gecos "" "$username"

    # Create the .ssh directory if it does not exist
    sudo -u "$username" mkdir -p "/home/$username/.ssh"

    # Append the SSH public key to the authorized_keys file
    # (use sudo tee because the invoking user cannot write inside the new user's home)
    cat "$ssh_key_path" | sudo tee -a "/home/$username/.ssh/authorized_keys" > /dev/null

    # Set proper ownership and permissions
    sudo chown -R "$username:$username" "/home/$username/.ssh"
    sudo chmod 700 "/home/$username/.ssh"
    sudo chmod 600 "/home/$username/.ssh/authorized_keys"
}

# Example usage
new_username="newuser"
ssh_key_path="/path/to/public_key.pub"

create_user_and_add_ssh_key "$new_username" "$ssh_key_path"

Make sure to replace newuser with the desired username and /path/to/public_key.pub with the path to the SSH public key file you want to add.

Save this script to a file (e.g., create_user_and_add_ssh_key.sh), make it executable (chmod +x create_user_and_add_ssh_key.sh), and then run it as a user with sudo privileges (./create_user_and_add_ssh_key.sh).

This script will create a new user without a password prompt and add their SSH public key to the authorized_keys file.

How do you set binlog retention period on MySQL?

In MySQL, you can control the retention period of binary log files using the expire_logs_days system variable. This variable determines how long binary log files are retained before they are automatically purged. (On MySQL 8.0 and later, expire_logs_days is deprecated in favor of binlog_expire_logs_seconds, which works the same way but is expressed in seconds.) Here’s how you can set the binary log retention period:

  1. Check Current Configuration: You can check the current value of expire_logs_days by executing the following SQL query: SHOW VARIABLES LIKE 'expire_logs_days';
  2. Set Binary Log Retention Period: To set the binary log retention period to a specific number of days, you can use the following SQL statement: SET GLOBAL expire_logs_days = <number_of_days>; Replace <number_of_days> with the desired retention period.
  3. Update Configuration File (Optional): To make the change persistent across MySQL server restarts, you can add or update the expire_logs_days setting in your MySQL configuration file (e.g., my.cnf or my.ini). Add the following line: expire_logs_days = <number_of_days>
  4. Manually Purge Binary Logs (Optional): If you want to manually purge binary logs before the specified retention period, you can use the PURGE BINARY LOGS statement. For example, to purge logs older than 7 days, you can run: PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY; Be cautious while manually purging logs, as this operation cannot be undone.
  5. Restart MySQL Server (Optional): If you updated the configuration file, you might need to restart the MySQL server for the changes to take effect.

Keep in mind that setting a very short retention period may lead to potential data loss during certain recovery scenarios, so choose a value that aligns with your backup and recovery strategy. Additionally, be aware of the disk space implications of retaining binary logs for an extended period.
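Putting these steps together, here is a minimal sketch (the 7-day value is only an example; on MySQL 8.0 and later, retention is controlled by binlog_expire_logs_seconds rather than expire_logs_days):

-- Check the current retention settings
SHOW VARIABLES LIKE 'expire_logs_days';
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';   -- MySQL 8.0+

-- Set retention to 7 days on the running server
SET GLOBAL expire_logs_days = 7;                    -- MySQL 5.7 and earlier
SET GLOBAL binlog_expire_logs_seconds = 604800;     -- MySQL 8.0+ (7 days in seconds)

-- Manually purge logs older than 7 days (cannot be undone)
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;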

Top 20 SQL query optimization techniques

Here is the list of the top 20 SQL query optimization techniques I found important:

1. Create an index on very large tables (>1,000,000 rows)

2. Use EXISTS() instead of COUNT() to check whether an element exists in a table

3. SELECT fields instead of using SELECT *

4. Avoid Subqueries in WHERE Clause

5. Avoid SELECT DISTINCT where possible

6. Use WHERE Clause instead of HAVING

7. Create joins with INNER JOIN (not WHERE)

8. Use LIMIT to sample query results

9. Use UNION ALL instead of UNION wherever possible

10. Use UNION instead of a WHERE … OR … query

11. Run your query during off-peak hours

12. Avoid using OR in join queries

13. Choose GROUP BY over window functions

14. Use derived and temporary tables

15. Drop the index before loading bulk data

16. Use materialized views instead of views

17. Avoid != or <> (not equal) operator

18. Minimize the number of subqueries

19. Use INNER JOIN as little as possible when you can get the same output using a LEFT/RIGHT JOIN

20. When retrieving the same dataset frequently, use temporary tables or other reusable sources
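To make a few of these concrete, here is a small sketch using a hypothetical orders table (the table and column names are invented for illustration):

-- #2: EXISTS instead of COUNT to test whether any matching row exists
SELECT COUNT(*) FROM orders WHERE customer_id = 42;            -- counts every match
SELECT EXISTS (SELECT 1 FROM orders WHERE customer_id = 42);   -- can stop at the first match

-- #3: select only the columns you need instead of SELECT *
SELECT order_id, order_date, total
FROM orders
WHERE customer_id = 42;

-- #6: filter rows with WHERE before grouping rather than with HAVING after
SELECT customer_id, SUM(total) AS revenue
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;

-- #10: rewrite WHERE ... OR ... as a UNION so each branch can use its own index
SELECT order_id FROM orders WHERE customer_id = 42
UNION
SELECT order_id FROM orders WHERE salesperson_id = 7;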

Do you know what a query optimizer is? Its primary function is to determine the most efficient way to execute a given SQL query by finding the best execution plan. The query optimizer takes the SQL query as input and analyzes it to determine how best to execute it. The first step is to parse the SQL query and create a syntax tree. The optimizer then analyzes the syntax tree to determine the various ways the query can be executed.

Next, the optimizer generates alternative execution plans, which are different ways of executing the same query. Each execution plan specifies the order in which the tables should be accessed, the join methods to use, and any filtering or sorting operations to be performed. The optimizer then assigns a cost to each execution plan based on factors such as the number of disk reads and the amount of CPU time required to execute the query.

Finally, the optimizer chooses the execution plan with the lowest cost as the optimal execution plan for the query. This plan is then used to execute the query.
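Most databases expose the optimizer's decision through EXPLAIN. As a rough sketch (output format and ANALYZE support vary by database; the orders and customers tables are again hypothetical):

-- Show the plan the optimizer chose, without running the query
EXPLAIN
SELECT o.order_id, c.name
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= '2024-01-01';

-- EXPLAIN ANALYZE (PostgreSQL, and MySQL 8.0.18+) also runs the query
-- and reports actual row counts and timings alongside the estimates
EXPLAIN ANALYZE
SELECT o.order_id, c.name
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= '2024-01-01';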