Automating SQL Server User Removal with PowerShell and dbatools

Introduction

When an employee leaves or a service account is retired, it’s essential to remove their access cleanly and consistently from SQL Server.
Manually revoking access across multiple databases can be error-prone and time-consuming — especially in large environments.

In this post, we’ll look at how to use the dbatools PowerShell module to automatically remove a user from all databases (except system ones) and drop the server-level login, with full logging for audit purposes.


Prerequisites

  • Install dbatools (if not already installed): Install-Module dbatools -Scope CurrentUser -Force
  • Ensure you have sysadmin rights on the SQL instance.
  • Have the login name ready (domain or SQL account).

The PowerShell Script

<#
.SYNOPSIS
Removes a SQL Server login and its users from all user databases.
Works for both domain and SQL logins, with logging.
#>

param(
    [Parameter(Mandatory = $true)]
    [string]$SqlInstance,
    [Parameter(Mandatory = $true)]
    [string]$Login,
    [string]$LogFile = "$(Join-Path $PSScriptRoot ("UserRemovalLog_{0:yyyyMMdd_HHmmss}.txt" -f (Get-Date)))"
)

if (-not (Get-Module -ListAvailable -Name dbatools)) {
    Write-Error "Please install dbatools using: Install-Module dbatools -Scope CurrentUser -Force"
    exit 1
}

function Write-Log {
    param([string]$Message, [string]$Color = "White")
    $timestamp = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss")
    $logEntry = "[$timestamp] $Message"
    Write-Host $logEntry -ForegroundColor $Color
    Add-Content -Path $LogFile -Value $logEntry
}

Write-Log "=== Starting cleanup for login: $Login on instance: $SqlInstance ===" "Cyan"

$UserDatabases = Get-DbaDatabase -SqlInstance $SqlInstance | Where-Object { -not $_.IsSystemObject }

foreach ($db in $UserDatabases) {
    try {
        $dbName = $db.Name
        $user = Get-DbaDbUser -SqlInstance $SqlInstance -Database $dbName -User $Login -ErrorAction SilentlyContinue
        if ($user) {
            Write-Log "Removing user [$Login] from [$dbName]" "Red"
            Remove-DbaDbUser -SqlInstance $SqlInstance -Database $dbName -User $Login -Confirm:$false -ErrorAction Stop
            Write-Log "✅ Removed from [$dbName]" "Green"
        }
        else {
            Write-Log "User [$Login] not found in [$dbName]" "DarkGray"
        }
    }
    catch {
        Write-Log "⚠️ Failed in [$dbName]: $_" "Yellow"
    }
}

try {
    $loginObj = Get-DbaLogin -SqlInstance $SqlInstance -Login $Login -ErrorAction SilentlyContinue
    if ($loginObj) {
        $loginType = $loginObj.LoginType
        Write-Log "Removing server-level login [$Login] ($loginType)" "Red"
        Remove-DbaLogin -SqlInstance $SqlInstance -Login $Login -Confirm:$false -ErrorAction Stop
        Write-Log "✅ Server-level login removed" "Green"
    }
    else {
        Write-Log "No server-level login [$Login] found" "DarkGray"
    }
}
catch {
    Write-Log "⚠️ Failed to remove login [$Login]: $_" "Yellow"
}

Write-Log "=== Completed cleanup for [$Login] on [$SqlInstance] ===" "Cyan"
Write-Log "Log file saved to: $LogFile" "Gray"


How It Works

  • Get-DbaDatabase lists all user databases.
  • Get-DbaDbUser / Remove-DbaDbUser checks for and removes the user from each DB.
  • Get-DbaLogin / Remove-DbaLogin cleans up the login from the instance.
  • All actions are written to a timestamped .txt log for compliance or auditing.

Example Usage

.\Remove-DbUserFromAllDatabases.ps1 -SqlInstance "SQLPROD01" -Login "Contoso\User123"

You can also specify a custom log path:

.\Remove-DbUserFromAllDatabases.ps1 -SqlInstance "SQLPROD01" -Login "appuser" -LogFile "C:\Logs\UserCleanup.txt"


Key Takeaways

  • Fully automated and non-interactive — perfect for offboarding workflows.
  • Handles both Windows and SQL logins gracefully.
  • Creates a detailed audit log for every action taken.
  • Safe to re-run — it skips users or logins that don’t exist.

Automate SQL Server Database Role Assignment with PowerShell and dbatools

Introduction

As a SQL Server DBA, one of the most repetitive administrative tasks is granting user access to multiple databases — especially in environments with dozens or even hundreds of databases.
Instead of manually connecting to each database and assigning roles, you can automate the process with the dbatools PowerShell module.

In this post, we’ll walk through how to automatically grant a user db_datareader and db_datawriter roles across all user databases, while excluding system databases.


Prerequisites

Before running the script:

  • Install the dbatools PowerShell module: Install-Module dbatools -Scope CurrentUser -Force
  • Ensure your account has sufficient permissions (sysadmin or equivalent).
  • Know the SQL instance name and the login you want to grant permissions to.

The PowerShell Script

# Requires dbatools
# Install-Module dbatools -Scope CurrentUser -Force

$SqlInstance = "MyServer\MyInstance"  # Replace with your SQL Server instance
$Login = "MyDomain\MyUser"            # Replace with your Windows or SQL login

# Get all user databases (excluding system DBs)
$UserDatabases = Get-DbaDatabase -SqlInstance $SqlInstance | Where-Object { -not $_.IsSystemObject }

foreach ($db in $UserDatabases) {
    Write-Host "Processing database: $($db.Name)" -ForegroundColor Cyan

    try {
        # Create the user if not already present
        $user = Get-DbaDbUser -SqlInstance $SqlInstance -Database $db.Name -User $Login -ErrorAction SilentlyContinue
        if (-not $user) {
            New-DbaDbUser -SqlInstance $SqlInstance -Database $db.Name -Login $Login -Username $Login -Confirm:$false | Out-Null
        }

        # Grant roles
        Add-DbaDbRoleMember -SqlInstance $SqlInstance -Database $db.Name -Role db_datareader -User $Login -Confirm:$false -ErrorAction Stop
        Add-DbaDbRoleMember -SqlInstance $SqlInstance -Database $db.Name -Role db_datawriter -User $Login -Confirm:$false -ErrorAction Stop

        Write-Host "✅ Granted db_datareader and db_datawriter in $($db.Name)" -ForegroundColor Green
    }
    catch {
        Write-Warning "Failed to process $($db.Name): $_"
    }
}

Write-Host "Completed assigning roles for $Login on all user databases." -ForegroundColor Green


Explanation

  • Get-DbaDatabase retrieves all databases and filters out system ones.
  • New-DbaDbUser ensures the login exists as a user in each DB.
  • Add-DbaDbRoleMember grants the necessary roles.
  • The script is non-interactive (-Confirm:$false), making it perfect for automation or CI/CD pipelines.

Example Usage

.\Grant-DbRoles.ps1 -SqlInstance "SQL01" -Login "Contoso\User123"


Key Takeaways

  • Save hours by automating repetitive access management tasks.
  • dbatools provides robust error handling and clean PowerShell syntax.
  • Works seamlessly with both Windows and SQL logins.
  • Ideal for onboarding new users or service accounts.

Deep Dive: Snowflake to Azure Private Link Service Connection

Let’s break this down in plain language, building from the basics to the complete picture.

What is Azure Private Link Service? (The Analogy)

Think of Azure Private Link Service like a private, secure doorway between two buildings:

  • Building A (Snowflake’s building) is in a gated community that you don’t control
  • Building B (Your Azure resources) is your own building
  • You can’t build a bridge between the buildings (no VNet peering)
  • You can’t use the public street (no internet for PHI data)

Azure Private Link Service creates a private tunnel through the ground (Azure’s backbone network) that only connects these two specific buildings, and nobody else can use it.


1. The Purpose of Azure Private Link Service

The Problem It Solves:

Imagine you have a service (like your MySQL database) running in your private Azure network. Normally, there are only two ways for external services to reach it:

  1. Public Internet: Open a door to the internet (❌ Not acceptable for PHI)
  2. VNet Peering: Directly connect networks (❌ Not possible with Snowflake’s multi-tenant setup)

The Solution:

Azure Private Link Service provides a third way:

  • It creates a private endpoint that external services can connect to
  • Traffic flows over Microsoft’s private backbone network (never touches the internet)
  • The external service thinks it’s connecting to a simple endpoint
  • Your service remains completely private within your VNet

Real-World Analogy:

It’s like having a private phone line installed between two companies:

  • The caller (Snowflake) dials a special number (private endpoint)
  • The call routes through a private telephone network (Azure backbone)
  • It rings at your reception desk (Load Balancer)
  • Your receptionist transfers it internally (to ProxySQL, then MySQL)
  • No one can intercept the call or even know it’s happening

2. When is Azure Private Link Service Used?

Common Scenarios:

Scenario A: Third-Party SaaS Needs Private Access

  • You use a SaaS product (like Snowflake, Databricks, or vendor tools)
  • They need to access YOUR Azure resources
  • You want zero public internet exposure
  • Use Private Link Service

Scenario B: Customer Access to Your Services

  • You provide a service to customers
  • Customers are in their own Azure subscriptions
  • You want to offer private connectivity
  • Use Private Link Service

Scenario C: Cross-Region Private Access

  • Your resources are in different Azure regions
  • You want private connectivity without complex VNet peering chains
  • Use Private Link Service

Your Specific Use Case:

You’re using it because:

  1. Snowflake is external (different Azure subscription, multi-tenant)
  2. MySQL is private (VNet-integrated, no public access)
  3. PHI compliance (must avoid public internet)
  4. No VNet peering option (Snowflake’s network architecture doesn’t support it)

3. How Azure Private Link Service Connects with External Services Securely

Let me walk through this step-by-step with your Snowflake scenario:

Step 1: You Create the Private Link Service

In your Azure subscription:

Your Actions:
├─ Create Standard Load Balancer
│  └─ Backend pool: ProxySQL VMs
├─ Create Private Link Service
│  ├─ Attach to Load Balancer's frontend IP
│  ├─ Generate a unique "alias" (like a secret address)
│  └─ Set visibility controls

What happens:

  • Azure creates a special resource that “wraps” your Load Balancer
  • It generates an alias – think of this as a secret, unique address like:
    • mycompany-mysql-pls.abc123.azure.privatelinkservice
  • This alias is how external services find your Private Link Service
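Once the service exists, the alias can be read back with the Azure CLI. This is a sketch under stated assumptions — the resource group (`my-rg`) and service name (`mycompany-mysql-pls`) are illustrative placeholders, not values from this environment:

```shell
# Retrieve the Private Link Service alias to share with Snowflake
# (resource group and service name are placeholders)
az network private-link-service show \
  --resource-group my-rg \
  --name mycompany-mysql-pls \
  --query alias \
  --output tsv
```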

Step 2: You Share the Alias with Snowflake

You give Snowflake two things:

  1. The alias (the secret address)
  2. Permission (you’ll approve their connection request)

Important: The alias itself doesn’t grant access – it’s just the address. Think of it like knowing someone’s phone number doesn’t mean they’ll accept your call.

Step 3: Snowflake Creates a Private Endpoint in Their Network

In Snowflake’s Azure environment (which you don’t control):

Snowflake's Actions:
├─ Create Private Endpoint in their VNet
│  └─ Target: Your Private Link Service alias
├─ This generates a connection request
└─ Connection shows as "Pending" (waiting for your approval)

What happens:

  • Snowflake creates a network interface in THEIR VNet
  • This interface gets a PRIVATE IP address (like 10.x.x.x) in their network
  • Azure knows this interface wants to connect to YOUR Private Link Service
  • But it can’t connect yet – you must approve it

Step 4: You Approve the Connection

In your Azure portal:

You see:
├─ Connection Request from Snowflake
│  ├─ Shows their subscription ID
│  ├─ Shows requested resource (your Private Link Service)
│  └─ Status: "Pending"
└─ You click: "Approve"

What happens:

  • Azure creates a secure tunnel through its backbone network
  • This tunnel connects Snowflake’s private endpoint to your Load Balancer
  • The tunnel is encrypted and isolated at the network fabric level

Step 5: The Connection is Established

Now the complete path exists:

Snowflake OpenFlow
    ↓
Connects to: 10.x.x.x (private IP in Snowflake's VNet)
    ↓
[AZURE BACKBONE - ENCRYPTED TUNNEL]
    ↓
Arrives at: Your Load Balancer (in your VNet)
    ↓
Routes to: ProxySQL → MySQL


4. How the Secure Connection Actually Works (The Technical Magic)

Layer 1: Network Isolation

Physical Level:

  • Azure’s network uses software-defined networking (SDN)
  • Your traffic gets VXLAN encapsulation (a tunnel within the network)
  • It’s tagged with your specific Private Link connection ID
  • Other tenants’ traffic cannot see or intercept it

Analogy: It’s like sending a letter inside a locked box, which is then placed inside another locked box, with special routing instructions that only the Azure network fabric can read.

Layer 2: No Public IP Addresses

Key Point: Neither end has a public IP address

Snowflake side:
├─ Private Endpoint: 10.0.5.20 (private IP)
└─ NOT exposed to internet

Your side:
├─ Load Balancer: 10.1.2.10 (private frontend IP)
└─ NOT exposed to internet

Why this matters:

  • No IP address exists that could be attacked from the Internet
  • Port scanners can’t find it
  • DDoS attacks can’t target it

Layer 3: Azure Backbone Routing

The path traffic takes:

  1. Snowflake’s VM sends packets to its private endpoint (10.0.5.20)
  2. Azure SDN intercepts these packets
  3. Encapsulation happens: packet gets wrapped in Azure’s internal routing protocol
  4. Backbone transit: Travels through Microsoft’s private fiber network
  5. Decapsulation: Arrives at your Private Link Service
  6. Delivery: Forwarded to your Load Balancer

The security here:

  • Traffic never routes through public internet routers
  • Doesn’t traverse untrusted networks
  • Uses Microsoft’s dedicated fiber connections
  • Encrypted at the transport layer

Layer 4: Encryption in Transit

Even though it’s on a private network, data is still encrypted:

Application Layer (MySQL):
└─ TLS/SSL encryption (MySQL protocol with SSL)
   └─ Your ProxySQL and MySQL enforce encrypted connections

Transport Layer:
└─ Private Link provides network-level isolation
   └─ Even if someone accessed the physical network, data is encrypted


5. How We Ensure Azure Private Link Service is Secure

Let me give you a comprehensive security checklist:

A. Access Control (Who Can Connect?)

1. Approval Requirement

Setting: Auto-approval vs Manual approval
Your choice: MANUAL APPROVAL ✓

Why:
- You explicitly approve each connection request
- See who's requesting (subscription ID, service name)
- Can reject suspicious requests
- Can revoke access anytime

Action Item: Always use manual approval for production PHI data.

2. Visibility Settings

Options:
├─ Role-Based (Subscription): Only specific subscriptions can see your service
├─ Role-Based (Explicit list): You list exact subscription IDs
└─ Public: Anyone can request (you still approve)

Your choice: Role-Based with Snowflake's subscription ID ✓

Why:
- Limits who can even discover your Private Link Service
- Reduces surface area for connection attempts

Action Item: Configure visibility to allow ONLY Snowflake’s Azure subscription ID.

3. Connection State Monitoring

Monitor:
├─ Active connections (who's connected right now)
├─ Pending requests (who's trying to connect)
├─ Rejected connections (what you've denied)
└─ Connection history (audit trail)

Action Item: Set up Azure Monitor alerts for:

  • New connection requests
  • Approved connections
  • Connection removals

B. Network Security (Traffic Control)

1. Network Security Groups (NSGs)

On your Load Balancer subnet:

Inbound Rules:
├─ ALLOW: From the Private Link Service NAT subnet (source IPs after SNAT)
│  └─ Port: 3306 (MySQL)
├─ DENY: Everything else
└─ Priority: Specific to general

Why:
- Even though traffic comes via Private Link, add NSG layer
- Defense in depth principle
- Logs any unexpected traffic patterns

2. Load Balancer Restrictions

Load Balancer Configuration:
├─ Frontend: INTERNAL only (not public)
├─ Backend pool: Only specific ProxySQL VMs
├─ Health probes: Ensure only healthy backends receive traffic
└─ Session persistence: Configure based on OpenFlow needs

Action Item: Use Internal Load Balancer (not public-facing).

3. ProxySQL VM Security

ProxySQL VM:
├─ No public IP address ✓
├─ NSG allows ONLY from Load Balancer subnet ✓
├─ OS firewall (firewalld/iptables) allows only port 3306 ✓
├─ Managed identity for authentication (no passwords) ✓
└─ Regular security patching ✓


C. Data Security (Protecting the Payload)

1. Force TLS/SSL Encryption

On MySQL Flexible Server:

-- Require SSL connections: on Azure Database for MySQL Flexible Server,
-- require_secure_transport is a server parameter (set it to ON via the
-- portal or Azure CLI; a standalone "REQUIRE SSL" is not valid SQL)

-- Verify setting
SHOW VARIABLES LIKE 'require_secure_transport';
-- Should return: ON

On ProxySQL:

-- Configure SSL for backend connections
UPDATE mysql_servers
SET use_ssl = 1
WHERE hostgroup_id = 0;

-- Force SSL for users
UPDATE mysql_users
SET use_ssl = 1;

-- Apply and persist the changes via the ProxySQL admin interface
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

Action Item: Never allow unencrypted MySQL connections, even over Private Link.

2. Certificate Validation

MySQL SSL Configuration:
├─ Use Azure-managed certificates
├─ Enable certificate validation
├─ Rotate certificates regularly
└─ Monitor certificate expiration

ProxySQL Configuration:
├─ Validate MySQL server certificates
├─ Use proper CA bundle
└─ Log SSL handshake failures


D. Authentication & Authorization

1. MySQL User Permissions

Create a dedicated user for Snowflake OpenFlow:

-- Create dedicated user
CREATE USER 'snowflake_cdc'@'%' 
IDENTIFIED WITH mysql_native_password BY 'strong_password'
REQUIRE SSL;

-- Grant ONLY necessary permissions
-- (REPLICATION SLAVE and REPLICATION CLIENT are global privileges,
--  so they must be granted ON *.*, not on a single database)
GRANT SELECT ON your_database.* TO 'snowflake_cdc'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'snowflake_cdc'@'%';

-- Restrict to specific tables if possible
GRANT SELECT 
ON your_database.specific_table 
TO 'snowflake_cdc'@'%';

Key Principle: Least privilege – only grant what OpenFlow needs for CDC.

2. ProxySQL Authentication Layer

-- ProxySQL can add another auth layer:
-- different credentials externally vs internally

INSERT INTO mysql_users (
  username,
  password,
  default_hostgroup,
  max_connections
) VALUES (
  'snowflake_cdc',
  'external_password',
  0,   -- default hostgroup for this user's queries
  100  -- connection limit
);

-- Apply and persist via the ProxySQL admin interface
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

Benefit: You can change backend MySQL credentials without updating Snowflake.


E. Monitoring & Auditing (Detect Issues)

1. Azure Monitor Logs

Enable diagnostics on:
├─ Private Link Service
│  └─ Log: Connection requests, approvals, data transfer
├─ Load Balancer
│  └─ Metrics: Connection count, bytes transferred, health status
├─ Network Security Groups
│  └─ Flow logs: All traffic patterns
└─ ProxySQL VM
   └─ Azure Monitor Agent: OS and application logs

2. MySQL Audit Logging

-- Enable audit logging (the parameter name varies by platform:
-- audit_log_policy for the MySQL Enterprise audit plugin,
-- audit_log_enabled on Azure Database for MySQL Flexible Server)
SET GLOBAL audit_log_policy = 'ALL';

Monitor for:
├─ Failed authentication attempts
├─ Unusual query patterns
├─ Large data exports
└─ Access outside business hours

3. Alert on Anomalies

Set up alerts for:
├─ New Private Link connection requests
├─ High data transfer volumes (possible data exfiltration)
├─ Failed authentication attempts
├─ ProxySQL health check failures
├─ Load Balancer backend unhealthy
└─ NSG rule matches (blocked traffic attempts)


F. Compliance & Governance (PHI Requirements)

1. Data Residency

Ensure:
├─ Private Link Service: Same region as MySQL
├─ Load Balancer: Same region
├─ ProxySQL VMs: Same region
└─ Snowflake region: Appropriate for your PHI requirements

Why: Minimize data travel, maintain compliance boundaries

2. Encryption at Rest

MySQL Flexible Server:
├─ Enable: Customer-managed keys (CMK) with Azure Key Vault
├─ Rotate: Keys regularly
└─ Audit: Key access logs

Why: PHI must be encrypted at rest

3. Access Logs & Retention

Retention policy:
├─ Connection logs: 90 days minimum
├─ Query audit logs: Per HIPAA requirements
├─ NSG flow logs: 30 days minimum
└─ Azure Activity logs: 90 days minimum

Why: Compliance audits, incident investigation


6. Security Validation Checklist

Before going live, verify:

Pre-Production Checklist:

☐ Private Link Service visibility: Restricted to Snowflake subscription
☐ Manual approval: Enabled (no auto-approval)
☐ NSG rules: Properly configured on all subnets
☐ Load Balancer: Internal only (no public IP)
☐ ProxySQL VMs: No public IPs, managed identities configured
☐ MySQL SSL: Required for all connections
☐ Certificate validation: Enabled
☐ Least privilege: MySQL user has minimal permissions
☐ Audit logging: Enabled on MySQL and ProxySQL
☐ Azure Monitor: Diagnostic logs enabled on all components
☐ Alerts: Configured for security events
☐ Network flow logs: Enabled for NSGs
☐ Encryption at rest: Customer-managed keys configured
☐ Backup encryption: Verified
☐ Disaster recovery: Tested with Private Link maintained
☐ Documentation: Architecture diagram, runbooks created
☐ Incident response: Plan includes Private Link scenarios

Ongoing Security Practices:

☐ Weekly: Review connection logs
☐ Weekly: Check for pending connection requests
☐ Monthly: Audit MySQL user permissions
☐ Monthly: Review NSG rules for drift
☐ Quarterly: Certificate expiration check
☐ Quarterly: Penetration testing (authorized)
☐ Annually: Full security audit
☐ Continuous: Monitor alerts in real-time


Common Security Concerns & Answers

Q: Can someone intercept traffic on the Azure backbone?

A: No, for multiple reasons:

  1. Network isolation: Your Private Link traffic is isolated using VXLAN tunneling
  2. Logical separation: Azure’s SDN ensures tenant isolation at the hypervisor level
  3. Encryption: Even if someone reached the physical layer (highly unlikely given Azure’s physical security), MySQL traffic is TLS-encrypted
  4. Microsoft’s security: Azure backbone is not accessible to customers or internet

Q: How do I know Snowflake is really who they say they are?

A: Multiple verification layers:

  1. Subscription ID: You see Snowflake’s Azure subscription ID in the connection request
  2. Manual approval: You explicitly approve the connection
  3. Authentication: Even after the connection, Snowflake must authenticate to MySQL
  4. Certificate validation: TLS certificates verify server identity
  5. Coordination: You coordinate with Snowflake support to verify the timing of the connection

Q: What if someone compromises Snowflake’s environment?

A: Defense in depth protects you:

  1. Authentication required: Attacker still needs MySQL credentials
  2. Least privilege: CDC user can only SELECT and read binlog, cannot modify data
  3. Audit logging: Any unusual activity is logged
  4. Revocable access: You can instantly reject/delete the Private Link connection
  5. Network monitoring: Unusual data transfer patterns trigger alerts

Q: Is Private Link more secure than VPN or ExpressRoute?

A: Yes, for several reasons:

  1. No customer routing: You don’t manage routes or BGP
  2. No gateway management: No VPN gateways to patch/secure
  3. No shared circuits: Unlike ExpressRoute, Private Link is point-to-point
  4. Simpler security model: Less complexity = fewer vulnerabilities
  5. Azure-managed: Microsoft handles underlying infrastructure security

Summary: The Security Model

Azure Private Link Service creates a multi-layered secure connection:

Security Layers (from outer to inner):
│
├─ Layer 1: Access Control
│   └─ Manual approval, subscription restrictions
│
├─ Layer 2: Network Isolation  
│   └─ Private IPs, no internet exposure, Azure backbone
│
├─ Layer 3: Traffic Control
│   └─ NSGs, Load Balancer rules, firewall policies
│
├─ Layer 4: Encryption in Transit
│   └─ TLS/SSL for MySQL protocol
│
├─ Layer 5: Authentication
│   └─ MySQL user credentials, certificate validation
│
├─ Layer 6: Authorization
│   └─ Least privilege MySQL permissions
│
├─ Layer 7: Monitoring & Auditing
│   └─ Logs, alerts, anomaly detection
│
└─ Layer 8: Encryption at Rest
    └─ Customer-managed keys for MySQL data

Each layer provides protection, so even if one layer is compromised, others maintain security.


Private Link Service for Snowflake OpenFlow: Technical Overview

Understanding Private Link Service vs Private Endpoint

Key Distinction

  • Private Endpoint: Allows resources within your VNet to connect outbound to Azure services
  • Private Link Service: Allows external services to connect inbound to resources in your VNet

The Challenge with VNet-Injected MySQL Flexible Server

When MySQL Flexible Server is VNet-injected:

  • It exists entirely within your private VNet address space
  • Traditional private endpoints are not supported (as your engineering team noted)
  • External services like Snowflake OpenFlow cannot directly reach it

How Private Link Service Solves This

Architecture Flow

Snowflake OpenFlow → Private Endpoint (Snowflake's VNet) → Private Link Service (Your VNet) → MySQL Flexible Server

Step-by-Step Process

  1. Private Link Service Creation
  • You create a Private Link Service in your VNet
  • This service acts as a secure “front door” to your MySQL server
  • It gets a unique service identifier (alias)
  2. Load Balancer Integration
  • Private Link Service requires a Standard Load Balancer
  • Load balancer backend pool contains your MySQL Flexible Server
  • Traffic routing is handled transparently
  3. Connection Establishment
  • Snowflake creates a Private Endpoint in their VNet
  • This Private Endpoint connects to your Private Link Service
  • Connection request appears in your Azure portal for approval
  4. Traffic Flow
  • OpenFlow sends requests to their Private Endpoint
  • Traffic routes through the Private Link connection to your Private Link Service
  • Your Load Balancer forwards traffic to MySQL Flexible Server
  • Responses follow the reverse path

Traffic Direction Analysis

Inbound Connection Requirement

YES, the inbound connection shown in your diagram is necessary because:

  • OpenFlow Architecture: Snowflake OpenFlow runs in Snowflake’s infrastructure and must connect TO your database
  • CDC Requirements: Change Data Capture requires persistent connections from OpenFlow to monitor MySQL binlogs
  • Connection Initiation: The connection is always initiated from Snowflake’s side, making it inherently inbound to your infrastructure

Traffic Flow Breakdown

Phase               | Direction             | Description
------------------- | --------------------- | ------------------------------------------
Connection Setup    | Snowflake → Your VNet | OpenFlow establishes persistent connection
Binlog Monitoring   | Snowflake → MySQL     | Continuous monitoring for changes
Change Notification | MySQL → Snowflake     | Data changes sent back
Heartbeat/Health    | Bidirectional         | Connection maintenance

Security Benefits

Network Isolation

  • No public IP addresses required on MySQL
  • Traffic never traverses the public internet
  • Connection uses Azure’s backbone network

Access Control

  • You control which services can connect via Private Link Service
  • Connection requests require your explicit approval
  • NSG rules can further restrict traffic

Monitoring

  • All connections are logged and auditable
  • Private Link Service provides connection metrics
  • Standard Azure monitoring applies

Implementation Requirements

Prerequisites

  • Standard Load Balancer (required for Private Link Service)
  • MySQL Flexible Server in VNet-injected mode
  • Appropriate NSG rules
  • Resource permissions for Private Link Service creation

Configuration Steps

  1. Create Standard Load Balancer with MySQL in backend pool
  2. Create Private Link Service linked to the Load Balancer
  3. Configure NSG rules to allow traffic from Private Link Service subnet
  4. Share Private Link Service alias with Snowflake team
  5. Approve connection request when it appears
  6. Configure OpenFlow connector with connection details
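Under stated assumptions, steps 1, 2, and 5 above can be sketched with the Azure CLI. All names here (resource group, VNet, subnet, and service names, plus the `<...>` placeholders) are illustrative, not values from this environment; backend pool members are attached separately to the Load Balancer:

```shell
# 1. Create a Standard internal Load Balancer (frontend in your VNet)
az network lb create \
  --resource-group my-rg \
  --name mysql-ilb \
  --sku Standard \
  --vnet-name my-vnet \
  --subnet lb-subnet \
  --frontend-ip-name mysql-frontend \
  --backend-pool-name mysql-backend

# 2. Create the Private Link Service attached to that frontend,
#    visible only to Snowflake's subscription
az network private-link-service create \
  --resource-group my-rg \
  --name mycompany-mysql-pls \
  --vnet-name my-vnet \
  --subnet pls-nat-subnet \
  --lb-name mysql-ilb \
  --lb-frontend-ip-configs mysql-frontend \
  --visibility <snowflake-subscription-id>

# 5. Approve Snowflake's connection request once it appears
az network private-link-service connection update \
  --resource-group my-rg \
  --service-name mycompany-mysql-pls \
  --name <connection-name> \
  --connection-status Approved
```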

Why This Approach Works

The Private Link Service architecture elegantly solves the fundamental challenge:

  • Your Constraint: VNet-injected MySQL cannot have traditional private endpoints
  • Snowflake’s Need: OpenFlow requires inbound connectivity for CDC
  • The Solution: Private Link Service provides secure inbound connectivity without compromising your network isolation

This is Microsoft and Snowflake’s recommended pattern for exactly this scenario, allowing enterprise-grade security while enabling real-time data integration.

Remediating Redshift User Permissions

Overview

This guide covers the complete process for remediating Redshift user permissions as part of quarterly user access reviews. When users leave the company or their access needs change, we receive tickets with specific Schema-Permission attributes that need to be removed.

Out of the box, Redshift doesn’t make user permission management easy – especially when dealing with default privileges, object ownership, and the various ways permissions can be granted. This guide provides a systematic approach to handle all the edge cases you’ll encounter.

Complete Remediation Process

Step 1: Comprehensive User Audit

Always start by understanding the current state. Run this comprehensive audit to see all permissions:

-- Replace 'john.doe' with the target username
select user_name, schema_name, super_user, has_create, has_insert, has_update,
       has_delete, has_select, has_references, valuntil
from (
    select u.usename user_name,
           u.usesuper super_user,
           s.schemaname schema_name,
           has_schema_privilege(u.usename, s.schemaname, 'create') has_create,
           has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'insert') has_insert,
           has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'update') has_update,
           has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'delete') has_delete,
           has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'select') has_select,
           has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'references') has_references,
           valuntil
    from pg_user u
    cross join (
        select distinct schemaname, tablename
        from pg_tables
        where schemaname not like 'pg_%'
          and tablename not like '%newsletter_exp_prior_lookback_temptable%'
    ) s
    where (super_user = 1 or has_create = 1 or has_insert = 1 or has_update = 1
           or has_delete = 1 or has_select = 1 or has_references = 1)
      and (u.valuntil > NOW() or u.valuntil is NULL)
      and u.usename = 'john.doe'
)
group by user_name, schema_name, super_user, has_create, has_insert, has_update,
         has_delete, has_select, has_references, valuntil
order by user_name, schema_name, has_select, has_create, has_insert, has_update, has_delete;

Step 2: Check for Object Ownership

Critical: If the user owns any tables, views, or functions, they need to be reassigned before permissions can be fully revoked:

select *
from (
    SELECT n.nspname AS schema_name,
           c.relname AS rel_name,
           c.relkind AS rel_kind,
           pg_get_userbyid(c.relowner) AS owner_name
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    UNION ALL
    SELECT n.nspname AS schema_name,
           p.proname,
           'p',
           pg_get_userbyid(p.proowner)
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
) sub1
where owner_name = 'john.doe';

If this returns results, you’ll need to reassign ownership:

-- Example: Reassign table ownership to a service account
ALTER TABLE schema_name.table_name OWNER TO service_account;

Step 3: Check for Default Privileges

Critical for ETL accounts: Users with default privileges cannot be dropped until these are cleaned up. Check for default privileges:

select pg_get_userbyid(d.defacluser) as user,
       n.nspname as schema,
       decode(d.defaclobjtype, 'r', 'tables', 'f', 'functions') as object_type,
       array_to_string(d.defaclacl, ' + ') as default_privileges
from pg_catalog.pg_default_acl d
left join pg_catalog.pg_namespace n on n.oid = d.defaclnamespace
where array_to_string(defaclacl, ',') like '%john.doe%';

If default privileges exist, you must clean them up using this workaround (due to a PostgreSQL bug):

-- Grant temporary CREATE permission (required due to PostgreSQL bug)
grant create on schema ops to john.doe;

-- Revoke the default privileges
alter default privileges for user john.doe in schema ops revoke all privileges on tables from group ops, group engineering; 

-- Remove the temporary CREATE permission
revoke create on schema ops from john.doe;

Step 4: Identify Group Memberships

Find all groups the user belongs to and generate removal statements:

SELECT u.usesysid,
       g.groname,
       u.usename,
       'ALTER GROUP "' || g.groname || '" DROP USER "' || u.usename || '";' as drop_statement
FROM pg_user u
LEFT JOIN pg_group g ON u.usesysid = ANY (g.grolist)
WHERE u.usename = 'john.doe';

Step 5: Remove Group Memberships

Execute the group removal statements generated in Step 4. Based on typical schema-permission patterns:

-- Remove user from read-only groups (SELECT permissions)
ALTER GROUP "ops" DROP USER "john.doe"; 
ALTER GROUP "person" DROP USER "john.doe"; 
-- Remove user from groups with broader permissions
ALTER GROUP "nonpii" DROP USER "john.doe"; 
ALTER GROUP "nonpii_readwrite" DROP USER "john.doe";

Step 6: Handle Direct Table Permissions

Some users may have been granted direct permissions on specific tables. This query will find them and generate REVOKE statements:

with users as (
    select 'john.doe'::text as username
)
select 'REVOKE ALL PRIVILEGES ON TABLE "' || pg_namespace.nspname || '"."' || pg_class.relname
           || '" FROM "' || u.username || '";' as revoke_statement,
       pg_namespace.nspname as schemaname,
       pg_class.relname as tablename,
       array_to_string(pg_class.relacl, ',') as acls
from pg_class
left join pg_namespace on pg_class.relnamespace = pg_namespace.oid
join users u on array_to_string(pg_class.relacl, ',') like '%' || u.username || '=%'
where pg_class.relacl is not null
  and pg_namespace.nspname not in ('pg_catalog', 'pg_toast', 'information_schema');

Execute the generated REVOKE statements:

-- Example output from above query
REVOKE ALL PRIVILEGES ON TABLE "ops"."claims_grading" FROM "john.doe";
REVOKE ALL PRIVILEGES ON TABLE "person"."user_segments" FROM "john.doe";

Step 7: Handle Schema-Level Permissions

Remove any direct schema-level permissions:

-- Revoke CREATE permissions
REVOKE CREATE ON SCHEMA ops FROM "john.doe"; 
REVOKE CREATE ON SCHEMA person FROM "john.doe"; 

-- Revoke USAGE permissions
REVOKE USAGE ON SCHEMA ops FROM "john.doe"; 
REVOKE USAGE ON SCHEMA person FROM "john.doe";

Step 8: Comprehensive Verification

After remediation, verify all permissions have been removed:

-- Check for any remaining table permissions
SELECT n.nspname AS schema_name,
       c.relname AS table_name,
       u.usename AS username,
       has_table_privilege(u.usename, c.oid, 'SELECT') AS has_select,
       has_table_privilege(u.usename, c.oid, 'INSERT') AS has_insert,
       has_table_privilege(u.usename, c.oid, 'UPDATE') AS has_update,
       has_table_privilege(u.usename, c.oid, 'DELETE') AS has_delete,
       'REVOKE ALL ON "' || n.nspname || '"."' || c.relname || '" FROM "' || u.usename || '";' as cleanup_statement,
       'SHOW GRANTS ON TABLE "' || n.nspname || '"."' || c.relname || '";' as verification_statement
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON n.oid = c.relnamespace
CROSS JOIN pg_catalog.pg_user u
WHERE u.usename = 'john.doe'
  AND n.nspname IN ('ops', 'personalization')
  AND (has_table_privilege(u.usename, c.oid, 'SELECT') = true
       OR has_table_privilege(u.usename, c.oid, 'INSERT') = true
       OR has_table_privilege(u.usename, c.oid, 'UPDATE') = true
       OR has_table_privilege(u.usename, c.oid, 'DELETE') = true)
  AND c.relkind = 'r'
ORDER BY n.nspname ASC;

If this query returns no rows, the remediation was successful.

Step 9: Final Verification

Run these final checks to ensure complete cleanup:

-- Verify no group memberships remain
SELECT u.usesysid, g.groname, u.usename
FROM pg_user u
LEFT JOIN pg_group g ON u.usesysid = ANY (g.grolist)
WHERE u.usename = 'john.doe' AND g.groname IS NOT NULL;
-- Show any remaining grants
SHOW GRANTS FOR "john.doe";
-- Check specific tables if needed
SHOW GRANTS ON TABLE "ops"."claims_grading";
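
Once every check above comes back empty, the account itself can finally be removed. This is the statement that fails if any of the earlier steps (ownership, default privileges) were skipped:

-- Final step: drop the user account
DROP USER "john.doe";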

Advanced Troubleshooting

Case 1: User Still Has Access After All Revocations

This usually means permissions were granted to PUBLIC. Check and revoke:

-- Check for PUBLIC grants on specific problematic tables 
SHOW GRANTS ON TABLE ops.claims_grading;
-- If PUBLIC has access, revoke it (this is the "nuclear option")
REVOKE ALL ON ops.claims_grading FROM PUBLIC;
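
To find PUBLIC grants across a whole schema rather than one table at a time, you can scan the ACL arrays directly. In the text form of an ACL entry, a grant to PUBLIC has an empty grantee before the '=' (e.g. "=r/owner"), so the sketch below looks for entries that start with '=' (first in the array) or follow a comma:

-- Sketch: find tables with PUBLIC grants by scanning relacl text
-- (a PUBLIC entry has no grantee name before the '=')
select n.nspname as schema_name,
       c.relname as table_name,
       array_to_string(c.relacl, ',') as acls
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where c.relacl is not null
  and n.nspname not in ('pg_catalog', 'pg_toast', 'information_schema')
  and (array_to_string(c.relacl, ',') like '=%'
       or array_to_string(c.relacl, ',') like '%,=%');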

Case 2: Cannot Drop User Due to Default Privileges

If you see this error:

ERROR: user "username" cannot be dropped because some objects depend on it
DETAIL: owner of default privileges on new relations belonging to user username in schema schema_name

Follow the default privileges cleanup process in Step 3. This is a known PostgreSQL limitation that requires the temporary CREATE permission workaround.

Case 3: Permission Denied When Revoking Default Privileges

If you get “permission denied for schema” when trying to revoke default privileges, you need to temporarily grant CREATE permissions first (see Step 3). This is due to a semi-bug in PostgreSQL.

Case 4: Complex Permission Inheritance

Sometimes users inherit permissions through multiple group memberships or nested groups. In these cases:

  1. Run the comprehensive audit query multiple times during remediation
  2. Check for indirect permissions through role inheritance
  3. Verify that group memberships are removed (some systems cache group information)

Best Practices

  1. Always follow the order: Object ownership → Default privileges → Group memberships → Direct permissions → Schema permissions → Verification
  2. Document edge cases: Each remediation teaches you something new. Keep notes on unusual patterns.
  3. Test in non-production first: For complex users (especially ETL accounts), test the remediation process in a non-production environment.
  4. Handle default privileges immediately: Don’t wait until user departure to clean up default privileges – they’re the most significant source of complications.
  5. Use the verification queries: The verification step isn’t optional – it’s the only way to be certain the remediation was successful.
  6. Check PUBLIC permissions last: If a user still has unexpected access, PUBLIC permissions are usually the culprit.

Emergency Procedures

If You Need to Immediately Revoke All Access

For urgent security situations, you can disable a user account immediately:

-- Disable the account (prevents login but doesn't remove permissions)
ALTER USER "john.doe" VALID UNTIL '1900-01-01';
-- Then follow the normal remediation process when time permits
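
If the situation turns out to be a false alarm, the same mechanism restores login: set the expiry back to a date far in the future (PostgreSQL also accepts the literal 'infinity'; the concrete date below is just a safe, portable choice):

-- Re-enable the account by pushing the expiry far into the future
ALTER USER "john.doe" VALID UNTIL '2999-01-01';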

If You Accidentally Revoke Too Much

If you accidentally remove permissions that should remain:

  1. Check the original ticket carefully – what should actually be removed?
  2. Re-grant the appropriate group memberships
  3. Verify using the audit query that permissions match expectations
  4. Document the mistake to prevent future occurrences
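
Re-granting a group membership is simply the inverse of the removal statements from Step 5. For example, to restore a read-only membership that should have stayed in place:

-- Restore a group membership that was removed by mistake
ALTER GROUP "ops" ADD USER "john.doe";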

This comprehensive approach ensures that user permissions are properly remediated while handling all the edge cases that make Redshift user management challenging.