Private Link Service for Snowflake OpenFlow: Technical Overview

Understanding Private Link Service vs Private Endpoint

Key Distinction

  • Private Endpoint: Allows resources within your VNet to connect outbound to Azure services
  • Private Link Service: Allows external services to connect inbound to resources in your VNet

The Challenge with VNet-Injected MySQL Flexible Server

When MySQL Flexible Server is VNet-injected:

  • It exists entirely within your private VNet address space
  • Traditional private endpoints are not supported (as your engineering team noted)
  • External services like Snowflake OpenFlow cannot directly reach it

How Private Link Service Solves This

Architecture Flow

Snowflake OpenFlow → Private Endpoint (Snowflake's VNet) → Private Link Service (Your VNet) → MySQL Flexible Server

Step-by-Step Process

  1. Private Link Service Creation
  • You create a Private Link Service in your VNet
  • This service acts as a secure “front door” to your MySQL server
  • It gets a unique service identifier (alias)
  2. Load Balancer Integration
  • Private Link Service requires a Standard Load Balancer
  • The load balancer's backend pool points to your MySQL Flexible Server's private IP
  • Traffic routing is handled transparently
  3. Connection Establishment
  • Snowflake creates a Private Endpoint in their VNet
  • This Private Endpoint connects to your Private Link Service
  • Connection request appears in your Azure portal for approval
  4. Traffic Flow
  • OpenFlow sends requests to their Private Endpoint
  • Traffic routes through the Private Link connection to your Private Link Service
  • Your Load Balancer forwards traffic to MySQL Flexible Server
  • Responses follow the reverse path

Traffic Direction Analysis

Inbound Connection Requirement

YES, the inbound connection shown in your diagram is necessary because:

  • OpenFlow Architecture: Snowflake OpenFlow runs in Snowflake’s infrastructure and must connect TO your database
  • CDC Requirements: Change Data Capture requires persistent connections from OpenFlow to monitor MySQL binlogs
  • Connection Initiation: The connection is always initiated from Snowflake’s side, making it inherently inbound to your infrastructure

Traffic Flow Breakdown

Phase               | Direction             | Description
Connection Setup    | Snowflake → Your VNet | OpenFlow establishes persistent connection
Binlog Monitoring   | Snowflake → MySQL     | Continuous monitoring for changes
Change Notification | MySQL → Snowflake     | Data changes sent back
Heartbeat/Health    | Bidirectional         | Connection maintenance

Security Benefits

Network Isolation

  • No public IP addresses required on MySQL
  • Traffic never traverses the public internet
  • Connection uses Azure’s backbone network

Access Control

  • You control which services can connect via Private Link Service
  • Connection requests require your explicit approval
  • NSG rules can further restrict traffic

Monitoring

  • All connections are logged and auditable
  • Private Link Service provides connection metrics
  • Standard Azure monitoring applies

Implementation Requirements

Prerequisites

  • Standard Load Balancer (required for Private Link Service)
  • MySQL Flexible Server in VNet-injected mode
  • Appropriate NSG rules
  • Resource permissions for Private Link Service creation

Configuration Steps

  1. Create Standard Load Balancer with MySQL in backend pool
  2. Create Private Link Service linked to the Load Balancer
  3. Configure NSG rules to allow traffic from Private Link Service subnet
  4. Share Private Link Service alias with Snowflake team (after confirming the load balancer path works; see the sketch after this list)
  5. Approve connection request when it appears
  6. Configure OpenFlow connector with connection details
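
Before sharing the alias, it is worth confirming that the load balancer actually forwards traffic to MySQL. Below is a minimal TCP reachability sketch meant to be run from a test VM inside the VNet; the frontend IP 10.1.0.10, the helper name check_reachable, and the default port 3306 are placeholders and assumptions, not part of the official setup.

# Minimal reachability check from a VM inside the VNet (assumed setup).
# 10.1.0.10 is a placeholder for the Standard Load Balancer frontend IP;
# MySQL Flexible Server is assumed to listen on the default port 3306.
import socket

LB_FRONTEND_IP = "10.1.0.10"   # placeholder: your load balancer frontend IP
MYSQL_PORT = 3306

def check_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"Connection failed: {exc}")
        return False

if __name__ == "__main__":
    if check_reachable(LB_FRONTEND_IP, MYSQL_PORT):
        print("Load balancer forwards to MySQL - safe to share the Private Link Service alias.")
    else:
        print("No path to MySQL - check the backend pool, health probe, and NSG rules first.")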

Why This Approach Works

The Private Link Service architecture elegantly solves the fundamental challenge:

  • Your Constraint: VNet-injected MySQL cannot have traditional private endpoints
  • Snowflake’s Need: OpenFlow requires inbound connectivity for CDC
  • The Solution: Private Link Service provides secure inbound connectivity without compromising your network isolation

This is Microsoft and Snowflake’s recommended pattern for exactly this scenario, allowing enterprise-grade security while enabling real-time data integration.

Remediating Redshift User Permissions

Overview

This guide covers the complete process for remediating Redshift user permissions as part of quarterly user access reviews. When users leave the company or their access needs change, we receive tickets with specific Schema-Permission attributes that need to be removed.

Out of the box, Redshift doesn’t make user permission management easy – especially when dealing with default privileges, object ownership, and the various ways permissions can be granted. This guide provides a systematic approach to handle all the edge cases you’ll encounter.

Complete Remediation Process

Step 1: Comprehensive User Audit

Always start by understanding the current state. Run this comprehensive audit to see all permissions:

-- Replace 'john.doe' with the target username
select user_name, schema_name, super_user, has_create, has_insert, has_update,
       has_delete, has_select, has_references, valuntil
from (
  select u.usename user_name,
         u.usesuper super_user,
         s.schemaname schema_name,
         has_schema_privilege(u.usename, s.schemaname, 'create') has_create,
         has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'insert') has_insert,
         has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'update') has_update,
         has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'delete') has_delete,
         has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'select') has_select,
         has_table_privilege(u.usename, s.schemaname || '.' || s.tablename, 'references') has_references,
         valuntil
  from pg_user u
  cross join (
    select distinct schemaname, tablename
    from pg_tables
    where schemaname not like 'pg_%'
      and tablename not like '%newsletter_exp_prior_lookback_temptable%'
  ) s
  where (super_user = 1 or has_create = 1 or has_insert = 1 or has_update = 1
         or has_delete = 1 or has_select = 1 or has_references = 1)
    and (u.valuntil > NOW() or u.valuntil is NULL)
    and u.usename = 'john.doe'
) sub
group by user_name, schema_name, super_user, has_create, has_insert, has_update,
         has_delete, has_select, has_references, valuntil
order by user_name, schema_name, has_select, has_create, has_insert, has_update, has_delete;

Step 2: Check for Object Ownership

Critical: If the user owns any tables, views, or functions, they need to be reassigned before permissions can be fully revoked:

select *
from (
  SELECT n.nspname AS schema_name,
         c.relname AS rel_name,
         c.relkind AS rel_kind,
         pg_get_userbyid(c.relowner) AS owner_name
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  UNION ALL
  SELECT n.nspname AS schema_name,
         p.proname,
         'p',
         pg_get_userbyid(p.proowner)
  FROM pg_proc p
  JOIN pg_namespace n ON n.oid = p.pronamespace
) sub1
where owner_name = 'john.doe';

If this returns results, you’ll need to reassign ownership:

-- Example: Reassign table ownership to a service account
ALTER TABLE schema_name.table_name OWNER TO service_account;
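
If the user owns many objects, generating the reassignment statements programmatically saves time. Below is a minimal sketch using psycopg2; the connection placeholders, the target owner service_account, and the restriction to ordinary tables (relkind 'r') are assumptions to adapt to your environment.

# Generate ALTER TABLE ... OWNER TO statements for every table owned by the departing user.
# Assumptions: psycopg2 is installed, connection details are placeholders, and ownership
# moves to a service account named "service_account"; views/functions need separate handling.
import psycopg2

TARGET_USER = "john.doe"
NEW_OWNER = "service_account"

OWNED_TABLES_SQL = """
    SELECT n.nspname, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE pg_get_userbyid(c.relowner) = %s
      AND c.relkind = 'r'
"""

conn = psycopg2.connect(host="<redshift-endpoint>", port=5439,
                        user="<admin-user>", password="<password>", dbname="<database>")
with conn.cursor() as cur:
    cur.execute(OWNED_TABLES_SQL, (TARGET_USER,))
    for schema, table in cur.fetchall():
        # Review the generated statements before running them.
        print(f'ALTER TABLE "{schema}"."{table}" OWNER TO {NEW_OWNER};')
conn.close()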

Step 3: Check for Default Privileges

Critical for ETL accounts: Users with default privileges cannot be dropped until these are cleaned up. Check for default privileges:

select pg_get_userbyid(d.defacluser) as user,
       n.nspname as schema,
       decode(d.defaclobjtype, 'r', 'tables', 'f', 'functions') as object_type,
       array_to_string(d.defaclacl, ' + ') as default_privileges
from pg_catalog.pg_default_acl d
left join pg_catalog.pg_namespace n on n.oid = d.defaclnamespace
where array_to_string(defaclacl, ',') like '%john.doe%';

If default privileges exist, you must clean them up using this workaround (due to a PostgreSQL bug):

-- Grant temporary CREATE permission (required due to PostgreSQL bug)
grant create on schema ops to "john.doe";

-- Revoke the default privileges
alter default privileges for user "john.doe" in schema ops revoke all privileges on tables from group ops, group engineering;

-- Remove the temporary CREATE permission
revoke create on schema ops from "john.doe";

Step 4: Identify Group Memberships

Find all groups the user belongs to and generate removal statements:

SELECT u.usesysid,
       g.groname,
       u.usename,
       'ALTER GROUP "' || g.groname || '" DROP USER "' || u.usename || '";' as drop_statement
FROM pg_user u
LEFT JOIN pg_group g ON u.usesysid = ANY (g.grolist)
WHERE u.usename = 'john.doe';

Step 5: Remove Group Memberships

Execute the group removal statements. Based on typical Schema-Permission patterns:

-- Remove user from read-only groups (SELECT permissions)
ALTER GROUP "ops" DROP USER "john.doe"; 
ALTER GROUP "person" DROP USER "john.doe"; 
-- Remove user from groups with broader permissions
ALTER GROUP "nonpii" DROP USER "john.doe"; 
ALTER GROUP "nonpii_readwrite" DROP USER "john.doe";

Step 6: Handle Direct Table Permissions

Some users may have been granted direct permissions on specific tables. This query will find them and generate REVOKE statements:

with users as (
  select 'john.doe'::text as username
)
select 'REVOKE ALL PRIVILEGES ON TABLE ' || pg_namespace.nspname || '.' || pg_class.relname || ' FROM "' || u.username || '";' as revoke_statement,
       pg_namespace.nspname as schemaname,
       pg_class.relname as tablename,
       array_to_string(pg_class.relacl, ',') as acls
from pg_class
left join pg_namespace on pg_class.relnamespace = pg_namespace.oid
join users u on (array_to_string(pg_class.relacl, ',') like '%' || u.username || '=%')
where pg_class.relacl is not null
  and pg_namespace.nspname not in ('pg_catalog', 'pg_toast', 'information_schema');

Execute the generated REVOKE statements:

-- Example output from above query
REVOKE ALL PRIVILEGES ON TABLE ops.claims_grading FROM "john.doe";
REVOKE ALL PRIVILEGES ON TABLE person.user_segments FROM "john.doe";
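
When there are many direct grants, the generated statements can also be fetched and executed in one pass rather than copy-pasted. This is a rough sketch assuming psycopg2, placeholder connection details, and that the generator query from this step has been saved to a (hypothetical) file named step6_generate_revokes.sql.

# Fetch the generated REVOKE statements and execute them in a single transaction.
import psycopg2

REVOKE_GENERATOR_SQL = open("step6_generate_revokes.sql").read()  # hypothetical file holding the query above

conn = psycopg2.connect(host="<redshift-endpoint>", port=5439,
                        user="<admin-user>", password="<password>", dbname="<database>")
try:
    with conn.cursor() as cur:
        cur.execute(REVOKE_GENERATOR_SQL)
        statements = [row[0] for row in cur.fetchall()]  # first column is revoke_statement
        for stmt in statements:
            print("Executing:", stmt)
            cur.execute(stmt)
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    conn.close()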

Step 7: Handle Schema-Level Permissions

Remove any direct schema-level permissions:

-- Revoke CREATE permissions
REVOKE CREATE ON SCHEMA ops FROM "john.doe"; 
REVOKE CREATE ON SCHEMA person FROM "john.doe"; 

-- Revoke USAGE permissions
REVOKE USAGE ON SCHEMA ops FROM "john.doe"; 
REVOKE USAGE ON SCHEMA person FROM "john.doe";

Step 8: Comprehensive Verification

After remediation, verify all permissions have been removed:

-- Check for any remaining table permissions
SELECT n.nspname AS schema_name,
       c.relname AS table_name,
       u.usename AS username,
       has_table_privilege(u.usename, c.oid, 'SELECT') AS has_select,
       has_table_privilege(u.usename, c.oid, 'INSERT') AS has_insert,
       has_table_privilege(u.usename, c.oid, 'UPDATE') AS has_update,
       has_table_privilege(u.usename, c.oid, 'DELETE') AS has_delete,
       'REVOKE ALL ON "' || n.nspname || '"."' || c.relname || '" FROM "' || u.usename || '";' as cleanup_statement,
       'SHOW GRANTS ON TABLE "' || n.nspname || '"."' || c.relname || '";' as verification_statement
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON n.oid = c.relnamespace
CROSS JOIN pg_catalog.pg_user u
WHERE u.usename = 'john.doe'
  AND n.nspname IN ('ops', 'person')
  AND (has_table_privilege(u.usename, c.oid, 'SELECT') = true
       OR has_table_privilege(u.usename, c.oid, 'INSERT') = true
       OR has_table_privilege(u.usename, c.oid, 'UPDATE') = true
       OR has_table_privilege(u.usename, c.oid, 'DELETE') = true)
  AND c.relkind = 'r'
ORDER BY n.nspname ASC;

If this query returns no rows, the remediation was successful.

Step 9: Final Verification

Run these final checks to ensure complete cleanup:

-- Verify no group memberships remain
SELECT u.usesysid, g.groname, u.usename
FROM pg_user u
LEFT JOIN pg_group g ON u.usesysid = ANY (g.grolist)
WHERE u.usename = 'john.doe'
  AND g.groname IS NOT NULL;
-- Show any remaining grants
SHOW GRANTS FOR "john.doe";
-- Check specific tables if needed
SHOW GRANTS ON TABLE "ops"."claims_grading";

Advanced Troubleshooting

Case 1: User Still Has Access After All Revocations

This usually means permissions were granted to PUBLIC. Check and revoke:

-- Check for PUBLIC grants on specific problematic tables 
SHOW GRANTS ON TABLE ops.claims_grading;
-- If PUBLIC has access, revoke it (this is the "nuclear option")
REVOKE ALL ON ops.claims_grading FROM PUBLIC;

Case 2: Cannot Drop User Due to Default Privileges

If you see this error:

ERROR: user "username" cannot be dropped because some objects depend on it
DETAIL: owner of default privileges on new relations belonging to user username in schema schema_name

Follow the default privileges cleanup process in Step 3. This is a known PostgreSQL limitation that requires the temporary CREATE permission workaround.

Case 3: Permission Denied When Revoking Default Privileges

If you get “permission denied for schema” when trying to revoke default privileges, you need to temporarily grant CREATE permissions first (see Step 3). This is due to a semi-bug in PostgreSQL.

Case 4: Complex Permission Inheritance

Sometimes users inherit permissions through multiple group memberships or nested groups. In these cases:

  1. Run the comprehensive audit query multiple times during remediation
  2. Check for indirect permissions through role inheritance
  3. Verify that group memberships are removed (some systems cache group information)

Best Practices

  1. Always follow the order: Object ownership → Default privileges → Group memberships → Direct permissions → Schema permissions → Verification
  2. Document edge cases: Each remediation teaches you something new. Keep notes on unusual patterns.
  3. Test in non-production first: For complex users (especially ETL accounts), test the remediation process in a non-production environment.
  4. Handle default privileges immediately: Don’t wait until user departure to clean up default privileges – they’re the most significant source of complications.
  5. Use the verification queries: The verification step isn’t optional – it’s the only way to be certain remediation was successful.
  6. Check PUBLIC permissions last: If a user still has unexpected access, PUBLIC permissions are usually the culprit.

Emergency Procedures

If You Need to Immediately Revoke All Access

For urgent security situations, you can disable a user account immediately:

-- Disable the account (prevents login but doesn't remove permissions)
ALTER USER "john.doe" VALID UNTIL '1900-01-01';
-- Then follow the normal remediation process when time permits

If You Accidentally Revoke Too Much

If you accidentally remove permissions that should remain:

  1. Check the original ticket carefully – what should actually be removed?
  2. Re-grant the appropriate group memberships
  3. Verify using the audit query that permissions match expectations
  4. Document the mistake to prevent future occurrences

This comprehensive approach ensures that user permissions are properly remediated while handling all the edge cases that make Redshift user management challenging.

How to Extract and Replicate PostgreSQL Permissions When Migrating to a New Instance

Migrating a PostgreSQL database isn’t just about moving data—getting the right roles and permissions in place is critical for security and proper application function. This post demonstrates how to extract roles and permissions from your source instance and apply them to a new PostgreSQL environment.

Understanding Permissions in PostgreSQL

PostgreSQL controls access via roles. A role can represent either a database user or a group, and roles are granted permissions (privileges) on objects (tables, databases, schemas, etc.) using GRANT and REVOKE commands. These permissions can be viewed and managed at various levels:

  • Database: Control who can connect
  • Schema: Control access to groups of tables, functions, etc.
  • Objects: Control what actions (SELECT, INSERT, UPDATE, etc.) users can perform on tables, functions, or sequences.

You can view permissions using PostgreSQL’s psql meta-commands:

  • \l+ — Show database privileges
  • \dn+ — Show schema privileges
  • \dp — Show table, view, and sequence privileges

Step 1: Extract Roles from the Source Instance

To extract roles (users and groups) from your current PostgreSQL server:

pg_dumpall -h <source_server> -U <username> --roles-only > roles.sql

Note:

  • This command will export all roles (but not their passwords in managed services like Azure Database for PostgreSQL).
  • In cloud managed systems, you might not have the ability to extract passwords; you’ll need to set them manually on the target instance.

Step 2: Extract Role & Object Permissions

Object-level permissions (all GRANT and REVOKE statements) can be extracted while dumping the database schema:

pg_dump -h <source_server> -U <username> -d <dbname> -s > db_schema.sql

Next, filter permission statements:

If you’re working in PowerShell (Windows), run:

pg_dump -h "psql-prod-01.postgres.database.azure.com" -U pgadmin -d "prod_db" -s | Select-String -Pattern "^(GRANT|REVOKE|ALTER DEFAULT PRIVILEGES)" | ForEach-Object { $_.Line } > C:\Path\to\perms.sql

If you’re working on Mac/Linux, run:

pg_dump -h "psql-prod-01.postgres.database.azure.com" -U pgadmin -d "prod_db" -s | grep -E '^(GRANT|REVOKE|ALTER DEFAULT PRIVILEGES)' > perms.sql

This extracts all lines related to granting or revoking privileges and puts them in perms.sql.

Step 3: Prepare and Edit Scripts

  • Review the extracted roles.sql and perms.sql files:
    • Remove any references to unsupported roles (like postgres superuser in cloud environments).
    • Plan to set user passwords manually if they weren’t included.

Step 4: Copy Roles and Permissions to the New Instance

  1. Recreate roles:

     psql -h <target_server> -U <admin_user> -f roles.sql

    • Remember to set or update passwords for each user after creation.
  2. Apply object-level permissions:

     psql -h <target_server> -U <admin_user> -d <target_db> -f perms.sql

Step 5: Validate Permissions

Connect as each role or user to ensure operations work as expected:

  • Use \dp tablename in psql to check table permissions.
  • Use the information_schema views (e.g., role_table_grants) to query permissions programmatically:

    SELECT grantee, privilege_type, table_name FROM information_schema.role_table_grants;
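
For a broader sanity check, the same view can be compared between the source and target instances. The following is a minimal sketch using psycopg2; the connection strings are placeholders.

# Compare table-level grants between source and target PostgreSQL instances.
import psycopg2

GRANTS_SQL = """
    SELECT grantee, privilege_type, table_schema, table_name
    FROM information_schema.role_table_grants
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
"""

def fetch_grants(dsn: str) -> set:
    """Return the set of (grantee, privilege, schema, table) tuples for one instance."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute(GRANTS_SQL)
            return set(cur.fetchall())
    finally:
        conn.close()

source = fetch_grants("host=<source_server> dbname=<dbname> user=<admin_user> password=<password>")
target = fetch_grants("host=<target_server> dbname=<dbname> user=<admin_user> password=<password>")

for grant in sorted(source - target):
    print("Missing on target:", grant)
for grant in sorted(target - source):
    print("Extra on target:", grant)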

Decoding a Python Script: An Improv-Inspired Guide for Beginners

By Vinay Rahul Are, Python Enthusiast & Improv Comedy Fan


Introduction

Learning Python can feel intimidating—unless you approach it with a sense of play! Just like improv comedy, Python is about saying “yes, and…” to new ideas, experimenting, and having fun. In this post, I’ll walk you through a real-world Python script, breaking down each part so you can understand, explain, and even perform it yourself!


The Script’s Purpose

The script we’ll explore automates the process of running multiple SQL files against an Amazon Redshift database. For each SQL file, it:

  • Executes the file’s SQL commands on Redshift
  • Logs how many rows were affected, how long it took, and any errors
  • Moves the file to a “Done” folder when finished

It’s a practical tool for data engineers, but the structure and logic are great for any Python beginner to learn from.


1. The “Show Description” (Docstring)

At the top, you’ll find a docstring—a big comment block that tells you what the script does, what you need to run it, and how to use it.

"""
Batch Redshift SQL Script Executor with Per-Script Logging, Timing, and Post-Execution Archiving

Pre-requisites:
---------------
1. Python 3.x installed on your machine.
2. The following Python packages must be installed:
    - psycopg2-binary
3. (Recommended) Use a virtual environment to avoid dependency conflicts.
4. Network access to your Amazon Redshift cluster.

Installation commands:
----------------------
python -m venv venv
venv\Scripts\activate        # On Windows
pip install psycopg2-binary

Purpose:
--------
This script automates the execution of multiple .sql files against an Amazon Redshift cluster...
"""

2. Importing the “Cast and Crew” (Modules)

Every show needs its cast. In Python, that means importing modules:

import os
import glob
import psycopg2
import getpass
import shutil
import time
  • os, glob, shutil: Handle files and folders
  • psycopg2: Talks to the Redshift database
  • getpass: Securely prompts for passwords
  • time: Measures how long things take

3. The “Stage Directions” (Configuration)

Before the curtain rises, set your stage:

HOST = '<redshift-endpoint>'
PORT = 5439
USER = '<your-username>'
DATABASE = '<your-database>'
SCRIPT_DIR = r'C:\redshift_scripts'
DONE_DIR = os.path.join(SCRIPT_DIR, 'Done')
  • Replace the placeholders with your actual Redshift details and script folder path.

4. The “Comedy Routine” (Function Definition)

The main function, run_sql_script, is like a well-rehearsed bit:

def run_sql_script(script_path, conn):
    log_path = os.path.splitext(script_path)[0] + '.log'
    with open(script_path, 'r', encoding='utf-8') as sql_file, open(log_path, 'w', encoding='utf-8') as log_file:
        sql = sql_file.read()
        log_file.write(f"Running script: {script_path}\n")
        start_time = time.perf_counter()
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                end_time = time.perf_counter()
                elapsed_time = end_time - start_time
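                # Note: if the .sql file contains several statements, rowcount reflects only the last one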
                rows_affected = cur.rowcount if cur.rowcount != -1 else 'Unknown'
                log_file.write(f"Rows affected: {rows_affected}\n")
                log_file.write(f"Execution time: {elapsed_time:.2f} seconds\n")
                conn.commit()
                log_file.write("Execution successful.\n")
        except Exception as e:
            end_time = time.perf_counter()
            elapsed_time = end_time - start_time
            log_file.write(f"Error: {str(e)}\n")
            log_file.write(f"Execution time (until error): {elapsed_time:.2f} seconds\n")
            conn.rollback()
  • Reads the SQL file
  • Logs what’s happening
  • Measures execution time
  • Handles success or errors gracefully

5. The “Main Event” (main function)

This is the showrunner, making sure everything happens in order:

def main():
    password = getpass.getpass("Enter your Redshift password: ")
    if not os.path.exists(DONE_DIR):
        os.makedirs(DONE_DIR)
    sql_files = glob.glob(os.path.join(SCRIPT_DIR, '*.sql'))
    conn = psycopg2.connect(
        host=HOST,
        port=PORT,
        user=USER,
        password=password,
        dbname=DATABASE
    )
    for script_path in sql_files:
        print(f"Running {script_path} ...")
        run_sql_script(script_path, conn)
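        # Note: the file is moved to Done even if its SQL failed; check the per-script .log for errors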
        try:
            shutil.move(script_path, DONE_DIR)
            print(f"Moved {script_path} to {DONE_DIR}")
        except Exception as move_err:
            print(f"Failed to move {script_path}: {move_err}")
    conn.close()
    print("All scripts executed.")
  • Prompts for your password (no peeking!)
  • Makes sure the “Done” folder exists
  • Finds all .sql files
  • Connects to Redshift
  • Runs each script, logs results, and moves the file when done

6. The “Curtain Call” (Script Entry Point)

This line ensures the main event only happens if you run the script directly:

if __name__ == "__main__":
    main()

7. Explaining the Script in Plain English

“This script automates running a bunch of SQL files against a Redshift database. For each file, it logs how many rows were affected, how long it took, and any errors. After running, it moves the file to a ‘Done’ folder so you know it’s finished. It’s organized with clear sections for setup, reusable functions, and the main execution flow.”


8. Why This Structure?

  • Imports first: So all your helpers are ready before the show starts.
  • Functions: Keep the code neat, reusable, and easy to understand.
  • Main block: Keeps your script from running accidentally if imported elsewhere.
  • Comments and docstrings: Make it easy for others (and future you) to understand what’s going on.

9. Final Thoughts: Python is Improv!

Just like improv, Python is best learned by doing. Try things out, make mistakes, and remember: if your code “crashes,” it’s just the computer’s way of saying, “Yes, and…let’s try that again!”

If you want to dig deeper into any part of this script, just ask in the comments below. Happy coding—and yes, and… keep learning!


How Increasing Azure PostgreSQL IOPS Supercharged Our Bulk Insert Performance

Loading millions of records into a cloud database can be a frustratingly slow task—unless you identify where your bottlenecks are. In this post, I will share how we significantly improved our insertion speeds on Azure Database for PostgreSQL Flexible Server by adjusting a single, often-overlooked setting: provisioned IOPS.


The Challenge: Slow Inserts Despite Low CPU

We were running a large data migration from Databricks to Azure Database for PostgreSQL Flexible Server. Our setup:

  • Instance: Memory Optimized, E8ds_v4 (8 vCores, 64 GiB RAM, 256 GiB Premium SSD)
  • Insert Method: 8 parallel threads from Databricks, each batching 50,000 rows

Despite this robust configuration, our insert speeds were disappointing. Monitoring showed:

  • CPU usage: ~10%
  • Disk IOPS: 100% utilization

Clearly, our CPU wasn’t the problem—disk I/O was.


The Bottleneck: Disk IOPS Saturation

Azure Database for PostgreSQL Flexible Server ties write performance directly to your provisioned IOPS (Input/Output Operations Per Second). PostgreSQL is forced to queue up write operations when your workload hits this limit, causing inserts to slow down dramatically.

Key signs you’re IOPS-bound:

  • Disk IOPS metric at or near 100%
  • Low CPU and memory utilization
  • Inserts (and possibly other write operations) are much slower than expected

The Fix: Increase Provisioned IOPS

We increased our provisioned IOPS from 1,100 to 5,000 using the Azure Portal:

  1. Go to your PostgreSQL Flexible Server in Azure.
  2. Select Compute + storage.
  3. Adjust the IOPS slider (or enter a higher value if using Premium SSD v2).
  4. Save changes—no downtime required.

Result:
Insert speeds improved immediately and dramatically. Disk performance no longer throttled the database, and we could fully utilize our CPU and memory resources.


Lessons Learned & Best Practices

  • Monitor your bottlenecks: Always check disk IOPS, CPU, and memory during heavy data loads.
  • Scale IOPS with workload: Azure lets you increase IOPS on the fly. For bulk loads, temporarily raising IOPS can save hours or days of processing time.
  • Batch and parallelize wisely: Match your parallel threads to your vCPU count, but remember that IOPS is often the true limiter for bulk writes (a batching sketch follows this list).
  • Optimize indexes and constraints: Fewer indexes mean fewer writes per insert. Drop non-essential indexes before bulk loads and recreate them afterward.
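
As a concrete example of the batching point, here is a rough sketch of a per-worker insert using psycopg2's execute_values helper; the table name staging.events, the column list, the page size, and the connection string are placeholders rather than our production code.

# Batched inserts: many rows per statement and one commit per batch amortise the I/O cost.
import psycopg2
from psycopg2.extras import execute_values

BATCH_SIZE = 50_000  # rows per batch, matching the workload described above

def insert_batch(conn, rows):
    """Insert one batch of (id, payload) tuples using multi-row INSERT statements."""
    with conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO staging.events (id, payload) VALUES %s",  # placeholder table/columns
            rows,
            page_size=10_000,  # rows folded into each generated INSERT
        )
    conn.commit()  # a single commit per batch keeps WAL flushes (and IOPS) per row low

if __name__ == "__main__":
    conn = psycopg2.connect("host=<server> dbname=<db> user=<user> password=<password>")
    sample = [(i, f"row-{i}") for i in range(BATCH_SIZE)]
    insert_batch(conn, sample)
    conn.close()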

Conclusion:
If your PostgreSQL inserts are slow on Azure, check your disk IOPS. Increasing provisioned IOPS can unlock the performance your hardware is capable of—sometimes, it’s the simplest tweak that makes the biggest difference.