
SQL Query Optimization: Real-World Techniques

Tags: SQL, Database, Performance, Optimization

Quick reference for identifying and fixing slow SQL queries in production systems.

Finding Slow Queries

SQL Server Query Store

-- Top 10 slowest queries by average duration
SELECT TOP 10
    qt.query_sql_text,
    rs.avg_duration / 1000 AS avg_duration_ms,
    rs.count_executions,
    rs.last_execution_time
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;

Check for Missing Indexes

-- Find missing indexes
SELECT 
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) 
        * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs 
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid 
    ON mig.index_handle = mid.index_handle
ORDER BY improvement_measure DESC;

Indexing Strategies

Covering Indexes

-- Bad: Table scan on every query
SELECT CustomerName, Email, City
FROM Customers
WHERE City = 'New York';

-- Good: Create covering index
CREATE NONCLUSTERED INDEX IX_Customers_City_Covering
ON Customers (City)
INCLUDE (CustomerName, Email);

Result: Query time dropped from 800ms to 5ms for 1M rows.

Composite Indexes (Order Matters!)

-- Query pattern
SELECT * FROM Orders
WHERE CustomerId = @customerId
  AND OrderDate > @startDate
  AND Status = 'Active';

-- ✅ Good: Equality columns (CustomerId, Status) first, range column (OrderDate) last
CREATE INDEX IX_Orders_Customer_Status_Date
ON Orders (CustomerId, Status, OrderDate);

-- ❌ Bad: Low-selectivity Status leads, and the OrderDate range sits before CustomerId
CREATE INDEX IX_Orders_Status_Date_Customer
ON Orders (Status, OrderDate, CustomerId);

Avoid Over-Indexing

-- Too many indexes hurt INSERT/UPDATE performance
-- Rule of thumb: 5-7 indexes per table max
-- Remove unused indexes:

SELECT 
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    i.index_id,
    us.user_seeks,
    us.user_scans,
    us.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats us 
    ON i.object_id = us.object_id 
    AND i.index_id = us.index_id
    AND us.database_id = DB_ID() -- the DMV is server-wide; scope to this database
WHERE ISNULL(us.user_seeks, 0) = 0  -- ISNULL: truly unused indexes have no stats row at all
  AND ISNULL(us.user_scans, 0) = 0
  AND ISNULL(us.user_lookups, 0) = 0
  AND i.is_primary_key = 0
  AND i.index_id > 0; -- skip heaps
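Once the query above surfaces candidates, verify across a full business cycle first (usage stats reset at instance restart), then drop the index. The index name below is a hypothetical example:

```sql
-- Hypothetical unused index; confirm it has no reads before dropping
DROP INDEX IX_Customers_LegacyLookup ON Customers;
```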

Query Optimization Techniques

Use EXISTS Instead of IN

-- ❌ Can be slow with a large subquery (and NOT IN mishandles NULLs)
SELECT * FROM Orders o
WHERE o.CustomerId IN (
    SELECT CustomerId FROM Customers WHERE City = 'New York'
);

-- ✅ Often faster: EXISTS can stop at the first match
SELECT * FROM Orders o
WHERE EXISTS (
    SELECT 1 FROM Customers c 
    WHERE c.CustomerId = o.CustomerId 
      AND c.City = 'New York'
);

Avoid SELECT *

-- ❌ Retrieves unnecessary data
SELECT * FROM Orders WHERE CustomerId = @id;

-- ✅ Select only needed columns
SELECT OrderId, OrderDate, TotalAmount
FROM Orders
WHERE CustomerId = @id;

Use NOLOCK Carefully

-- Consider only for approximate reporting where stale data is acceptable
SELECT CustomerName, SUM(OrderTotal) AS Total
FROM Orders WITH (NOLOCK)
GROUP BY CustomerName;

-- ⚠️ Warning: reads uncommitted data (dirty reads) and can skip or
-- double-count rows during page splits; prefer READ_COMMITTED_SNAPSHOT

Batch Updates

-- ❌ Slow: Row-by-row updates
WHILE EXISTS (SELECT 1 FROM Orders WHERE Status = 'Pending')
BEGIN
    UPDATE TOP (1) Orders
    SET Status = 'Processed'
    WHERE Status = 'Pending';
END

-- ✅ Fast: Single set-based update
UPDATE Orders
SET Status = 'Processed'
WHERE Status = 'Pending';
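On very large tables, even the single set-based UPDATE can bloat the transaction log and escalate to a table lock. A middle ground is to update in fixed-size chunks; a minimal sketch, where the 4000-row batch size is an arbitrary choice kept below SQL Server's 5000-lock escalation threshold:

```sql
-- Update in chunks so each transaction stays small and locks stay granular
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (4000) Orders
    SET Status = 'Processed'
    WHERE Status = 'Pending';

    SET @rows = @@ROWCOUNT; -- 0 once no pending rows remain
END
```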

Avoiding Common Anti-Patterns

Implicit Conversions

-- ❌ Implicit conversion prevents index usage
SELECT * FROM Orders
WHERE OrderNumber = 12345; -- OrderNumber is VARCHAR

-- ✅ Explicit conversion
SELECT * FROM Orders
WHERE OrderNumber = '12345';

Functions on Indexed Columns

-- ❌ Function prevents index usage
SELECT * FROM Orders
WHERE YEAR(OrderDate) = 2024;

-- ✅ Use range query
SELECT * FROM Orders
WHERE OrderDate >= '2024-01-01'
  AND OrderDate < '2025-01-01';

Parameter Sniffing

-- Issue: Query plan optimized for first parameter value
CREATE PROCEDURE GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT * FROM Orders
    WHERE CustomerId = @CustomerId;
END

-- Solution 1 (inside the procedure body): OPTION (RECOMPILE)
-- compiles a fresh plan for each parameter value
SELECT * FROM Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Solution 2 (inside the procedure body): copy the parameter into a local
-- variable; the optimizer then uses average density instead of the sniffed value
DECLARE @CustId INT = @CustomerId;
SELECT * FROM Orders WHERE CustomerId = @CustId;
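A third option, assuming SQL Server 2008+ (and 2016 SP1+ for the CREATE OR ALTER syntax), is the OPTIMIZE FOR UNKNOWN hint, which builds one plan from average statistics without per-execution recompiles:

```sql
CREATE OR ALTER PROCEDURE GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate, TotalAmount
    FROM Orders
    WHERE CustomerId = @CustomerId
    -- Plan is built from column density, not the first sniffed value
    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
END
```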

Pagination

Efficient Offset Pagination

-- ❌ Slow for large offsets
SELECT * FROM Orders
ORDER BY OrderId
OFFSET 1000000 ROWS
FETCH NEXT 50 ROWS ONLY;

-- ✅ Keyset pagination (faster)
SELECT TOP 50 * FROM Orders
WHERE OrderId > @LastOrderId
ORDER BY OrderId;
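The keyset pattern works as shown because OrderId is unique. When paging by a non-unique column such as OrderDate, add the unique key as a tiebreaker so pages neither skip nor repeat rows; a sketch assuming the same Orders table:

```sql
-- Keyset pagination on a non-unique sort column, with OrderId as tiebreaker
SELECT TOP 50 OrderId, OrderDate, TotalAmount
FROM Orders
WHERE OrderDate > @LastOrderDate
   OR (OrderDate = @LastOrderDate AND OrderId > @LastOrderId)
ORDER BY OrderDate, OrderId;
```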

Monitoring and Statistics

Update Statistics

-- Outdated statistics cause poor query plans
UPDATE STATISTICS Orders WITH FULLSCAN;

-- Check when statistics were last updated
SELECT 
    OBJECT_NAME(object_id) AS TableName,
    name AS StatisticsName, -- sys.stats lists statistics objects, not only indexes
    STATS_DATE(object_id, stats_id) AS LastUpdated
FROM sys.stats
WHERE STATS_DATE(object_id, stats_id) < DATEADD(DAY, -7, GETDATE());

Query Execution Plan Analysis

-- Show per-query I/O and timing statistics (pair with the actual execution plan in SSMS)
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Look for:
-- - Table scans (should be seeks)
-- - High row counts in loops
-- - Warnings (implicit conversions, missing indexes)
-- - Sort operations (consider indexes)
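Beyond the current session, the plan cache can be mined for the heaviest statements; a sketch using standard DMVs (the column choices are illustrative):

```sql
-- Top cached statements by average CPU, with their XML plans
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_us, -- microseconds
    qs.execution_count,
    st.text AS query_text,
    qp.query_plan -- click the XML in SSMS to open the graphical plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY avg_cpu_us DESC;
```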

Transaction Best Practices

Keep Transactions Short

-- ❌ Long transaction holds locks
BEGIN TRANSACTION
    SELECT * FROM Orders; -- Slow query
    -- Business logic
    UPDATE Inventory SET Quantity = Quantity - 1;
COMMIT

-- ✅ Transaction only around modifications
SELECT * FROM Orders; -- Outside transaction
-- Business logic
BEGIN TRANSACTION
    UPDATE Inventory SET Quantity = Quantity - 1;
COMMIT

Use Appropriate Isolation Levels

-- READ UNCOMMITTED for approximate reports (same dirty-read caveats as NOLOCK)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- READ COMMITTED SNAPSHOT for OLTP (the default on Azure SQL Database)
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- SNAPSHOT for consistency without blocking
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

Quick Performance Checklist

  • ✅ Add indexes on foreign keys
  • ✅ Include WHERE/JOIN columns in indexes
  • ✅ Use covering indexes for frequently queried columns
  • ✅ Avoid functions on indexed columns
  • ✅ Update statistics regularly
  • ✅ Monitor execution plans
  • ✅ Batch large operations
  • ✅ Use parameterized queries (prevents SQL injection too)
  • ✅ Partition large tables (>50M rows)
  • ✅ Archive old data
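The last two checklist items combine naturally: archive old rows in small batches so the delete never escalates to a table lock or overruns the log. A sketch, assuming a hypothetical OrdersArchive table with the same schema and a 90-day retention cutoff:

```sql
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Move one small batch into the archive and delete it atomically
    DELETE TOP (2000) o
    OUTPUT deleted.* INTO OrdersArchive
    FROM Orders o
    WHERE o.OrderDate < DATEADD(DAY, -90, GETDATE());

    SET @rows = @@ROWCOUNT; -- loop ends when no rows are old enough

    COMMIT;
END
```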

Useful Queries

-- Current running queries
SELECT 
    session_id,
    start_time,
    status,
    command,
    database_name = DB_NAME(database_id),
    wait_type,
    wait_time,
    blocking_session_id,
    text = (SELECT TEXT FROM sys.dm_exec_sql_text(sql_handle))
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID;

-- Index fragmentation
SELECT 
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
INNER JOIN sys.indexes i 
    ON ips.object_id = i.object_id 
    AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
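To act on those fragmentation numbers, a common (though workload-dependent) rule of thumb is: reorganize between roughly 5% and 30%, rebuild above 30%. A sketch with a hypothetical index name:

```sql
-- Reorganize: always online, lightweight, for moderate fragmentation
ALTER INDEX IX_Orders_Example ON Orders REORGANIZE;

-- Rebuild: for heavy fragmentation; ONLINE = ON needs Enterprise edition or Azure SQL
ALTER INDEX IX_Orders_Example ON Orders
REBUILD WITH (ONLINE = ON);
```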

Real-World Impact

Case Study: E-commerce Orders Query

  • Before: 4.2s average, table scan on 10M rows
  • After: 45ms average, covering index + query optimization
  • Solution: Added a composite index, removed SELECT *, avoided functions in the WHERE clause

Always measure before and after! Use execution plans and actual performance metrics, not intuition.


Found these tips helpful? Share your SQL optimization stories!