Jan 15, 2025 - 8 MIN READ
The 99% Query: How We Tuned MySQL 8.4 Across Millions of Rows

A practical walkthrough of how composite indexing, query rewriting, and systematic profiling turned our slowest database operations into sub-millisecond responses.

Clay Levering

Engineering Leader at Blu Digital Group

When your multi-tenant Laravel application serves enterprise clients processing terabytes of media content, database performance isn't academic — it's existential. A slow query doesn't just frustrate users; it backs up transcoding pipelines, delays deliveries, and costs real money.

This is the story of how we took our worst-performing queries from seconds to sub-millisecond response times — a 99% improvement in the cases that mattered most.

The Problem

Our application had grown organically over several years. What started as a straightforward CRUD app had evolved into a complex multi-tenant system with dozens of interconnected tables, some containing millions of rows. The queries that powered our core workflows — asset searches, job status lookups, reporting aggregations — were getting slower by the week.

The classic symptoms were all there: EXPLAIN plans showing full table scans, temporary tables being created on disk, and join operations that made you wince. We had indexes, sure — but they'd been added reactively over the years without a coherent strategy.
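Running EXPLAIN against a representative slow query surfaces all of these symptoms at once. A minimal sketch, with a hypothetical `assets` table and column names standing in for our real schema:

```sql
-- Hypothetical asset search; table and column names are illustrative.
EXPLAIN
SELECT id, filename, status
FROM assets
WHERE tenant_id = 42 AND status = 'ready'
ORDER BY created_at DESC
LIMIT 50;

-- Warning signs to look for in the output:
--   type: ALL              -> full table scan
--   Extra: Using filesort  -> sort not satisfied by any index
--   Extra: Using temporary -> temporary table (possibly spilled to disk)
```

On MySQL 8.x, `EXPLAIN ANALYZE` goes further and shows actual row counts and timings per plan step, which is useful for confirming where the time really goes.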

The Approach

Rather than throwing hardware at the problem (the default move when you're on RDS), we took a systematic approach:

1. Instrument everything. Before touching a single index, we enabled the slow query log and built a simple dashboard that tracked our top 20 slowest queries by frequency and duration. This gave us a prioritized hit list.
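As a sketch, enabling the slow query log looks like this (thresholds are illustrative; on RDS these are set through a parameter group rather than `SET GLOBAL`, and the log is typically written to a table or CloudWatch rather than a local file):

```sql
-- Enable the slow query log at runtime; persist the settings in my.cnf
-- or via SET PERSIST on MySQL 8.x if you want them to survive a restart.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;  -- log anything slower than 500ms
SET GLOBAL log_queries_not_using_indexes = 'ON';
```

Setting `long_query_time` low enough matters: a query that takes 200ms but runs thousands of times a minute can cost more in aggregate than a 5-second query that runs hourly, which is why we ranked by frequency as well as duration.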

2. Understand the data access patterns. We mapped out how the application actually queried each table — not how we thought it queried them. This revealed several queries that had been written for an earlier version of the schema and were doing unnecessary joins.
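One way to see how the application actually queries each table is the statement digest summary in `performance_schema`, which groups queries by normalized shape. A minimal sketch (timer columns are in picoseconds, hence the division):

```sql
-- Top query shapes by total time spent, aggregated across executions.
SELECT
  DIGEST_TEXT,
  COUNT_STAR                      AS executions,
  ROUND(SUM_TIMER_WAIT / 1e12, 3) AS total_sec,
  ROUND(AVG_TIMER_WAIT / 1e12, 6) AS avg_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
```

Comparing this list against what you *think* the hot paths are is often humbling; it is exactly how stale queries written for an older schema version tend to get caught.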

3. Composite indexing strategy. The biggest wins came from designing composite indexes that matched our actual query patterns. A single well-designed composite index often replaced three or four individual column indexes while dramatically improving performance.
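The pattern can be sketched like this, again with hypothetical index and column names. The key design rule is column order: equality predicates first, then the sort column, so a single index satisfies both the WHERE clause and the ORDER BY:

```sql
-- Query shape being served:
--   WHERE tenant_id = ? AND status = ? ORDER BY created_at DESC
-- Replace three single-column indexes with one composite index:
ALTER TABLE assets
  DROP INDEX idx_tenant_id,
  DROP INDEX idx_status,
  DROP INDEX idx_created_at,
  ADD INDEX idx_tenant_status_created (tenant_id, status, created_at);
```

Because MySQL uses the leftmost prefix of a composite index, this one index also serves queries filtering on `tenant_id` alone or on `tenant_id` plus `status`, which is how it can replace several narrower indexes while shrinking write amplification.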

4. Query rewriting. Some queries just needed to be rewritten. Subqueries that could be JOINs, OR clauses that could be UNION ALL, and SELECT * that could be specific column lists.
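The OR-to-UNION rewrite is the least obvious of the three, so here is a sketch with hypothetical columns. An OR across two different indexed columns often prevents the optimizer from using either index efficiently; splitting the branches lets each use its own:

```sql
-- Before: the OR across two columns can defeat index usage.
SELECT id, filename FROM assets
WHERE owner_id = 42 OR reviewer_id = 42;

-- After: each branch can use its own index.
SELECT id, filename FROM assets WHERE owner_id = 42
UNION ALL
SELECT id, filename FROM assets WHERE reviewer_id = 42;
```

One caveat: UNION ALL does not deduplicate, so it is only a drop-in replacement when the branches are mutually exclusive; if a row can match both predicates, use UNION instead and accept the dedup cost.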

The Results

The numbers speak for themselves:

  • Asset search queries: 2.3s → 12ms (99.5% improvement)
  • Job status aggregation: 890ms → 8ms (99.1% improvement)
  • Reporting pipelines: 45s → 1.2s (97.3% improvement)
  • Overall RDS CPU utilization: 78% average → 23% average

The RDS CPU drop was particularly satisfying — it meant we could actually downsize our database instance, turning a performance project into a cost savings project too.

Lessons Learned

The biggest takeaway: database performance is a design problem, not a hardware problem. You can always throw bigger instances at slow queries, but that just delays the inevitable and burns money in the meantime.

Take the time to understand your actual query patterns, design indexes intentionally, and profile regularly. The compound effect of many small optimizations is often more impactful than any single silver bullet.

claylevering.com © 2026