MySQL: Building the best INDEX for a given SELECT

Table of Contents

The Problem
First, some examples
Algorithm, Step 1 (WHERE "column = const")
Algorithm, Step 2
Algorithm, Step 2a (one range)
Algorithm, Step 2b (GROUP BY)
Algorithm, Step 2c (ORDER BY)
Algorithm end
Stop after equality test
Flags and Low Cardinality
"Covering" Indexes
Redundant/Excessive Indexes
Optimizer picks ORDER BY
IN ( SELECT ... )
Many-to-Many Mapping table
Subqueries and UNIONs
Signs of a Newbie
Speeding up wp_postmeta
Optimizer Trace
Handler counts
Brought to you by Rick James

The Problem

You have a SELECT and you want to build the best INDEX for it. This blog is a "cookbook" on how to do that task.
    ⚈  A short algorithm that works for many simpler SELECTs and helps in complex queries.
    ⚈  Examples of the algorithm, plus digressions into exceptions and variants.
    ⚈  Finally a long list of "other cases".

The hope is that a newbie can quickly get up to speed, and his/her INDEXes will no longer smack of "newbie".

Many edge cases are explained, so even an expert may find something useful here.


Here's the way to approach creating an INDEX, given a SELECT. Follow the steps below, gathering columns to put in the INDEX in order. When the steps give out, you usually have the 'perfect' index.

    1.  Given a WHERE with a bunch of expressions connected by AND: Include the columns (if any), in any order, that are compared to a constant and not hidden in a function.
    2.  You get one more chance to add to the INDEX; do the first of these that applies:
    ⚈   2a. One column used in a 'range' -- BETWEEN, >, LIKE w/o leading wildcard, etc.
    ⚈   2b. All columns, in order, of the GROUP BY.
    ⚈   2c. All columns, in order, of the ORDER BY if there is no mixing of ASC and DESC.
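As a sketch, here is the whole algorithm applied to one hypothetical query (table and column names are invented for illustration):

```sql
-- The query to index:
SELECT  *
    FROM  orders
    WHERE  status = 'shipped'          -- Step 1: "= const" column
      AND  customer_id = 123           -- Step 1: "= const" column
      AND  created_at >= '2024-01-01'  -- Step 2a: the one range; goes last
    ORDER BY  created_at;              -- happens to match the range column

-- Step 1 gives (status, customer_id), in either order;
-- Step 2a appends the single range column, then we stop:
ALTER TABLE orders ADD INDEX(status, customer_id, created_at);
```

Because the range column is also the ORDER BY column, this index handles the sort as well (see the "Bonus" cases in Step 2a).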


This blog assumes you know the basic idea behind having an INDEX. Here is a refresher on some of the key points.

Virtually all INDEXes in MySQL are structured as BTrees. BTrees are very efficient for
    ⚈  Given a key, find the corresponding row(s);
    ⚈  "Range scans" -- That is start at one value for the key and repeatedly find the "next" (or "previous") row.


InnoDB "clusters" the PRIMARY KEY with the data. Hence, given the value of the PK ("PRIMARY KEY"), after drilling down the BTree to find the index entry, you have all the columns of the row when you get there. A "secondary key" (any UNIQUE or INDEX other than the PK) in InnoDB first drills down the BTree for the secondary index, where it finds a copy of the PK. Then it drills down the PK's BTree to find the row.

Every InnoDB table has a PRIMARY KEY. While there is a default if you do not specify one, it is best to explicitly provide a PK.

For completeness: MyISAM works differently. All indexes (including the PK) are in separate BTrees. The leaf nodes of such BTrees have a pointer (usually a byte offset) into the data file.

All discussion here assumes InnoDB tables; however, most statements apply to other Engines as well.

First, some examples

Think of a list of names, sorted by last_name, then first_name. You have undoubtedly seen such lists, and they often have other information such as address and phone number. Suppose you wanted to look me up. If you remember my full name ('James' and 'Rick'), it is easy to find my entry. If you remembered only my last name ('James') and first initial ('R'), you would quickly zoom in on the Jameses and find the Rs among them. There, you might remember 'Rick' and ignore 'Ronald'. But, suppose you remembered my first name ('Rick') and only my last initial ('J'). Now you are in trouble. You would be scanning all the Js -- Jones, Rick; Johnson, Rick; Jamison, Rick; etc, etc. That's much less efficient.

Those equate to
    INDEX(last_name, first_name) -- the order of the list.
    WHERE last_name = 'James' AND first_name = 'Rick'  -- best case
    WHERE last_name = 'James' AND first_name LIKE 'R%' -- pretty good
    WHERE last_name LIKE 'J%' AND first_name = 'Rick'  -- pretty bad

Think about this example as I talk about "=" ('equality predicate') versus "range" in the Algorithm, below.

Algorithm, Step 1 (WHERE "column = const")

    ⚈  WHERE aaa = 123 AND ... : an INDEX starting with aaa is good.
    ⚈  WHERE aaa = 123 AND bbb = 456 AND ... : an INDEX starting with aaa and bbb is good. In this case, it does not matter whether aaa or bbb comes first in the INDEX.
    ⚈  xxx IS NULL : this acts like "= const" for this discussion.
    ⚈  WHERE t1.aa = 123 AND t2.bb = 456 -- You must only consider columns in the current table (here, t1).

Note that the expression must be of the form of column_name = (constant). The following do not work for step 1: DATE(dt) = '...', LOWER(s) = '...', CAST(s ...) = '...', x='...' COLLATE... -- They "hide the column in a function call". The index cannot be used.
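A common fix is to rewrite the expression so the bare column is compared to constants. For example, the DATE(dt) case (assuming dt is a DATETIME column) can be turned into a range, which Step 2a can use:

```sql
-- Not indexable: the column is hidden in a function
SELECT ... FROM t WHERE DATE(dt) = '2016-01-25';

-- Indexable: bare column compared to constants (a "range"; see Step 2a)
SELECT ... FROM t
    WHERE dt >= '2016-01-25'
      AND dt  < '2016-01-25' + INTERVAL 1 DAY;
```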

(If there are no "=" parts AND'd in the WHERE clause, move on to step 2 without any columns in your putative INDEX.)

Algorithm, Step 2

Find the first of 2a / 2b / 2c that applies; use it; then quit. If none apply, then you are through gathering columns for the index.

In some cases it is optimal to do step 1 (all equals) plus step 2c (ORDER BY).

Algorithm, Step 2a (one range)

A "range" shows up as
    ⚈  aaa >= 123 -- any of <, <=, >=, >; but not <> or !=
    ⚈  aaa BETWEEN 22 AND 44
    ⚈  sss LIKE 'blah%' -- but not sss LIKE '%blah'
    ⚈  xxx IS NOT NULL
Add the column in the range to your putative INDEX.

If there are more parts to the WHERE clause, you must stop now.

Complete examples (assume nothing else comes after the snippet)
    ⚈  WHERE aaa >= 123 AND bbb = 1   ⇒ INDEX(bbb, aaa) (WHERE order does not matter; INDEX order does)
    ⚈  WHERE aaa >= 123   ⇒ INDEX(aaa)
    ⚈  WHERE aaa >= 123 AND ccc > 'xyz'   ⇒ INDEX(aaa) or INDEX(ccc) (only one range)
    ⚈  WHERE aaa >= 123 ORDER BY aaa   ⇒ INDEX(aaa) -- Bonus: The ORDER BY will use the INDEX.
    ⚈  WHERE aaa >= 123 ORDER BY aaa DESC   ⇒ INDEX(aaa) -- Same Bonus.

(There are some cases where multiple ranges on a single column will work.)

Algorithm, Step 2b (GROUP BY)

If there is a GROUP BY, all the columns of the GROUP BY should now be added, in the specified order, to the INDEX you are building. (I do not know what happens if one of the columns is already in the INDEX.)

If you are GROUPing BY an expression (including function calls), you cannot use the GROUP BY; stop.

Complete examples (assume nothing else comes after the snippet)
    ⚈  WHERE aaa = 123 AND bbb = 1 GROUP BY ccc   ⇒ INDEX(bbb, aaa, ccc) or INDEX(aaa, bbb, ccc) (='s first, in any order; then the GROUP BY)
    ⚈  WHERE aaa >= 123 GROUP BY xxx   ⇒ INDEX(aaa) (You should have stopped with Step 2a)
    ⚈  GROUP BY x,y   ⇒ INDEX(x,y) (no WHERE)
    ⚈  WHERE aaa = 123 GROUP BY xxx,(a+b)   ⇒ INDEX(aaa) -- expression in GROUP BY, so no use including even xxx.

Algorithm, Step 2c (ORDER BY)

If there is an ORDER BY, all the columns of the ORDER BY should now be added, in the specified order, to the INDEX you are building.

If there are multiple columns in the ORDER BY, and there is a mixture of ASC and DESC, do not add the ORDER BY columns; they won't help; stop. (There is an exception in version 8.0.)

If you are ORDERing BY an expression (including function calls), you cannot use the ORDER BY; stop.

Complete examples (assume nothing else comes after the snippet)
    ⚈  WHERE aaa = 123 GROUP BY ccc ORDER BY ddd   ⇒ INDEX(aaa, ccc) -- should have stopped with Step 2b
    ⚈  WHERE aaa = 123 GROUP BY ccc ORDER BY ccc   ⇒ INDEX(aaa, ccc) -- the ccc will be used for both GROUP BY and ORDER BY
    ⚈  WHERE aaa = 123 ORDER BY xxx ASC, yyy DESC   ⇒ INDEX(aaa) -- mixture of ASC and DESC.

The following are especially good. Normally a LIMIT cannot be applied until after lots of rows are gathered and then sorted according to the ORDER BY. But, if the INDEX gets all the way through the ORDER BY, only (OFFSET + LIMIT) rows need to be gathered. So, in these cases, you win the lottery with your new index:
    ⚈  WHERE aaa = 123 GROUP BY ccc ORDER BY ccc LIMIT 10   ⇒ INDEX(aaa, ccc)
    ⚈  WHERE aaa = 123 ORDER BY ccc LIMIT 10   ⇒ INDEX(aaa, ccc)
    ⚈  ORDER BY ccc LIMIT 10   ⇒ INDEX(ccc)
    ⚈  WHERE ccc > 432 ORDER BY ccc LIMIT 10   ⇒ INDEX(ccc) -- This "range" is compatible with ORDER BY
    ⚈  WHERE ccc < 432 ORDER BY ccc LIMIT 10   ⇒ INDEX(ccc) -- also works

(It does not make much sense to have a LIMIT without an ORDER BY, so I do not discuss that case.)

Algorithm end

You have collected a few columns; put them in an INDEX and ADD it to the table. That will often produce a "good" index for the SELECT you have. Below are some other suggestions that may be relevant.

An example of the Algorithm being 'wrong':
    SELECT ... FROM t WHERE flag = true;
This would (according to the Algorithm) call for INDEX(flag). However, indexing a column that has two (or a small number of) values is almost always useless. This is called 'low cardinality'. The Optimizer would prefer to do a table scan rather than bounce between the index BTree and the data.

On the other hand, the Algorithm is 'right' with
    SELECT ... FROM t WHERE flag = true AND date >= '2015-01-01';
That would call for a compound index starting with the flag: INDEX(flag, date). Such an index is likely to be very beneficial. And it is likely to be more beneficial than INDEX(date).

Even more striking is with a LIMIT:
    SELECT ... FROM t WHERE flag = true AND date >= '2015-01-01' LIMIT 10;
INDEX(flag, date) would be able to stop after 10 rows; INDEX(date) would not.

If your resulting INDEX includes column(s) that are likely to be UPDATEd, note that the UPDATE will have extra work to remove a 'row' from one place in the INDEX's BTree and insert a 'row' back into the BTree. For example:
UPDATE t SET x = ... WHERE ...;
There are too many variables to say whether it is better to keep the index or to toss it.
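The flag + date examples above, as a complete sketch (the table definition is invented for illustration):

```sql
CREATE TABLE t (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
    flag  TINYINT NOT NULL,     -- low cardinality by itself
    date  DATE NOT NULL,
    PRIMARY KEY(id),
    INDEX(flag, date)           -- '=' column first, then the range
) ENGINE=InnoDB;

SELECT  *
    FROM  t
    WHERE  flag = true AND date >= '2015-01-01'
    LIMIT 10;    -- can stop after reading 10 index entries
```

With INDEX(flag, date), the rows for flag = true are adjacent in the BTree and already ordered by date, so the range plus LIMIT touches very few entries; INDEX(date) alone would have to check the flag on every row in the date range.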


(There are exceptions to some of these.)
    ⚈  In 5.6 / 10.0, an index column is limited to 767 bytes -- that is, VARCHAR(255) CHARACTER SET utf8 or VARCHAR(191) CHARACTER SET utf8mb4.
    ⚈  In 5.7 / 10.2, the limit is 3072 bytes.
    ⚈  You can deal with big fields using "prefix" indexing; but see below.
    ⚈  You should not have more than 5 columns in an index. (This is just a Rule of Thumb; up to 16 is allowed.)
    ⚈  You should not have redundant indexes. (See below.)

Stop after equality test

Technically, it is more complex than testing for '=': The relevant part from the documentation: "The optimizer attempts to use additional key parts to determine the interval as long as the comparison operator is =, <=>, or IS NULL. If the operator is >, <, >=, <=, !=, <>, BETWEEN, or LIKE, the optimizer uses it but considers no more key parts." See
Range Optimization

Flags and Low Cardinality

INDEX(flag) is almost never useful if flag has very few values. More specifically, when you say WHERE flag = 1 and "1" occurs more than 20% of the time, such an index will be shunned. The Optimizer would prefer to scan the table instead of bouncing back and forth between the index and the data for more than 20% of the rows.

("20%" is really somewhere between 10% and 30%, depending on the phase of the moon.)

Example of 20% cutoff

Discussions about Cardinality: Cardinality & Range
Cardinality & composite
Single-column index

"Covering" Indexes

A "Covering" index is an index that contains all the columns in the SELECT. It is special in that the SELECT can be completed by looking only at the INDEX BTree. (Since InnoDB's PRIMARY KEY is clustered with the data, "covering" provides no extra benefit when considering the PRIMARY KEY.)

    1.  Gather the list of column(s) according to the "Algorithm", above.
    2.  Add to the end of the list the rest of the columns seen in the SELECT, in any order.

    ⚈  SELECT x FROM tbl WHERE y = 5;   ⇒ INDEX(y,x) -- The algorithm said just INDEX(y)
    ⚈  SELECT x,z FROM tbl WHERE y = 5 AND q = 7;   ⇒ INDEX(y,q,x,z) -- y and q in either order (Algorithm), then x and z in either order (covering).
    ⚈  SELECT x FROM tbl WHERE y > 5 AND q > 7;   ⇒ INDEX(y,q,x) -- y or q first (that's as far as the Algorithm goes), then the other two fields afterwards.

The speedup you get might be minor, or it might be spectacular; it is hard to predict.

    ⚈  It is not wise to build an index with lots of columns. Let's cut it off at 5 (Rule of Thumb).
    ⚈  Prefix indexes cannot 'cover', so don't use them anywhere in a 'covering' index.
    ⚈  There are limits (3KB?) on how 'wide' an index can be, so "covering" may not be possible.
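You can check whether an index "covers" by looking for "Using index" in the Extra column of EXPLAIN. A sketch (table invented for illustration):

```sql
CREATE TABLE tbl (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    x  INT NOT NULL,
    y  INT NOT NULL,
    PRIMARY KEY(id),
    INDEX yx (y, x)      -- contains every column the query needs
) ENGINE=InnoDB;

EXPLAIN SELECT x FROM tbl WHERE y = 5;
-- Extra: "Using index"   <-- this phrase means the index is 'covering'
```

Note that "Using index condition" is a different thing (Index Condition Pushdown), not covering.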

Redundant/Excessive Indexes

INDEX(a,b) can find anything that INDEX(a) could find. So you don't need both. Get rid of the shorter one.

If you have lots of SELECTs and they generate lots of INDEXes, this may cause a different problem. Each index must be updated (sooner or later) for each INSERT. More indexes   ⇒ slower INSERTs. Limit the number of indexes on a table to about 6 (Rule of Thumb).

Notice in the cookbook how it says "in any order" in a few places. If, for example, you have both of these (in different SELECTs):
    ⚈  WHERE a=1 AND b=2 begs for either INDEX(a,b) or INDEX(b,a)
    ⚈  WHERE a>1 AND b=2 begs only for INDEX(b,a)
Include only INDEX(b,a) since it handles both cases with only one INDEX.

Suppose you have a lot of indexes, including (a,b,c,dd) and (a,b,c,ee). Those are getting rather long. Consider either picking one of them, or having simply (a,b,c). Sometimes the selectivity of (a,b,c) is so good that tacking on 'dd' or 'ee' does not make enough difference to matter.
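A sketch of trimming a redundant index (the index names here are hypothetical):

```sql
-- Suppose the table currently has these indexes:
--   INDEX a  (a)
--   INDEX ab (a, b)
-- INDEX(a) is a prefix of INDEX(a,b), so it is redundant:
ALTER TABLE t DROP INDEX a;
```

Any query that would have used INDEX(a) can use INDEX(a,b) instead; meanwhile, each INSERT now has one less BTree to maintain.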

Optimizer picks ORDER BY

The main cookbook skips over an important optimization that is sometimes used. The optimizer will sometimes ignore the WHERE and, instead, use an INDEX that matches the ORDER BY. This, of course, needs to be a perfect match -- all columns, in the same order. And all ASC or all DESC.

This becomes especially beneficial if there is a LIMIT.

But there is a problem. There could be two situations, and the Optimizer is sometimes not smart enough to see which case applies:
    ⚈  If the WHERE does very little filtering, fetching the rows in ORDER BY order avoids a sort and has little wasted effort (because of 'little filtering'). Using the INDEX matching the ORDER BY is better in this case.
    ⚈  If the WHERE does a lot of filtering, the ORDER BY is wasting a lot of time fetching rows only to filter them out. Using an INDEX matching the WHERE clause is better.

What should you do? If you think the "little filtering" is likely, then create an index with the ORDER BY columns in order and hope that the Optimizer uses it when it should.


    ⚈  WHERE a=1 OR a=2 -- This is turned into WHERE a IN (1,2) and optimized that way.
    ⚈  WHERE a=1 OR b=2 usually cannot be optimized.
    ⚈  WHERE x.a=1 OR y.b=2 This is even worse because of using two different tables.

A workaround is to use UNION. Each part of the UNION is optimized separately. For the second case:
   ( SELECT ... WHERE a=1 )   -- and have INDEX(a)
   UNION DISTINCT -- "DISTINCT" is assuming you need to get rid of dups
   ( SELECT ... WHERE b=2 )   -- and have INDEX(b)
   GROUP BY ... ORDER BY ...  -- whatever you had at the end of the original query
Now the query can take good advantage of two different indexes. Note: "Index merge" might kick in on the original query, but it is not necessarily any faster than the UNION.
Sister blog on compound indexes, including 'Index Merge'

The third case (OR across 2 tables) is similar to the second.

If you originally had a LIMIT, UNION gets complicated. If you started with ORDER BY z LIMIT 190, 10, then the UNION needs to be
   ( SELECT ... ORDER BY ... LIMIT 200 )   -- Note: OFFSET 0, LIMIT 190+10
   UNION DISTINCT
   ( SELECT ... ORDER BY ... LIMIT 200 )
   ORDER BY ...
   LIMIT 190, 10              -- Same as originally

In old versions of MySQL, UNION always created a temp table. Newer versions (MySQL 5.7.3 / MariaDB 10.1) can avoid the temp table in selected situations of UNION ALL.

stackoverflow Example of OR to UNION


You cannot directly index a TEXT or BLOB or large VARCHAR or large BINARY column. However, you can use a "prefix" index: INDEX(foo(20)). This says to index the first 20 characters of foo. But... It is rarely useful.

Example of a prefix index:
    INDEX(last_name(2), first_name)
The index for me would contain 'Ja', 'Rick'. That's not useful for distinguishing between 'Jamison', 'Jackson', 'James', etc., so the index is so close to useless that the optimizer often ignores it.

Probably never do UNIQUE(foo(20)) because this applies a uniqueness constraint on the first 20 characters of the column, not the whole column!

More on prefix indexing


DATE, DATETIME, etc. are tricky to compare against.

Some tempting, but inefficient, techniques:
    date_col LIKE '2016-01%'      -- must convert date_col to a string, so acts like a function
    LEFT(date_col, 7) = '2016-01' -- hiding the column in a function
    YEAR(date_col) = 2016         -- hiding the column in a function
All must do a full scan. (On the other hand, it can be handy to use GROUP BY LEFT(date_col, 7) for monthly grouping, but that is not an INDEX issue.)

This is efficient, and can use an index:
        date_col >= '2016-01-01'
    AND date_col  < '2016-01-01' + INTERVAL 3 MONTH
This case works because both right-hand values are converted to constants, then it is a "range". I like the design pattern with INTERVAL because it avoids computing the last day of the month. And it avoids tacking on '23:59:59', which is wrong if you have microsecond times. (And other cases.)


Perform EXPLAIN SELECT... (and EXPLAIN FORMAT=JSON SELECT... if you have 5.6.5 or later). Look at the Key that it chose, and the Key_len. From those you can deduce how many columns of the index are being used for filtering. (JSON makes it easier to get the answer.) From that you can decide whether it is using as much of the INDEX as you thought. Caveat: Key_len only covers the WHERE part of the action; the non-JSON output won't easily say whether GROUP BY or ORDER BY was handled by the index.
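A sketch of reading Key_len (column types invented for illustration):

```sql
-- Given INDEX ab (a, b), where a is INT NOT NULL (4 bytes)
-- and b is INT NULL (4 bytes + 1 byte for the NULL flag):

EXPLAIN SELECT ... FROM t WHERE a = 1 AND b = 2;
-- key: ab, key_len: 9   --> both columns used (4 + 4+1)

EXPLAIN SELECT ... FROM t WHERE a = 1 AND c = 3;
-- key: ab, key_len: 4   --> only 'a' was used via the index;
--                           the test on c is applied after fetching rows
```

For character columns, multiply the length by the maximum bytes per character of the CHARACTER SET (e.g., 4 for utf8mb4), plus 2 bytes for VARCHAR and 1 for NULLability.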


IN (1,99,3) is sometimes optimized as efficiently as "=", but not always. Older versions of MySQL did not optimize it as well as newer versions. (5.6 is possibly the main turning point.)

IN ( SELECT ... )

From version 4.1 through 5.5, IN ( SELECT ... ) was very poorly optimized. The SELECT was effectively re-evaluated every time. Often it can be transformed into a JOIN, which works much faster. Here is a pattern to follow:
    FROM  a
    WHERE  test_a
      AND  x IN (
        SELECT  x
            FROM  b
            WHERE  test_b );
⇒
    FROM  a
    JOIN  b USING(x)
    WHERE  test_a
      AND  test_b;
The SELECT expressions will need "a." prefixing the column names.

Alas, there are cases where the pattern is hard to follow.

5.6 does some optimizing, but probably not as good as the JOIN.

If there is a JOIN or GROUP BY or ORDER BY LIMIT in the subquery, that complicates the JOIN in new format. So, it might be better to use this pattern:
    FROM  a
    WHERE  test_a
      AND  x IN ( SELECT  x  FROM ... );
⇒
    FROM  a
    JOIN        ( SELECT  x  FROM ... ) AS b  USING(x)
    WHERE  test_a;
Caveat: If you end up with two subqueries JOINed together, note that neither has any indexes, hence performance can be very bad. (5.6 improves on it by dynamically creating indexes for subqueries.)

There is work going on in MariaDB and Oracle 5.7, in relation to "NOT IN", "NOT EXISTS", and "LEFT JOIN..IS NULL"; here is an
old discussion on the topic
So, what I say here may not be the final word.


When you have a JOIN and a GROUP BY, you may have the situation where the JOIN exploded more rows than the original query (due to many:many), but you wanted only one row from the original table, so you added the GROUP BY to implode back to the desired set of rows.

This explode + implode is, itself, costly. It would be better to avoid it if possible.

Sometimes the following will work.

Using DISTINCT or GROUP BY to counteract the explosion:
    FROM a
    JOIN b
⇒
SELECT  a.*,
        ( SELECT GROUP_CONCAT(b.y) FROM b WHERE b.x = a.x ) AS ys
    FROM a

When using the second table just to check for existence:
    FROM a
    JOIN b  ON b.x = a.x
⇒
SELECT  a.*
    FROM a
    WHERE EXISTS ( SELECT *  FROM b  WHERE b.x = a.x )

Another variant

Many-to-Many Mapping table

Do it this way.
    CREATE TABLE XtoY (
        # No surrogate id for this table
        x_id MEDIUMINT UNSIGNED NOT NULL,   -- For JOINing to one table
        y_id MEDIUMINT UNSIGNED NOT NULL,   -- For JOINing to the other table
        # Include other fields specific to the 'relation'
        PRIMARY KEY(x_id, y_id),            -- When starting with X
        INDEX      (y_id, x_id)             -- When starting with Y
    ) ENGINE=InnoDB;

    ⚈  Lack of an AUTO_INCREMENT id for this table -- The PK given is the 'natural' PK; there is no good reason for a surrogate.
    ⚈  "MEDIUMINT" -- This is a reminder that all INTs should be made as small as is safe (smaller ⇒ faster). Of course the declaration here must match the definition in the table being linked to.
    ⚈  "UNSIGNED" -- Nearly all INTs may as well be declared non-negative
    ⚈  "NOT NULL" -- Well, that's true, isn't it?
    ⚈  "InnoDB" -- More efficient than MyISAM because of the way the PRIMARY KEY is clustered with the data in InnoDB.
    ⚈  "INDEX(y_id, x_id)" -- The PRIMARY KEY makes it efficient to go one direction; this index makes the other direction efficient. No need to say UNIQUE; that would be extra effort on INSERTs.
    ⚈  In the secondary index, saying just INDEX(y_id) would work because it would implicitly include x_id. But I would rather make it more obvious that I am hoping for a 'covering' index.

To conditionally INSERT new links, use IODKU (INSERT ... ON DUPLICATE KEY UPDATE).

Note that if you had an AUTO_INCREMENT in this table, IODKU would "burn" ids quite rapidly.
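A sketch of conditionally inserting a link (assuming the mapping table is named XtoY, as above, with columns x_id and y_id; since the table has no AUTO_INCREMENT, nothing gets burned):

```sql
INSERT INTO XtoY (x_id, y_id)
    VALUES (123, 456)
    ON DUPLICATE KEY UPDATE x_id = x_id;   -- a no-op when the link already exists
```

INSERT IGNORE would also work here, but it suppresses other errors as well, so IODKU with a no-op update is the safer idiom.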

Subqueries and UNIONs

Each subquery SELECT and each SELECT in a UNION can be considered separately for finding the optimal INDEX.

Exception: In a "correlated" ("dependent") subquery, the part of the WHERE that depends on the outside table is not easily factored into the INDEX generation. (Cop out!)


The first step is to decide what order the optimizer will go through the tables. If you cannot figure it out, then you may need to be pessimistic and create two indexes for each table -- one assuming the table will be used first, one assuming that it will come later in the table order.

The optimizer usually starts with one table and extracts the data needed from it. As it finds a useful (that is, matches the WHERE clause, if any) row, it reaches into the 'next' table. This is called NLJ ("Nested Loop Join"). The process of filtering and reaching to the next table continues through the rest of the tables.

The optimizer usually picks the "first" table based on these hints:
    ⚈  STRAIGHT_JOIN forces the table order.
    ⚈  The WHERE clause limits which rows needed (whether indexed or not).
    ⚈  The table to the "left" in a LEFT JOIN usually comes before the "right" table. (By looking at the table definitions, the optimizer may decide that "LEFT" is irrelevant.)
    ⚈  The current INDEXes will encourage an order.
    ⚈  etc.

Running EXPLAIN tells you the table order that the Optimizer is very likely to use today. After adding a new INDEX, the optimizer may pick a different table order. You should anticipate the order changing, guess at what order makes the most sense, and build the INDEXes accordingly. Then rerun EXPLAIN to see if the Optimizer's brain was on the same wavelength you were on.

You should build the INDEX for the "first" table based on any parts of the WHERE, GROUP BY, and ORDER BY clauses that are relevant to it. If a GROUP/ORDER BY mentions a different table, you should ignore that clause.

The second (and subsequent) table will be reached into based on the ON clause. (Instead of using commajoin, please write JOINs with the JOIN keyword and ON clause!) In addition, there could be parts of the WHERE clause that are relevant. GROUP/ORDER BY are not to be considered in writing the optimal INDEX for subsequent tables.
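A sketch of applying this to a two-table JOIN (tables and columns invented for illustration):

```sql
SELECT  a.*, b.y
    FROM  a
    JOIN  b  ON b.x = a.x
    WHERE  a.aaa = 123       -- filters the 'first' table
    ORDER BY  a.ccc;         -- also on the 'first' table

-- Assuming the Optimizer starts with `a`:
ALTER TABLE a ADD INDEX(aaa, ccc);   -- Step 1 ('='), then Step 2c (ORDER BY)
ALTER TABLE b ADD INDEX(x);          -- for the ON clause when reaching into b
```

If EXPLAIN shows the Optimizer starting with `b` instead, rethink: `b` would then need an index based on its own WHERE parts, and `a` would need INDEX(x, ...) for the ON clause.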


PARTITIONing is rarely a substitute for a good INDEX.

PARTITION BY RANGE is a technique that is sometimes useful when indexing fails to be good enough. In a two-dimensional situation such as nearness in a geographical sense, one dimension can partially be handled by partition pruning; then the other dimension can be handled by a regular index (preferably the PRIMARY KEY). This goes into more detail:
Find nearest 10 pizza parlors


FULLTEXT is now implemented in InnoDB as well as MyISAM. It provides a way to search for "words" in TEXT columns. This is much faster (when it is applicable) than col LIKE '%word%'.
    WHERE x = 1
      AND MATCH (...) AGAINST (...)
always(?) uses the FULLTEXT index first. That is, the whole Algorithm is invalidated when one of the ANDs is a MATCH.

Signs of a Newbie

    ⚈  No "compound" (aka "composite") indexes
    ⚈  Redundant indexes (especially blatant is PRIMARY KEY(id), KEY(id))
    ⚈  Most or all columns individually indexed ("But I indexed everything")
    ⚈  "Commajoin" -- That is FROM a, b WHERE a.x=b.x instead of FROM a JOIN b ON a.x=b.x

Speeding up wp_postmeta

The published table (see Wikipedia) is
    CREATE TABLE wp_postmeta (
      meta_id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      post_id bigint(20) unsigned NOT NULL DEFAULT '0',
      meta_key varchar(255) DEFAULT NULL,
      meta_value longtext,
      PRIMARY KEY (meta_id),
      KEY post_id (post_id),
      KEY meta_key (meta_key)
    );

The problems:

    ⚈  The AUTO_INCREMENT provides no benefit; in fact it slows down most queries (because each lookup must first drill down a secondary index, then use the artificial PK to reach the row)
    ⚈  The AUTO_INCREMENT is extra clutter - both on disk and in cache.
    ⚈  Much better is PRIMARY KEY(post_id, meta_key) -- clustered, handles both parts of usual JOIN.
    ⚈  BIGINT is overkill, but that can't be fixed without changing other tables.
    ⚈  VARCHAR(255) can be a problem in 5.6 with utf8mb4; see workarounds below.
    ⚈  When would meta_key or meta_value ever be NULL?

The solutions:
    CREATE TABLE wp_postmeta (
        post_id BIGINT UNSIGNED NOT NULL,
        meta_key VARCHAR(255) NOT NULL,
        meta_value LONGTEXT NOT NULL,
        PRIMARY KEY(post_id, meta_key)
        ) ENGINE=InnoDB;

The typical usage:
    JOIN wp_postmeta AS m  ON p.ID = m.post_id   -- p being wp_posts
    WHERE m.meta_key = '...'

    ⚈  The composite PRIMARY KEY goes straight to the desired row, no digression through secondary index, nor search through multiple rows.
    ⚈  INDEX(meta_key) may or may not be useful, depending on what other queries you have.
    ⚈  InnoDB is required for the 'clustering'.
    ⚈  Going forward, use utf8mb4, not utf8. However, you should be consistent across all WP tables and in your connection parameters.

The error "max key length is 767" can happen in MySQL 5.6 when trying to use CHARACTER SET utf8mb4. Do one of the following (each has a drawback) to avoid the error:

    ⚈  Upgrade to 5.7.7 for 3072 byte limit -- your cloud may not provide this;
    ⚈  Change 255 to 191 on the VARCHAR -- you lose any keys longer than 191 characters (unlikely?);
    ⚈  ALTER .. CONVERT TO utf8 -- you lose Emoji and some of Chinese;
    ⚈  Use a "prefix" index -- you lose some of the performance benefits;
    ⚈  Reconfigure (if staying with 5.6.3 - 5.7.6) -- 4 things to change: Barracuda + innodb_file_per_table + innodb_large_prefix + dynamic or compressed.

If you need the ability to have multiple meta keys with the same key name for one post, then use this solution. It is nearly as good as the above suggestion.
    CREATE TABLE wp_postmeta (
        meta_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        post_id BIGINT UNSIGNED NOT NULL,
        meta_key VARCHAR(255) NOT NULL,
        meta_value LONGTEXT NOT NULL,
        PRIMARY KEY(post_id, meta_key, meta_id),  -- to allow dup meta_key for a post
        INDEX(meta_id)    -- to keep AUTO_INCREMENT happy
        ) ENGINE=InnoDB;

WordPress forum - on slow postmeta

The standard schema for wp_postmeta provides poor indexes. This leads to performance problems.

By changing the schema to this, most references to meta data will be faster:
CREATE TABLE wp_postmeta (
    post_id …,
    meta_key …,
    meta_value …,
    PRIMARY KEY(post_id, meta_key)
) ENGINE=InnoDB;

    ⚈  The current AUTO_INCREMENT column is a waste of space, and slows down queries because it is the PRIMARY KEY, thereby eschewing the "natural" "composite" PK of (post_id, meta_key).
    ⚈  InnoDB further boosts the performance of that PK due to "clustering". (I hope you are not still using MyISAM!)
    ⚈  If you are using MySQL 5.6 (or MariaDB 10.0 or 10.1), change meta_key from VARCHAR(255) to VARCHAR(191). (We can discuss the reasons, and workarounds, in a separate question, if 191 is not sufficient.)
    ⚈  INDEX(meta_key) is optional, but needed if you want to "find posts that have a particular key".

Caveat: These changes will speed up many uses of postmeta, but not all. I don't think it will slow down any use cases. (Please provide such queries if you encounter them. It could be a caching issue, not a real degradation.) If you would like to present your CREATE TABLE, I can provide an ALTER to convert it to this.

If you need the ability to have multiple meta keys with the same key name for one post, then use this solution. It is nearly as good as the above suggestion.

    meta_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, -- keep, after all
    ...
    PRIMARY KEY(post_id, meta_key, meta_id),         -- to allow dup meta_key for a post
Source doc

Possible ALTER


    ⚈  I have no way to test this.
    ⚈  This does not address the 767 error.
    ⚈  This keeps meta_id because some WP user pointed out that it is referenced by other tables.
    ⚈  It assumes you might have multiple rows for a (post_id, meta_key) combo. (This seems like poor schema design?)
    ⚈  All this does is speed up typical SELECTs involving postmeta.
    ⚈  This probably applies to woocommerce, too.
    ⚈  If you use this, please dump your database and be ready to reload it in case of trouble.
ALTER TABLE wp_postmeta
    DROP PRIMARY KEY,                             -- was (meta_id)
    ADD PRIMARY KEY(post_id, meta_key, meta_id),  -- to allow dup meta_key for a post
    ADD INDEX(meta_id);    -- to keep AUTO_INCREMENT happy

Consider changing wp_commentmeta, wp_sitemeta, wp_termmeta, and wp_usermeta in similar ways.


The EXPLAIN command helps give you some insight into what is going on, but fails to direct you toward better indexing or query formulation. Anyway, here are the options:

    1.  EXPLAIN is quite limited. It does not handle LIMIT very well, nor does it necessarily say which step uses the filesort, or even if there are multiple filesorts. In general, the "Rows" column is useful, but in certain situations, it is useless. Simple rule: A big number is probably a bad sign.

    2.  EXPLAIN EXTENDED + SHOW WARNINGS; provides the rewritten version of the query. This does not do a lot. It does give clues of the distinction between ON and WHERE in JOINs. (In newer versions EXTENDED is not needed; you still get the WARNING.)

    3.  EXPLAIN FORMAT=JSON provides more details on the cost-based analysis and spells out various steps, including filesorts.

    4.  "Optimizer trace" goes a bit further. (It is rather tedious to read.) See next section.

Optimizer Trace

This is available beginning with 5.6.3.
SET optimizer_trace = 'enabled=on';  -- see also optimizer_trace_offset for keeping several traces
SELECT ...;                          -- Your query
SELECT * FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';

This last thing will produce a "table" with 4 columns. Here is some PHP code to handle that if you get an array of arrays:
        foreach ($opt_traces as $opt_trace) {
            list($query, $trace, $mbbmms, $priv) = $opt_trace;
            echo "Optimizer Trace:<pre>$trace</pre>\n";          // This will be JSON.
            if ($mbbmms)
                echo "$mbbmms bytes missing beyond max_mem_size<br>\n";
            if ($priv)
                echo "Insufficient privileges for optimizer trace<br>\n";
        }

For even a moderately complex SELECT, you may exceed the current setting of optimizer_trace_max_mem_size; set it higher (with another SET) before running the query if needed.
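A sketch of raising it (the value shown is arbitrary; pick something big enough for your query):

```sql
SET optimizer_trace_max_mem_size = 1000000;   -- bytes; run this before the SELECT
```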

Handler counts

The "Rows" of EXPLAIN gives an estimate of the number of rows touched. Sometimes this is a terrible estimate. There is a way to get the exact number of rows: SHOW SESSION STATUS LIKE 'Handler%';. In some situations, this is an excellent way to get a feel for a specific query, or to compare two similar queries/indexes/etc.
FLUSH STATUS;  -- zero out most of SESSION STATUS (non-existent on really old versions of MySQL)
SELECT ...;    -- run your query
SHOW SESSION STATUS LIKE 'Handler%';

Sum up the Handler_read_% values to get what is perhaps an exact count of how many rows (index and/or table) are touched in performing the SELECT. If there are non-zero values for Handler_write_%, then temp table(s) were created for this query.

(Note: Using the equivalent SELECT ... FROM information_schema.session_status does not work well, since that SELECT itself increments some of the Handler counters.)
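A sketch of comparing access paths this way (table and values are made up):

```sql
FLUSH STATUS;
SELECT * FROM t WHERE indexed_col = 42;
SHOW SESSION STATUS LIKE 'Handler_read%';
-- A full table scan shows up as a large Handler_read_rnd_next;
-- an index lookup shows up as Handler_read_key (plus Handler_read_next
-- for each additional row in a range scan).
```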


    ⚈  "Index merge intersection" is almost always worse than the equivalent composite index.

    ⚈  "Index merge union" is almost never used. You might be able to turn OR into UNION effectively.
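A sketch of the OR-to-UNION rewrite (assuming INDEX(a) and INDEX(b) exist on the hypothetical table t):

```sql
-- Instead of:  SELECT * FROM t WHERE a = 1 OR b = 2;
-- each SELECT below can use its own index:
SELECT * FROM t WHERE a = 1
UNION DISTINCT                -- UNION ALL if the two sets cannot overlap
SELECT * FROM t WHERE b = 2;
```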

    ⚈  The newer Optimizer creates an index on the fly for "derived tables" (JOIN ( SELECT ... )). But this is rarely as efficient as rewriting the query to avoid such a subquery when it returns lots of rows. (Again, none of the EXPLAINs will point you this way.)

    ⚈  Something often forgotten about (but does show up as an unexplained large "Rows"): COLLATIONs must match. This may show up when JOINing on a string column but failing to use the same CHARACTER SET and/or COLLATION for the columns in both tables.

    ⚈  A trick to make use of the PK clustering: PRIMARY KEY(foo, id), INDEX(id). This comes in handy when many queries say WHERE foo = ... but need id for uniqueness. A table too big for the buffer_pool benefits especially.
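A sketch of that trick (the column names are made up):

```sql
CREATE TABLE t (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    foo INT NOT NULL,            -- the column most queries filter on
    -- ... other columns ...
    PRIMARY KEY(foo, id),        -- clusters rows with the same foo together
    INDEX(id)                    -- sufficient to keep AUTO_INCREMENT happy
) ENGINE=InnoDB;
```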

    ⚈  Nothing (except experience) says how nearly useless "prefix" indexing is (INDEX(bar(10))).

    ⚈  FORCE INDEX is handy for experimenting, but almost always a bad idea for production. "It may help today, but the data will change tomorrow and make things worse."

    ⚈  When are a pair of UNIQUE/non-UNIQUE indexes redundant? See
this stackoverflow answer


Initial posting: March, 2015;   Refreshed: Feb, 2016;   Add DATE: June, 2016;   WP example: May and Sep, 2017;   Explain, Opt trace, and Miscellany: Jan, 2019

The tips in this document apply to MySQL, MariaDB, and Percona.

Basics of indexing

Good slide deck on BTrees

Percona 2015 Tutorial Slides

Sheeri Cabral's Index PDF
Sheeri Cabral's Index talk
Sheeri's EXPLAIN talk

Bill Karwin on designing indexes

Some info in the manual: ORDER BY Optimization
A short, but complicated, example

Manual page on range accesses in composite indexes

Some discussion of JOIN

Detecting whether an index is needed

This blog is the consolidation of a Percona tutorial I gave in 2013, plus many years of experience in fixing thousands of slow queries on hundreds of systems.

I apologize that this does not tell you how to create INDEXes for all SELECTs. Some are just too complex.

Indexing 101: Optimizing MySQL queries on a single table
(Stephane Combaudon - Percona)

A complex query, well explained.

What does Using filesort mean in MySQL?

JOIN - with step-by-step explanation

Do not start index with a 'range'

Natural PK vs AUTO_INCREMENT debate

3-hour slides from Jynus

Which to pick - WHERE or ORDER BY

A moderately involved example

Key_len inconsistencies - Also see the blogs referred to.

with good pictures on how a BTree works, the order of columns in an index, etc.

Good slides on Optimization by Jaime Crespo (Percona Live Europe 2018)

-- Rick James

MySQL Documents by Rick James

HowTo Techniques for Optimizing Tough Tasks:

Partition Maintenance (DROP+REORG) for time series (includes list of PARTITION uses)
Big DELETEs - how to optimize -- and other chunking advice, plus a use for PARTITIONing
    Chunking lengthy DELETE/UPDATE/etc.
Data Warehouse techniques:
    Overview   Summary Tables   High speed ingestion  
Entity-Attribute-Value -- a common, poorly performing, design pattern (EAV); plus an alternative
Find the nearest 10 pizza parlors -- efficient searching on Latitude + Longitude (another PARTITION use)
    Lat/Long representation choices
Pagination, not with OFFSET, LIMIT
Techniques on efficiently finding a random row (On beyond ORDER BY RAND())
GUID/UUID Performance (type 1 only)
IP Range Table Performance -- or other disjoint ranges
Rollup Unique User Counts
Alter of a Huge table -- Mostly obviated by 5.6
Latest 10 news articles -- how to optimize the schema and code for such
Build and execute a "Pivot" SELECT (showing rows as columns)
Find largest row for each group ("groupwise max")

Other Tips, Tuning, Debugging, Optimizations, etc...

Rick's RoTs (Rules of Thumb -- lots of tips)
Memory Allocation (caching, etc)
Character Set and Collation problem solver
    Trouble with UTF-8   If you want case folding, but accent sensitivity, please file a request at .
    Python tips,   PHP tips,   other language tips
    utf8 Collations   utf8mb4 Collations on 8.0
Converting from MyISAM to InnoDB -- includes differences between them
Compound INDEXes plus other insights into the mysteries of INDEXing
Cookbook for Creating Indexes
    Many-to-many mapping table   wp_postmeta   UNION+OFFSET
MySQL Limits -- built-in hard limits
    767-byte INDEX limit
Galera, tips on converting to (Percona XtraDB Cluster, MariaDB 10, or manually installed)
5.7's Query Rewrite -- perhaps 5.7's best perf gain, at least for this forum's users
Request for tuning / slowlog info
Best of MySQL Forum -- index of lots of tips, discussions, etc

Analyze MySQL Performance
    Analyze VARIABLEs and GLOBAL STATUS     Analyze SlowLog

My slides from conferences
Percona Live 4/2017 - Rick's RoTs (Rules of Thumb) - MySQL/MariaDB
Percona Live 4/2017 - Index Cookbook - MySQL/MariaDB
Percona Live 9/2015 - PARTITIONing - MySQL/MariaDB
(older ones upon request)

Contact me via LinkedIn; be sure to include a brief teaser in the Invite request:   View Rick James's profile on LinkedIn

Did my articles help you out? Like what you see? Consider donating:

☕️ Buy me a Banana Latte ($4) There is no obligation but it would put a utf8mb4 smiley 🙂 on my face, instead of the Mojibake "ðŸ™‚"