PARTITION Maintenance in MySQL

Table of Contents


First, my Opinions on PARTITIONing

Why PARTITIONing is useless for speed

Use Cases for PARTITIONing


INDEXes in a PARTITIONed Table

PARTITION Maintenance for the Time-Series Case

High Level View of the Code


When to do the ALTERs?


Detailed code


Brought to you by Rick James


This blog covers
    ⚈  PARTITIONing uses and non-uses
    ⚈  How to Maintain a time-series PARTITIONed table
    ⚈  AUTO_INCREMENT secrets

    ⚈  To "shard" is to split data across multiple machines. (This document does not cover sharding.)
    ⚈  To "partition" is to split one table into multiple sub-tables (partitions) on a single MySQL instance.

First, my Opinions on PARTITIONing

Taken from
Rick's RoTs - Rules of Thumb for MySQL
    ⚈  #1: Don't use PARTITIONing until you know how and why it will help.
    ⚈  Don't use PARTITION unless you will have >1M rows
    ⚈  No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted) (This may be fixed in 5.6.6, or maybe in 8.0)
    ⚈  PARTITION BY RANGE is the only useful method. (I include RANGE COLUMNS.)
    ⚈  SUBPARTITIONs are not useful.
    ⚈  The partition field should not be the first field in any key. (This is a Rule of Thumb, not an absolute.)
    ⚈  It is OK to have an AUTO_INCREMENT as the first part of a compound key, or in a non-UNIQUE index.

It is so tempting to believe that PARTITIONing will solve performance problems. But it is so often wrong.

PARTITIONing splits up one table into several smaller tables. But table size is rarely a performance issue. Instead, I/O time and indexes are the issues.

Why PARTITIONing is useless for speed

Well, mostly useless. It rarely provides performance benefits, and sometimes it even slows down the query.

Let's look at PARTITION BY (X). This breaks up the table into 'sub-tables' based on values of the column X; different partition types (RANGE, LIST, HASH, etc) use different expressions for the splitting.

Let's look at various SELECTs/INSERTs/DELETEs that might try to take advantage of partitioning in the WHERE clause.

    ⚈  WHERE without mentioning X -- What's the advantage of partitioning? Now the query must check each and every partition for the rows in question. You can make the counter-argument that the BTree being searched is slightly shallower. The counter-counter-argument is that the "pruning" may cost more than the savings in the BTree being shallower. Partitioning is probably slower.

    ⚈  WHERE X = 1234 -- This lets "partition pruning" look only in that one partition. But that's no better than INDEX(x) on a non-partitioned table. And you probably need that index anyway; after first 'pruning' down to the desired partition, you still need the index. No faster.
A common fallacy: "Partitioning will make my queries run faster". It won't. Ponder what it takes for a 'point query'. Without partitioning, but with an appropriate index, there is a BTree (the index) to drill down to find the desired row. For a billion rows, this might be 5 levels deep. With partitioning, first the partition is chosen and "opened", then a smaller BTree (of say 4 levels) is drilled down. Well, the savings of the shallower BTree is consumed by having to open the partition. Similarly, if you look at the disk blocks that need to be touched, and which of those are likely to be cached, you come to the conclusion that about the same number of disk hits is likely. Since disk hits are the main cost in a query, Partitioning does not gain any performance (at least for this typical case). The 2D case (below) gives the main contradiction to this discussion.

    ⚈  WHERE X = 1234 AND ... -- There is possibly an index to help with the rest of the where clause. In the non-partitioned equivalent table, INDEX(X, ...) serves the same purpose and does it at least as fast.

    ⚈  WHERE X BETWEEN ... AND ... -- For BY LIST and BY RANGE, the above arguments apply. But for PARTITION BY HASH(X), any kind of "range" requires the Optimizer to look in all partitions. Slower!

And what about ORDER BY x? Well, the results of scanning each partition need to be combined and then sorted. In the equivalent non-partitioned table, there is at least a chance of using an index; not so with a partitioned table. Slower.
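If you want to see the pruning (or lack of it) for yourself, EXPLAIN shows which partitions a query touches. A minimal sketch, assuming a table tbl PARTITION BY RANGE on column x (names are made up for illustration):

    EXPLAIN PARTITIONS                         -- in 5.7+, plain EXPLAIN has a 'partitions' column
        SELECT * FROM tbl WHERE x = 1234;      -- should list just one partition (pruned)
    EXPLAIN PARTITIONS
        SELECT * FROM tbl ORDER BY x LIMIT 10; -- lists every partition: no pruning, no index help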

What about disk space? Do a SHOW TABLE STATUS and look at Data_free. It is probably a huge number because each partition has 4MB-7MB of 'free' space of its own. Ugh.
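To check that overhead on your own table (the schema and table names here are assumptions):

    SHOW TABLE STATUS LIKE 'tbl'\G
    -- Or, per partition:
    SELECT  PARTITION_NAME, TABLE_ROWS, DATA_FREE
        FROM  information_schema.PARTITIONS
        WHERE  TABLE_SCHEMA = 'test'
          AND  TABLE_NAME   = 'tbl';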

Now let's look at the very few cases that benefit from PARTITION BY RANGE. (I've been looking for more cases for a decade!)

Use Cases for PARTITIONing

Only two cases benefit SELECTs; the rest are beneficial for other reasons.

Use case #1 -- Time series. Perhaps the most common use case where PARTITIONing shines is a dataset from which "old" data is periodically deleted. RANGE PARTITIONing by day (or other unit of time) lets you do a nearly instantaneous DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE. Much of this blog is focused on this use case; see below.

The big win for Case #1: DROP PARTITION is a lot faster than DELETEing a lot of rows.

Use case #2 -- 2-D index. INDEXes are inherently one-dimensional. If you need two "ranges" in the WHERE clause, try to migrate one of them to PARTITIONing.

Finding the nearest 10 pizza parlors on a map needs a 2D index. Partition pruning sort of gives a second dimension. See Latitude/Longitude Indexing
That uses PARTITION BY RANGE(latitude) together with PRIMARY KEY(longitude, ...) See also a comparison of "find nearest" techniques

The big win for Case #2: Scanning fewer rows.
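A minimal sketch of the lat/lng layout described above (the names, datatypes, and partition boundaries here are illustrative, not taken from the linked article):

    CREATE TABLE pizza (
        id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
        lat MEDIUMINT NOT NULL,          -- latitude  * 10000
        lng MEDIUMINT NOT NULL,          -- longitude * 10000
        PRIMARY KEY (lng, lat, id),      -- 2nd dimension; must include the partition key
        INDEX(id)                        -- id first in *some* index, so AUTO_INCREMENT works
    ) ENGINE=InnoDB
    PARTITION BY RANGE (lat) (
        PARTITION p_s45  VALUES LESS THAN (-450000),
        PARTITION p_equ  VALUES LESS THAN (0),
        PARTITION p_n45  VALUES LESS THAN (450000),
        PARTITION p_pole VALUES LESS THAN MAXVALUE
    );

Pruning on lat, plus the PK starting with lng, narrows the search to one "stripe" of the map instead of the whole table.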

Use case #3 -- Hot spot. This is a bit complicated to explain. Given this combination:
    ⚈  A table's index is too big to be cached, but the index for one partition is cacheable, and
    ⚈  The index is randomly accessed, and
    ⚈  Data ingestion would normally be I/O bound due to updating the index
Partitioning can keep all the index "hot" in RAM, thereby avoiding a lot of I/O.

The big win for Case #3: Improving caching to decrease I/O to speed up operations. Some use cases involve both Cases #1 and #3.

Use case #4 -- Transportable tablespace. Using EXPORT/IMPORT partition for quickly archiving or importing data. (IMPORTing could be tricky because of the partition key.) 5.7 Import partition
Copying Transportable Tablespaces for Partitioned Tables in MariaDB
The big win for Case #4: Quickly moving a partition in between tables (or servers).
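The 5.7 partition-level syntax looks roughly like this (table and partition names are hypothetical; the file copying happens at the OS level, outside SQL):

    -- On the source server:
    FLUSH TABLES tbl FOR EXPORT;    -- quiesces the table, writes .cfg metadata files
    -- ... copy tbl#P#p2015.ibd / .cfg out of the datadir ...
    UNLOCK TABLES;
    -- On the destination server:
    ALTER TABLE tbl DISCARD PARTITION p2015 TABLESPACE;
    -- ... place the copied files into the destination datadir ...
    ALTER TABLE tbl IMPORT PARTITION p2015 TABLESPACE;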

Use case #5 -- Separate datasources. Similar to #4. You have multiple sources for the dataset and they have (or can be modified to have) the same schema. Each source is inserted into one partition by partition name using LOAD DATA. This assumes a full replacement when loading -- truncate the partition, then reload. And this is the first useful case for PARTITION BY LIST. (Note: Still, there is probably no performance gain in SELECT.)

How to handle 20M+ records from tables with same structure in MySQL
(especially the answer by GMB)
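For the truncate-then-reload pattern of Case #5, explicit partition selection (5.6+) looks like this (the table, partition, and file names are made up):

    ALTER TABLE combined TRUNCATE PARTITION p_source3;
    LOAD DATA INFILE '/tmp/source3.csv'
        INTO TABLE combined PARTITION (p_source3)
        FIELDS TERMINATED BY ',';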

Use case #6 -- I have yet to find a 6th use case.

Note that almost always, these use cases involve BY RANGE partitioning, not the other forms.


AUTO_INCREMENT secrets

    ⚈  For AUTO_INCREMENT to work (in any table), it must be the first field in some index. Period. There are no other requirements on indexing it.
    ⚈  Being the first field in some index lets the engine find the 'next' value when opening the table.
    ⚈  AUTO_INCREMENT need not be UNIQUE. What you lose: prevention of explicitly inserting a duplicate id. (This is rarely needed, anyway.)

Examples (where id is AUTO_INCREMENT):
    ⚈  PRIMARY KEY (...), INDEX(id) -- to get clustering on something more useful than id
    ⚈  PRIMARY KEY (...), UNIQUE(id, partition_key) -- not useful
    ⚈  INDEX(id), INDEX(...) (but no UNIQUE keys)
    ⚈  PRIMARY KEY(id), ... -- works only if id is the partition key (not very useful)
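Putting those rules together, here is a sketch of a partitioned table with a working AUTO_INCREMENT (the schema is invented for illustration):

    CREATE TABLE sensor_log (
        id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        sensor_id SMALLINT UNSIGNED NOT NULL,
        dt        DATETIME NOT NULL,
        reading   FLOAT NOT NULL,
        PRIMARY KEY (sensor_id, dt, id),   -- clusters by sensor+time; id last makes it unique
        INDEX(id)                          -- id first in *some* index, so AUTO_INCREMENT works
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(dt)) (
        PARTITION start  VALUES LESS THAN (0),
        PARTITION p2021  VALUES LESS THAN (TO_DAYS('2022-01-01')),
        PARTITION future VALUES LESS THAN MAXVALUE
    );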

INDEXes in a PARTITIONed Table

If you change table from non-PARTITIONed to PARTITIONed, or vice versa, you should rethink all the indexes. Here are some considerations:

    ⚈  Since an index's scope is limited to one partition, the uniqueness constraint of a UNIQUE index is useless. So use INDEX, not UNIQUE.
    ⚈  Similarly, the uniqueness constraint of a PRIMARY KEY is useless. Instead consider having a PK for its "clustering" effects.
    ⚈  If you don't have a 'natural' PK (a combination of column(s) that is naturally unique), and want the advantage of clustering, then do have an AUTO_INCREMENT id and put it last in the PK. Then have INDEX(id).
    ⚈  If you choose to have the partition key in an index, it is usually (but not always) best to put the partition key last in any index. Note that the "pruning" is done before looking at the index, so the partition key is effectively used before the rest of the column(s) in the index.

Without PARTITIONing, the guidelines are here:
Cookbook for creating indexes from SELECTs

PARTITION Maintenance for the Time-Series Case

Let's focus on the maintenance task involved in Case #1, as described above.

You have a large table that is growing on one end and being purged on the other. Examples include news, logs, and other transient information. PARTITION BY RANGE is an excellent vehicle for such a table.
    ⚈  DROP PARTITION is much faster than DELETE. (This is the big reason for doing this flavor of partitioning.)
    ⚈  Queries often limit themselves to 'recent' data, thereby taking advantage of "partition pruning".

Depending on the type of data, and how long before it expires, you might have daily or weekly or hourly (etc) partitions.

There is no simple SQL statement to "drop partitions older than 30 days" or "add a new partition for tomorrow". It would be tedious to do this by hand every day.
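The information needed is available, though. For example, finding daily partitions more than 30 days old (the fromYYYYMMDD naming scheme used below is an assumption, as are the schema and table names):

    SELECT  PARTITION_NAME
        FROM  information_schema.PARTITIONS
        WHERE  TABLE_SCHEMA = 'test'
          AND  TABLE_NAME   = 'tbl'
          AND  PARTITION_NAME REGEXP '^from[0-9]{8}$'
          AND  PARTITION_DESCRIPTION + 0 < TO_DAYS(CURDATE() - INTERVAL 30 DAY);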

High Level View of the Code

    ALTER TABLE tbl
        DROP PARTITION from20120314;
    ALTER TABLE tbl
        REORGANIZE PARTITION future INTO (
            PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
            PARTITION future       VALUES LESS THAN MAXVALUE);
After which you have...
    CREATE TABLE tbl (
        dt DATETIME NOT NULL,  -- or DATE
        ...
        PRIMARY KEY (..., dt),
        UNIQUE KEY  (..., dt)
    )
    PARTITION BY RANGE (TO_DAYS(dt)) (
        PARTITION start        VALUES LESS THAN (0),
        PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
        PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
        ...
        PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
        PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
        PARTITION future       VALUES LESS THAN MAXVALUE);

Be sure to have innodb_file_per_table=ON.


Perhaps you noticed some odd things in the example. Let me explain them.
    ⚈  Partition naming: Make them useful.
    ⚈  from20120415 ... 04-16: Note that the LESS THAN is the next day's date
    ⚈  The "start" partition: See paragraph below.
    ⚈  The "future" partition: This is normally empty, but it can catch overflows; more later.
    ⚈  The range key (dt) must be included in any PRIMARY or UNIQUE key.
    ⚈  The range key (dt) should be last in any keys it is in -- You have already "pruned" with it; it is almost useless in the index, especially at the beginning.
    ⚈  Any column used for 'range' filtering should be at the end of an index.
    ⚈  DATETIME, etc -- I picked this datatype because it is typical for a time series. Newer MySQL versions allow TIMESTAMP. INT can be used. DECIMAL and FLOAT cannot.
    ⚈  There is an extra day (03-16 thru 04-16) to give you a full month: The latest day is only partially full.

Why the bogus "start" partition? If an invalid datetime (Feb 31) were to be used, the datetime would turn into NULL. NULLs are put into the first partition. Since any SELECT could have an invalid date (yeah, this is stretching things), the partition pruner always includes the first partition in the resulting set of partitions to search. So, if the SELECT must scan the first partition, it would be slightly more efficient if that partition were empty. Hence the bogus "start" partition.
Longer discussion, by The Data Charmer
5.5 eliminates the bogus check, but only if you switch to the newer RANGE COLUMNS syntax, which compares dates directly:
    PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'), ...
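A minimal sketch of that RANGE COLUMNS syntax (5.5+; the columns are illustrative):

    CREATE TABLE tbl2 (
        dt DATE NOT NULL,
        x  INT NOT NULL,
        PRIMARY KEY (x, dt)
    )
    PARTITION BY RANGE COLUMNS (dt) (
        PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'),
        PARTITION day_20100227 VALUES LESS THAN ('2010-02-28'),
        PARTITION future       VALUES LESS THAN (MAXVALUE)
    );

Note: no TO_DAYS() wrapper, and no bogus "start" partition needed.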

More on the "future" partition. Sooner or later the cron/EVENT to add tomorrow's partition will fail to run. The worst that could happen is for tomorrow's data to be lost. The easiest way to prevent that is to have a partition ready to catch it, even if this partition is normally empty.

Having the "future" partition makes the ADD PARTITION script a little more complex. Instead of simply adding a partition, the script needs to take tomorrow's data out of "future" and put it into a new partition. This is done with the REORGANIZE command shown above. Normally nothing needs to be moved, so the ALTER is usually very fast.

When to do the ALTERs?

    ⚈  DROP if the oldest partition is "too old".
    ⚈  Add (using REORGANIZE) 'tomorrow' near the end of today, but don't try to add it twice.
    ⚈  Do not count partitions -- there are two extra ones. Use the partition names or information_schema.PARTITIONS.PARTITION_DESCRIPTION.
    ⚈  DROP/Add only once in the script. Rerun the script if you need more. This simplifies the script without losing functionality; and...
    ⚈  Run the script more often than necessary. For daily partitions, run the script twice a day, or even hourly. Why? Automatic repair.
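One way to "run the script more often than necessary" without cron is a MySQL EVENT. A sketch, assuming a hypothetical stored procedure ManagePartitions() that holds the DROP/REORGANIZE logic (event_scheduler must be ON):

    CREATE EVENT tbl_partition_maint
        ON SCHEDULE EVERY 6 HOUR
        DO CALL ManagePartitions('test', 'tbl');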


As I have said many times, in many places, BY RANGE is perhaps the only useful variant, and a time series is the most common use for PARTITIONing. Here are some ways to express the range:

    ⚈  (as discussed here) DATETIME/DATE with TO_DAYS()
    ⚈  DATETIME/DATE with TO_DAYS(), but with 7-day intervals
    ⚈  TIMESTAMP with TO_DAYS(). (version 5.1.43 or later)
    ⚈  PARTITION BY RANGE(TIMESTAMP) (version 5.5.15 / 5.6.3)
    ⚈  Note: MySQL does not allow arbitrary date functions to be used. As implied above, new functions are allowed in new versions; check your version.
    ⚈  INT UNSIGNED with constants computed as unix timestamps.
    ⚈  INT UNSIGNED with constants for some non-time-based series.
    ⚈  MEDIUMINT UNSIGNED containing an "hour id": FLOOR(UNIX_TIMESTAMP(dt) / 3600)
    ⚈  Months, Quarters, etc: Concoct a notation that works.
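For example, the TIMESTAMP variant (5.5.15+) looks like this (the names are illustrative):

    CREATE TABLE tslog (
        ts  TIMESTAMP NOT NULL,
        val INT NOT NULL,
        PRIMARY KEY (val, ts)
    )
    PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
        PARTITION start        VALUES LESS THAN (0),
        PARTITION from20210701 VALUES LESS THAN (UNIX_TIMESTAMP('2021-07-02')),
        PARTITION future       VALUES LESS THAN MAXVALUE
    );

UNIX_TIMESTAMP() is the only function allowed on a TIMESTAMP partition key; check your version for which functions are permitted on other datatypes.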

How many partitions?
    ⚈  Under, say, 5 partitions -- you get very little of the benefits.
    ⚈  Over, say, 50 partitions, and you hit inefficiencies elsewhere.
    ⚈  Certain operations (SHOW TABLE STATUS, opening the table, etc) open every partition.
    ⚈  MyISAM, before version 5.6.6, would lock all partitions before pruning!
    ⚈  Partition pruning does not happen on INSERTs (until Version 5.6.7), so INSERT needs to open all the partitions.
    ⚈  A possible 2-partition use case: MySQL forums thread 633179
    ⚈  8192 partitions is a hard limit (was 1024 before 5.6.7 / MariaDB 10.0.4).
    ⚈  Before "native partitions" (5.7.6), each partition consumed a chunk of memory.

Detailed code

Reference implementation, in Perl, with demo of daily partitions

The complexity of the code is in the discovery of the PARTITION names, especially of the oldest and the 'next'.

To run the demo,
    ⚈  Install Perl and DBIx::DWIW (from CPAN).
    ⚈  copy the txt file (link above) to
    ⚈  execute perl to get the rest of the instructions

The program will generate and execute (when needed) either of these:
    ALTER TABLE tbl
        REORGANIZE PARTITION future INTO (
            PARTITION from20150606 VALUES LESS THAN (736121),
            PARTITION future       VALUES LESS THAN MAXVALUE);

    ALTER TABLE tbl
        DROP PARTITION from20150603;


Original writing -- Oct, 2012;   Use cases added: Oct, 2014;   Refreshed: June, 2015;   8.0: Sep, 2016;   Indexing: June, 2018;   Use case #5: Dec, 2020;   Why useless for speed, plus Refresh: July, 2021  


This link talks about 5.7, but has a section on a more complex way to do it in 5.6.
Transportable Tablespaces for InnoDB Partitions
Percona 5.6
FiveFarmers 5.6
5.7 import
Another technique to consider (in 5.6+) is Exchanging Partitions with Tables
Not available until 5.6.17: Flush tables for export
Transportable tablespaces best practices -- Percona
Tablespace copying
Possible example of Partitioning slowing queries down
A discussion of partitioning

Manipulating partitions should be rather fast in 5.6 and 5.7, but the jury is out on 8.0 due to major implementation changes.

Slides from Percona Amsterdam 2015

PARTITIONing requires at least MySQL 5.1

The tips in this document apply to MySQL, MariaDB, and Percona.

More on PARTITIONing
LinkedIn discussion
Why NOT Partition
Geoff Montee's SP
Why partitioning does not work
PARTITION BY HASH is useless with ranges
Well written argument against Partitioning

A self contained Stored Procedure for monthly partitions
Transportable Tablespaces

MySQL 8.0, (GA 8.0.11 released on 2018-04-19):
    ⚈  Only InnoDB tables can be partitioned -- MariaDB is likely to continue maintaining Partitioning on non-InnoDB tables, but Oracle is clearly not.
    ⚈  Some of the problems having lots of partitions are lessened by the Data-Dictionary-in-a-table.

Native partitioning will give:
    ⚈  Slightly improved performance by combining two "handlers" into one.
    ⚈  Decreased memory usage, especially when using a large number of partitions.

Future (as envisioned in 2019):
    ⚈  FOREIGN KEY support, perhaps in a later 8.0.xx.
    ⚈  "GLOBAL INDEX" -- this might avoid the need for putting the partition key in every unique index, but make DROP PARTITION costly. This will be farther into the future.
    ⚈  FULLTEXT -- maybe later

8.0.3 Changelog??: Need to check "DROP PARTITION with ALGORITHM=INPLACE deletes data stored in the partition and drops the partition. However, DROP PARTITION with ALGORITHM=COPY or old_alter_table=ON rebuilds the partitioned table and attempts to move data from the dropped partition to another partition with a compatible PARTITION ... VALUES definition. Data that cannot be moved to another partition is deleted. "

-- Rick James
