This blog covers
⚈ PARTITIONing uses and non-uses
⚈ How to Maintain a time-series PARTITIONed table
⚈ AUTO_INCREMENT secrets
⚈ To "shard" is to split data across multiple machines. (This document does not cover sharding.)
⚈ To "partition" is to split one table into multiple sub-tables (partitions) on a single MySQL instance.
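As a minimal sketch of what that looks like (table and column names are invented for illustration; the conventions used here are explained later in this post):

```sql
-- A hypothetical time-series table, split into sub-tables (partitions) by day.
CREATE TABLE log (
    dt  DATETIME     NOT NULL,
    msg VARCHAR(100) NOT NULL,
    INDEX(msg, dt)           -- partition key (dt) last, per the advice below
)
PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION start        VALUES LESS THAN (0),
    PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
    PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
    PARTITION future       VALUES LESS THAN MAXVALUE
);
```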
First, my Opinions on PARTITIONing
Taken from Rick's RoTs - Rules of Thumb for MySQL
⚈ #1: Don't use PARTITIONing until you know how and why it will help.
⚈ Don't use PARTITION unless you will have >1M rows
⚈ No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted) (fixed in 5.6.6?; a much better fix is likely to be in 8.0)
⚈ PARTITION BY RANGE is the only useful method.
⚈ SUBPARTITIONs are not useful.
⚈ The partition field should not be the first field in any key.
⚈ It is OK to have an AUTO_INCREMENT as the first part of a compound key, or in a non-UNIQUE index.
It is so tempting to believe that PARTITIONing will solve performance problems. But it is so often wrong.
PARTITIONing splits up one table into several smaller tables. But table size is rarely a performance issue. Instead, I/O time and indexes are the issues.
A common fallacy: "Partitioning will make my queries run faster." It won't. Ponder what it takes for a 'point query'. Without partitioning, but with an appropriate index, there is a BTree (the index) to drill down to find the desired row; for a billion rows, this might be 5 levels deep. With partitioning, first the partition is chosen and "opened", then a smaller BTree (of say 4 levels) is drilled down. The savings of the shallower BTree is consumed by having to open the partition. Similarly, if you count the disk blocks that need to be touched, and which of those are likely to be cached, you come to the conclusion that about the same number of disk hits is likely. Since disk hits are the main cost in a query, partitioning does not gain any performance, at least for this typical case. The 2-D case (below) gives the main exception to this argument.
Use Cases for PARTITIONing
Use case #1 -- time series. Perhaps the most common use case where PARTITIONing shines is a dataset from which "old" data is periodically deleted. RANGE PARTITIONing by day (or other unit of time) lets you do a nearly instantaneous DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE. Much of this blog focuses on this use case, which is also discussed in Big DELETEs
The big win for Case #1: DROP PARTITION is a lot faster than DELETEing a lot of rows.
Use case #2 -- 2-D index. INDEXes are inherently one-dimensional. If you need two "ranges" in the WHERE clause, try to migrate one of them to PARTITIONing.
Finding the nearest 10 pizza parlors on a map needs a 2D index. Partition pruning sort of gives a second dimension. See Latitude/Longitude Indexing
That uses PARTITION BY RANGE(latitude) together with PRIMARY KEY(longitude, ...)
The big win for Case #2: Scanning fewer rows.
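A sketch of that 2-D layout (schema, names, and the integer scaling of coordinates are invented for illustration; the real treatment is in the Latitude/Longitude link):

```sql
-- Hypothetical "nearest pizza parlor" table. Partition pruning on lat bands
-- gives one dimension; the PRIMARY KEY on lon gives the other.
CREATE TABLE pizza_finder (
    lat SMALLINT  NOT NULL,   -- assume latitude * 100, banded for partitioning
    lon MEDIUMINT NOT NULL,   -- assume longitude * 10000
    id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (lon, lat, id)  -- lat must appear: partition key in every UNIQUE key
)
PARTITION BY RANGE (lat) (
    PARTITION p0 VALUES LESS THAN (-6000),
    PARTITION p1 VALUES LESS THAN (-3000),
    PARTITION p2 VALUES LESS THAN (0),
    PARTITION p3 VALUES LESS THAN (3000),
    PARTITION p4 VALUES LESS THAN (6000),
    PARTITION p5 VALUES LESS THAN MAXVALUE
);
```

A query then gives the pruner a latitude range in the WHERE clause and lets the index handle the longitude range.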
Use case #3 -- hot spot. This is a bit complicated to explain. Given this combination:
⚈ A table's index is too big to be cached, but the index for one partition is cacheable, and
⚈ The index is randomly accessed, and
⚈ Data ingestion would normally be I/O bound due to updating the index
Partitioning can keep all the index "hot" in RAM, thereby avoiding a lot of I/O.
The big win for Case #3: Improving caching to decrease I/O to speed up operations. Some use cases involve both Cases #1 and #3.
Use case #4 -- transportable tablespace. Use EXPORT/IMPORT of a partition to quickly archive or bulk-load data. (IMPORTing can be tricky because of the partition key.)
This link talks about 5.7, but has a section on a more complex way to do it in 5.6. Transportable Tablespaces for InnoDB Partitions
Another technique to consider (in 5.6+) is Exchanging Partitions with Tables
Not available until 5.6.17: Flush tables for export
Transportable tablespaces best practices -- Percona
Manipulating partitions should be rather fast in 5.6 and 5.7, but the jury is out on 8.0 due to major implementation changes.
The big win for Case #4: Quickly moving a partition in between tables (or servers).
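Under 5.7+, the per-partition flow is roughly the following (table and partition names are hypothetical; the file copies happen at the OS level, outside SQL):

```sql
-- Source server: quiesce the table so its .ibd/.cfg files can be copied
FLUSH TABLES tbl FOR EXPORT;
-- ... copy tbl#P#from20150606.ibd and the .cfg files out of the datadir ...
UNLOCK TABLES;

-- Destination server: discard the placeholder partition's tablespace,
-- drop the copied files into the datadir, then import them
ALTER TABLE tbl DISCARD PARTITION from20150606 TABLESPACE;
-- ... copy the files into place ...
ALTER TABLE tbl IMPORT PARTITION from20150606 TABLESPACE;
```

The destination table must already exist with an identical definition, including a partition whose range matches the incoming data.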
Use case #5 -- I have yet to find a 5th use case.
Note that almost always, these use cases involve BY RANGE partitioning, not the other forms.
AUTO_INCREMENT in PARTITION
⚈ For AUTO_INCREMENT to work (in any table), it must be the first field in some index. Period. There are no other requirements on indexing it.
⚈ Being the first field in some index lets the engine find the 'next' value when opening the table.
⚈ AUTO_INCREMENT need not be UNIQUE. What you lose: prevention of explicitly inserting a duplicate id. (This is rarely needed, anyway.)
Examples (where id is AUTO_INCREMENT):
⚈ PRIMARY KEY (...), INDEX(id) -- to get clustering on something more useful than id
⚈ PRIMARY KEY (...), UNIQUE(id, partition_key) -- not useful
⚈ INDEX(id), INDEX(...) (but no UNIQUE keys)
⚈ PRIMARY KEY(id), ... -- works only if id is the partition key (not very useful)
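Putting the first of those patterns into DDL (a hypothetical table; column names are invented):

```sql
-- id is AUTO_INCREMENT but NOT the PRIMARY KEY; a plain INDEX(id) is enough
-- to satisfy "first field in some index". The PK clusters on something useful.
CREATE TABLE events (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    dt      DATETIME     NOT NULL,
    user_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, dt, id),  -- clusters by user; id last for uniqueness
    INDEX (id)                      -- satisfies the AUTO_INCREMENT requirement
)
PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION start  VALUES LESS THAN (0),
    PARTITION p2012  VALUES LESS THAN (TO_DAYS('2013-01-01')),
    PARTITION future VALUES LESS THAN MAXVALUE
);
```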
INDEXes in a PARTITIONed Table
If you change table from non-PARTITIONed to PARTITIONed, or vice versa, you should rethink all the indexes. Here are some considerations:
⚈ Since an index's scope is limited to one partition, the uniqueness constraint of a UNIQUE index is useless. So use INDEX, not UNIQUE.
⚈ Similarly, the uniqueness constraint of a PRIMARY KEY is useless. Instead consider having a PK for its "clustering" effects.
⚈ If you don't have a 'natural' PK (a combination of column(s) that is naturally unique), and want the advantage of clustering, then do have an AUTO_INCREMENT id and put it last in the PK. Then have INDEX(id).
⚈ If you choose to have the partition key in an index, it is usually (but not always) best to put the partition key last in any index. Note that the "pruning" is done before looking at the index, so the partition key is effectively used before the rest of the column(s) in the index.
Without PARTITIONing, the guidelines are here: Cookbook for creating indexes from SELECTs
PARTITION Maintenance for the Time-Series Case
Let's focus on the maintenance task involved in Case #1, as described above.
You have a large table that is growing on one end and being purged on the other. Examples include news, logs, and other transient information. PARTITION BY RANGE is an excellent vehicle for such a table.
⚈ DROP PARTITION is much faster than DELETE. (This is the big reason for doing this flavor of partitioning.)
⚈ Queries often limit themselves to 'recent' data, thereby taking advantage of "partition pruning".
Depending on the type of data, and how long before it expires, you might have daily or weekly or hourly (etc) partitions.
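You can verify that pruning happens with EXPLAIN PARTITIONS (5.6 and before; in 5.7+ plain EXPLAIN shows a partitions column). A sketch, assuming the table layout used below; literal dates prune most reliably:

```sql
-- The "partitions" column of the output should list only the
-- partitions whose date ranges overlap the WHERE clause.
EXPLAIN PARTITIONS
SELECT COUNT(*)
  FROM tbl
 WHERE dt >= '2012-04-10'
   AND dt <  '2012-04-17';
```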
There is no simple SQL statement to "drop partitions older than 30 days" or "add a new partition for tomorrow". It would be tedious to do this by hand every day.
High Level View of the Code
ALTER TABLE tbl
    DROP PARTITION from20120314;
ALTER TABLE tbl
    REORGANIZE PARTITION future INTO (
        PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
        PARTITION future VALUES LESS THAN MAXVALUE
    );
After which you have...
CREATE TABLE tbl (
    dt DATETIME NOT NULL,  -- or DATE
    ...
    PRIMARY KEY (..., dt),
    UNIQUE KEY (..., dt)
)
PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION start        VALUES LESS THAN (0),
    PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
    PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
    ...
    PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
    PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
    PARTITION future       VALUES LESS THAN MAXVALUE
);
Perhaps you noticed some odd things in the example. Let me explain them.
⚈ Partition naming: Make them useful.
⚈ from20120415 ... 04-16: Note that the LESS THAN is the next day's date
⚈ The "start" partition: See paragraph below.
⚈ The "future" partition: This is normally empty, but it can catch overflows; more later.
⚈ The range key (dt) must be included in any PRIMARY or UNIQUE key.
⚈ The range key (dt) should be last in any keys it is in -- You have already "pruned" with it; it is almost useless in the index, especially at the beginning.
⚈ Any column used for 'range' filtering should be at the end of an index.
⚈ DATETIME, etc -- I picked this datatype because it is typical for a time series. Newer MySQL versions allow TIMESTAMP. INT can be used. DECIMAL and FLOAT cannot.
⚈ There is an extra day (03-16 thru 04-16) to give you a full month; the latest day is only partially full.
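If the column is a TIMESTAMP rather than a DATETIME, UNIX_TIMESTAMP() is the partition function to use for it. A sketch of the same pattern under that assumption:

```sql
-- Same daily-rotation layout, but keyed on a TIMESTAMP column.
CREATE TABLE tbl_ts (
    ts  TIMESTAMP    NOT NULL,
    msg VARCHAR(100) NOT NULL,
    INDEX(msg, ts)
)
PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
    PARTITION start        VALUES LESS THAN (1),
    PARTITION from20120315 VALUES LESS THAN (UNIX_TIMESTAMP('2012-03-16')),
    PARTITION future       VALUES LESS THAN MAXVALUE
);
```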
Why the bogus "start" partition? If an invalid datetime (Feb 31) were to be used, the datetime would turn into NULL, and NULLs are put into the first partition. Since any SELECT could have an invalid date (yeah, this is stretching things), the partition pruner always includes the first partition in the resulting set of partitions to search. So, since the SELECT must scan the first partition anyway, it is slightly more efficient if that partition is empty. Hence the bogus "start" partition. Longer discussion, by The Data Charmer
5.5 eliminates the need for the bogus "start" partition, but only if you switch to a new syntax:
PARTITION BY RANGE COLUMNS(dt) (
    PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'),
    ...
ALTER TABLE tbl
    REORGANIZE PARTITION future INTO (
        PARTITION from20150606 VALUES LESS THAN (736121),
        PARTITION future VALUES LESS THAN MAXVALUE
    );
ALTER TABLE tbl
    DROP PARTITION from20150603;
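Since there is no built-in "rotate" statement, the two ALTERs have to be generated. A minimal sketch using an EVENT and dynamic SQL (table name, partition naming, and the 30-day window are assumptions; production code also needs error handling and should consult INFORMATION_SCHEMA.PARTITIONS so it never adds or drops a partition twice):

```sql
DELIMITER //
CREATE EVENT tbl_partition_rotate
ON SCHEDULE EVERY 1 DAY
DO BEGIN
    -- Add tomorrow's partition by splitting 'future'
    SET @sql = CONCAT(
        'ALTER TABLE tbl REORGANIZE PARTITION future INTO (',
        ' PARTITION from', DATE_FORMAT(CURDATE() + INTERVAL 1 DAY, '%Y%m%d'),
        ' VALUES LESS THAN (TO_DAYS(''', CURDATE() + INTERVAL 2 DAY, ''')),',
        ' PARTITION future VALUES LESS THAN MAXVALUE)');
    PREPARE s FROM @sql;  EXECUTE s;  DEALLOCATE PREPARE s;

    -- Drop the partition that just fell outside the 30-day window
    SET @sql = CONCAT('ALTER TABLE tbl DROP PARTITION from',
                      DATE_FORMAT(CURDATE() - INTERVAL 30 DAY, '%Y%m%d'));
    PREPARE s FROM @sql;  EXECUTE s;  DEALLOCATE PREPARE s;
END//
DELIMITER ;
```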
Partition Maintenance (DROP+REORG) for time series (includes list of PARTITION uses)
Big DELETEs - how to optimize -- and other chunking advice, plus a use for PARTITIONing
Chunking lengthy DELETE/UPDATE/etc.
Data Warehouse techniques:
Overview
Summary Tables
High speed ingestion
Bulk Normalization
Entity-Attribute-Value -- a common, poorly performing, design pattern (EAV); plus an alternative
5 methods for 'Find Nearest'
Find the nearest 10 pizza parlors -- efficient searching on Latitude + Longitude (another PARTITION use)
Lat/Long representation choices
Z-Order 'find nearest'(under construction)
Pagination, not with OFFSET, LIMIT
Techniques on efficiently finding a random row (On beyond ORDER BY RAND())
GUID/UUID Performance (type 1 only)
IP Range Table Performance -- or other disjoint ranges
Rollup Unique User Counts
Alter of a Huge table -- Mostly obviated by 5.6
Latest 10 news articles -- how to optimize the schema and code for such
Build and execute a "Pivot" SELECT (showing rows as columns)
Find largest row for each group ("groupwise max")
Other Tips, Tuning, Debugging, Optimizations, etc...
Rick's RoTs (Rules of Thumb -- lots of tips)
Datatypes and building a good schema
Memory Allocation (caching, etc)
Character Set and Collation problem solver
Trouble with UTF-8
If you want case folding, but accent sensitivity, please file a request at http://bugs.mysql.com .
Python tips, PHP tips, other language tips
utf8 Collations
utf8mb4 Collations on 8.0
Converting from MyISAM to InnoDB -- includes differences between them
Compound INDEXes plus other insights into the mysteries of INDEXing
Cookbook for Creating Indexes
Many-to-many mapping table
Handler counts
wp_postmeta
UNION+OFFSET
MySQL Limits -- built-in hard limits
767-byte INDEX limit
Galera, tips on converting to (Percona XtraDB Cluster, MariaDB 10, or manually installed)
5.7's Query Rewrite -- perhaps 5.7's best perf gain, at least for this forum's users
Analyze MySQL Performance
Analyze VARIABLEs and GLOBAL STATUS
Analyze SlowLog