Allocating RAM for MySQL - The Short Answer
If using just MyISAM, set key_buffer_size to 20% of _available_ RAM.
If using just InnoDB, set innodb_buffer_pool_size to 70% of _available_ RAM. (Plus key_buffer_size = 10M, small, but not zero.)
Rules of thumb for tuning MySQL:
⚈ Start with a released copy of my.cnf / my.ini.
⚈ Change key_buffer_size and innodb_buffer_pool_size according to engine usage and RAM.
⚈ Slow queries can usually be 'fixed' via indexes, schema changes, or SELECT changes, not by tuning.
⚈ Don't get carried away with the Query cache until you understand what it can and cannot do.
⚈ Don't change anything else unless you run into trouble (eg, max connections).
⚈ Be sure the changes are under the [mysqld] section, not some other section.
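For an InnoDB-only server, the rules of thumb above amount to a my.cnf fragment like this (the 16GB of RAM is only an assumed example; scale to your machine):

```ini
[mysqld]
innodb_buffer_pool_size = 11G   # ~70% of an assumed 16GB of RAM
key_buffer_size = 10M           # small, but not zero
```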
Now for the gory details. (NDB Cluster is not discussed here.)
What is the key_buffer?
MyISAM does two different things for caching.
⚈ Index blocks (1KB each, BTree structured, from .MYI file) live in the "key buffer".
⚈ Data block caching (from .MYD file) is left to the OS, so be sure to leave a bunch of free space for this.
Caveat: Some flavors of OS always claim to be using over 90%, even when there is really lots of free space.
SHOW GLOBAL STATUS LIKE 'Key%'; then calculate Key_read_requests / Key_reads. If it is high (say, over 10), then the key_buffer is big enough.
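As a sketch (my own illustration, not from the manual), the heuristic above can be expressed as a tiny function; the counter values below are made up:

```python
# Sketch: decide whether key_buffer_size is big enough from the
# Key_read_requests and Key_reads counters in SHOW GLOBAL STATUS.
def key_buffer_big_enough(key_read_requests, key_reads, threshold=10):
    """A high requests-per-miss ratio means most index lookups hit the cache."""
    if key_reads == 0:
        return True          # no disk reads at all: the cache is certainly adequate
    return key_read_requests / key_reads > threshold

print(key_buffer_big_enough(5_000_000, 20_000))     # ratio 250: big enough
print(key_buffer_big_enough(5_000_000, 2_000_000))  # ratio 2.5: too small
```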
What is the buffer_pool?
InnoDB does all its caching in the "buffer pool", whose size is controlled
by innodb_buffer_pool_size. It contains 16KB data and index blocks from
the open tables, plus some maintenance overhead.
MySQL 5.5 (and 5.1 with the "Plugin") lets you declare the block size to be 8KB or 4KB. MySQL 5.5 allows multiple buffer pools; this can help because there is one mutex per pool, thereby relieving some of the Mutex bottleneck.
More on InnoDB Tuning
This will set the main cache settings to the minimum; it
could be important for systems with lots of other processes
and/or 2GB of RAM or less.
Do SHOW TABLE STATUS for all the tables in all the databases.
Add up Index_length for all the MyISAM tables. Set key_buffer_size no larger than that size.
Add up Data_length + Index_length for all the InnoDB tables. Set innodb_buffer_pool_size to no more than 110% of that total.
If that leads to swapping, cut both settings back. Suggest cutting them down proportionately.
Run this to see the values for your system. (If you have a lot of tables, it can take minutes.)
SELECT  ENGINE,
        ROUND(SUM(data_length) /1024/1024, 1) AS "Data MB",
        ROUND(SUM(index_length)/1024/1024, 1) AS "Index MB",
        ROUND(SUM(data_length + index_length)/1024/1024, 1) AS "Total MB",
        COUNT(*) "Num Tables"
    FROM  information_schema.tables
    WHERE  table_schema NOT IN ("information_schema", "performance_schema")
    GROUP BY  ENGINE;
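Once you have the totals, the caps described above can be sketched as follows (an illustration of the rule, with invented numbers):

```python
# Sketch: upper bounds for the two caches, given totals (in MB) from
# SHOW TABLE STATUS (or the information_schema query above).
def cache_upper_bounds(myisam_index_mb, innodb_data_plus_index_mb):
    """key_buffer <= total MyISAM index size;
    buffer_pool <= 110% of total InnoDB data+index."""
    return {
        "key_buffer_size_mb": myisam_index_mb,
        "innodb_buffer_pool_size_mb": round(innodb_data_plus_index_mb * 1.10),
    }

print(cache_upper_bounds(300, 8000))
# -> {'key_buffer_size_mb': 300, 'innodb_buffer_pool_size_mb': 8800}
```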
MySQL was designed in the days of single-CPU machines, and designed
to be easily ported to many different architectures.
Unfortunately, that led to some sloppiness in how actions are interlocked.
There is a small number (too small) of "mutexes" controlling access to several critical
processes. Of note:
⚈ MyISAM's key_buffer
⚈ The Query Cache
⚈ InnoDB's buffer_pool
With multi-core boxes, the mutex problem is causing performance problems. In general, past 4-8 cores, MySQL gets slower, not faster. MySQL 5.5 and Percona's XtraDB are making that somewhat better in InnoDB; the practical limit for cores is more like 32, and performance tends to plateau after that rather than declining. 5.6 claims to scale up to about 48 cores.
HyperThreading and Multiple cores (CPUs)
⚈ Turn off HyperThreading
⚈ Turn off any cores beyond 8
⚈ HyperThreading is mostly a thing of the past, so this section may not apply.
HyperThreading is great for marketing, lousy for performance. It involves having two processing units sharing a single hardware cache. If both units are doing the same thing, the cache will be reasonably useful. If the units are doing different things, they will be clobbering each other's cache entries.
Furthermore, MySQL is not great at using multiple cores. So, if you turn off HT, the remaining cores run a little faster.
32-bit OS and MySQL
First, the OS (and the hardware?) may conspire to not let you use
all 4GB, if that is what you have.
If you have more than 4GB of RAM, the excess beyond 4GB
is _totally_ inaccessible and unusable on a 32-bit OS.
Secondly, the OS probably has a limit on how much RAM it will allow any process to use.
Example: FreeBSD's maxdsiz, which defaults to 512MB.
$ ulimit -a
max memory size (kbytes, -m) 524288
So, once you have determined how much RAM is available to mysqld, then apply the 20%/70%, but round down some.
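A sketch of applying those percentages; the extra 10% round-down factor is my own reading of "round down some", not a number from this article:

```python
# Sketch: apply the 20% (MyISAM) / 70% (InnoDB) rules to the RAM that is
# actually available to mysqld, rounding down ~10% for safety (assumption).
def suggested_cache_mb(available_mb, engine):
    if engine == "myisam":
        return {"key_buffer_size": int(available_mb * 0.20 * 0.9)}
    if engine == "innodb":
        return {"innodb_buffer_pool_size": int(available_mb * 0.70 * 0.9),
                "key_buffer_size": 10}
    raise ValueError("mixed engines: lower both numbers instead")

print(suggested_cache_mb(512, "myisam"))  # e.g. FreeBSD's default maxdsiz of 512MB
```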
If you get an error like [ERROR] /usr/libexec/mysqld: Out of memory (Needed xxx bytes), it probably means that MySQL exceeded what the OS is willing to give it. Decrease the cache settings.
64-bit OS with 32-bit MySQL
The OS is not limited by 4GB, but MySQL is.
If you have at least 4GB of RAM, then maybe these would be good:
⚈ key_buffer_size = 20% of _all_ of RAM, but not more than 3G
⚈ buffer_pool = 3G
You should probably upgrade MySQL to 64-bit.
64-bit OS and MySQL
MyISAM only: key_buffer_size (before 5.0.52 / 5.1.23) had a hard limit of 4G.
Otherwise, use about 20% of RAM. Set (in my.cnf / my.ini) innodb_buffer_pool_size = 0.
InnoDB only: innodb_buffer_pool_size = 70% of RAM. If you have lots of RAM and are using 5.5 (or later), then consider having multiple pools. Recommend 1-16 innodb_buffer_pool_instances, such that each one is no smaller than 1GB. (Sorry, no metric on how much this will help; probably not a lot.)
Meanwhile, set key_buffer_size = 20M (tiny, but non-zero)
If you have a mixture of engines, lower both numbers.
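Pulling the InnoDB-only numbers together, a sketch of a my.cnf fragment for an assumed 32GB box running 5.5 or later:

```ini
[mysqld]
innodb_buffer_pool_size = 22G      # ~70% of an assumed 32GB
innodb_buffer_pool_instances = 8   # ~2.75G per pool, comfortably over 1GB each
key_buffer_size = 20M              # tiny, but non-zero
```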
Each "thread" takes some amount of RAM.
This used to be about 200KB; 100 threads would be 20MB, not a significant size.
If you have max_connections = 1000, then you are talking about 200MB, maybe more.
Having that many connections probably implies other issues that should be addressed.
In 5.6 (or MariaDB 5.5), optional thread pooling interacts with max_connections. This is a more advanced topic.
Thread stack overrun rarely happens. If it does, do something like thread_stack=256K
More on max_connections, wait_timeout, connection pooling, etc
(table_cache was renamed to table_open_cache in 5.1.)
The OS has some limit on the number of open files it will let a process have. Each table needs 1 to 3 open files. Each PARTITION is effectively a table. Most operations on a partitioned table open _all_ partitions.
In *nix, ulimit tells you what the file limit is. The maximum value is in the tens of thousands, but sometimes it is set to only 1024. This limits you to about 300 tables. More discussion on ulimit
(This paragraph is disputed.) On the other side, the table cache is (was) inefficiently implemented -- lookups were done with a linear scan. Hence, setting table_cache in the thousands could actually slow down MySQL. (Benchmarks have shown this.)
You can see how well your system is performing via SHOW GLOBAL STATUS and computing the opens/second via Opened_files / Uptime. If this is more than, say, 5, table_cache should be increased. If it is less than, say, 1, you might get improvement by decreasing table_cache.
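The heuristic above, as a sketch (the thresholds are from the text; the counter values are invented):

```python
# Sketch: table-open rate from Opened_files and Uptime in SHOW GLOBAL STATUS.
def table_cache_advice(opened_files, uptime_seconds):
    rate = opened_files / uptime_seconds   # opens per second
    if rate > 5:
        return "increase table_cache"
    if rate < 1:
        return "consider decreasing table_cache"
    return "leave table_cache alone"

print(table_cache_advice(600_000, 86_400))  # ~6.9 opens/sec over one day
```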
Short answer: query_cache_type = OFF and query_cache_size = 0
The QC is effectively a hash mapping SELECT statements to resultsets.
Long answer... There are many aspects of the "Query cache"; many are negative.
⚈ Novice Alert! The QC is totally unrelated to the key_buffer and buffer_pool.
⚈ When it is useful, the QC is blazingly fast. It would not be hard to create a benchmark that runs 1000x faster.
⚈ There is a single mutex controlling the QC.
⚈ The QC, unless it is OFF & 0, is consulted for _every_ SELECT.
⚈ Yes, the mutex is hit even if query_cache_type = DEMAND (2).
⚈ Yes, the mutex is hit even for SQL_NO_CACHE.
⚈ Any change to a query (even adding a space) leads (potentially) to a different entry in the QC.
"Pruning" is costly and frequent:
⚈ When _any_ write happens on a table, _all_ entries in the QC for _that_ table are removed.
⚈ It happens even on a readonly Slave.
⚈ Purges are performed with a linear algorithm, so a large QC (even 200MB) can be noticeably slow.
To see how well your QC is performing, SHOW GLOBAL STATUS LIKE 'Qc%'; then compute the read hit rate: Qcache_hits / Qcache_inserts. If it is over, say, 5, the QC might be worth keeping.
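A sketch of that computation (illustrative numbers only):

```python
# Sketch: QC read hit rate from the Qcache_hits and Qcache_inserts counters.
def qc_worth_keeping(qcache_hits, qcache_inserts, threshold=5):
    """Each insert caches one resultset; many hits per insert means the QC pays off."""
    if qcache_inserts == 0:
        return False
    return qcache_hits / qcache_inserts > threshold

print(qc_worth_keeping(1_200_000, 100_000))  # ratio 12: worth keeping
print(qc_worth_keeping(150_000, 100_000))    # ratio 1.5: probably not
```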
If you decide the QC is right for you, then I recommend
⚈ query_cache_size = no more than 50M
⚈ query_cache_type = DEMAND
⚈ SQL_CACHE or SQL_NO_CACHE in all SELECTs, based on which queries are likely to benefit from caching.
QC in depth
This is a minor tunable. Zero will slow down thread (connection) creation.
A small (say, 10), non-zero number is good.
The setting has essentially no impact on RAM usage.
It is the number of extra processes to hang onto. It does not restrict the number of threads; max_connections does.
If you have turned on binary logging (via log_bin) for replication and/or point-in-time recovery, the system will create binary logs forever. That is, they can slowly fill up the disk. Suggest setting expire_logs_days = 14 to keep only 14 days' worth of logs.
RHEL, in its infinite wisdom, decided to let you control how aggressively the OS
will preemptively swap RAM. This is good in general, but lousy for MySQL.
MySQL would love for RAM allocations to be reasonably stable -- the caches are (mostly) pre-allocated; the threads, etc, are (mostly) of limited scope. ANY swapping is likely to severely hurt performance of MySQL.
With a high value for swappiness, you lose some RAM because the OS is trying to keep a lot of space free for future allocations (that MySQL is not likely to need).
With swappiness = 0, the OS will probably crash rather than swap. I would rather have MySQL limping than die. The latest recommendation is swappiness = 1. (2015)
Somewhere in between (say, 5?) might be a good value for a MySQL-only server.
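To make a chosen value persistent (using the swappiness = 1 recommendation above; adjust if you prefer a middle value like 5):

```ini
# /etc/sysctl.conf
vm.swappiness = 1

# To apply immediately without a reboot:
#   sysctl -w vm.swappiness=1
```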
OK, it's time to complicate the architecture of how a CPU talks to RAM.
NUMA (Non-Uniform Memory Access) enters the picture.
Each CPU (or maybe each socket with several cores) has a part of the RAM
hanging off it. This leads to memory access being faster for local
RAM, but slower (tens of cycles slower) for RAM hanging off other CPUs.
Then the OS enters the picture. In at least one case (RHEL?), two things seem to be done:
⚈ OS allocations are pinned to the 'first' CPU's RAM.
⚈ Other allocations go by default to the first CPU until it is full.
Now for the problem.
⚈ The OS and MySQL have allocated all the 'first' RAM.
⚈ MySQL has allocated some of the second RAM.
⚈ The OS needs to allocate something.
Ouch -- it is out of room in the one CPU where it is willing to allocate its stuff, so it swaps out some of MySQL. Bad.
dmesg | grep -i numa # to see if you have numa
Probable solution: Configure the BIOS to "interleave" the RAM allocations. This should prevent the premature swapping, at the cost of off-CPU RAM accesses half the time. Well, you have the costly accesses anyway, since you really want to use all of RAM.
Another possible solution: Turn numa off (if the OS has a way of doing that)
Overall performance loss/gain: A few percent.
This is another hardware performance gimmick.
For a CPU to access RAM, especially mapping a 64-bit address to somewhere in, say, 128GB of 'real' RAM, the TLB is used. (TLB = Translation Lookaside Buffer.) Think of the TLB as a hardware associative memory lookup table; given a 64-bit virtual address, it yields the real address.
Because it is an associative memory of finite size, sometimes there will be "misses" that require reaching into real RAM to resolve the lookup. This is costly, so should be avoided.
Normally, RAM is 'paged' in 4KB pieces; the TLB actually maps the top (64-12) bits into a specific page. Then the bottom 12 bits of the virtual address are carried over intact.
For example, 128GB of RAM broken into 4KB pages means 32M page-table entries. This is a lot, and probably far exceeds the capacity of the TLB. So, enter the "Huge page" trick.
With the help of both the hardware and the OS, it is possible to have some of RAM in huge pages, of say 4MB (instead of 4KB). This leads to far fewer TLB entries, but it means the unit of paging is 4MB for such parts of RAM. Hence, huge pages tend to be non-pagable.
Now RAM is broken into pagable and non pagable parts; what parts can reasonably be non pagable? In MySQL, the innodb_buffer_pool is a perfect candidate. So, by correctly configuring these, InnoDB can run a little faster:
⚈ Huge pages enabled
⚈ Tell the OS to allocate the right amount (namely to match the buffer_pool)
⚈ Tell MySQL to use huge pages
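A minimal sketch of those three steps, assuming a 4GB buffer_pool and the common 2MB huge-page size (details such as group permissions vary by distro):

```ini
# /etc/sysctl.conf -- reserve enough huge pages to cover the buffer_pool
# 2048 pages * 2MB = 4GB
vm.nr_hugepages = 2048

# my.cnf -- tell MySQL to use them
[mysqld]
large-pages
innodb_buffer_pool_size = 4G
```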
innodb memory usage vs swap
That thread has more details on what to look for and what to set.
Overall performance gain: A few percent. Yawn. Too much hassle for too little benefit.
Jumbo Pages? Turn off.
This is a little-used alternative to MyISAM and InnoDB.
The data is not persistent, so it has limited uses.
The size of a MEMORY table is limited to max_heap_table_size, which defaults to 16MB.
I mention it in case you have changed the value to something huge; this would
steal from other possible uses of RAM.
How to Set VARIABLEs
In the text file my.cnf (my.ini on Windows), add or modify a line to say something like
innodb_buffer_pool_size = 5G
That is, VARIABLE name, "=", and a value. Some abbreviations are allowed, such as
M for mega (1048576) and G for giga (1073741824).
For the server to see it, the settings must be in the "[mysqld]" section of the file.
The settings in my.cnf or my.ini will not take effect until you restart the server.
Most settings can be changed on the live system by connecting as user root (or other user with SUPER privilege) and doing something like
SET @@global.key_buffer_size = 77000000;
Note: No M or G suffix is allowed here.
To see the setting of a global VARIABLE, do something like
mysql> SHOW GLOBAL VARIABLES LIKE "key_buffer_size";
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| key_buffer_size | 76996608 |
+-----------------+----------+
Note that this particular setting was rounded down to some multiple that MySQL prefers.
You may want to do both (SET, and modify my.cnf) in order to make the change immediately and have it so that the next restart (for whatever reason) will again get the value.
A web server like Apache runs multiple threads. If each threads opens a connection
to MySQL, you could run out of connections. Make sure MaxClients (or equivalent)
is set to some civilized number (under 50).
There are several tools that advise on memory. One misleading entry they come up with:
Maximum possible memory usage: 31.3G (266% of installed RAM)
Don't let it scare you -- the formulas used are excessively conservative.
They assume all of max_connections are in use and active, and doing something memory-intensive.
Total fragmented tables: 23 -- This implies that OPTIMIZE TABLE _might_ help. I suggest it for tables with either a high percentage of "free space" (see SHOW TABLE STATUS) or where you know you do a lot of DELETEs and/or UPDATEs. Still, don't bother to OPTIMIZE too often. Once a month might suffice.
5.7 stores a lot more information in RAM, leading to the footprint being perhaps half a GB more than 5.6. See Memory increase in 5.7: https://blogs.oracle.com/svetasmirnova/entry/memory_summary_tables_in_performance
Created 2010; Refreshed Oct, 2012, Jan, 2014
More in-depth: Tocker's tuning for 5.6
Irfan's InnoDB performance optimization basics (redux)
10 MySQL settings to tune after installation