The following features have been added to MySQL 5.6.
Better Performance and Scalability
In MySQL 5.6, performance has been improved to provide better linear scalability on systems with multiple processors and high CPU thread concurrency. This is achieved through improvements to InnoDB storage engine efficiency and concurrency that remove legacy thread contention and mutex locking within the InnoDB kernel.
Multiple Purge Threads
InnoDB now supports multiple purge threads, which makes purge operations more efficient across multiple tables. Previously, purge operations were handled by the InnoDB master thread, which also handles other InnoDB work. In MySQL 5.6, innodb_purge_threads can be set to a value greater than 1.
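As a sketch, the purge thread count is set in the server configuration file (the value here is illustrative, not a recommendation):

```ini
# my.cnf / my.ini sketch
[mysqld]
# Use dedicated purge threads instead of the single master thread
innodb_purge_threads = 4
```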
Separate Flush Thread
MySQL 5.6 uses a separate thread for flush operations. Previously, flushing was handled by the InnoDB master thread.
Online DDL Operations
InnoDB now enables true online index and table operations via new CREATE INDEX and ALTER TABLE DDL syntax additions. Users can add indexes and perform standard table alterations while the database remains available for application updates.
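For example, an index can be added in place without blocking concurrent DML; the table and column names here are hypothetical, while the ALGORITHM and LOCK clauses are the 5.6 additions:

```sql
-- Build the index online; reads and writes continue during the build
ALTER TABLE orders
  ADD INDEX idx_customer (customer_id),
  ALGORITHM=INPLACE, LOCK=NONE;
```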
New Full Text Search (FTS)
InnoDB now enables developers to build FULLTEXT indexes on InnoDB tables to represent text-based content and speed up application searches for words and phrases. InnoDB full-text search supports natural language and Boolean modes, proximity search, and relevance ranking.
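A minimal sketch with an illustrative schema; the MATCH ... AGAINST syntax is the standard MySQL full-text interface, now usable on InnoDB tables:

```sql
CREATE TABLE articles (
  id INT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body TEXT,
  FULLTEXT (title, body)
) ENGINE=InnoDB;

-- Natural language mode (the default)
SELECT id, title FROM articles
WHERE MATCH(title, body) AGAINST('replication tutorial');

-- Boolean mode with a proximity search: the quoted words within
-- a distance of 10 words of each other
SELECT id, title FROM articles
WHERE MATCH(title, body) AGAINST('"master slave" @10' IN BOOLEAN MODE);
```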
Transportable Tablespaces
InnoDB .ibd files created in file-per-table mode are now transportable between physical storage devices and database servers; when creating a table, you can now designate a storage location for the .ibd file outside of the MySQL data directory. This enables you to place "hot" or busy tables on an external SSD device that does not compete with application or database overhead. This new feature also enables you to easily export/import InnoDB tables between running MySQL servers without inconsistencies or mismatches caused by buffered data, in-progress transactions, etc.
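A sketch of both capabilities; the table name and the /ssd path are illustrative:

```sql
-- Place the .ibd file outside the data directory (e.g. on an SSD)
CREATE TABLE hot_data (id INT PRIMARY KEY, payload BLOB)
  ENGINE=InnoDB DATA DIRECTORY='/ssd/mysql';

-- Export from the source server:
FLUSH TABLES hot_data FOR EXPORT;  -- quiesces the table, writes a .cfg metadata file
-- copy hot_data.ibd and hot_data.cfg to the destination, then:
UNLOCK TABLES;

-- Import on the destination server (the same table definition must exist):
ALTER TABLE hot_data DISCARD TABLESPACE;
-- place the copied .ibd/.cfg files into the destination table directory, then:
ALTER TABLE hot_data IMPORT TABLESPACE;
```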
Subquery Optimizations
The optimizer now simplifies query development by optimizing subqueries using semi-joins and materialization, in some cases prior to execution. Developers no longer need to rewrite existing subqueries into joins for the optimizer to assign the most efficient access path to data. Tests run using the DBT3 Query #13 show a marked improvement in execution times over previous versions.
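For instance, an IN subquery of the following shape (table names illustrative) can now be executed as a semi-join rather than requiring a manual rewrite to a JOIN:

```sql
SELECT o.id
FROM orders o
WHERE o.customer_id IN
      (SELECT c.id FROM customers c WHERE c.country = 'US');
-- EXPLAIN in 5.6 may report semi-join strategies such as
-- FirstMatch, LooseScan, or "Materialize" in its output.
```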
File Sort Optimization
For queries with ORDER BY and a small LIMIT value, the optimizer now produces an ordered result set using a single table scan, rather than sorting the entire table. This optimization speeds up the sort when the selected rows fit into the sort buffer.
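A query of this shape (table and column names are illustrative) can benefit: the server keeps only the top N rows in the sort buffer during one scan:

```sql
SELECT id, created_at
FROM events
ORDER BY created_at DESC
LIMIT 10;
```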
Index Condition Pushdown
Index Condition Pushdown (ICP) moves more of the processing for WHERE clauses into the storage engine. Instead of fetching entire rows to evaluate against a set of WHERE conditions, ICP sends those conditions to the storage engine, which can prune the result set by examining index tuples. The result is less I/O overhead for the base table and less internal communication overhead between the server and the storage engine. This feature works with InnoDB, MyISAM, and NDBCLUSTER tables.
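A sketch with a hypothetical table: the `lastname LIKE '%smith%'` condition cannot use the index for lookup, but with ICP the engine evaluates it against the (zipcode, lastname, address) index entries before reading full rows:

```sql
CREATE TABLE people (
  id INT PRIMARY KEY,
  zipcode VARCHAR(10),
  lastname VARCHAR(50),
  address VARCHAR(100),
  KEY (zipcode, lastname, address)
);

EXPLAIN SELECT * FROM people
WHERE zipcode = '95054' AND lastname LIKE '%smith%';
-- The Extra column shows "Using index condition" when ICP applies.
```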
Improved Replication, High Availability
MySQL 5.6 introduces new replication features that enable self-healing replication topologies and high-performance masters and slaves.
Global Transaction Identifiers (GTIDs)
GTIDs enable replication transactional integrity to be tracked through a replication master/slave topology, providing a foundation for self-healing recovery and enabling DBAs and developers to easily identify the most up-to-date slave in the event of a master failure. Built directly into the native replication stream, GTIDs eliminate the need for a third-party add-on to provide this same level of tracking intelligence.
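A configuration sketch for GTID-based replication in 5.6; the hostname is hypothetical:

```ini
# my.cnf sketch (needed on both master and slave):
[mysqld]
gtid_mode = ON
enforce-gtid-consistency = ON
log-bin = mysql-bin
log-slave-updates = ON
```

On the slave, GTID auto-positioning replaces the manual binlog file/position coordinates:

```sql
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',  -- illustrative host
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```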
New MySQL Replication HA utilities
A new set of Python utilities is designed to leverage the new replication GTIDs to provide replication administration and monitoring, with automatic failover in the event of a failed master and switchover for planned maintenance on the master. This alleviates the need for additional third-party high-availability solutions, protecting web and cloud-based services against both planned and unplanned downtime without operator intervention.
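A sketch of the mysqlfailover utility from MySQL Utilities; hosts and account names are illustrative:

```shell
# Monitor a GTID-enabled topology and automatically promote the
# best candidate slave if the master fails:
mysqlfailover --master=repl_admin@master1:3306 \
              --discover-slaves-login=repl_admin \
              --failover-mode=auto
```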
Multi-Threaded Slaves
Splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This benefits workloads that isolate application data using separate databases, e.g. multi-tenant systems.
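Parallel apply is enabled on the slave via a single variable (the value shown is illustrative):

```ini
[mysqld]
# One applier thread per schema, up to 4 schemas applied in parallel:
slave_parallel_workers = 4
```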
Binary Log Group Commit
Replication masters now group writes to the binlog on disk rather than committing them one at a time, significantly improving performance on the master side of the topology. Tests have shown up to 4x performance gains with group commit enabled.
Optimized Row-based Replication
By only replicating those elements of the row image that have changed following INSERT, UPDATE and DELETE operations, replication throughput for both the master and slave(s) can be increased while binary log disk space, network resource and server memory footprint are all reduced.
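The row image is controlled by a new variable; `minimal` logs only the changed columns plus what is needed to identify the row:

```ini
[mysqld]
binlog_row_image = minimal   # default is 'full'
```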
Crash-Safe Slaves and Binlog
Slaves can automatically roll back replication to the last committed event before a crash, and resume replication without administrator intervention. Not only does this reduce operational overhead, it also eliminates the risk of data loss or corruption caused by the failure of a slave. Further, if a crash on the master corrupts the binary log, the server will automatically recover it to a position where it can be read correctly.
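Crash-safe slave operation in 5.6 is enabled by storing replication position information in transactional tables rather than files:

```ini
[mysqld]
# Keep replication state consistent with applied events by storing
# it in InnoDB tables instead of master.info / relay-log.info files:
master_info_repository    = TABLE
relay_log_info_repository = TABLE
# Discard possibly corrupt relay logs after a crash and refetch
# the events from the master:
relay_log_recovery = ON
```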
Replication Event Checksums
Replication event checksums ensure the integrity of data being replicated to a slave by detecting data corruption and returning an error, preventing the slave itself from becoming corrupt.
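Checksums can be written and verified at each stage of the replication stream:

```ini
[mysqld]
binlog_checksum           = CRC32  # write a checksum with each binlog event
master_verify_checksum    = ON     # master verifies events read back from the binlog
slave_sql_verify_checksum = ON     # slave SQL thread verifies relay log events
```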
Time-Delayed Replication
Provides protection against operational errors made on the master, for example accidentally dropping tables, in which case a delayed slave can be promoted to the new master in order to restore the database to its previous state. It also becomes possible to inspect the state of a database as it was at an earlier point in time without reloading a backup.
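The delay is configured per slave (the one-hour value is illustrative):

```sql
-- Apply events one hour behind the master, leaving a window in which
-- the slave can be stopped before a mistaken DROP TABLE is replayed:
CHANGE MASTER TO MASTER_DELAY = 3600;
```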
Provide an alternative to password in master.info
MySQL 5.6 extends the replication START SLAVE command to enable DBAs to specify the master user and password as part of the replication slave options, and to authenticate the account used to connect to the master through an external authentication plugin (user-defined or provided with MySQL Enterprise Edition). With these options, the user and password no longer need to be stored in the master.info file.
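A sketch with illustrative credentials; supplied this way, they are used for the connection but not written to master.info:

```sql
START SLAVE USER='repl_user' PASSWORD='repl_pass';
```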
Obfuscate passwords in general query log, slow query log, and binary log
Passwords in statements written to these logs are no longer recorded in plain text.
Providing password hashing with appropriate strength
Internal MySQL server authentication can now use the SHA-256 password hashing algorithm with a random salt value, via the new sha256_password authentication plugin (and via PASSWORD() when old_passwords=2).
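A sketch with a hypothetical account name:

```sql
-- Create an account that authenticates with SHA-256 hashing:
CREATE USER 'secure_user'@'localhost' IDENTIFIED WITH sha256_password;
SET old_passwords = 2;   -- make PASSWORD() produce SHA-256 hashes
SET PASSWORD FOR 'secure_user'@'localhost' = PASSWORD('S3cure!pass');
```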
Avoid passwords on the command line
MySQL 5.6 introduces a new obfuscated login-path file (.mylogin.cnf), created with the mysql_config_editor utility, that can be used to store the credentials used for command-line operations so they never appear on the command line or in plain text.
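A usage sketch; the login-path name, account, and database are illustrative:

```shell
# Store obfuscated credentials under a named login path
# (prompts for the password interactively):
mysql_config_editor set --login-path=backup \
    --host=localhost --user=backup_user --password

# Later commands reference the login path instead of a password:
mysqldump --login-path=backup mydb > mydb.sql
```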
Change Password at next login
Developers and DBAs can now control when account passwords must be changed via a new password_expired flag in the mysql.user table.
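The flag is set with the new ALTER USER statement (the account is hypothetical):

```sql
ALTER USER 'appuser'@'localhost' PASSWORD EXPIRE;
-- The account can still connect, but must set a new password
-- (e.g. SET PASSWORD = PASSWORD('...')) before running other statements.
```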
Policy-based Password validations
Passwords can now be validated for appropriate strength, length, mixed case, special characters, and other user-defined policies based on LOW, MEDIUM, and STRONG policy settings.
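Validation is provided by the validate_password plugin; the values shown are illustrative:

```ini
[mysqld]
plugin-load-add = validate_password.so
validate_password_policy = MEDIUM          # LOW, MEDIUM, or STRONG
validate_password_length = 10              # minimum password length
validate_password_special_char_count = 1   # required special characters
```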
New my.ini Parameters
InnoDB Multiple Buffer Pools
Multiple buffer pool instances are useful when the InnoDB buffer pool is large. With a large buffer pool, many data requests can be satisfied from memory, but multiple threads trying to access the single buffer pool at once can become a bottleneck; splitting the pool into instances reduces this contention.
To enable multiple buffer pools, set innodb_buffer_pool_instances to a value greater than 1, up to a maximum of 64. This option takes effect only when innodb_buffer_pool_size is set to 1GB or higher.
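A configuration sketch (the sizes are illustrative, not recommendations):

```ini
[mysqld]
innodb_buffer_pool_size      = 8G   # instances only apply at 1GB or more
innodb_buffer_pool_instances = 8    # between 1 and the maximum of 64
```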
innodb_open_files
This parameter is useful only when multiple InnoDB tablespaces are used. It specifies the maximum number of .ibd files that MySQL can keep open at one time. The minimum value is 10.
innodb_buffer_pool_dump_at_shutdown
Dumps the set of pages cached in the InnoDB buffer pool to a file when the MySQL server shuts down.
innodb_buffer_pool_load_at_startup
Loads the saved buffer pool contents back into memory when the MySQL server starts, restoring the warmed-up cache without a long ramp-up period.
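Both options are simple switches in the server configuration:

```ini
[mysqld]
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
```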
table_open_cache
The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. Before MySQL 5.1.3, this variable was called table_cache.
table_definition_cache
The number of table definitions (from .frm files) that can be stored in the definition cache. If you use a large number of tables, you can create a large table definition cache to speed up the opening of tables. Unlike the normal table cache, the table definition cache takes less space and does not use file descriptors.
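Both caches are sized in the server configuration; the values below are illustrative:

```ini
[mysqld]
table_open_cache       = 2000   # open table handles shared by all threads
table_definition_cache = 1400   # cached .frm table definitions
```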