Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

  - In-Memory storage - Real-time performance (with optional
    checkpointing to disk)
  - Transparent Auto-Sharding - Read & write scalability
  - Active-Active/Multi-Master geographic replication
  - 99.999% High Availability with no single point of failure
    and on-line maintenance
  - NoSQL and SQL APIs (including C++, Java, HTTP, Memcached
    and JavaScript/Node.js)

MySQL Cluster 7.5.8 has been released and can be downloaded from

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

MySQL Cluster 7.5 is also available from our repository for
Linux platforms; see the following for details:

The release notes are available from

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

More details can be found at

Enjoy!

Changes in MySQL NDB Cluster 7.5.8 (5.7.20-ndb-7.5.8) (2017-10-18,
General Availability)

   MySQL NDB Cluster 7.5.8 is a new release of MySQL NDB Cluster
   7.5, based on MySQL Server 5.7, including features in version
   7.5 of the NDB storage engine, and fixing recently discovered
   bugs in previous NDB Cluster releases.

   Obtaining MySQL NDB Cluster 7.5.  MySQL NDB Cluster 7.5
   source code and binaries can be obtained from

   For an overview of changes made in MySQL NDB Cluster 7.5, see
   What is New in NDB Cluster 7.5.

   This release also incorporates all bug fixes and changes made
   in previous NDB Cluster releases, as well as all bug fixes
   and feature changes added in mainline MySQL 5.7 through MySQL
   5.7.20 (see Changes in MySQL 5.7.20).

   Bugs Fixed

     * Replication: With GTIDs generated for incident log
       events, MySQL error code 1590 (ER_SLAVE_INCIDENT) could
       not be skipped using the --slave-skip-errors=1590 startup
       option on a replication slave. (Bug #26266758)
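
        For reference (a minimal, illustrative fragment), the
        option is set at slave startup, for example in my.cnf:

            [mysqld]
            # skip ER_SLAVE_INCIDENT (error 1590); this is a
            # startup-only option and cannot be changed at runtime
            slave-skip-errors=1590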

     * Errors in parsing NDB_TABLE modifiers could cause memory
       leaks. (Bug #26724559)

      * Added DUMP code 7027 to facilitate testing of issues
        relating to local checkpoints. For more information, see
        DUMP 7027. (Bug #26661468)
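
        As an illustrative sketch (DUMP commands are diagnostic
        and not intended for production use), such a code can be
        sent to all data nodes from the management client:

            ndb_mgm -e "ALL DUMP 7027"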

     * A previous fix intended to improve logging of node
       failure handling in the transaction coordinator included
       logging of transactions that could occur in normal
       operation, which made the resulting logs needlessly
        verbose. Such normal transactions are no longer written
        to the log. (Bug #26568782)
       References: This issue is a regression of: Bug #26364729.

     * Due to a configuration file error, CPU locking capability
       was not available on builds for Linux platforms. (Bug

     * Some DUMP codes used for the LGMAN kernel block were
       incorrectly assigned numbers in the range used for codes
       belonging to DBTUX. These have now been assigned symbolic
       constants and numbers in the proper range (10001, 10002,
       and 10003). (Bug #26365433)

      * Node failure handling in the DBTC kernel block consists
        of a number of tasks which execute concurrently, all of
        which must complete before TC node failure handling is
        complete. This fix extends logging coverage to record
        when each task completes and which tasks remain, and
        includes the following improvements:

          + Handling of interactions between GCP and node
            failure handling, in which TC takeover causes a GCP
            participant stall at the master TC, allowing it to
            extend the current GCI with any transactions that
            were taken over; the stall can begin and end in
            different GCP protocol states. Logging coverage is
            extended to cover all such scenarios, and debug
            logging is now more consistent and understandable.

          + Logging done by the QMGR block as it monitors the
            duration of node failure handling is performed more
            frequently: a warning log is now generated every 30
            seconds (instead of every minute) and now includes
            DBDIH block debug information (formerly written
            separately, and less often).

          + To reduce space used, DBTC instance number: is
            shortened to DBTC number:.

          + A new error code is added to assist testing.
       (Bug #26364729)

      * Error 899 Rowid already allocated... was reported under
        certain conditions involving high loads on a cluster
        having 4 or more data nodes. This occurred when a row ID
        was available at the primary but not at the backup; a
        transaction that seized the row ID at the primary failed
        to seize it at the backup because the ID had not yet
        been freed there.
        For multiple operations on the same tuple where the end
        result of the transaction is that the tuple is deleted,
        the last operation on that tuple is marked to deallocate
        the row ID. The operation is marked by setting a flag
        during the COMMIT phase. When processing a COMPLETE
        operation, this flag is checked, and if set, the row ID
        is deallocated.
       When a transaction contained a READ-EX operation,
       followed by a DELETE on the same tuple, some commits
       could be made out of order. When another transaction was
       running insert operations on the same table, the
       following events occurred in the order listed:

         1. DELETE committed on primary

         2. READ-EX committed on primary, marked as deallocator
            on primary

         3. READ-EX completed on primary, row ID deallocated

         4. Freed row ID on primary seized by INSERT operation

         5. INSERT tries to seize row ID on backup, gets Error
            899 because row ID has not yet been freed on backup

         6. DELETE completed on backup, row ID deallocated
        This problem is fixed by enforcing a constraint that the
        deallocator must be a write operation: read operations
        are now ignored when choosing the deallocator, and the
        last write operation is chosen instead. Since write
       operations are completed first at the backup and then at
       the primary, this ensures that the row ID cannot be freed
       at the primary before it is freed at the backup. In
       addition, since row ID deallocation is now performed on
       the primary and on the backup by the same operation, the
       time between deallocation on the backup and deallocation
       on the primary is significantly shortened. (Bug
       References: See also: Bug #85706, Bug #26040805.

     * NDB Cluster did not compile successfully when the build
        (Bug #86881, Bug #26375985)

     * A potential hundredfold signal fan-out when sending a
       START_FRAG_REQ signal could lead to a node failure due to
       a job buffer full error in start phase 5 while trying to
       perform a local checkpoint during a restart. (Bug #86675,
       Bug #26263397)
       References: See also: Bug #26288247, Bug #26279522.

     * Compilation of NDB Cluster failed when using
       to build only the client libraries. (Bug #85524, Bug #25741111)

MySQL General Mailing List