Bug ID: 62360
           Summary: LocalFileDeleteBatch::getHashes oldimage table scan on
                    commons is slow
           Product: MediaWiki
           Version: 1.23-git
          Hardware: All
                OS: All
            Status: NEW
          Severity: major
          Priority: Unprioritized
         Component: File management
       Web browser: ---
   Mobile Platform: ---

Spotted this on commonswiki master:

SELECT /* LocalFileDeleteBatch::getHashes */ oi_archive_name, oi_sha1 FROM
`oldimage` WHERE oi_archive_name IN

Commons oldimage has millions of rows, and this query does a full table scan
because oi_archive_name is indexed only in the second position of the existing
composite index:

KEY `oi_name_archive_name` (`oi_name`,`oi_archive_name`(14))
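
Since that index leads with oi_name, one way to avoid the scan (see point 1
below) would be to constrain the query on oi_name as well, which should let
MySQL use oi_name_archive_name. A rough sketch only, with a placeholder file
name standing in for the real oi_name value and (...) standing for the
original archive-name list:

SELECT /* LocalFileDeleteBatch::getHashes */ oi_archive_name, oi_sha1
FROM `oldimage`
WHERE oi_name = 'Example.jpg'      -- placeholder; the deleting code knows the file title
  AND oi_archive_name IN (...);    -- same list as in the original query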

This transaction was also holding locks on the `page` table, presumably from an
earlier query. Those locks caused a number of other file-upload transactions to
exceed innodb_lock_wait_timeout.
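
For reference, the effective timeout those transactions hit can be checked with
a standard MySQL statement:

SHOW GLOBAL VARIABLES LIKE 'innodb_lock_wait_timeout';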

tendril> select * from innodb_locks_log where lock_trx_id = 'AB64BF626'\G
*************************** 1. row ***************************
      stamp: 2014-03-07 05:51:47
  server_id: 1060
    lock_id: AB64BF626:0:34662177:177
lock_trx_id: AB64BF626
  lock_mode: X
  lock_type: RECORD
 lock_table: `commonswiki`.`page`
 lock_index: `name_title`
 lock_space: 0
  lock_page: 34662177
   lock_rec: 177
  lock_data: 6,
... more similar locks ...
16 rows in set (0.35 sec)


1. oi_archive_name deserves an index, or can oi_name be included in the query?

2. oldimage deserves a primary key to make online schema changes possible. (A
rough DDL sketch for points 1 and 2 follows below.)

3. Can the query go to a slave?
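
For illustration only, the schema changes in points 1 and 2 could look roughly
like this; the index and column names are placeholders, and the prefix length
simply mirrors the existing composite index:

-- point 1: index oi_archive_name directly
ALTER TABLE `oldimage` ADD INDEX oi_archive_name (oi_archive_name(14));

-- point 2: surrogate primary key so online schema change tools can run on the table
ALTER TABLE `oldimage` ADD COLUMN oi_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

Point 3 would be a code change in LocalFileDeleteBatch rather than DDL.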
