ArielGlenn has uploaded a new change for review.

  https://gerrit.wikimedia.org/r/50180


Change subject: bugfixes, more documentation, good enough for v0.0.1 now.
......................................................................

bugfixes, more documentation, good enough for v0.0.1 now.

mwxml2sql:
- add missing quote at beginning of text field output
- remove namespace from title before output
- -1 for "ignore size" in sql_escape, missed this one earlier
- sql escape the username field before output
- wrote page and text table fields in the wrong order, fixed (needs
  to match the order in the create statements from MW or you have
  to add a trailing line of columns when using LOAD DATA INFILE)
- fix batch writing of revs (COMMIT after every 1000 lines)
- make revision output file use full table name in filename like
  the other output files do
- if we don't write gzipped text blobs, write CREATE TABLE statement
  which uses innodb table-based compression for the text table
- quiet compiler initialization warnings

sql2txt:
- output buffer was too short, fixed segfault
- after doing a tuple which ends the line, don't refill the
  buffer, let the line processor handle it
- clean up short option handling (mismatched short and long opts)
- convert NULL fields to \N for LOAD DATA INFILE
- docs for sql2txt in README, more timing info, reorganize docs,
  note that we tested on FreeBSD too
- fix creation of output filenames with no mw version

Change-Id: I095fb169024a243ca66bb8b9e511baae06aeb197
---
M xmlfileutils/README
M xmlfileutils/filebuffers.c
M xmlfileutils/mwxml2sql.c
M xmlfileutils/mwxml2sql.h
M xmlfileutils/mwxmlelts.c
M xmlfileutils/sqlutils.c
6 files changed, 162 insertions(+), 56 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/dumps refs/changes/80/50180/1

diff --git a/xmlfileutils/README b/xmlfileutils/README
index 9f5268d..6138091 100644
--- a/xmlfileutils/README
+++ b/xmlfileutils/README
@@ -1,3 +1,10 @@
+This is a small set of tools to help folks who want to import
+XML dumps into a local instance of MediaWiki.
+
+Tools:
+
+mwxml2sql
+
 This is a bare bones converter of MediaWiki XML dumps to sql.
 It does one thing and hopefully at least doesn't suck at it.
 If you want the swiss army knife of converters, better shop
@@ -11,17 +18,25 @@
 toolchain or an equivalent C compiler and its supporting
 libraries. You'll also need the 'make' utility.
 
-This program has been tested only on 64-bit Linux.  You can
-try building it on other platforms but without any support
-from the author.  If you do build it on another platform and
+sql2txt
+
+This converts sql produced by mysqldump, or by the XML dump to sql
+converter above, into a format that can be read by MySQL via
+LOAD DATA INFILE, a much faster process than having MySQL execute a
+pile of INSERTs. If you feed it data formatted differently from that
+output, or with rows broken across lines etc., it may produce garbage.
+
+These programs have been tested only on 64-bit Linux and on FreeBSD.
+You can try building them on other platforms but without any support
+from the author.  If you do build them on another platform and
 run it successfully, let us know and we'll include that
 information here.
 
 INSTALLATION
 
 Unpack the distribution someplace, cd into it, 'make'
-and if you like 'make install'.  The binary will be
-installed into /usr/local/bin.  If you want it someplace
+and if you like 'make install'.  The binaries will be
+installed into /usr/local/bin.  If you want them someplace
 else, edit the Makefile and change the line
 'PREFIX=/usr/local'  as desired.
 
@@ -61,7 +76,17 @@
 gzipped output files: 460M for page table, 905M for revision table,
 9.6GB for text table.
 
+I ran mwdumper on this same file on the same hardware and it took
+just over two hours.  Somewhere there is a small memory leak, though
+not enough to concern someone working with the current page content
+only.
+
+A run of mwimport (perl script) on the same hardware took slightly
+over two hours.
+
 USING THE OUTPUT
+
+Making a local mirror with all current pages
 
 The sql files produced keep the page, revision and text ids of the
 input XML files.  This means that if your database uses only XML
@@ -70,28 +95,47 @@
 between imports or you import on top of an existing database you
 will encounter problems.
 
-Once you have generated the files, you can import them into mysql
-(5.x, untested and probably broken with 4.x). If the tables don't
-exist you can create them with the generated -createtables- file.
-This creates tables with InnoDB and binary character set; we
-recommend this setup for everyone.
+Convert the generated sql files, and all the other needed sql tables,
+to the format needed for LOAD DATA INFILE, using the command
 
-After the tables are created (if needed) and the page, revision
-and text tables are imported, you should import the other sql tables
-from the same download.  This will save you from needing to rebuild
-things like the links tables afterwards (horribly slow).
+zcat blah.sql.gz | sql2txt > blah.txt
+(sql2txt --help for the full usage message)
+
+You can skip converting the image, imagelinks, oldimage and
+user_groups tables.  Loading in the other tables saves you from
+needing to rebuild things like the links tables afterwards
+(horribly slow).
+
+Make sure the database is set up with the right character set
+(we like 'binary'), set your client character set (utf-8) and
+change any settings you want to for speed (turning off for
+example foreign_key_checks and/or unique_checks).
+
+Now load your files one at a time. (Tested on MySQL 5.5,
+probably ok on 5.1, probably broken on 4.x.) You may find that
+files of multiple GB need to be split into smaller pieces
+and fed in separately; let us know if that's the case so
+we can put it on our todo list.
+
+If you don't have a ton of space you may want to convert your
+sql files one at a time and remove the plain text file after
+MySQL has read it in.
 
 Note that the other sql tables are not guaranteed to be 100% consistent
 with the XML files, since they are produced at different times and we
 don't lock the tables before dumping.  Null edits to a page ought to
 fix any rendering issues it may have based on out of sync tables.
 
+Making a local mirror with a subset of current pages
+
 If you want to import only a subset of the current pages from a wiki
-instead of the entire thing, this is NOT the tool for you. You should
+instead of the entire thing, these are NOT the tools for you. You should
 consider using maintenance/importDump.php (in your MediaWiki installation
-directory) instead, if the subset is small enough.  Loading in the other
-tables will give you a lot of links and entries that won't be valid,
-and the other option, maintenance/rebuildall.php, will also be very slow.
+directory) instead, if the subset is small enough.  Converting XML dumps
+to page/rev/text tables and then loading in the other tables from our
+generated sql files will give you a lot of links and entries that won't
+be valid, and the other option, maintenance/rebuildall.php, will also be
+very slow.
 
 WARNINGS
 
@@ -99,7 +143,13 @@
 and not very comprehensively.  Please help discover bugs!
 
 This does NOT support dumps from wikis with LiquidThread enabled.
-That's a feature set for a future version.
+That's a feature set for a future version (or not).
+
+Other untested features: importing output with gzipped text revisions,
+using history xml dumps.
+
+Changes to the xml schema, in particular new tags or a switch in
+the order of items, will break these programs.
 
 LICENSE
 
diff --git a/xmlfileutils/filebuffers.c b/xmlfileutils/filebuffers.c
index 1640043..fac444b 100644
--- a/xmlfileutils/filebuffers.c
+++ b/xmlfileutils/filebuffers.c
@@ -588,7 +588,7 @@
   one write stream!
 */
output_file_t *init_output_file(char *basename, char *suffix, mw_version_t *mwv) {
-  output_file_t *outf, *current, *head = NULL;
+  output_file_t *outf, *current = NULL, *head = NULL;
   mw_version_t *next = NULL;
   int do_once = 1;
   char *version = NULL;
@@ -625,7 +625,7 @@
     if (outf->mwv) version = outf->mwv->version;
     else version = NULL;
 
-  outf->filename = (char *)malloc(strlen(basename) + (suffix?strlen(suffix):0) + strlen(version) + 2);
+  outf->filename = (char *)malloc(strlen(basename) + (suffix?strlen(suffix):0) + (version?strlen(version):0) + 2);
     if (!outf->filename) {
       fprintf(stderr,"failed to get memory for output file information\n");
       free_output_file(head);
diff --git a/xmlfileutils/mwxml2sql.c b/xmlfileutils/mwxml2sql.c
index 90fd252..999a7bb 100644
--- a/xmlfileutils/mwxml2sql.c
+++ b/xmlfileutils/mwxml2sql.c
@@ -150,7 +150,7 @@
   fprintf(stderr,"\n");
  fprintf(stderr,"mediawiki   (m):   version of mediawiki for which to output sql;  supported versions are\n");
  fprintf(stderr,"                   shown in the program version information, available via the 'version' option\n");
-  fprintf(stderr,"                   used to derive the names of the sql files for the page, revs and text content\n");
+  fprintf(stderr,"                   used to derive the names of the sql files for the page, revision and text content\n");
  fprintf(stderr,"stubs       (s):   name of stubs xml file; .gz and .bz2 files will be silently uncompressed.\n");
   fprintf(stderr,"\n");
   fprintf(stderr,"Optional arguments:\n");
@@ -158,7 +158,7 @@
  fprintf(stderr,"text        (t):   name of text xml file; .gz and .bz2 files will be silently uncompressed.\n");
  fprintf(stderr,"                   if not specified, data will be read from stdin\n");
  fprintf(stderr,"mysqlfile   (f):   name of filename (possibly ending in .gz or .bz2 or .txt) which will be\n");
-  fprintf(stderr,"                   used to derive the names of the sql files for the page, revs and text content\n");
+  fprintf(stderr,"                   used to derive the names of the sql files for the page, revision and text content\n");
  fprintf(stderr,"                   if none is specified, all data will be written to stdout, but since sql INSERT\n");
  fprintf(stderr,"                   statements are batched on the assumption that they will be in three separate\n");
  fprintf(stderr,"                   files, this will likely not be what you want.\n");
@@ -170,6 +170,8 @@
   fprintf(stderr,"Flags:\n");
   fprintf(stderr,"\n");
  fprintf(stderr,"compress    (t):   compress text revisions in the sql output (requires the 'text' option\n");
+  fprintf(stderr,"                   if this option is not set, the text table create statement will include\n");
+  fprintf(stderr,"                   parameters for InnoDB table-based compression instead.\n");
   fprintf(stderr,"help        (h):   print this help message and exit\n");
  fprintf(stderr,"nodrop      (n):   in the CREATE TABLES sql output, do not add DROP IF EXISTS beforehand;\n");
  fprintf(stderr,"                   if this option is given, INSERT IGNORE statements will be written instead\n");
@@ -328,7 +330,7 @@
 
     sprintf(mysql_createtables_file, "%s-createtables.sql", filebase);
     sprintf(mysql_page_file, "%s-page.sql", filebase);
-    sprintf(mysql_revs_file, "%s-revs.sql", filebase);
+    sprintf(mysql_revs_file, "%s-revision.sql", filebase);
 
    mysql_createtables = init_output_file(mysql_createtables_file, filesuffix, mwv);
     mysql_page = init_output_file(mysql_page_file, filesuffix, mwv);
@@ -350,7 +352,9 @@
     exit(1);
   };
 
-  write_createtables_file(mysql_createtables, nodrop, tables);
+  /* if we compress text blobs then don't request innodb table compression,
+     otherwise we want it */
+  write_createtables_file(mysql_createtables, nodrop, !text_compress, tables);
   close_output_file(mysql_createtables);
  if (verbose) fprintf(stderr,"Create tables sql file written, beginning scan of xml\n");
 
@@ -387,7 +391,7 @@
   }
 
   while (! eof) {
-    result = do_page(stubs, text, text_compress, mysql_page, mysql_revs, mysql_text, verbose, tables, nodrop, start_page_id);
+    result = do_page(stubs, text, text_compress, mysql_page, mysql_revs, mysql_text, s_info, verbose, tables, nodrop, start_page_id);
     if (!result) break;
     pages_done++;
    if (verbose && !(pages_done%1000)) fprintf(stderr,"%d pages processed\n", pages_done);
diff --git a/xmlfileutils/mwxml2sql.h b/xmlfileutils/mwxml2sql.h
index f4f4ce1..e60b2ea 100644
--- a/xmlfileutils/mwxml2sql.h
+++ b/xmlfileutils/mwxml2sql.h
@@ -212,12 +212,14 @@
 void print_sql_field(FILE *f, char *field, int isstring, int islast);
 void copy_sql_field(char *outbuf, char *field, int isstring, int islast);
 char *sql_escape(char *s, int s_size, char *out, int out_size);
-char *tab_escape(char *s, int s_size, char *out, int out_size);
+char *load_data_escape(char *s, int s_size, char *out, int out_size, int donulls);
 void title_escape(char *t);
+void namespace_strip(char *t, siteinfo_t *s);
+
 char *un_xml_escape(char *value, char *output, int last);
 void digits_only(char *buf);
 void write_metadata(output_file_t *f, char *schema, siteinfo_t *s);
-void write_createtables_file(output_file_t *f, int nodrop, tablenames_t *t);
+void write_createtables_file(output_file_t *f, int nodrop, int table_compress, tablenames_t *t);
 tablenames_t *setup_table_names(char *prefix);
 
 int find_first_tag(input_file_t *f, char *holder, int holder_size);
@@ -238,7 +240,7 @@
 int do_contributor(input_file_t *f, contributor_t *c, int verbose);
 int do_text(input_file_t *f,  output_file_t *sqlt, revision_t *r, int verbose, tablenames_t *t, int insrt_ignore, int get_sha1, int get_text_len, int text_commpress);
 int do_revision(input_file_t *stubs, input_file_t *text, int text_compress, output_file_t *sqlp, output_file_t *sqlr, output_file_t *sqlt, page_t *p, int verbose, tablenames_t *t, int insert_ignore);
-int do_page(input_file_t *stubs, input_file_t *text, int text_compress, output_file_t *sqlp, output_file_t *sqlr, output_file_t *sqlt, int verbose, tablenames_t *t, int insert_ignore, char *start_page_id);
+int do_page(input_file_t *stubs, input_file_t *text, int text_compress, output_file_t *sqlp, output_file_t *sqlr, output_file_t *sqlt, siteinfo_t *s_info, int verbose, tablenames_t *t, int insert_ignore, char *start_page_id);
 int do_namespace(input_file_t *f, namespace_t *n, int verbose);
 int do_namespaces(input_file_t *f, siteinfo_t *s, int verbose);
 int do_siteinfo(input_file_t *f, siteinfo_t **s, int verbose);
diff --git a/xmlfileutils/mwxmlelts.c b/xmlfileutils/mwxmlelts.c
index 1e7e281..ec4a0b1 100644
--- a/xmlfileutils/mwxmlelts.c
+++ b/xmlfileutils/mwxmlelts.c
@@ -350,7 +350,7 @@
   if (text_bytes_written == 0) {
     strcpy(buf,"BEGIN;\n");
     put_line_all(sqlt, buf);
-    snprintf(buf, sizeof(buf), "INSERT %s INTO %s (old_id, old_flags, old_text) VALUES\n", insert_ignore?"IGNORE":"", t->text);
+    snprintf(buf, sizeof(buf), "INSERT %s INTO %s (old_id, old_text, old_flags) VALUES\n", insert_ignore?"IGNORE":"", t->text);
     put_line_all(sqlt, buf);
   }
   else {
@@ -360,7 +360,7 @@
   /* text: old_text old_flags */
   /* write the beginning piece */
   snprintf(buf, sizeof(buf),                                           \
-            "(%s, '%s', '", r->text_id, text_compress?"utf-8,gzip":"utf-8");
+          "(%s, '",r->text_id);
   put_line_all(sqlt, buf);
 
   if (verbose > 1) fprintf(stderr,"text info: insert start of line written\n");
@@ -450,17 +450,16 @@
   }
   /* write out the end piece */
   text_bytes_written += text_field_len;
+  strcpy(buf,"', ");
+  put_line_all(sqlt, buf);
+
+  sprintf(buf,"'%s')", text_compress?"utf-8,gzip":"utf-8");
+  put_line_all(sqlt, buf);
 
   if (text_bytes_written > MAX_TEXT_PACKET) {
-    strcpy(buf,"');\n");
-    put_line_all(sqlt, buf);
-    strcpy(buf,"COMMIT;\n");
+    strcpy(buf,";\nCOMMIT;\n");
     put_line_all(sqlt, buf);
     text_bytes_written = 0;
-  }
-  else {
-    strcpy(buf,"')");
-    put_line_all(sqlt, buf);
   }
 
   
@@ -555,7 +554,8 @@
   contributor_t c;
   int get_sha1 = 0;
   int get_text_len = 0;
-  char escaped_comment [FIELD_LEN*2];
+  char escaped_comment[FIELD_LEN*2];
+  char escaped_user[FIELD_LEN*2];
 
   char attrs[MAX_ATTRS_STR_LEN];
   char *attrs_ptr = NULL;
@@ -721,6 +721,7 @@
   }
 
   sql_escape(r.comment,-1,escaped_comment, sizeof(escaped_comment));
+  if (c.username[0]) sql_escape(c.username,-1,escaped_user, sizeof(escaped_user));
   if (verbose > 1) {
    fprintf(stderr,"revision info: id %s, parentid %s, timestamp %s, minor %s, comment %s, sha1 %s, model %s, format %s, len %s, textid %s\n", r.id, r.parent_id, r.timestamp, r.minor, escaped_comment, r.sha1, r.model, r.format, r.text_len, r.text_id);
   }
@@ -778,7 +779,7 @@
   snprintf(out_buf, sizeof(out_buf),              \
       "(%s, %s, %s, '%s', %s, '%s', '%s', %s, %s", \
           r.id, p->id, r.text_id, escaped_comment, c.id[0]?c.id:"0",   \
-          c.ip[0]?c.ip:c.username, \
+          c.ip[0]?c.ip:escaped_user, \
           r.timestamp, r.minor, "0");
   put_line_all(sqlr, out_buf);
   if (verbose > 2) fprintf(stderr,"(%s) %s",t->revs, out_buf);
@@ -802,6 +803,7 @@
     strcpy(out_buf,");\nCOMMIT;\n");
     put_line_all(sqlr, out_buf);
     if (verbose > 2) fprintf(stderr,out_buf);
+    rev_rows_written = 0;
   }
   else {
     strcpy(out_buf,")");
@@ -902,7 +904,7 @@
        is successfully read
 */
 
-int do_page(input_file_t *stubs, input_file_t *text, int text_compress, output_file_t *sqlp, output_file_t *sqlr, output_file_t *sqlt, int verbose, tablenames_t *t, int insert_ignore, char*start_page_id) {
+int do_page(input_file_t *stubs, input_file_t *text, int text_compress, output_file_t *sqlp, output_file_t *sqlr, output_file_t *sqlt, siteinfo_t *s, int verbose, tablenames_t *t, int insert_ignore, char*start_page_id) {
   page_t p;
  char out_buf[1024]; /* seriously how long can username plus title plus the rest of the cruft be? */
   int want_text = 0;
@@ -1002,6 +1004,7 @@
     }
   }
   sql_escape(p.title,-1, escaped_title, sizeof(escaped_title));
+  namespace_strip(escaped_title, s);
   title_escape(escaped_title);
   /* we also need blank to _, see what else happens, woops */
   if (verbose > 1) {
@@ -1035,7 +1038,7 @@
     if (verbose > 2) fprintf(stderr,"(%s) %s",t->page, out_buf);
 
     snprintf(out_buf, sizeof(out_buf), "INSERT %s INTO %s \
-(page_id, page_title, page_namespace, page_restrictions, \
+(page_id, page_namespace, page_title, page_restrictions, \
 page_counter, page_is_redirect, page_is_new, \
 page_random, page_touched, page_latest, page_len", insert_ignore?"IGNORE":"", 
t->page);
     put_line_all(sqlp, out_buf);
@@ -1056,8 +1059,8 @@
   /* fixme having a fixed size buffer kinda sucks here */
   /* text: page_title page_restrictions page_touched */
   snprintf(out_buf, sizeof(out_buf),                           \
-       "(%s, '%s', %s, '%s', %s, %s, %s, %.14f, '%s', %s, %s", \
-          p.id, escaped_title, p.ns, p.restrictions, \
+       "(%s, %s, '%s', '%s', %s, %s, %s, %.14f, '%s', %s, %s", \
+          p.id, p.ns, escaped_title, p.restrictions,           \
           "0", p.redirect, "0", drand48(), p.touched, p.latest, p.len );
   put_line_all(sqlp, out_buf);
   if (verbose > 2) fprintf(stderr,"(%s) %s",t->page, out_buf);
diff --git a/xmlfileutils/sqlutils.c b/xmlfileutils/sqlutils.c
index 5720754..284501d 100644
--- a/xmlfileutils/sqlutils.c
+++ b/xmlfileutils/sqlutils.c
@@ -197,7 +197,7 @@
 
   from = s;
   to = out;
-  while ((!s_size && *from) || ind < s_size) {
+  while ((s_size == -1 && *from) || ind < s_size) {
     if (copied +3 > out_size) {
       /* null terminate here and return index */
       *to = '\0';
@@ -254,6 +254,7 @@
      s_size      length of string to escape
      out         holder for result
      out_size    size of holder for result
+     donull      convert NULL to \N
 
    returns:
       pointer to the next byte in s to be processed, or to NULL if all
@@ -261,12 +262,13 @@
 
    this function escapes tabs in character strings for input to LOAD FILE
    adding a trailing '\0' to the result (you should pass a string that
-   already has the remainder of the mysql escapes applied)
+   already has the remainder of the mysql escapes applied), also potentially
+   converting NULL to \N
 
    if s_size is -1, the string to escape must be null terminated
    and its length is not checked.
 */
-char *tab_escape(char *s, int s_size, char *out, int out_size) {
+char *load_data_escape(char *s, int s_size, char *out, int out_size, int donull) {
   char c;
   char *from ;
   char *to;
@@ -275,6 +277,17 @@
 
   from = s;
   to = out;
+
+  if (donull) {
+    if ((s_size == -1 && !strcmp(from,"NULL")) || (s_size == 4 && !strncmp(from, "NULL", 4))) {
+      *to = '\\';
+      to++;
+      *to = 'N';
+      to++;
+      *to = '\0';
+      return(NULL);
+    }
+  }
 
   while ((s_size == -1 && *from) || ind < s_size) {
     if (copied +3 > out_size) {
@@ -321,6 +334,40 @@
     if (*t == ' ') *t = '_';
     t++;
   }
+  return;
+}
+
+/*
+  args:
+    t    null-terminated title string to be converted
+    s    site info structure with namespace information filled in
+
+  this function strips off any namespace from the title
+  in place
+*/
+void namespace_strip(char *t, siteinfo_t *s) {
+  namespace_t *ns;
+  char *colon, *rest;
+
+  colon = strchr(t, ':');
+  if (!colon) return;
+  ns = s->namespaces;
+
+  *colon = '\0';
+  while (ns) {
+    if (!strcmp(t, ns->namespace)) {
+      rest = colon+1;
+      while (*rest) {
+       *t = *rest;
+       t++;
+       rest++;
+      }
+      *t = '\0';
+      return;
+    }
+    ns = ns->next;
+  }
+  *colon = ':';
   return;
 }
 
@@ -405,7 +452,7 @@
   page, revision and text tables for the MediaWiki version specified
 
  */
-void write_createtables_file(output_file_t *f, int nodrop, tablenames_t *t) {
+void write_createtables_file(output_file_t *f, int nodrop, int table_compress, tablenames_t *t) {
   char out_buf[256];
   mw_version_t *mwv;
 
@@ -435,10 +482,13 @@
     put_line(f, out_buf);
     snprintf(out_buf, sizeof(out_buf), "PRIMARY KEY (`old_id`)\n");
     put_line(f, out_buf);
-    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary\n");
+    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary");
     put_line(f, out_buf);
-
-    snprintf(out_buf, sizeof(out_buf), "\n");
+    if (table_compress) {
+      snprintf(out_buf,sizeof(out_buf), " ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16");
+      put_line(f, out_buf);
+    }
+    snprintf(out_buf, sizeof(out_buf), ";\n\n");
     put_line(f, out_buf);
     
     if (!nodrop) {
@@ -518,15 +568,12 @@
      snprintf(out_buf, sizeof(out_buf), "KEY `page_redirect_namespace_len` (`page_is_redirect`,`page_namespace`,`page_len`)\n");
       put_line(f, out_buf);
     }
-    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary\n");
+    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary;\n\n");
     put_line(f, out_buf);
 
    /* auto_increment how does it work when we insert a bunch of crap into a table with fixed values
        for those indexes? */
 
-    snprintf(out_buf, sizeof(out_buf), "\n");
-    put_line(f, out_buf);
-    
     if (!nodrop) {
      snprintf(out_buf, sizeof(out_buf), "DROP TABLE IF EXISTS `%s`;\n", t->revs);
       put_line(f, out_buf);
@@ -625,7 +672,7 @@
       put_line(f, out_buf);
     }
 
-    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary\n");
+    snprintf(out_buf, sizeof(out_buf), ") ENGINE=InnoDB DEFAULT CHARSET=binary;\n");
     put_line(f, out_buf);
     f = f->next;
   }

-- 
To view, visit https://gerrit.wikimedia.org/r/50180
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I095fb169024a243ca66bb8b9e511baae06aeb197
Gerrit-PatchSet: 1
Gerrit-Project: operations/dumps
Gerrit-Branch: ariel
Gerrit-Owner: ArielGlenn <[email protected]>
