ArielGlenn has uploaded a new change for review. ( https://gerrit.wikimedia.org/r/338519 )
Change subject: trailing whitespace cleanup
......................................................................
trailing whitespace cleanup
Change-Id: Ifda9eaa02c1edb083ddebd3ba17866e95729bc41
---
M xmldumps-backup/mwbzutils/Makefile
M xmldumps-backup/mwbzutils/README
M xmldumps-backup/mwbzutils/dumpbz2filefromoffset.c
M xmldumps-backup/mwbzutils/dumplastbz2block.c
M xmldumps-backup/mwbzutils/findpageidinbz2xml.c
M xmldumps-backup/mwbzutils/httptiny.c
M xmldumps-backup/mwbzutils/mwbzlib.c
M xmldumps-backup/mwbzutils/recompressxml.c
M xmldumps-backup/mwbzutils/writeuptopageid.c
9 files changed, 96 insertions(+), 96 deletions(-)
git pull ssh://gerrit.wikimedia.org:29418/operations/dumps/mwbzutils refs/changes/19/338519/1
diff --git a/xmldumps-backup/mwbzutils/Makefile b/xmldumps-backup/mwbzutils/Makefile
index 0b869ee..be1a905 100644
--- a/xmldumps-backup/mwbzutils/Makefile
+++ b/xmldumps-backup/mwbzutils/Makefile
@@ -1,7 +1,7 @@
# ------------------------------------------------------------------
# This Makefile builds binaries which rely on three source files
-# from libbzip2 version 1.0.6. (See bz2libfuncs.c, bzlib.h and
-# bzlib_private.h; the first is slightly modified while the
+# from libbzip2 version 1.0.6. (See bz2libfuncs.c, bzlib.h and
+# bzlib_private.h; the first is slightly modified while the
# second is unchanged from the library version.)
#
# The copyright for those two files is as follows:
@@ -142,7 +142,7 @@
rm -f $(DOCDIR)LICENSE_BZ
rm -f $(DOCDIR)COPYING
-clean:
+clean:
rm -f *.o *.a dumplastbz2block findpageidinbz2xml \
getlastidinbz2xml \
checkforbz2footer dumpbz2filefromoffset \
@@ -155,7 +155,7 @@
reallyclean: distclean
rm -f docs/*.1
-dist:
+dist:
rm -f $(DISTNAME)
ln -s -f . $(DISTNAME)
tar cvf $(DISTNAME).tar \
diff --git a/xmldumps-backup/mwbzutils/README b/xmldumps-backup/mwbzutils/README
index 6762049..a377a96 100644
--- a/xmldumps-backup/mwbzutils/README
+++ b/xmldumps-backup/mwbzutils/README
@@ -7,10 +7,10 @@
quickly instead of requiring a serial read/decompress of the file. Some
of these files range from 2 to 30 GB in size, so serial access is too slow.
-The files bz2libfuncs.c, bzlib.h and bzlib_private.h are taken from bzip2/libbzip2
-version 1.0.6 of 6 September 2010 (Copyright (C) 1996-2010 Julian Seward
-<[email protected]>) and as such their copyright license is in the file
-LICENSE_BZ; all other files in the package are released under the GPL,
+The files bz2libfuncs.c, bzlib.h and bzlib_private.h are taken from bzip2/libbzip2
+version 1.0.6 of 6 September 2010 (Copyright (C) 1996-2010 Julian Seward
+<[email protected]>) and as such their copyright license is in the file
+LICENSE_BZ; all other files in the package are released under the GPL,
see the file COPYING for details.
Scripts:
@@ -30,25 +30,25 @@
dumpbz2filefromoffset - Uncompresses the file from the first bz2 block found after
the specified offset, and dumps the results to stdout.
- This will first look for and dump the <mediawiki> header,
+ This will first look for and dump the <mediawiki> header,
up to and including the </siteinfo> tag; then it will
find the first <page> tag in the first bz2 block after
the specified output and dump the contents from that point on.
-dumplastbz2block - Finds the last bz2 block marker in a file and dumps whatever
- can be decompressed after that point; the header of the file
- must be intact in order for any output to be produced. This
- will produce output for truncated files as well, as long as
- there is "enough" data after the bz2 block marker.
+dumplastbz2block - Finds the last bz2 block marker in a file and dumps whatever
+ can be decompressed after that point; the header of the file
+ must be intact in order for any output to be produced. This
+ will produce output for truncated files as well, as long as
+ there is "enough" data after the bz2 block marker.
Exits with 0 if decompression of some data can be done,
1 if decompression fails, and -1 on error.
-findpageidinbz2xml - Given a bzipped and possibly truncated file, and a page id,
+findpageidinbz2xml - Given a bzipped and possibly truncated file, and a page id,
hunt for the page id in the file; this assumes that the
bz2 header is intact and that page ids are steadily increasing
- throughout the file. It writes the offset of the relevant block
- (from beginning of file) and the first pageid found in that block,
+ throughout the file. It writes the offset of the relevant block
+ (from beginning of file) and the first pageid found in that block,
to stdout. Format of output:
position:xxxxx pageid:nnn
It exits with 0 on success, -1 on error.
@@ -58,7 +58,7 @@
xml file.
recompresszml - Reads an xml stream of pages and writes multiple bz2 compressed
- streams, concatenated, to stdout, with the specified number of
+ streams, concatenated, to stdout, with the specified number of
pages per stream. The mediawiki site info header is in its
own bz2 stream. Each stream can be extracted as a separate file
by an appropriate tool, checking for the byte-aligned string "BZh91AY&SY"
@@ -77,7 +77,7 @@
External library routines:
bz2libfuncs.c - the BZ2_bzDecompress() routine, modified so that it does not do
- a check of the cumulative CRC (since we read from an arbitrary
+ a check of the cumulative CRC (since we read from an arbitrary
point in most of these files, we won't have a cumulative CRC
that makes any sense). It's a one line fix but it requires
unRLE_obuf_to_output_FAST() which is marked static in the original
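The README text above says that each stream written by recompressxml can be split out by checking for the byte-aligned string "BZh91AY&SY". As a rough illustration of that check (hypothetical helper name, byte-aligned case only; the tools in this repo also hunt for markers at arbitrary bit offsets), such a scan might look like:

```c
#include <stddef.h>
#include <string.h>

/* Return the offset of the first byte-aligned bz2 stream header
   "BZh9" + block magic "1AY&SY" in buf, or -1 if absent.
   Hypothetical helper, not code from this change. */
static long find_bz2_stream_start(const unsigned char *buf, size_t len) {
    static const unsigned char magic[] = "BZh91AY&SY";
    size_t mlen = sizeof(magic) - 1;          /* drop the NUL */
    if (len < mlen) return -1;
    for (size_t i = 0; i + mlen <= len; i++) {
        if (memcmp(buf + i, magic, mlen) == 0)
            return (long)i;
    }
    return -1;
}
```

Note that "BZh9" is specific to level-9 streams; other block sizes use a different digit, which this sketch does not handle.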
diff --git a/xmldumps-backup/mwbzutils/dumpbz2filefromoffset.c b/xmldumps-backup/mwbzutils/dumpbz2filefromoffset.c
index 9197c61..fbf44cf 100644
--- a/xmldumps-backup/mwbzutils/dumpbz2filefromoffset.c
+++ b/xmldumps-backup/mwbzutils/dumpbz2filefromoffset.c
@@ -59,10 +59,10 @@
exit(-1);
}
-/*
+/*
dump the <mediawiki> header (up through
- </siteinfo> close tag) found at the
- beginning of xml dump files.
+ </siteinfo> close tag) found at the
+ beginning of xml dump files.
returns:
0 on success,
-1 on error
@@ -113,7 +113,7 @@
bfile.strm.next_out = (char *)b->next_to_fill;
bfile.strm.avail_out = b->end - b->next_to_fill;
}
- }
+ }
}
else {
fprintf(stderr,"missing mediawiki header from bz2 xml file\n");
@@ -166,8 +166,8 @@
}
}
-/*
- find the first page id after position in file
+/*
+ find the first page id after position in file
decompress and dump to stdout from that point on
returns:
0 on success,
@@ -251,7 +251,7 @@
b->next_to_fill = b->buffer; /* empty */
bfile.strm.next_out = (char *)b->next_to_fill;
bfile.strm.avail_out = b->end - b->next_to_fill;
- }
+ }
return(0);
}
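The comment touched above describes dumping the <mediawiki> header up through the </siteinfo> close tag. A toy, in-memory sketch of that boundary test (hypothetical helper; the real code scans incrementally over decompressed buffers rather than a single NUL-terminated string):

```c
#include <string.h>

/* Return the number of bytes of the header up to and including the
   closing </siteinfo> tag, or -1 if the tag is not present in buf.
   buf must be NUL-terminated. Hypothetical sketch, not the tool's code. */
static long header_length_through_siteinfo(const char *buf) {
    static const char close_tag[] = "</siteinfo>";
    const char *p = strstr(buf, close_tag);
    if (!p) return -1;
    return (long)(p - buf) + (long)(sizeof(close_tag) - 1);
}
```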
diff --git a/xmldumps-backup/mwbzutils/dumplastbz2block.c b/xmldumps-backup/mwbzutils/dumplastbz2block.c
index 0ef1696..03b14a3 100644
--- a/xmldumps-backup/mwbzutils/dumplastbz2block.c
+++ b/xmldumps-backup/mwbzutils/dumplastbz2block.c
@@ -143,7 +143,7 @@
b->next_to_fill = b->buffer; /* empty */
bfile.strm.next_out = (char *)b->next_to_fill;
bfile.strm.avail_out = b->end - b->next_to_fill;
- }
+ }
close(fin);
exit(0);
}
diff --git a/xmldumps-backup/mwbzutils/findpageidinbz2xml.c b/xmldumps-backup/mwbzutils/findpageidinbz2xml.c
index 4968eb2..94ed14d 100644
--- a/xmldumps-backup/mwbzutils/findpageidinbz2xml.c
+++ b/xmldumps-backup/mwbzutils/findpageidinbz2xml.c
@@ -70,11 +70,11 @@
exit(-1);
}
-/*
- find the first bz2 block marker in the file,
+/*
+ find the first bz2 block marker in the file,
from its current position,
- then set up for decompression from that point
- returns:
+ then set up for decompression from that point
+ returns:
0 on success
-1 if no marker or other error
*/
@@ -110,7 +110,7 @@
regex_t compiled_base_expr;
/* <base>http://el.wiktionary.org/wiki/...</base> */
/* <base>http://trouble.localdomain/wiki/ */
- char *base_expr = "<base>http://([^/]+)/";
+ char *base_expr = "<base>http://([^/]+)/";
int length=5000; /* output buffer size */
buf_info_t *b;
@@ -138,7 +138,7 @@
/* so someday the header might grow enough that <base> isn't in the first
1000 characters but we'll ignore that for now */
if (bfile.bytes_read && b->bytes_avail > 1000) {
/* get project name and language name from the file header
- format:
+ format:
<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.5/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mediawiki.org/xml/export-0.5/ http://www.mediawiki.org/xml/export-0.5.xsd" version="0.5" xml:lang="el">
<siteinfo>
<sitename>Βικιλεξικό</sitename>
@@ -191,9 +191,9 @@
It scans through the entire file looking for the page id which corresponds
to the revision id. This can take up to 5 minutes for the larger
stub history files; clearly we don't want to do this unless we
- have no other option.
+ have no other option.
we need this in the case where the page text is huge (eg en wp pageid 5137507
- which has a cumulative text length across all revisions of > 163 GB.
+ which has a cumulative text length across all revisions of > 163 GB.
This can take over two hours to uncompress and scan through looking for
the next page id, so we cheat */
long int get_page_id_from_rev_id_via_stub(long int rev_id, char *stubfile) {
@@ -243,9 +243,9 @@
}
/* returns pageid, or -1 on error. this requires network access,
- it does an api call to the appropriate server for the appropriate project
+ it does an api call to the appropriate server for the appropriate project
we need this in the case where the page text is huge (eg en wp pageid 5137507
- which has a cumulative text length across all revisions of > 163 GB.
+ which has a cumulative text length across all revisions of > 163 GB.
This can take over two hours to uncompress and scan through looking for
the next page id, so we cheat */
int get_page_id_from_rev_id_via_api(long int rev_id, int fin) {
@@ -257,7 +257,7 @@
char *api_call = "/w/api.php?action=query&format=xml&revids=";
regmatch_t *match_page_id_expr;
regex_t compiled_page_id_expr;
- char *page_id_expr = "<pages><page pageid=\"([0-9]+)\"";
+ char *page_id_expr = "<pages><page pageid=\"([0-9]+)\"";
hostname = get_hostname_from_xml_header(fin);
if (!hostname) {
@@ -278,8 +278,8 @@
return(-1);
}
else {
- /* dig the page id out of the buffer
- format:
+ /* dig the page id out of the buffer
+ format:
<?xml version="1.0"?><api><query><pages><page pageid="6215" ns="0"
title="hystérique" /></pages></query></api>
*/
match_page_id_expr = (regmatch_t *)malloc(sizeof(regmatch_t)*3);
@@ -294,8 +294,8 @@
}
}
-/*
- get the first page id after position in file
+/*
+ get the first page id after position in file
if a pageid is found, the structure pinfo will be updated accordingly
use_api nonzero means that we will fallback to ask the api about a page
that contains a given rev_id, in case we wind up with a huge page which
@@ -311,7 +311,7 @@
regex_t compiled_page, compiled_page_id, compiled_rev, compiled_rev_id;
int length=5000; /* output buffer size */
char *page = "<page>";
- char *page_id = "<page>\n[ ]+<title>[^<]+</title>\n([ ]+<ns>[0-9]+</ns>\n)?[ ]+<id>([0-9]+)</id>\n";
+ char *page_id = "<page>\n[ ]+<title>[^<]+</title>\n([ ]+<ns>[0-9]+</ns>\n)?[ ]+<id>([0-9]+)</id>\n";
char *rev = "<revision>";
char *rev_id_expr = "<revision>\n[ ]+<id>([0-9]+)</id>\n";
@@ -373,7 +373,7 @@
}
else {
/* should never happen */
- fprintf(stderr,"regex gone bad...\n");
+ fprintf(stderr,"regex gone bad...\n");
exit(-1);
}
}
@@ -387,12 +387,12 @@
}
}
- /* this needs to be called if we don't find a page by X tries, or Y buffers read,
- and we need to retrieve a page id from a revision id in the text instead
+ /* this needs to be called if we don't find a page by X tries, or Y buffers read,
+ and we need to retrieve a page id from a revision id in the text instead
where does this obscure figure come from? assume we get at least 2-1 compression ratio,
text revs are at most 10mb plus a little, then if we read this many buffers we should have
at least one rev id in there. 20 million / 5000 or whatever it is, is 4000 buffers full of crap
- hopefully that doesn't take forever.
+ hopefully that doesn't take forever.
*/
if (buffer_count>(20000000/BUFINSIZE) && rev_id) {
if (verbose) fprintf(stderr, "passed retries cutoff for using api\n");
@@ -466,7 +466,7 @@
/* search for pageid in a bz2 file, given start and end offsets to search for
we guess by the most boring method possible (shrink the
- interval according to the value found on the last guess,
+ interval according to the value found on the last guess,
try midpoint of the new interval)
multiple calls of this will get the job done.
interval has left end = right end if search is complete.
@@ -477,23 +477,23 @@
why? because then we can use the output for prefetch
for xml dumps and be sure a specific page range is covered :-P
- return value from guess, or -1 on error.
+ return value from guess, or -1 on error.
*/
int do_iteration(iter_info_t *iinfo, int fin, id_info_t *pinfo, int use_api, int use_stub, char *stubfilename, int verbose) {
int res;
off_t new_position;
off_t interval;
- /*
- last_position is somewhere in the interval, perhaps at an end
+ /*
+ last_position is somewhere in the interval, perhaps at an end
last_value is the value we had at that position
*/
-
+
interval = (iinfo->right_end - iinfo->left_end)/(off_t)2;
if (interval == (off_t)0) {
interval = (off_t)1;
}
- if (verbose)
+ if (verbose)
fprintf(stderr,"interval size is %"PRId64", left end %"PRId64", right end %"PRId64", last val %d\n",interval, iinfo->left_end, iinfo->right_end, iinfo->last_value);
/* if we're this close, we'll check this value and be done with it */
if (iinfo->right_end -iinfo->left_end < (off_t)2) {
@@ -533,7 +533,7 @@
return(iinfo->last_value);
}
/* in theory we were moving towards beginning of file, should not have issues, so bail here */
- else {
+ else {
if (verbose) fprintf(stderr,"something very broken, giving up\n");
return(-1);
}
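The do_iteration comment above describes shrinking the [left_end, right_end] interval around the target page id by probing midpoints. If we model file offsets as indices into a nondecreasing array of page ids (the tool's own monotonicity assumption), the guessing strategy reduces to a plain binary search; a simplified sketch with hypothetical names and none of the real I/O or bit-shifted block handling:

```c
#include <stddef.h>

/* Toy model: the first page id found at/after a given "offset".
   In the real tool this means decompressing a block and regex-matching. */
static int page_id_at(const int *ids, long off) { return ids[off]; }

/* Halve the interval around the target page id, as do_iteration() does
   across calls; returns an offset whose page id equals target, or -1. */
static long find_offset_for_page(const int *ids, long left, long right, int target) {
    while (left <= right) {
        long mid = left + (right - left) / 2;   /* overflow-safe midpoint */
        int val = page_id_at(ids, mid);
        if (val == target) return mid;
        if (val < target) left = mid + 1;       /* guessed too early in file */
        else right = mid - 1;                   /* guessed too late */
    }
    return -1;
}
```

The real search terminates the same way: once right_end - left_end collapses, the interval's value is checked directly and the search is done.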
diff --git a/xmldumps-backup/mwbzutils/httptiny.c b/xmldumps-backup/mwbzutils/httptiny.c
index 37f702a..eb04e57 100644
--- a/xmldumps-backup/mwbzutils/httptiny.c
+++ b/xmldumps-backup/mwbzutils/httptiny.c
@@ -7,7 +7,7 @@
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
-#include <sys/ioctl.h>
+#include <sys/ioctl.h>
#include <string.h>
#include <unistd.h>
@@ -87,7 +87,7 @@
result = select(FD_SETSIZE,&fds,NULL,NULL,timeout);
if (result <= 0) {
perror("read error of some sort (0)");
-
+
}
else {
result=recv(sd,buf+count,length-count,0);
@@ -108,8 +108,8 @@
else result=recv(sd,buf+count,length-count,0);
}
else {
- fprintf(stderr,"%s: can't read from socket\n",whoami);
- perror(whoami);
+ fprintf(stderr,"%s: can't read from socket\n",whoami);
+ perror(whoami);
return(-1);
}
}
@@ -134,7 +134,7 @@
result=send(sd,message,(unsigned int) length,0);
if (result == -1) {
perror("some error, let's see it");
- if (errno!=EAGAIN) {
+ if (errno!=EAGAIN) {
fprintf(stderr,"%s: write to server failed\n",whoami);
perror(whoami);
exit(1);
@@ -142,13 +142,13 @@
}
else break;
}
- return(result);
+ return(result);
}
int doconnect(int *sd,struct timeval *timeout,struct sockaddr_in *sa_us)
{
fd_set fds;
-
+
if ((*sd = socket(AF_INET,SOCK_STREAM,0)) == -1) {
fprintf(stderr, "%s: could not get socket\n",whoami);
perror(whoami);
@@ -169,7 +169,7 @@
exit(1);
}
else if ((connect(*sd,(struct sockaddr *) sa_us,sizeof(*sa_us))== -1)
- && ( errno != EISCONN)) {
+ && ( errno != EISCONN)) {
/* shouldn't in theory but.. */
fprintf(stderr, "%s: connect failed\n",whoami);
perror(whoami);
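httptiny.c above handles the socket plumbing for the api fallback, which requests a URL built from the api_call path seen in findpageidinbz2xml.c. A small sketch of composing such an HTTP/1.0 request (hypothetical helper, not code from this change; the exact request format the tool sends is an assumption here):

```c
#include <stdio.h>
#include <string.h>

/* Compose a minimal HTTP/1.0 GET request for a rev-id api query.
   Returns the number of characters written, or -1 if buf is too small. */
static int build_get_request(char *buf, size_t buflen,
                             const char *host, const char *path, long rev_id) {
    int n = snprintf(buf, buflen,
                     "GET %s%ld HTTP/1.0\r\nHost: %s\r\n\r\n",
                     path, rev_id, host);
    if (n < 0 || (size_t)n >= buflen) return -1;  /* truncated or error */
    return n;
}
```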
diff --git a/xmldumps-backup/mwbzutils/mwbzlib.c b/xmldumps-backup/mwbzutils/mwbzlib.c
index 1a1e5c6..76c6ce0 100644
--- a/xmldumps-backup/mwbzutils/mwbzlib.c
+++ b/xmldumps-backup/mwbzutils/mwbzlib.c
@@ -33,7 +33,7 @@
for (i=0; i<buflen; i++) {
/* left 1 */
buffer[i] = (unsigned char) ((int) (buffer[i]) << numbits);
-
+
/* grab leftmost from next byte */
if (i < buflen-1) {
buffer[i] = ( unsigned char ) ( (unsigned int) buffer[i] | ( ( ((unsigned int) buffer[i+1]) & bit_mask(numbits,MASKLEFT) ) >> (8-numbits) ) );
@@ -48,7 +48,7 @@
for (i=buflen-1; i>=0; i--) {
/* right 1 */
buffer[i] = (unsigned char) ((int) (buffer[i]) >> numbits);
-
+
/* grab rightmost from prev byte */
if (i > 0) {
buffer[i] = ( unsigned char ) ((unsigned int) buffer[i] | ( ((unsigned int) (buffer[i-1])<<(8-numbits)) & bit_mask(numbits,MASKLEFT)));
@@ -78,7 +78,7 @@
return(marker);
}
-/* buff1 is some random bytes, buff2 is some random bytes which we expect to start with the contents of buff1,
+/* buff1 is some random bytes, buff2 is some random bytes which we expect to start with the contents of buff1,
both buffers are bit-shifted to the right "bitsrightshifted". this function compares the two and returns 1 if buff2
matches and 0 otherwise. */
int bytes_compare(unsigned char *buff1, unsigned char *buff2, int numbytes, int bitsrightshifted) {
@@ -152,9 +152,9 @@
fprintf(stderr,"read of file failed\n");
return(-1);
}
- /* must be after 4 byte file header, and we add a leftmost byte to the buffer
+ /* must be after 4 byte file header, and we add a leftmost byte to the buffer
of data read in case some bits have been shifted into it */
- while (bfile->position <= bfile->file_size - 6 && bfile->position >= 0 && bfile->bits_shifted < 0) {
+ while (bfile->position <= bfile->file_size - 6 && bfile->position >= 0 && bfile->bits_shifted < 0) {
bfile->bits_shifted = check_buffer_for_bz2_block_marker(bfile);
if (bfile->bits_shifted < 0) {
if (direction == FORWARD) {
@@ -182,7 +182,7 @@
}
/*
- initializes the bz2 strm structure,
+ initializes the bz2 strm structure,
calls the BZ2 decompression library initializer
returns:
@@ -221,7 +221,7 @@
reads the first 4 bytes from a bz2 file (should be
"BZh" followed by the block size indicator, typically "9")
and passes them into the BZ2 decompression library.
- This must be done before decompression of any block of the
+ This must be done before decompression of any block of the
file is attempted.
returns:
@@ -255,14 +255,14 @@
/*
seek to appropriate offset as specified in bfile,
- read compressed data into buffer indicated by bfile,
+ read compressed data into buffer indicated by bfile,
update the bfile structure accordingly,
save the overflow byte (bit-shifted data = suck)
this is for the *first* buffer of data in a stream,
for subsequent buffers use fill_buffer_to_decompress()
this will set bfile->eof on eof. no other indicator
- will be provided.
+ will be provided.
returns:
0 on success
@@ -299,12 +299,12 @@
return(0);
}
-/*
+/*
set up the marker, seek to right place, get first
buffer of compressed data for processing
bfile->position must be set to desired offset first by caller.
returns:
- -1 if no marker or other error, position of next read if ok
+ -1 if no marker or other error, position of next read if ok
*/
int init_bz2_file(bz_info_t *bfile, int fin, int direction) {
off_t seekresult;
@@ -341,7 +341,7 @@
/*
- read compressed data into buffer indicated by bfile,
+ read compressed data into buffer indicated by bfile,
from current position of file,
stuffing the overflow byte in first.
update the bfile structure accordingly
@@ -351,7 +351,7 @@
setup_first_buffer_to_decompress()
this will set bfile->eof on eof. no other indicator
- will be provided.
+ will be provided.
returns:
0 on success
@@ -376,11 +376,11 @@
return(0);
}
-/* size of buffer is bytes usable. there will be a null byte at the end
+/* size of buffer is bytes usable. there will be a null byte at the end
what we do with the buffer:
- - read from front of buffer to end,
- - fill from point where prev read did not fill buffer, or from where
+ - read from front of buffer to end,
+ - fill from point where prev read did not fill buffer, or from where
move of data at end of buffer to beginning left room,
- mark a string of bytes (starting from what's available to read) as "read"
@@ -468,15 +468,15 @@
return(0);
}
-/*
+/*
fill output buffer in b with uncompressed data from bfile
if this is the first call to the function for this file,
the file header will be read, and the first buffer of
uncompressed data will be prepared. bfile->position
- should be set to the offset (from the beginning of file) from
+ should be set to the offset (from the beginning of file) from
which to find the first bz2 block.
-
- returns:
+
+ returns:
on success, number of bytes read (may be 0)
-1 on error
*/
@@ -486,7 +486,7 @@
if (buffer_is_full(b)) {
return(0);
}
-
+
if (buffer_is_empty(b)) {
b->next_to_fill = b->buffer;
}
@@ -519,10 +519,10 @@
fprintf(stderr, "b->bytes_avail: %ld\n", (long int) b->bytes_avail);
}
-/*
+/*
copy text from end of buffer to the beginning, that we want to keep
around for further processing (i.e. further regex matches)
- returns number of bytes copied
+ returns number of bytes copied
*/
int move_bytes_to_buffer_start(buf_info_t *b, unsigned char *fromwhere, int maxbytes) {
int i, tocopy;
@@ -540,7 +540,7 @@
}
b->next_to_fill = b->buffer + tocopy;
b->next_to_fill[0] = '\0';
- b->next_to_read = b->buffer;
+ b->next_to_read = b->buffer;
b->bytes_avail = tocopy;
return(tocopy);
}
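The shift loops in mwbzlib.c above move a whole buffer left or right by a few bits so that a bit-shifted block marker can be compared byte-by-byte. A minimal left-shift sketch for 1-7 bit shifts (hypothetical helper; the real code also uses bit_mask() and has a matching right-shift variant):

```c
/* Shift a byte buffer left by numbits (1-7), pulling the vacated bits
   in from the following byte, in the spirit of the mwbzlib.c helpers.
   The final byte's low bits are simply zero-filled. */
static void shift_buffer_left(unsigned char *buffer, int buflen, int numbits) {
    for (int i = 0; i < buflen; i++) {
        buffer[i] = (unsigned char)(buffer[i] << numbits);
        if (i < buflen - 1)                       /* borrow from next byte */
            buffer[i] |= (unsigned char)(buffer[i + 1] >> (8 - numbits));
    }
}
```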
diff --git a/xmldumps-backup/mwbzutils/recompressxml.c b/xmldumps-backup/mwbzutils/recompressxml.c
index 894af61..14b1519 100644
--- a/xmldumps-backup/mwbzutils/recompressxml.c
+++ b/xmldumps-backup/mwbzutils/recompressxml.c
@@ -155,7 +155,7 @@
else return 0;
}
- /* normal check for end of page, end of content */
+ /* normal check for end of page, end of content */
if (!strcmp(buf,pageCloseTag) || !strcmp(buf,mediawikiCloseTag)) return 1;
else return 0;
}
diff --git a/xmldumps-backup/mwbzutils/writeuptopageid.c b/xmldumps-backup/mwbzutils/writeuptopageid.c
index 858fcb0..1ccbb0a 100644
--- a/xmldumps-backup/mwbzutils/writeuptopageid.c
+++ b/xmldumps-backup/mwbzutils/writeuptopageid.c
@@ -61,10 +61,10 @@
}
/* note that even if we have only read a partial line
- of text from the body of the page, (cause the text
- is longer than our buffer), it's fine, since the
+ of text from the body of the page, (cause the text
+ is longer than our buffer), it's fine, since the
<> delimiters only mark xml, they can't appear
- in the page text.
+ in the page text.
returns new state */
States setState (char *line, States currentState, int startPageID, int endPageID) {
@@ -168,7 +168,7 @@
char line[4097];
/* order of magnitude of 2K lines of 80 chrs each,
no header of either a page nor the mw header should
- ever be longer than that. At least not for some good
+ ever be longer than that. At least not for some good
length of time. */
char mem[MAXHEADERLEN];
@@ -202,9 +202,9 @@
errno = 0;
startPageID = strtol(argv[optind], &nonNumeric, 10);
- if (startPageID == 0 ||
+ if (startPageID == 0 ||
*nonNumeric != 0 ||
- nonNumeric == (char *) &startPageID ||
+ nonNumeric == (char *) &startPageID ||
errno != 0) {
usage("The value you entered for startPageID must be a positive integer.");
exit(-1);
@@ -212,15 +212,15 @@
optind++;
if (optind < argc) {
endPageID = strtol(argv[optind], &nonNumeric, 10);
- if (endPageID == 0 ||
+ if (endPageID == 0 ||
*nonNumeric != 0 ||
- nonNumeric == (char *) &endPageID ||
+ nonNumeric == (char *) &endPageID ||
errno != 0) {
usage("The value you entered for endPageID must be a positive integer.\n");
exit(-1);
}
}
-
+
while (fgets(line, sizeof(line)-1, stdin) != NULL) {
text=line;
while (*text && isspace(*text))
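The startPageID/endPageID checks above show the strtol validation pattern the tool relies on: inspect errno, the end pointer, and the parsed value itself. A self-contained sketch of that pattern (hypothetical helper, not the program's own code):

```c
#include <errno.h>
#include <stdlib.h>

/* Parse a positive page id from a command-line string.
   Returns the id, or -1 on any failure (non-numeric text, trailing
   garbage, overflow, zero, or negative input). */
static long parse_page_id(const char *arg) {
    char *end = NULL;
    errno = 0;
    long val = strtol(arg, &end, 10);
    if (errno != 0       /* ERANGE etc. */
        || end == arg    /* no digits consumed at all */
        || *end != '\0'  /* trailing non-numeric characters */
        || val <= 0)     /* page ids are positive */
        return -1;
    return val;
}
```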
--
To view, visit https://gerrit.wikimedia.org/r/338519
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ifda9eaa02c1edb083ddebd3ba17866e95729bc41
Gerrit-PatchSet: 1
Gerrit-Project: operations/dumps/mwbzutils
Gerrit-Branch: master
Gerrit-Owner: ArielGlenn <[email protected]>
_______________________________________________
MediaWiki-commits mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits