I can reproduce this with a database backup I saved. Unfortunately, I can't trim down the test case for the NegativeArraySizeException: reproducing it requires a 317 MB zip file (for a nearly 1 GB database), an XQuery module, and an XQuery script. I'm happy to share the data and code with you, since it's all public domain or open source, but it's big.
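For what it's worth, the OPTIMIZE ALL failure described below can be driven programmatically as well. A minimal sketch, assuming the BaseXClient example class that ships with the BaseX distribution and a server running on localhost with default credentials:

    import java.io.IOException;

    public class OptimizeRepro {
        public static void main(String[] args) throws IOException {
            // BaseXClient is the example client class bundled with BaseX.
            BaseXClient session = new BaseXClient("localhost", 1984, "admin", "admin");
            try {
                // Restore the backup that exhibits the problem, then open it.
                System.out.println(session.execute("RESTORE deepbills-2013-04-08-15-54-32"));
                System.out.println(session.execute("OPEN deepbills"));
                // The server reports java.lang.ArrayIndexOutOfBoundsException here
                // (surfaced by the client as an IOException).
                System.out.println(session.execute("OPTIMIZE ALL"));
            } finally {
                session.close();
            }
        }
    }

That mirrors the console transcript quoted below.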
The same database produces an ArrayIndexOutOfBoundsException when OPTIMIZE ALL is run on it. I suspected something related to node ID overflows, but:

1. As you can see below, the number of nodes reported by INFO DATABASE is only 25598080, which is nowhere near the limit.
2. The database is larger now, but the problem has not reappeared. I had to revert the DB to an earlier backup (losing about an hour of data), and the same script has been running against it for a week since then without issues.
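A NegativeArraySizeException usually means some size computation wrapped negative in 32-bit int arithmetic, which can happen well before the node count approaches the 2^31-1 ceiling. A throwaway illustration of that failure class (plain Java with a made-up multiplier, not BaseX's actual code):

    public class OverflowSketch {
        public static void main(String[] args) {
            int nodes = 25598080;         // node count from INFO DATABASE below
            // The id ceiling itself is far off: 25598080 / 2147483647 is about 1.2%.
            int size = nodes * 128;       // hypothetical intermediate product in int math
            System.out.println(size);     // prints -1018413056: the product wrapped
            byte[] buf = new byte[size];  // throws java.lang.NegativeArraySizeException
        }
    }

So even with a modest node count, any int-typed "count * recordSize" style expression can go negative.

On Apr 13, 2013, at 2:30 PM, Christian Grün wrote: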
> This could point to a node ID overflow, and may be related to
> Fabrice’s observations, for which I’ve now created a new GitHub issue
> [1]. Feel free to give us an update if you should manage to make this
> error reproducible in some way.
>
> [1] https://github.com/BaseXdb/basex/issues/676
> ___________________________
>
> On Tue, Apr 9, 2013 at 3:10 PM, Francis Avila <[email protected]> wrote:
>> I am not using UPDINDEX=true and I am already on the latest stable
>> version (7.6).
>>
>> This happens when I run OPTIMIZE, though, so perhaps it is related to
>> indexes.
>>
>> On Apr 9, 2013, at 2:06 AM, Fabrice Etanchaud wrote:
>>
>>> Dear Francis,
>>> I experienced similar problems when executing XQuery Update on
>>> collections with UPDINDEX = TRUE. Is that the case for you?
>>> You should give the latest stable version a try.
>>>
>>> Regards,
>>> Fabrice
>>>
>>> -----Original Message-----
>>> From: [email protected]
>>> [mailto:[email protected]] On behalf of Francis Avila
>>> Sent: Monday, April 8, 2013 6:29 PM
>>> To: BaseX
>>> Subject: [basex-talk] NegativeArraySizeException
>>>
>>> Received the following stack trace after running a .bxs script using
>>> basexclient from the command line:
>>>
>>> Version: BaseX 7.6
>>> Java: Sun Microsystems Inc., 1.6.0_27
>>> OS: Linux, amd64
>>> Stack Trace:
>>> java.lang.NegativeArraySizeException
>>> org.basex.io.random.TableDiskAccess.insert(TableDiskAccess.java:390)
>>> org.basex.data.Data.insert(Data.java:931)
>>> org.basex.data.Data.insert(Data.java:820)
>>> org.basex.data.atomic.Insert.apply(Insert.java:31)
>>> org.basex.data.atomic.AtomicUpdateList.applyStructuralUpdates(AtomicUpdateList.java:297)
>>> org.basex.data.atomic.AtomicUpdateList.execute(AtomicUpdateList.java:285)
>>> org.basex.query.up.DatabaseUpdates.apply(DatabaseUpdates.java:183)
>>> org.basex.query.up.ContextModifier.apply(ContextModifier.java:90)
>>> org.basex.query.up.Updates.apply(Updates.java:120)
>>> org.basex.query.QueryContext.update(QueryContext.java:270)
>>> org.basex.query.QueryContext.value(QueryContext.java:255)
>>> org.basex.query.QueryContext.iter(QueryContext.java:240)
>>> org.basex.query.QueryProcessor.iter(QueryProcessor.java:76)
>>> org.basex.core.cmd.AQuery.query(AQuery.java:84)
>>> org.basex.core.cmd.XQuery.run(XQuery.java:22)
>>> org.basex.core.Command.run(Command.java:342)
>>> org.basex.core.Command.exec(Command.java:321)
>>> org.basex.core.Command.execute(Command.java:78)
>>> org.basex.server.ClientListener.run(ClientListener.java:145)
>>>
>>> What the script does is very complex (large XQuery functions,
>>> db:node-pre() indexing and lookups, etc.), so if you need it to
>>> isolate this issue, we can discuss directly.
>>>
>>> However, I get a similar stack trace when I attempt to OPTIMIZE ALL:
>>>
>>>> open deepbills
>>> Database 'deepbills' was opened in 114.99 ms.
>>>> optimize all
>>> Improper use? Potential bug? Your feedback is welcome:
>>> Contact: [email protected]
>>> Version: BaseX 7.6
>>> Java: Sun Microsystems Inc., 1.6.0_27
>>> OS: Linux, amd64
>>> Stack Trace:
>>> java.lang.ArrayIndexOutOfBoundsException: 65
>>> org.basex.util.Compress.pull(Compress.java:156)
>>> org.basex.util.Compress.unpack(Compress.java:112)
>>> org.basex.data.DiskData.txt(DiskData.java:268)
>>> org.basex.data.DiskData.text(DiskData.java:235)
>>> org.basex.io.serial.Serializer.node(Serializer.java:343)
>>> org.basex.io.serial.Serializer.serialize(Serializer.java:99)
>>> org.basex.core.cmd.OptimizeAll$DBParser.parse(OptimizeAll.java:199)
>>> org.basex.build.Builder.parse(Builder.java:73)
>>> org.basex.build.DiskBuilder.build(DiskBuilder.java:90)
>>> org.basex.core.cmd.OptimizeAll.optimizeAll(OptimizeAll.java:124)
>>> org.basex.core.cmd.OptimizeAll.run(OptimizeAll.java:44)
>>> org.basex.core.Command.run(Command.java:342)
>>> org.basex.core.Command.exec(Command.java:321)
>>> org.basex.core.Command.execute(Command.java:78)
>>> org.basex.server.ClientListener.run(ClientListener.java:145)
>>>
>>>> info database
>>> Database Properties
>>> Name: deepbills
>>> Size: 901 MB
>>> Nodes: 25598080
>>> Documents: 15528
>>> Binaries: 0
>>> Timestamp: 2013-04-08-15-02-07
>>>
>>> Resource Properties
>>> Timestamp: 2013-04-08-15-02-07
>>> Encoding: UTF-8
>>> Whitespace Chopping: ON
>>>
>>> Indexes
>>> Up-to-date: false
>>> Text Index: OFF
>>> Attribute Index: OFF
>>> Full-Text Index: OFF
>>> UPDINDEX: OFF
>>> MAXCATS: 100
>>> MAXLEN: 96
>>>
>>> If I restore from a backup (taken after this issue appeared), I still
>>> can't "optimize all", but the stack trace appears truncated:
>>>
>>>> restore deepbills
>>> 'deepbills-2013-04-08-15-54-32.zip' was restored in 40918.34 ms.
>>>> optimize all
>>> No database opened.
>>>> open deepbills
>>> Database 'deepbills' was opened in 136.22 ms.
>>>> optimize all
>>> Improper use? Potential bug? Your feedback is welcome:
>>> Contact: [email protected]
>>> Version: BaseX 7.6
>>> Java: Sun Microsystems Inc., 1.6.0_27
>>> OS: Linux, amd64
>>> Stack Trace:
>>> java.lang.ArrayIndexOutOfBoundsException
>>>
>>> If I restore a backup that is just a little older (by a little over an
>>> hour), it seems fine:
>>>
>>>> restore deepbills-2013-04-08-14-22-22
>>> 'deepbills-2013-04-08-14-22-22.zip' was restored in 38982.91 ms.
>>>> optimize all
>>> Database 'deepbills' was optimized in 129658.46 ms.
>>>
>>> So I have two similar databases, one a 363 MB zip and one a 372 MB
>>> zip, one of which is fine and the other of which seems to encounter
>>> this problem. Could I be looking at database corruption here? If so,
>>> how do I fix it?

--
Francis Avila
Senior Developer
Dancing Mammoth, Inc.
(Formerly PJ Doland Web Design, Inc.)
P: 703.621.0990
E: [email protected]
http://dancingmammoth.com

_______________________________________________
BaseX-Talk mailing list
[email protected]
https://mailman.uni-konstanz.de/mailman/listinfo/basex-talk

