Bugs item #2017862, was opened at 2008-07-14 12:41
Message generated for change (Comment added) made by jflokstra
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=482468&aid=2017862&group_id=56967

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: PF/runtime
Group: Pathfinder CVS Head
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Ying Zhang (yingying)
Assigned to: Jan Flokstra (jflokstra)
Summary: PF: shredding empty file causes MXQ to crash

Initial Comment:
Shredding an empty file using pf:add-doc() in mclient crashes Mserver:

In terminal 1:

$ Mserver --dbinit="module(pathfinder);"
# MonetDB Server v4.24.1
# based on GDK   v1.24.1
# Copyright (c) 1993-2008, CWI. All rights reserved.
# Compiled for x86_64-unknown-linux-gnu/64bit with 64bit OIDs; dynamically linked.
# Visit http://monetdb.cwi.nl/ for further information.
# PF/Tijah module v0.5.0 loaded. http://dbappl.cs.utwente.nl/pftijah
# MonetDB/XQuery module v0.24.1 loaded (default back-end is 'milprint_summer')
# XRPC administrative console at http://127.0.0.1:50001/admin
MonetDB>Segmentation fault

In terminal 2:
$ mclient -lx
xquery>pf:add-doc("/ufs/zhang/tmp/empty.xml", "empty.xml")
more><>
MAPI  = [EMAIL PROTECTED]:50000
ACTION= read_line
QUERY = pf:add-doc("/ufs/zhang/tmp/empty.xml", "empty.xml")
ERROR = Connection terminated

The crash happens in shredder.mx.

An empty file is not a valid XML document; however, I think it should not cause 
Mserver to crash with a segmentation fault. Shredding the same document with the 
MIL shred_doc() command results (correctly) in an error:

MonetDB>shred_doc("/ufs/zhang/tmp/empty.xml", "empty.xml");
!ERROR: Document is empty
!ERROR: Start tag expected, '<' not found
!ERROR: [shred_url]: 1 times inserted nil due to errors at tuples [EMAIL PROTECTED]
!ERROR: [shred_url]: first error was:
!ERROR: emit_tuple: node.level(18831648) >= XML_DEPTH_MAX(128)
!ERROR: CMDshred_url: operation failed.
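
For comparison, here is a minimal standalone check with libxml2 (hypothetical,
and not the shredder's actual SAX-based code path): parsing the empty file with
the tree API just returns NULL and reports "Document is empty", so the segfault
presumably comes from how the shredder continues after the failed parse rather
than from the parser itself.

/*
 * Hypothetical standalone check (not the shredder's SAX-based code path).
 * Build with: gcc check_empty.c $(xml2-config --cflags --libs)
 */
#include <stdio.h>
#include <libxml/parser.h>

int
main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "empty.xml";

    xmlDocPtr doc = xmlReadFile(path, NULL, 0);
    if (doc == NULL) {
        /* For an empty file libxml2 reports "Document is empty" here. */
        const xmlError *err = xmlGetLastError();
        fprintf(stderr, "parse failed: %s",
                err && err->message ? err->message : "unknown error\n");
        xmlCleanupParser();
        return 1;
    }

    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}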


Kind regards,

Jennie

----------------------------------------------------------------------

>Comment By: Jan Flokstra (jflokstra)
Date: 2008-10-14 16:37

Message:
I had a look at this a couple of weeks ago and it did not crash for me on
my SuSE 10.3 system. I now have the newest HEAD and it still does not crash.
I think I will just have to guess what goes wrong and anticipate it. I
will try to solve it this week (in the HEAD).

----------------------------------------------------------------------

Comment By: Stefan Manegold (stmane)
Date: 2008-10-14 16:27

Message:
Jan,

could you please have a look at this one?

It seems to work fine in the stable release, but fails in the development
trunk ...

Thanks!

Stefan


----------------------------------------------------------------------

Comment By: Lefteris Sidirourgos (lsidir)
Date: 2008-08-12 16:07

Message:
I did some looking around for this bug. Although the original bug report was
for milprint_summer, the same behavior is observed with the algebra back-end
(since it is a runtime error anyway). The stack trace from gdb looks like:

 0x00002aaab9bb22f5 in emit_tuple (shredCtx=0x18a1508,
pre=15842497851538791387, size=606348325, level=-606348325, prop=5185,
kind=48 '0', nid=15842497851538791387)
    at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:574
574         shredCtx->dstBAT[PRE_SIZE].cast.intCAST[pre] = size;
Missing separate debuginfos, use: debuginfo-install bzip2.x86_64
e2fsprogs.x86_64 glibc.x86_64 keyutils.x86_64 krb5.x86_64 libselinux.x86_64
libxml2.x86_64 openssl.x86_64 pcre.x86_64 zlib.x86_64
(gdb) bt
#0  0x00002aaab9bb22f5 in emit_tuple (shredCtx=0x18a1508,
pre=15842497851538791387, size=606348325, level=-606348325, prop=5185,
kind=48 '0', nid=15842497851538791387)
    at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:574
#1  0x00002aaab9bb23c5 in emit_node (shredCtx=0x18a1508, node=0x18ba360)
at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:626
#2  0x00002aaab9bb35be in shred_end_document (xmlCtx=0x18a1508) at
/home/lsidir/develop//current/pathfinder/runtime/shredder.mx:822
#3  0x00000038aae48cc8 in xmlParseDocument () from
/usr/lib64/libxml2.so.2
#4  0x00002aaab9bb4fdb in shredder_parse (shredCtx=0x18a1508,
location=0x1006370 "/export/scratch0/lsidir/xmldocs/empty.xml", buffer=0x0,
s=0x0)
    at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:1479
#5  0x00002aaab9bb71d9 in shred (docBAT=0x127c8c8, location=0x1006370
"/export/scratch0/lsidir/xmldocs/empty.xml", buffer=0x0, s=0x0,
percentage=0, serFun=0x0, serCtx=0x0, collLock=0x427f8890)
    at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:1985
#6  0x00002aaab9bb7422 in CMDshred_url (docBAT=0x127c8c8,
location=0x1006370 "/export/scratch0/lsidir/xmldocs/empty.xml",
percentage=0x427f8880, collLock=0x427f8890, verbose=0x427f88a0 "")
    at /home/lsidir/develop//current/pathfinder/runtime/shredder.mx:2013
#7  0x00002aaab9b70753 in CMDshred_url_unpack524513197 (argc=6,
argv=0x427f8850) at pf_support.glue.c:52


You have to try 3-4 times to shred the same empty document before you get the
segmentation fault. The segmentation fault occurs when the callback function
"shred_end_document" is called by libxml2, so it might be caused by some
leftovers from previous runs that have not been freed correctly.
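
For illustration, below is a hypothetical, heavily simplified sketch -- NOT the
real shredder.mx code; the struct layout, the PRE_SIZE index, the capacity field
and the function signature are assumptions -- of the kind of guard in
emit_tuple() that would reject a tuple with a corrupt level or pre value and
report an error (as the MIL path does with its XML_DEPTH_MAX check) instead of
writing through dstBAT with a garbage index.

/*
 * Hypothetical, heavily simplified sketch -- NOT the real shredder.mx code.
 * Struct layout, PRE_SIZE, NUM_COLUMNS, capacity and the signature are
 * assumptions for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

#define XML_DEPTH_MAX 128        /* limit named in the MIL error message    */
#define PRE_SIZE      1          /* placeholder column index (assumption)   */
#define NUM_COLUMNS   4          /* placeholder column count (assumption)   */

typedef struct {
    int    *intCAST;             /* destination column storage (simplified) */
    size_t  capacity;            /* number of slots actually allocated      */
} column_t;

typedef struct {
    column_t dstBAT[NUM_COLUMNS];   /* stand-in for the real BAT array      */
} shred_ctx_t;

/* Returns 0 on success, -1 if the tuple is rejected as corrupt. */
static int
emit_tuple_checked(shred_ctx_t *ctx, unsigned long long pre, int size, int level)
{
    if (level < 0 || level >= XML_DEPTH_MAX) {
        fprintf(stderr, "emit_tuple: node.level(%d) out of range\n", level);
        return -1;
    }
    if (pre >= ctx->dstBAT[PRE_SIZE].capacity) {
        fprintf(stderr, "emit_tuple: pre index %llu exceeds capacity %zu\n",
                pre, ctx->dstBAT[PRE_SIZE].capacity);
        return -1;
    }
    /* Simplified form of the write at shredder.mx:574 (frame #0 above). */
    ctx->dstBAT[PRE_SIZE].intCAST[pre] = size;
    return 0;
}

int
main(void)
{
    shred_ctx_t ctx = {0};
    ctx.dstBAT[PRE_SIZE].capacity = 16;
    ctx.dstBAT[PRE_SIZE].intCAST  = calloc(16, sizeof(int));

    /* Values resembling the garbage from the backtrace for the empty doc. */
    if (emit_tuple_checked(&ctx, 15842497851538791387ULL,
                           606348325, -606348325) != 0)
        fprintf(stderr, "tuple rejected, no crash\n");

    free(ctx.dstBAT[PRE_SIZE].intCAST);
    return 0;
}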

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=482468&aid=2017862&group_id=56967
