Hi Ivan,

It seems that this query is the problem:

document("$documents")/documents/collection[@name="vpDita/technicalSummary"]/document[@name="ts_vp_BUK7Y4R8-60E.html"]

Can you try running the same database in an isolated environment (no external
workload), run doc('$documents') first, and then this query?
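
For instance (a sketch; the database name nxp is taken from the file listing
further down the thread):

    $ se_term nxp
    > doc('$documents')
    > document("$documents")/documents/collection[@name="vpDita/technicalSummary"]/document[@name="ts_vp_BUK7Y4R8-60E.html"]

If doc('$documents') alone already makes setmp grow, the problem is likely in
materializing that system document rather than in your path expression.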

Also, there is a possibility that the database is corrupted (after the first
'No space left' error occurred). Try restoring it from a backup or do an
se_exp export/restore.
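
A rough sketch of the se_exp route (I am writing the commands from memory, so
please check se_exp -help and the Admin Guide for the exact syntax):

    se_exp export nxp /path/to/export/dir/
    se_ddb nxp                              (drop the old database)
    se_cdb nxp                              (create a fresh one)
    se_exp import nxp /path/to/export/dir/

The import should go into the freshly created database, not the possibly
corrupted one.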

Ivan

On Fri, Feb 15, 2013 at 2:51 PM, Ivan Lagunov <lagi...@gmail.com> wrote:

> Hi Ivan,
>
> Here is the zipped event.log file.
>
> Best regards,
>
> Ivan Lagunov
>
>
> From: Ivan Lagunov [mailto:lagi...@gmail.com]
> Sent: Friday, February 15, 2013 11:50 AM
> To: Robby Pelssers; 'Ivan Shcheklein'
> Cc: sedna-discussion@lists.sourceforge.net
> Subject: RE: [Sedna-discussion] question regarding data files
> [extremely big]
>
> Hi Ivan,
>
>
> Could you check the attached event.log and help us resolve the issue?
>
> The database was restarted at about 11:00 due to this fatal error:
>
>
> SYS   15/02/2013 10:58:11 (SM nxp pid=27220) [uhdd.c:uWriteFile:237]:
> write (code = 28): No space left on device
>
> FATAL 15/02/2013 10:58:11 (SM nxp pid=27220)
> [bm_core.cpp:write_block_addr:233]: Cannot write block
>
> INFO  15/02/2013 11:00:01 (GOV pid=28048)
> [gov_functions.cpp:log_out_system_information:87]: SEDNA version is 3.5.161
> (64bit Release)
>
> INFO  15/02/2013 11:00:01 (GOV pid=28048)
> [gov_functions.cpp:log_out_system_information:95]: System: Linux
> 2.6.18-238.12.1.el5 x86_64
>
>
> It must have crashed at that moment and then been restarted automatically,
> since we have a script that checks and starts the database every 5 minutes.
>
> Then the tmp file started growing back towards 46 GB at about 11:08:
>
>
> LOG   15/02/2013 11:08:22 (SM nxp pid=28081)
> [blk_mngmt.cpp:extend_tmp_file:629]: Temp file has been extended, size:
> c800000
>
> LOG   15/02/2013 11:08:24 (SM nxp pid=28081)
> [blk_mngmt.cpp:extend_tmp_file:629]: Temp file has been extended, size:
> 12c00000
>
> LOG   15/02/2013 11:08:25 (SM nxp pid=28081)
> [blk_mngmt.cpp:extend_tmp_file:629]: Temp file has been extended, size:
> 19000000
>
>
> So it happens regularly now: Sedna goes down, gets restarted, and the file
> starts growing towards 46 GB again. It looks like there is some huge
> transaction that cannot complete. Is there any way to locate and kill that
> transaction, or otherwise resolve this and prevent the growth?
>
> Best regards,
>
> Ivan Lagunov
>
>
> From: Robby Pelssers [mailto:robby.pelss...@nxp.com]
> Sent: Friday, February 15, 2013 11:27 AM
> To: Ivan Shcheklein
> Cc: sedna-discussion@lists.sourceforge.net
> Subject: Re: [Sedna-discussion] question regarding data files
> [extremely big]
>
> Hi Ivan,
>
>
> Issue solved indeed. Thanks again for the quick reply!
>
> Robby
>
>
> From: Ivan Shcheklein [mailto:shchekl...@gmail.com]
> Sent: Friday, February 15, 2013 10:52 AM
> To: Robby Pelssers
> Cc: sedna-discussion@lists.sourceforge.net
> Subject: Re: [Sedna-discussion] question regarding data files
> [extremely big]
>
> Robby,
>
>
> Thanks for the quick reply. I was looking into the admin guide
> http://www.sedna.org/adminguide/AdminGuidesu3.html and noticed that it's
> possible to set the max file size during creation.
>
>
> Yes, it's possible, but I think transactions will be rolled back if they
> fail to extend setmp. In that case you will need to restart (not recreate,
> just restart with the se_smsd and se_sm commands) the database anyway.
> Probably we should implement a vacuum() function which at least trims
> setmp without the need to restart the database.
>
>
> But what would be the procedure to accomplish this, as our database is
> obviously already created?
>
>
> To clean setmp you don't need to recreate the database: just restart it
> with se_smsd and se_sm.
>
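> Concretely, something like this (nxp taken from your file listing;
> the governor, se_gov, is assumed to be running already):
>
>     se_smsd nxp
>     se_sm nxp
>
> setmp should be back at its default size once se_sm has started.
>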
>
> I just want to make sure I don't miss anything here, as obviously we want
> the data, XQuery libraries and indexes to be fully restored after this
> procedure.
>
>
> Just try it (if you really want to see whether it's possible to shrink the
> 'sedata' file at all). se_exp should work fine in most cases. Just one note
> from the documentation:
>
>
> "... current version of the se_exp utility doesn't support
> exporting/importing triggers, documents with multiple roots and empty
> documents (empty document nodes and document nodes with multiple root
> elements are allowed by the XQuery data model but cannot be serialized as
> is without modifications)."
>
>
> Ivan
>
>
> From: Ivan Shcheklein [mailto:shchekl...@gmail.com]
> Sent: Friday, February 15, 2013 10:23 AM
> To: Robby Pelssers
> Cc: sedna-discussion@lists.sourceforge.net
> Subject: Re: [Sedna-discussion] question regarding data files
> [extremely big]
>
> Hi Robby,
>
>
> Actually, you can. Just restart the database: "setmp" should be restored
> to its default size.
>
>
> To get statistics, try:
>
>    - the $schema_<name> document – the descriptive schema of the document
>      or collection named <name>;
>    - the $document_<name> document – statistical information about the
>      document named <name>;
>    - the $collection_<name> document – statistical information about the
>      collection named <name>.
>
> Details here: http://sedna.org/progguide/ProgGuidesu8.html#x14-580002.5.6
>
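> For example, for a collection created under the (hypothetical) name
> 'mycol', you would query:
>
>     doc('$collection_mycol')   (: statistics for the collection :)
>     doc('$schema_mycol')       (: its descriptive schema :)
>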
>
> You can also try to determine whether your data (sedata) file is
> fragmented: use se_exp to export and then import the data into a clean
> database, and compare the new sedata size with the old one.
>
>
> Ivan Shcheklein,
> Sedna Team
>
> On Fri, Feb 15, 2013 at 1:16 PM, Robby Pelssers <robby.pelss...@nxp.com>
> wrote:
>
> Hi all,
>
> Some files seem to be growing out of proportion for the database nxp. It
> surprises me, as I don't expect we have that much data. To find a possible
> bottleneck, or get more insight into which collection might be this big,
> how can I get some statistics?  And I guess we can't clean up one of these
> files, right?
>
> Thx in advance,
> Robby
>
>
>
> drwxr-xr-x 2 pxprod1 spider        4096 Feb 15 05:40 ./
> drwxr-xr-x 3 pxprod1 spider      983040 Feb 15 08:59 ../
> -rw-rw---- 1 pxprod1 spider     1313106 Feb 15 10:04 nxp.15516.llog
> -rw-rw---- 1 pxprod1 spider 15309275136 Feb 15 10:04 nxp.sedata
> -rw-rw---- 1 pxprod1 spider 31352422400 Feb 15 10:08 nxp.setmp
> pxprod1@nlscli71:/appl/spider_prod/sedna/pxprod1/sedna35/data/nxp_files>
>
>
> pxprod1@nlscli71:/appl/spider_prod/sedna/pxprod1/sedna35/data/nxp_files> du
> -hs *
> 1.3M    nxp.15516.llog
> 15G     nxp.sedata
> 41G     nxp.setmp
>
> ------------------------------------------------------------------------------
> Free Next-Gen Firewall Hardware Offer
> Buy your Sophos next-gen firewall before the end March 2013
> and get the hardware for free! Learn more.
> http://p.sf.net/sfu/sophos-d2d-feb
> _______________________________________________
> Sedna-discussion mailing list
> Sedna-discussion@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/sedna-discussion
>
>
