On 2.2.2013 0:09, Devlin, Alex wrote:
Quick follow-up on this discussion (sorry that it's a bit late): turns
out there was a cron job from one user that was so resource-intensive
it consistently burned through all the memory on the server. It never
showed up in "top" due to a strange permissions issue... how
embarrassing...
:-D The usual enterprise miscommunication ... what's embarrassing is that
we still haven't learned how to talk to each other ...
<advert (but no pun intended)> this probably wouldn't happen with
Solaris, where ptree under root/Administrator just works ;) </advert>
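(For the curious, a sketch of the Solaris way to spot such a hog - run it
as root so no permissions quirk can hide anything; the user name is made up:)

# the whole process tree of the suspect cron job, regardless of owner
ptree $(pgrep -u someuser cron)
# top memory consumers across all users, sorted by resident set size
prstat -s rss -n 10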
I do have one extra question/concern about this though. We also
noticed in the example configuration at
http://src.opensolaris.org/source/xref/opengrok/trunk/doc/EXAMPLE.txt
there's a setup for running index generation on one server and hosting
on another. Is it mandatory to use the advanced configuration in such a
setting? Has anyone tried to just use the simple configuration across
multiple machines?
Not mandatory at all - the advanced setup is about keeping the window
between the sources being refreshed on disk and the indexes being
refreshed as small as possible.
There are many ways to achieve this always-in-sync state, or you can
just bite the bullet and tell users the source will be out of sync with
the indexes, e.g. every day from 10-11 (while the indexing runs).
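If you go the bite-the-bullet route, something like the sketch below is
enough (paths and hostname are made up, and I'm assuming the stock
opengrok.jar indexer invocation from EXAMPLE.txt):

# on the indexing box, with sources mirrored locally under /opengrok/src
java -Xmx2048m -jar opengrok.jar -s /opengrok/src -d /opengrok/data

# push the finished index (and the sources, so the xrefs resolve)
# to the box running the webapp; --delete drops stale files
rsync -a --delete /opengrok/data/ webhost:/opengrok/data/
rsync -a --delete /opengrok/src/ webhost:/opengrok/src/

Users just see stale (or briefly inconsistent) results while the rsync
runs - that is exactly the window the advanced setup shrinks.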
I found a neat way to do this using ZFS datasets for the source and data
and renaming them when the index refresh completes - I guess with dedup,
snapshots, and cloning it doesn't even take that much space.
(Again, no pun intended - I have no clue how NetApp storage works, but if
you explained it I'd be happy to read/listen and learn something new;
maybe you have even better features that could be used for this purpose.)
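Roughly like the following (dataset names are made up; the webapp is
pointed at the data-live mountpoint):

# clone the live dataset cheaply and run the full reindex into the clone
zfs snapshot tank/grok/data-live@pre-reindex
zfs clone tank/grok/data-live@pre-reindex tank/grok/data-staging
# ... reindex against /tank/grok/data-staging ...

# swap the names once the reindex succeeds
zfs rename tank/grok/data-live tank/grok/data-old
zfs rename tank/grok/data-staging tank/grok/data-live

# the new dataset is a clone, so promote it before destroying the old one
zfs promote tank/grok/data-live
zfs destroy -r tank/grok/data-old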
Then the outage will equal the time your webapp container needs to
restart itself / reload the OpenGrok webapp ... which is usually seconds.
Also, I tried indexing on top of an NFS filesystem just once, and it is
a very bad idea (even if the boxes are on the same LAN and close to each
other, e.g. one hop apart).
Indexing is quite I/O intensive, and I guess we will make it even more
I/O intensive if I can prove the feasibility of making the analyzers
stop caching files in memory and use Java readers all the time (the
analyzer would then read each file twice - once for the xref, once for
Lucene - instead of the current read-once-and-cache).
The Oracle internal showcase grok service has some FC storage, I think;
still, the full reindex takes a day or two (mostly because of remote
history queries).
I'd be curious how long it takes for your sources to get indexed.
fingers crossed
L
Thanks again,
-Alex
From: Rodrigo Chiossi [mailto:rodrigochio...@gmail.com]
Sent: Thursday, January 17, 2013 4:02 AM
To: Lubos Kosco
Cc: Devlin, Alex; opengrok-discuss@opensolaris.org
Subject: Re: [opengrok] Opengrok 11.1 Memory Leak
I have a production server running with 150GB of source + indexes. The
server has 16GB of RAM and moderate load. I haven't experienced any
memory issues so far. Actually, that RAM is mostly used for indexing;
for daily operation it hardly gets past the 6GB mark.
On Thu, Jan 17, 2013 at 6:15 AM, Lubos Kosco <lubos.ko...@oracle.com> wrote:
On 16.1.2013 23:46, Devlin, Alex wrote:
Thanks for the quick replies, Vladimir and Lubos!
It sounds like this problem is unique to our configuration. I'm using
MAT, jmap, and JMX to try to track down any useful information now.
Also, one other question: how much RAM does your in-house machine have?
Our production server currently has 8GB, but we're concerned that this
may not be enough (even without the leak).
(not sure if JMX will help ;) )
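mat plus jmap should be plenty, though - for the record, the usual
incantations (the PID is whatever your container runs as):

# quick look: histogram of live objects, biggest classes first
jmap -histo:live <pid> | head -30
# full heap dump to load into mat
jmap -dump:live,format=b,file=/var/tmp/tomcat.hprof <pid>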
Well, due to ZFS and the other (file-serving) purpose of that box, the
OpenGrok server has 48GB of memory.
It's a busy server too, hence tomcat6 was tuned a bit.
I can see the tomcat6 instance currently eats 5GB of RAM (so it
might have been set to more than 4GB).
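(The tuning itself is nothing fancy - heap settings in Tomcat's setenv
script; the numbers below are just a ballpark for a box like this, not a
recommendation:)

# $CATALINA_BASE/bin/setenv.sh - sourced by catalina.sh at startup
CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx4g"
# a heap dump on OOM gives jmap/mat something to chew on
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
CATALINA_OPTS="$CATALINA_OPTS -XX:HeapDumpPath=/var/tmp/tomcat-oom.hprof"
export CATALINA_OPTS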
Note that we are talking about 70GB of sources indexed. The indexes
themselves are around 85GB.
As noted previously, I will try to decrease this memory footprint
(at the cost of disk I/O), so I hope the next version will
dramatically decrease its memory usage (indexing might take 5% longer,
though; we'll see once I have some benchmarks - it could even stay the
same, depending on OS caches and how good the new Lucene 4.0 indexing is).
hth
L
Thanks again,
-Alex
-----Original Message-----
From: opengrok-discuss-boun...@opensolaris.org
[mailto:opengrok-discuss-boun...@opensolaris.org] On Behalf Of
Vladimir Kotal
Sent: Wednesday, January 16, 2013 1:40 AM
To: opengrok-discuss@opensolaris.org
Subject: Re: [opengrok] Opengrok 11.1 Memory Leak
On 01/16/13 09:45, Lubos Kosco wrote:
<snip>
Get to 0.11.1 - I think no reindex is needed, and it should contain some
fixes. We use 0.11.1 in-house (not with Perforce, but with 4 other SCMs)
and don't see any of this AFAIK. (Vlada, any comments from your side?)
It's been very solid. We run 0.11.1 under 64-bit Tomcat 6 and JavaDB
(both shipped with Solaris 11+), serving a variety of Mercurial,
Teamware, SCCS, SVN, and CVS repositories (mirrored and indexed daily).
It's been running like this since 0.11.1 came out.
It looks like this right now:
$ svcs -p tomcat6
STATE          STIME    FMRI
online         Dec_14   svc:/network/http:tomcat6
               Dec_14      902 java
$ ps -yfl -p 902
 S      UID   PID  PPID  C PRI NI     RSS      SZ WCHAN  STIME TTY      TIME CMD
 S webservd   902     1  0  40 20 4926672 5704004     ? Dec 14 ?     148:04 /usr/jdk/instances/jdk1.6.0/bin/amd
No sign of memory leaks; the last restart was purely administrative.
v.
_______________________________________________
opengrok-discuss mailing list
opengrok-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/opengrok-discuss