I have a production server running with 150 GB of sources plus indexes. The
server has 16 GB of RAM and moderate load, and I haven't experienced any
memory issues so far. In fact, that RAM is mostly used during indexing; in
daily operation it hardly reaches the 6 GB mark.
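If you want to see how much of that is actually live heap, jstat gives a
quick read while the indexer runs (a sketch; substitute your JVM's PID):

$ jstat -gcutil <pid> 5s

If the O (old generation) column keeps climbing across full GCs, that's real
retention; if it saw-tooths back down, it's just caching and GC headroom.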

On Thu, Jan 17, 2013 at 6:15 AM, Lubos Kosco <lubos.ko...@oracle.com> wrote:

> On 16.1.2013 23:46, Devlin, Alex wrote:
>
>> Thanks for the quick replies, Vladimir and Lubos!
>>
>> It sounds like this problem is unique to our configuration. I'm using
>> MAT, jmap, and JMX to try to track down any useful information now. Also,
>> one other question: how much RAM does your in-house machine have? Our
>> production server currently has 8 GB, but we're concerned that this may not
>> be enough (even without the leak).
>>
>
> (not sure if JMX will help ;) )
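>
> If you go the jmap + MAT route, something like this usually works (just a
> sketch; substitute your tomcat PID):
>
> # binary heap dump of live objects only, for loading into MAT
> $ jmap -dump:live,format=b,file=/var/tmp/tomcat.hprof <pid>
> # or a quick class histogram without taking a full dump
> $ jmap -histo:live <pid> | head -30
>
> In MAT, the Leak Suspects report is the quickest first pass.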
>
> Well, because of ZFS and the box's other purpose (file serving), the memory
> of the OpenGrok server is 48 GB.
> It's a busy server too, hence tomcat6 was tuned a bit.
> I can see the tomcat6 instance currently eats 5 GB of RAM (so it might have
> been set to more than 4 GB).
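>
> The tuning itself is nothing exotic, just heap flags in CATALINA_OPTS;
> illustrative values below, not necessarily what this box runs:
>
> # e.g. in $CATALINA_HOME/bin/setenv.sh
> CATALINA_OPTS="-Xms2g -Xmx6g -XX:+UseParallelOldGC"
> export CATALINA_OPTS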
> Note that we are talking about 70 GB of sources indexed. The indexes
> themselves are around 85 GB.
>
> As noted previously, I will try to decrease this memory footprint (at the
> cost of disk I/O), so I hope the next version will dramatically decrease its
> memory usage. Indexing might take 5% longer, though; we will see once I have
> some benchmarks. It could even stay the same, depending on OS caches and
> how good the new Lucene 4.0 indexing is.
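>
> (For the curious: the knob involved is essentially Lucene's RAM buffer.
> A rough sketch of the Lucene 4.0 API, not the actual OpenGrok code:
>
> IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_40, analyzer);
> cfg.setRAMBufferSizeMB(16);  // smaller buffer: less heap, more flushes/disk I/O
> IndexWriter writer = new IndexWriter(dir, cfg);
>
> where analyzer and dir are whatever the indexer sets up.)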
>
> hth
>
> L
>
>
>> Thanks again,
>> -Alex
>>
>> -----Original Message-----
>> From: opengrok-discuss-bounces@opensolaris.org
>> [mailto:opengrok-discuss-bounces@opensolaris.org]
>> On Behalf Of Vladimir Kotal
>> Sent: Wednesday, January 16, 2013 1:40 AM
>> To: opengrok-discuss@opensolaris.org
>> Subject: Re: [opengrok] Opengrok 11.1 Memory Leak
>>
>> On 01/16/13 09:45, Lubos Kosco wrote:
>>
>> <snip>
>>
>>  Get to 0.11.1; I think no reindex is needed, and it should contain some
>>> fixes. We use 0.11.1 in-house (not with Perforce, but with 4 other SCMs)
>>> and don't see any of this, AFAIK. (Vlada, any comments from your side?)
>>>
>> It's been very solid. We run 0.11.1 under 64-bit Tomcat6 and JavaDB (both
>> shipped with Solaris 11+), serving a variety of
>> Mercurial, Teamware, SCCS, svn, and CVS repositories (mirrored and indexed
>> daily). It's been running like this since 0.11.1 came out.
>>
>> It looks like this right now:
>>
>> $ svcs -p tomcat6
>> STATE          STIME    FMRI
>> online         Dec_14   svc:/network/http:tomcat6
>>                  Dec_14        902 java
>> $ ps -yfl -p 902
>> S      UID   PID  PPID   C PRI NI   RSS     SZ    WCHAN    STIME TTY
>>        TIME CMD
>> S webservd   902     1   0  40 20 4926672 5704004        ?   Dec 14 ?
>>         148:04 /usr/jdk/instances/jdk1.6.0/bin/amd
>>
>>
>> No sight of memory leaks, the last restart was purely administrative.
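>>
>> If anyone wants to verify the same on their own box, sampling RSS over
>> time is enough; a sketch using the PID above:
>>
>> # sample RSS hourly; a leak shows as steady growth, a cache as a plateau
>> $ while true; do echo "`date` `ps -o rss= -p 902`"; sleep 3600; done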
>>
>>
>> v.
>>
>
>
_______________________________________________
opengrok-discuss mailing list
opengrok-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/opengrok-discuss
