I've only set the minimum memory and have not set a maximum.  Doing
more investigation, I see that my documents have 100+ dynamic fields,
not the 10 fields I quoted earlier.  I also sort against those dynamic
fields often, and I'm reading that this can use a lot of memory.  Could
this be the cause of my problems, and if so, what options do I have to
deal with it?
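
As a rough back-of-envelope (assuming Lucene's FieldCache keeps at
least one 4-byte ordinal per document for every field that gets sorted
on, which is my reading of the docs, not something I've measured):

    65,000,000 docs x 4 bytes    = ~260 MB per sorted field
    ~260 MB x 100 dynamic fields = ~26 GB if all of them get sorted on

So even if only a dozen of those fields ever appear in a sort, the
cache entries alone would eat most of a 3 GB heap.  I also realize both
heap switches are normally set together, e.g. (sizes and launcher
illustrative):

    java -Xms3072M -Xmx3072M -jar start.jar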

On Wed, Aug 17, 2011 at 2:46 PM, Markus Jelsma
<markus.jel...@openindex.io> wrote:

> Keep in mind that a commit warms up another searcher, potentially
> doubling RAM consumption in the background due to the cache-warming
> queries being executed (the newSearcher event). Also, where is your
> Xmx switch? I don't know how your JVM will behave if you set
> Xms > Xmx.
>
> 65M docs is quite a lot, but it should run fine with a 3 GB heap.
>
> It's good practice to use a master for indexing, without any caches
> or warm-up queries; once you exceed a certain number of documents,
> the warm-up cost will bite.
>
> > I have a large EC2 instance (7.5 GB RAM), and it dies every few
> > hours with out-of-heap-memory errors.  I started upping the minimum
> > memory; currently I use -Xms3072M.
> > I insert about 50k docs an hour and currently have about 65 million
> > docs with about 10 fields each.  Is this already too much data for
> > one box?  How do I know when I've reached the limit of this server?
> > I have no idea how to keep this issue under control.  Am I just
> > supposed to keep upping the minimum RAM given to Solr?  How do I
> > know what the right amount of RAM is?  Must I keep adding more
> > memory as the index grows?  I'd rather queries be a little slower
> > if I can use constant memory and have the search read from disk.
>
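
For anyone hitting this thread later: the warming Markus describes
above is driven by the newSearcher listener in solrconfig.xml, along
these lines (a minimal sketch; the sort field is illustrative, not
from my actual config):

    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
          <str name="sort">price_d asc</str>
        </lst>
      </arr>
    </listener>

Every commit that opens a new searcher replays these queries while the
old searcher is still serving traffic, so each sorted field in a
warming query pre-populates its FieldCache entry on the new searcher;
that is the temporary doubling.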
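
And the indexing-only master he suggests would be, roughly, a
solrconfig.xml with the caches zeroed out and the warming listeners
removed (a sketch based on the stock example config, not a tested
setup):

    <filterCache class="solr.FastLRUCache"
                 size="0" initialSize="0" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"
                 size="0" initialSize="0" autowarmCount="0"/>
    <documentCache class="solr.LRUCache"
                 size="0" initialSize="0" autowarmCount="0"/>

The slaves keep their caches and warming for serving queries, so the
warm-up cost never lands on the box doing the heavy indexing.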



-- 
- sent from my mobile
6176064373
