Hmm. I thought we had worked around all of our cache size problems by
splitting the data across multiple keys. What version are you running?
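
For reference, the workaround splits a large value into chunks stored
under separate keys, so no single item exceeds memcached's per-item
limit. A minimal sketch of the idea, assuming a Django-style cache API
(the chunk size and key naming here are illustrative, not the exact
scheme Review Board uses):

    # Store/fetch a large string by splitting it across several
    # memcached keys, each safely under the default 1MB item limit.
    from django.core.cache import cache

    CHUNK_SIZE = 1000000  # bytes per chunk; illustrative value

    def cache_set_large(key, data):
        chunks = [data[i:i + CHUNK_SIZE]
                  for i in range(0, len(data), CHUNK_SIZE)]
        cache.set('%s-count' % key, len(chunks))
        for i, chunk in enumerate(chunks):
            cache.set('%s-%d' % (key, i), chunk)

    def cache_get_large(key):
        count = cache.get('%s-count' % key)
        if count is None:
            return None
        chunks = []
        for i in range(count):
            chunk = cache.get('%s-%d' % (key, i))
            if chunk is None:
                return None  # a chunk was evicted; treat as a cache miss
            chunks.append(chunk)
        return ''.join(chunks)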

-David

On Thu, Apr 2, 2009 at 10:43 AM, Martin <mkoeb...@gmail.com> wrote:

>
> Thank you, I will try the memcache parameters.
>
> Swapping is not an issue for us. Used Swap is at 0KB. We have 4GB
> memory in that server.
>
> Thanks,
> Martin
>
>
> On Apr 2, 1:31 pm, ciaomary <ciaom...@gmail.com> wrote:
> > Provided your machine can handle it, I believe you can do this when
> > starting memcached by setting the maximum memory to use for items
> > higher. I do it like so:
> >
> >  /usr/local/bin/memcached -d -m 2048 -l 127.0.0.1 -u memcache
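> >
> > Note that -m only raises the total memory memcached may use; the
> > per-item size cap (1MB by default) is separate. If I remember right,
> > newer memcached releases accept an -I option to raise that cap, e.g.:
> >
> >  /usr/local/bin/memcached -d -m 2048 -I 5m -l 127.0.0.1 -u memcache
> >
> > (Check your version's documentation first; older builds may not
> > have -I.)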
> >
> > By the way, if you are using RB with really large diffs (I believe you
> > said this in another post), I would keep an eye on the server's virtual
> > memory. Our experience was that with only 2-4G of memory we started
> > thrashing in swap after a while and had to add more (8G is working
> > well). We have some really, really big diffs, though, which may not be
> > the typical use case. But if you do too, it may be something to watch
> > for.
> >
> > On Apr 2, 8:42 am, Martin <mkoeb...@gmail.com> wrote:
> >
> > > Hi,
> >
> > > is there a way to store larger data in memcached?
> > > As posted in the performance-related thread, a file containing 24k
> > > lines is not stored in the cache.
> > > The log says:
> >
> > > WARNING - Failed to fetch large data from cache for key
> > > codereview:diff-sidebyside-31114: .
> >
> > > Is there a way to configure the software to store larger data?
> >
> > > Thanks,
> > > Martin
> >
>
