> ... my key cache hit rate is about 20%, mainly because we do random
> reads. We're just going to leave the index_interval as is for now.

That's pretty painful. If you can up that a bit, it'll probably help you out. You can adjust the index intervals, too, but I'd significantly increase key cache size first.
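Jeff's advice maps to a couple of cassandra.yaml knobs; a minimal sketch, with illustrative values rather than recommendations for any particular cluster:

```yaml
# cassandra.yaml - key cache sizing (values are illustrative)
key_cache_size_in_mb: 512        # default is min(5% of heap, 100MB)
key_cache_save_period: 14400     # seconds between key cache saves to disk
```

The key cache hit rate can be watched with `nodetool info` before and after the change to see whether the larger cache actually helps a random-read workload.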
First, a big thank-you to Jeff, who has spent endless time helping this mailing list. Agreed that we should tune the key cache. In my case, my key cache hit rate is about 20%, mainly because we do random reads. We're just going to leave the index_interval as is for now.
On Mon, Jul 10, 2017 at 8:47 PM, Jeff wrote:
It's usually more efficient to try to tune the key cache.
"...ML pages we seem to get better read latencies by lowering the sampling interval from 128 min / 2048 max to 64 min / 512 max. For large tables like parsoid HTML with ~500G load per node this change adds a modest ~25mb off-heap memory."

I wonder if anyone has experience working with max and min index_interval to increase the read speed.
From: Robert Coli rc...@eventbrite.com
To: user@cassandra.apache.org
Sent: Monday, June 17, 2013 3:28 PM
Subject: Re: index_interval

On Mon, May 13, 2013 at 9:19 PM, Bryan Talbot btal...@aeriagames.com wrote:

So will cassandra provide a way to limit its off-heap usage to avoid unexpected OOM kills? I'd much rather have performance degrade when 100% of the index samples no longer fit in memory than have the process killed with no way to stabilize it without adding hardware or removing data.

Can the index sample storage be treated more like key cache or row cache, where the total space used can be limited to something less than all available system RAM, and space is recycled using an LRU (or configurable) policy, with cold data read back in again on demand?

-Bryan
On Wed, May 8, 2013 at 4:24 PM, Jonathan Ellis jbel...@gmail.com wrote:
index_interval won't be going away, but you won't need to change it as
often in 2.0: https://issues.apache.org/jira/browse/CASSANDRA-5521
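For what this turned into: in later releases the sampling interval became a per-table setting, exposed as the `min_index_interval` / `max_index_interval` table properties (defaults 128 / 2048) in 2.1+. A hedged sketch; the keyspace and table names are hypothetical:

```sql
-- Hypothetical table; defaults are min_index_interval 128, max 2048.
-- A denser sample (lower interval) speeds index lookups at the cost of
-- more off-heap memory for the index summary.
ALTER TABLE mykeyspace.mytable
  WITH min_index_interval = 64
   AND max_index_interval = 512;
```

Lower intervals mean a denser in-memory sample: fewer index-file entries to scan per lookup, but more off-heap memory held per SSTable.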
On Mon, May 6, 2013 at 12:27 PM, Hiller, Dean dean.hil...@nrel.gov wrote:

I heard a rumor that index_interval is going away? What is the replacement for this?
@aaronmorton
http://www.thelastpickle.com
On 7/05/2013, at 5:27 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
I heard a rumor that index_interval is going away? What is the replacement for this? (we have been having to play with this setting a lot lately as too big and it gets slow yet too small ...
I was just curious. Our RAM has significantly reduced but the *Index.db files are the same size as before. Any ideas why this would be the case?

Basically, why is our disk size not reduced since RAM is way lower? We are running strong now with 512 index_interval for the past 2-3 days and RAM never looked better. We were pushing 10G before and now we are 2G, slowly increasing to 8G before gc compacts the long-lived stuff, which goes back down to 2G again ... very pleased with LCS in our system!

Thanks
The Index.db file always contains *all* positions of the keys in the data file. index_interval is the rate at which key positions from the index file are stored in memory, so that C* can begin scanning the index file from the closest sampled position.
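The sampled-index mechanism described above can be sketched in a few lines (a toy model, not Cassandra's actual implementation; all names here are made up): keep every index_interval-th entry in memory, binary-search that sample for the closest preceding key, then scan the full index from there.

```python
from bisect import bisect_right

# Toy "index file": sorted (key, data_file_position) pairs.
index_entries = [(f"key{i:05d}", i * 100) for i in range(1000)]

INDEX_INTERVAL = 128

# In-memory sample: every INDEX_INTERVAL-th key, remembering its offset
# in the full index so a scan can start there.
sample = [(k, i) for i, (k, _) in enumerate(index_entries)][::INDEX_INTERVAL]

def lookup(key):
    """Binary-search the sample for the closest preceding entry, then scan
    the full index (on disk, in Cassandra's case) from that offset."""
    keys = [k for k, _ in sample]
    slot = max(bisect_right(keys, key) - 1, 0)
    start = sample[slot][1]
    # At most INDEX_INTERVAL entries are scanned before the key is found.
    for k, pos in index_entries[start:start + INDEX_INTERVAL]:
        if k == key:
            return pos  # position of the row in the data file
    return None

print(lookup("key00300"))  # -> 30000
```

This is why a larger index_interval shrinks memory (a smaller sample) but lengthens the scan per lookup, and vice versa.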
On Friday, March 22, 2013 at 11:17 AM, Hiller, Dean wrote:
I am using LCS, so the bloom filter fp default for 1.2.2 is 0.1, so my bloom filter size is 1.27G RAM (nodetool cfstats); 1.7 billion rows on each node. My cfstats for this CF is attached (since cut and paste screwed up the formatting). During testing in QA, we were not sure if the index_interval change was working, so we dug into the code to find out.

... a dead cluster if we did that).

On startup, the initial RAM is around 6-8G. Startup with index_interval=512 resulted in 2.5G-2.8G initial RAM, and I have seen it grow to 3.3G and back down to 2.8G. We just rolled this out an hour ago. Our website response time is the same as before as well.

We rolled to only 2 nodes (out of 6) in our cluster so far to test it out and let it soak a bit.
Argh, now I think that row size has nothing to do with the ii-based index size/efficiency (I was thinking about the need to read index_interval / 2 entries on average from the index file before finding the proper one, but it should have nothing to do with row size) - forget the question.
It would be good to have index_interval configurable per keyspace. Preferably in cassandra.yaml, because I use it as tuning on nodes running out of memory without affecting performance noticeably.
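Later releases went partway toward this: besides the per-table min/max interval, the total off-heap space used by index summaries can be capped globally, with summaries downsampled toward max_index_interval when over the cap. A hedged cassandra.yaml sketch (values illustrative):

```yaml
# cassandra.yaml - cap total off-heap space for index summaries;
# summaries are downsampled when the cap is exceeded (2.1+)
index_summary_capacity_in_mb:             # blank = default of 5% of heap
index_summary_resize_interval_in_minutes: 60
```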