out of interest, why -100 and not -1 or + 1? any particular reason?
On 06/09/2012 19:17, Tyler Hobbs wrote:
To minimize the impact on the cluster, I would bootstrap a new 1d node
at (42535295865117307932921825928971026432 - 100), then decommission
the 1c node at
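The offset arithmetic is trivial but easy to fumble by hand; a quick sketch (the old token comes from this thread, and the 2**127 wrap is RandomPartitioner's standard range):

```python
# Computing initial_token for the replacement 1d node: the old node's
# token minus a small offset, wrapped into RandomPartitioner's
# [0, 2**127) range.
OLD_TOKEN = 42535295865117307932921825928971026432  # the 1c node's token
OFFSET = 100

new_token = (OLD_TOKEN - OFFSET) % (2 ** 127)
print(new_token)  # value to put in initial_token for the new node
```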
it looks like specifying replace_token does not remove the old owner
from gossip (which I had thought it would).
That would explain why the old owner resurfaces later and we get a
warning saying that the same token is owned by both.
I ran an example with a 2-node cluster,
Hi there,
I'm working on a project that might want to set TTL to roughly 7 years.
However, it may turn out that the TTL needs to be reduced or extended. Is there
any way of updating the TTL without having to rewrite the data
again? Rewriting would cause way too much overhead.
If
You should create an index where you store references to your records.
You can use composite column names where column
name=composite(timestamp,key)
then you would get a slice of all columns where the timestamp part of the
composite is at least TTL in the past, iterate through them, and
delete
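A minimal in-memory sketch of that index idea, with plain Python standing in for the Cassandra client (all names here are illustrative, not real client API):

```python
import bisect

# Model of the manual-expiry index: one wide row whose column names are
# composite(timestamp, key), kept sorted so a single slice finds
# everything older than a cutoff.
index_row = []   # sorted list of (write_timestamp, record_key)
records = {}     # record_key -> value

def insert(key, value, ts):
    records[key] = value
    bisect.insort(index_row, (ts, key))

def purge_older_than(cutoff):
    """Slice the index up to `cutoff`, delete the referenced records
    and their index entries, and return the deleted keys."""
    expired = [c for c in index_row if c[0] <= cutoff]
    for ts, key in expired:
        records.pop(key, None)
        index_row.remove((ts, key))
    return [key for _, key in expired]
```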
OK, that's an option indeed. However, due to the number of records it would
also involve bucketing, which makes it not the simplest option. Furthermore,
there are lots of manual indexes referring to the keys of the actual
events: all those indexes would also have to be updated.
Best regards,
Robin
It is memory-mapped I/O. I wouldn't worry about it.
BTW, Windows might not be the best choice to run Cassandra on. My
experience running Cassandra on Windows has not been a positive one. We
no longer support Windows as our production platform.
Regards,
Oleg
On 2012-09-10 09:00:02, Rene
The problem is that the system just freezes and nodes die. The system
becomes very unresponsive, and it always happens when the shareable amount
of RAM reaches the total amount of physical memory in the system.
Is there something in Windows that I can tune in order to avoid this
behavior? I cannot
Is there any way of updating the TTL without being in need of rewriting the
data back again?
No, there isn't.
If not, is running a Map/Reduce task on the whole data set the best option
If the TTL change is made rather infrequently and on a large percentage of
the data, which seems to be your
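Since Cassandra stores the expiry per column at write time, a TTL change is inherently a read-and-rewrite; a toy Python model of what such a Map/Reduce pass does per row (names and shapes are illustrative only):

```python
import time

def rewrite_with_new_ttl(rows, new_ttl, now=None):
    """rows: key -> (value, old_expiry). Re-insert every value with a
    fresh expiry computed from the new TTL, as an M/R mapper would;
    there is no in-place way to touch only the expiry."""
    now = time.time() if now is None else now
    return {key: (value, now + new_ttl) for key, (value, _) in rows.items()}
```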
Do not use mmap/auto on Windows, standard access mode only. In cassandra.yaml:
disk_access_mode: standard
Best regards / Pagarbiai
Viktor Jevdokimov
Senior Developer
Email: viktor.jevdoki...@adform.com
Phone: +370 5 212 3063, Fax +370 5 261 0453
J. Jasinskio
For performance reasons I switched to memory mapped IO. Is there a way to
make memory-mapped IO work in Windows?
Thanks!
2012/9/10 Viktor Jevdokimov viktor.jevdoki...@adform.com
Do not use mmap/auto on Windows, standard access mode only. In
cassandra.yaml:
disk_access_mode:
Hi all,
We're running a small Cassandra cluster (v1.0.10) serving data to our web
application, and as our traffic grows, we're starting to see some weird
issues. The biggest of these is that sometimes, a single node becomes
unresponsive. It's impossible to start new connections, or impossible to
We used Cassandra on Windows for more than a year in production for RTB and
other stuff that requires the lowest possible latency. We used mmap until we hit
issues like yours, switched to mmap for indexes only, and finally to standard. No big
difference in performance; standard was the most stable. The huge
When we ran Cassandra on windows, we got better performance without memory
mapped IO. We had the same problems you are describing: what happens is
that Windows is rather aggressive about swapping out memory when all the
memory is used, and it starts swapping out unused parts of the heap,
which
We have 3 tables for all indexing we do called
IntegerIndexing
DecimalIndexing
StringIndexing
playOrm would prefer that only these rows are cached, as every row in those
tables is an index. Customers/clients of playOrm tend to always hit the same
index rows over and over as they are using the
Hi,
I'm getting 5 identical assertions while running 'nodetool cleanup' on a
Cassandra 1.1.4 node with Load=104G and 80m keys.
From system.log :
ERROR [CompactionExecutor:576] 2012-09-10 11:25:50,265
AbstractCassandraDaemon.java (line 134) Exception in thread
We have seen various issues from these replaced nodes hanging around. For
clusters where a lot of nodes have been replaced, we see these replaced nodes
having an impact on heap/GC and a lot of tcp timeouts/retransmits (because the
old nodes no longer exist). As a result, we have begun
I am currently profiling a Cassandra 1.1.1 set up using G1 and JVM 7.
It is my feeble attempt to reduce Full GC pauses.
Has anyone had any experience with this ? Anyone tried it ?
--
Regards,
Oleg Dulin
NYC Java Big Data Engineer
http://www.olegdulin.com/
Thanks Jim, looks like I'll have to read the code to understand what is
happening under the hood.
yang
On Mon, Sep 10, 2012 at 9:45 AM, Jim Cistaro jcist...@netflix.com wrote:
We have seen various issues from these replaced nodes hanging around.
For clusters where a lot of nodes have been
The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.1.5.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
It leaves some breathing room for fixing mistakes, adding DCs, etc. The
set of data in a 100 token range is basically the same as a 1 token range:
nothing, statistically speaking.
On Mon, Sep 10, 2012 at 2:21 AM, Guy Incognito dnd1...@gmail.com wrote:
out of interest, why -100 and not -1 or +
Hi Aaron
Removing NodeIdInfo did the trick, thanks. I see the ticket is already
resolved, good news.
Thanks for the help.
On Fri, Sep 7, 2012 at 12:26 AM, aaron morton aa...@thelastpickle.com wrote:
This is a problem…
[default@system] list NodeIdInfo ;
Using default limit of 100
...
Hey folks,
Can you recommend any tools to pull data from MySQL and pump it to Cassandra?
Thanks in advance.
James
On Mon, Sep 10, 2012 at 10:17 PM, Morantus, James (PCLN-NW)
james.moran...@priceline.com wrote:
Hey folks,
Can you recommend any tools to pull data from MySQL and pump it to
Cassandra?
This: http://www.datastax.com/dev/blog/bulk-loading
--
Aaron Turner
http://synfin.net/ Twitter:
this is the first version from the 1.1 branch I used in pre-production stress
testing, and I got a lot of the following errors: decorated key -1 != some number
INFO [CompactionExecutor:10] 2012-09-11 02:22:13,586
CompactionController.java (line 172) Compacting large row
In general wider rows take a bit longer to read, however different access
patterns have different performance. I did some tests here
http://www.slideshare.net/aaronmorton/cassandra-sf-2012-technical-deep-dive-query-performance
and http://thelastpickle.com/2011/07/04/Cassandra-Query-Plans/
I
It's impossible to start new connections, or impossible to send requests, or
it just doesn't return anything when you've sent a request.
If it's totally frozen it sounds like GC. How long does it freeze for?
Despite that, we occasionally get OOM exceptions, and nodes crashing, maybe a
few
playOrm would prefer that only these rows are cached as every row in those
tables is an index
CF level key and row caching is specified by the caching property. In 1.1 it
can be ALL, KEYS_ONLY, ROWS_ONLY or NONE.
So you can turn on the row cache, but only have some CF's use it.
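In the 1.1-era cassandra-cli, that per-CF setting looks something like the following (a schema fragment from memory, so treat it as a sketch; the CF name is taken from the playOrm message above):

```
update column family IntegerIndexing with caching = 'rows_only';
```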
Cheers
-
My question: Can these assertions be ignored? Or do I need to worry about it?
That looks like a problem.
Can you raise a ticket on https://issues.apache.org/jira/browse/CASSANDRA ?
May be good to include information on:
* how long you've been using Levelled Compaction.
* Is this all CF's or
I am currently profiling a Cassandra 1.1.1 set up using G1 and JVM 7.
It is my feeble attempt to reduce Full GC pauses.
Has anyone had any experience with this ? Anyone tried it ?
Have tried; for some workloads it's looking promising. This is without
key cache and row cache and with a pretty
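For anyone wanting to run the same experiment, switching to G1 is a small change in cassandra-env.sh: remove the default CMS flags and add the G1 ones (a sketch; the pause-time target below is just an assumed starting point to tune):

```shell
# Hypothetical cassandra-env.sh fragment: enable G1 on JVM 7+
# in place of the stock CMS settings.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=200"  # pause target, tune per workload
```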