On Wed, Feb 22, 2012 at 10:37 AM, Maxim Potekhin potek...@bnl.gov wrote:
The idea was to provide redundancy, resilience, automatic load balancing
and automatic repairs. Going the way of the file system does not achieve
any of that.
(Apologies for continuing slightly OT thread, but if people
On Tue, Feb 14, 2012 at 2:02 PM, Franc Carter franc.car...@sirca.org.au wrote:
On Wed, Feb 15, 2012 at 8:49 AM, Brandon Williams dri...@gmail.com wrote:
Before 1.0.8, use https://issues.apache.org/jira/browse/CASSANDRA-3337
to remove it.
I'm missing something ;-( I don't see a solution in
On 11/13/10 11:59 AM, Reverend Chip wrote:
Swapping could conceivably be a
factor; the JVM is 32G out of 72G, but the machine is 2.5G into swap
anyway. I'm going to disable swap and see if the gossip issues resolve.
Are you using JNA/memlock to prevent the JVM's heap from being swapped?
On 11/15/10 12:08 PM, Reverend Chip wrote:
logger_.warn("Unable to lock JVM memory (ENOMEM).
or
logger.warn("Unknown mlockall error " + errno(e));
Trunk also logs if it is successful:
logger.info("JNA mlockall successful");
None of those messages has appeared in either output.log or
On 11/9/10 5:15 AM, Wayne wrote:
We are trying to use snapshots etc. to back up the data
but it is slow (hours) and slows down the entire node.
The snapshot process (as I understand it, and with the caveat that this
is the code path without JNA available) first flushes all memtables
(this
On 10/22/10 2:55 PM, Craig Ching wrote:
Even better, I'd love a way to not allow B to be available
until replication is complete, can I detect that somehow?
Proposed and rejected a while back:
https://issues.apache.org/jira/browse/CASSANDRA-768
=Rob
On 10/14/10 10:59 AM, B. Todd Burruss wrote:
0.7.0-beta2
top is reporting my cassandra process as using 11g. i have set
disk_access_mode: standard and Xmx8G (verified via JMX)
i have only noticed using more RAM than Xmx when using mmap i/o. this
leads me to believe that disk_access_mode was
On 10/14/10 12:44 PM, B. Todd Burruss wrote:
INFO 16:46:06,875 DiskAccessMode 'auto' determined to be mmap,
indexAccessMode is mmap
thx, it does say that in the log, but that is probably just a reflection
of whatever is read from cassandra.yaml.
Having read the relevant code, the log
On 10/6/10 9:05 AM, Utku Can Topçu wrote:
The nodes are still swapping, even though the swappiness is set to zero
right now. After swapping comes the OOM.
https://issues.apache.org/jira/browse/CASSANDRA-1214
?
=Rob
On 10/6/10 1:13 PM, Aaron Morton wrote:
To shut down cleanly, say in a production system, use nodetool drain
first. This will flush the memtables and put the node into a read-only
mode; AFAIK this also gives the other nodes a faster way of detecting
that the node is down, via the drained node gossiping
On 9/10/10 10:10 AM, kannan chandrasekaran wrote:
Thank you for the replies Jonathan... Just to make sure I understand
this correctly;
0) Stop writing to the ColumnFamily and then 'nodetool flush' all nodes.
1) Shutdown cassandra
2) Remove the keyspace (and its corresponding CF)
On 8/22/10 12:00 AM, Wayne wrote:
Due to compaction being so expensive in terms of disk resources, does it
make more sense to have 2 data volumes instead of one? We have 4 data
disks in raid 0, would this make more sense to be 2 x 2 disks in raid 0?
That way the reader and writer I assume would
On 8/11/10 10:38 AM, Ran Tavory wrote:
Due to an administrative error, one of the hosts in the cluster lost
permission to write to its data directory.
So I started seeing errors in the log, however, the server continued
serving traffic. It wasn't able to compact and do other write operations
but it
On 8/20/10 1:58 PM, Julie wrote:
Julie julie.sugar at nextcentury.com writes:
Please see my previous post, but is hinted handoff a factor if the CL is set to ALL?
Your previous post looks like a flush or compaction is causing the node
to mark its neighbors down. Do you see correlation between
On 8/5/10 11:51 AM, Peter Schuller wrote:
Also, the variation in disk space in your most recent post looks
entirely as expected to me and nothing really extreme. The temporary
disk space occupied during the compact/cleanup would easily be as high
as your original disk space usage to begin with,
On 8/6/10 2:13 PM, Benjamin Black wrote:
Assuming the old version is already on disk in an SSTable, the new
version will not overwrite it, and both versions will be in the
system. A compaction will remove the old version, however.
To be clear, a compaction will only remove the old version if
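The merge behaviour described above can be sketched roughly as follows (my own illustration, not Cassandra's code; simplified to last-write-wins by timestamp, ignoring tombstones and GCGraceSeconds):

```python
# Illustrative compaction merge: when SSTables are compacted together,
# only the version of each row with the highest timestamp survives;
# stale versions from older SSTables are dropped.
def compact(sstables):
    merged = {}
    for table in sstables:
        for key, (value, ts) in table.items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return merged
```

Until such a merge runs, both versions really do coexist on disk, which is the point being made above.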
On 8/5/10 1:42 AM, Oleg Anastasjev wrote:
3.) When using the random partitioner how much difference should be expected
(or has been observed) between nodes? 2%? 10%?
This depends on the data. It will distribute keys almost equally between nodes, but
sizes of row data could be different for
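A minimal sketch of the balance point (my illustration, not project code): RandomPartitioner hashes keys into the token range [0, 2**127), so evenly spaced initial tokens give each node an equal share of the key space, while on-disk load can still differ because row sizes differ.

```python
# Evenly spaced initial tokens for N nodes under RandomPartitioner's
# 0..2**127 token range: each node owns an equal slice of key space.
def balanced_tokens(num_nodes):
    ring = 2 ** 127
    return [i * ring // num_nodes for i in range(num_nodes)]
```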
On 8/4/10 1:27 AM, David Boxenhorn wrote:
When I change schema I need to delete the commit logs - otherwise I get
a null pointer exception.
In versions prior to 0.6.4, there is a bug which leads to an infinite
loop when:
a) you stop the node without doing nodetool drain first
b) then you
On 7/28/10 2:43 PM, Dave Viner wrote:
Hi all,
I'm having a strange result in trying to iterate over all row keys for a
particular column family. The iteration works, but I see the same row
key returned multiple times during the iteration.
I'm using cassandra 0.6.3, and I've put the code in
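A common cause of this symptom (an assumption on my part, since the actual code is in the pastebin) is boundary-key handling when paging: each get_range_slices page starts at the last key of the previous page inclusively, so that key comes back twice unless the caller skips it. `fetch_page` below is a hypothetical stand-in for the Thrift call:

```python
# Skip the duplicated boundary key on every page after the first.
def iterate_keys(fetch_page, page_size=3):
    start = None
    while True:
        raw = fetch_page(start, page_size)
        page = raw[1:] if start is not None else raw
        yield from page
        if len(raw) < page_size:
            return
        start = raw[-1]

# Simulated row store for illustration.
KEYS = ['k%02d' % i for i in range(10)]

def fetch_page(start, n):
    rows = KEYS if start is None else [k for k in KEYS if k >= start]
    return rows[:n]
```

Without the `raw[1:]` skip, every page boundary key would be yielded twice.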
On 7/28/10 12:26 PM, YES Linux wrote:
i was wondering what the trade-offs were between the key cache and row
cache? which is more important for a read? if you have a large row
cache can your key cache be small?
- The row cache is a superset of the key cache. If you have a row cache
on a
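The trade-off can be sketched like this (names and structure are mine, not Cassandra's): a row-cache hit returns the row with no disk I/O at all, while a key-cache hit only saves the index lookup and the data file is still read. This is the sense in which the row cache subsumes the key cache.

```python
# Two-tier read path sketch: row cache first, then key cache, then disk.
def read(key, row_cache, key_cache, read_index, read_data):
    if key in row_cache:
        return row_cache[key]          # no disk access at all
    pos = key_cache.get(key)
    if pos is None:
        pos = read_index(key)          # extra seek on a key-cache miss
        key_cache[key] = pos
    row = read_data(pos)               # data file read happens regardless
    row_cache[key] = row
    return row
```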
On 7/7/10 10:10 AM, Julie wrote:
This doesn't explain why 30 GB of data is taking up 106 GB of disk 24 hours
after all writes have completed. Compactions should be complete, no?
Is your workload straight INSERT or does it contain UPDATE and/or
DELETE? If your workload contains UPDATE/DELETE
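A back-of-envelope check (my arithmetic, not from the thread): the quoted numbers are consistent with an update-heavy workload, where each overwrite flushed in a new SSTable leaves the stale copy in an older SSTable until compaction merges it away.

```python
# If 30 GB of live data occupies 106 GB on disk, each row exists on
# disk roughly 3.5 times -- plausible for repeated overwrites with
# compaction not yet (or only partially) run.
live_gb, on_disk_gb = 30, 106
copies_per_row = on_disk_gb / live_gb
print(round(copies_per_row, 1))
```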
On 7/2/10 1:05 PM, Andy Skalet wrote:
Interestingly, if I run with no initialtoken specified, I get a "No
other nodes seen!" exception from the BootStrapper. Full debug log
(minus rowmutations) here:
https://gist.github.com/df122c109bb9332cd85c
Do you have your seeds set properly? If a given
On 6/21/10 4:57 PM, Joost Ouwerkerk wrote:
We're seeing very strange behaviour after decommissioning a node: when
requesting a get_range_slices with a KeyRange by token, we are getting
back tokens that are out of range.
What sequence of actions did you take to decommission the node? What
On 6/15/10 6:35 PM, Benjamin Black wrote:
jmhodges contributed a patch (I remain incompetent at Jira searches)
for 'coprocessors' to do what you want. That'd be where I'd start
looking.
https://issues.apache.org/jira/browse/CASSANDRA-1016
=Rob
On 6/2/10 12:49 PM, Eric Halpern wrote:
We'd like to double our cluster size from 4 to 8 and increase our replication
factor from 2 to 3.
Is there any special procedure we need to follow to increase replication?
Is it sufficient to just start the new nodes with the replication factor of
3 and
On 5/6/10 10:35 AM, Weijun Li wrote:
Hello, it seems that the sstable index file only contains key/position
pairs and each sstable doesn't have a column index. So how does a range
slice query work? Does it iterate through every key in the range for
column name/value comparison?
The column index is in the
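The lookup shape can be sketched as follows (a simplification of mine; real SSTables order rows by partitioner token and keep each row's column index inside the data file). The index component maps row key to data-file offset, so a range slice binary-searches for the first qualifying key and scans forward rather than iterating every key:

```python
import bisect

# index_entries: sorted (key, data_file_offset) pairs, as in the -Index file.
def range_slice(index_entries, start, end):
    keys = [k for k, _ in index_entries]
    i = bisect.bisect_left(keys, start)   # first key >= start
    hits = []
    while i < len(index_entries) and index_entries[i][0] <= end:
        hits.append(index_entries[i])
        i += 1
    return hits
```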
On 4/30/10 5:21 AM, Bingbing Liu wrote:
hi,
thanks for your help.
i run the nodetool -h compact
but the load stays the same, can anyone tell me why?
compact and cleanup are two different operations. compact does a
major compaction. cleanup is a superset of compact which does a
On 4/17/10 6:47 PM, Ingram Chen wrote:
after upgrading the jdk from 1.6.0_16 to 1.6.0_20, the problem was solved.
FYI, this sounds like it might be :
https://issues.apache.org/jira/browse/CASSANDRA-896
http://bugs.sun.com/view_bug.do;jsessionid=60c39aa55d3666c0c84dd70eb826?bug_id=6805775
Where
On 4/13/10 5:04 PM, Paul Prescod wrote:
Am I correct in my understanding that the unit of caching (and
fetching from disk?) is a full row?
Cassandra has both a Key and a Row cache. Unfortunately there appears to
be no current wiki doc describing them. If you are looking into the
topic, wiki
On 4/5/10 11:48 PM, Ilya Maykov wrote:
No, the disks on all nodes have about 750GB free space. Also as
mentioned in my follow-up email, writing with ConsistencyLevel.ALL
makes the slowdowns / crashes go away.
I am not sure if the above is consistent with the cause of #896, but the
other
On 4/5/10 2:11 PM, Jonathan Ellis wrote:
On Mon, Mar 29, 2010 at 6:42 PM, Tatu Saloranta tsalora...@gmail.com wrote:
Perhaps it would be good to have convenience workflow for replacing
broken host (squashing lemons)? I would assume that most common use
[ snip ]
Does anyone have numbers on how
On 3/26/10 5:57 PM, Jianing Hu wrote:
In a cluster with ReplicationFactor 1, if one server goes down, will
new replicas be created on other servers to satisfy the set
ReplicationFactor?
Yes, via Anti-Entropy.
http://wiki.apache.org/cassandra/AntiEntropy
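The anti-entropy idea referenced above can be sketched roughly as follows (heavily simplified, and mine rather than Cassandra's: real repair builds Merkle trees over token ranges). Replicas compare per-range hashes and only the ranges whose hashes differ need to be streamed:

```python
import hashlib

# Stable bucket assignment for a key (stand-in for token ranges).
def bucket_of(key, num_buckets):
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % num_buckets

# Hash of all (key, value) pairs in one range, order-independent
# thanks to sorting.
def range_hash(items):
    h = hashlib.sha256()
    for key, value in sorted(items):
        h.update(('%s=%s;' % (key, value)).encode())
    return h.hexdigest()

# Compare two replicas range-by-range; mismatched ranges need repair.
def ranges_to_repair(replica_a, replica_b, num_buckets=4):
    def split(rows):
        buckets = [[] for _ in range(num_buckets)]
        for k, v in rows.items():
            buckets[bucket_of(k, num_buckets)].append((k, v))
        return buckets
    return [i for i, (x, y) in enumerate(zip(split(replica_a), split(replica_b)))
            if range_hash(x) != range_hash(y)]
```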