We have the same problem.
On Friday, August 31, 2012, Jean-Armel Luce jaluc...@gmail.com wrote:
Hello Aaron.
Thanks for your answer
Jira ticket 4597 created:
https://issues.apache.org/jira/browse/CASSANDRA-4597
Jean-Armel
2012/8/31 aaron morton aa...@thelastpickle.com
Looks like a bug.
On Tue, 2012-08-28 at 16:57 +1200, aaron morton wrote:
Sorry I don't understand your question.
Can you explain it a bit more or maybe someone else knows.
I believe the question is why the maximum is 2**127 and not
0x
Tim
Cheers
-
Aaron Morton
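For context on the 2**127 bound: RandomPartitioner hashes the row key with MD5 and takes the absolute value of the signed 128-bit result, so the token range tops out at 2**127 rather than the full unsigned 128-bit maximum. A minimal sketch (the function name is mine; the hashing scheme is my understanding of RandomPartitioner, not code from Cassandra itself):

```python
import hashlib

def random_partitioner_token(key: bytes) -> int:
    # MD5 yields a 128-bit digest. Interpreted as a *signed* integer its
    # range is [-2**127, 2**127 - 1]; taking abs() folds that into
    # [0, 2**127], which is why the maximum token is 2**127 and not
    # 2**128 - 1 (i.e. 0xFFFF...FF).
    digest = int.from_bytes(hashlib.md5(key).digest(), "big", signed=True)
    return abs(digest)

token = random_partitioner_token(b"some row key")
assert 0 <= token <= 2 ** 127
```

The same key always hashes to the same token, so this is also a quick way to see which part of the ring a given key lands on.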
Hello,
I'm running a 2-node Cassandra 1.1.2 cluster with RF=2 (CL=1;
the nodes are m1.large instances from Amazon).
I had this error 524 times last month on the first node and 2805 times on
the second node.
Should I worry about it? How can I fix these errors?
Alain
2012/6/2 Peter Schuller
INFO [AntiEntropySessions:6] 2012-09-02 15:46:23,022
AntiEntropyService.java (line 663) [repair #%s] No neighbors to repair
with on range %s: session completed
you have RF=1, or too many nodes are down.
hi,
i know minor compaction is triggered when the number of SSTables exceeds the threshold.
then, is there any way to find out what time a minor compaction happened? is minor
compaction output to the log?
thanks, satoshi
Dear Distinguished Colleagues:
I need to add full-text search and somewhat free form queries to my
application. Our data is made up of items that are stored in a single
column family, and we have a bunch of secondary indices for look ups.
An item has header fields and data fields, and the
Today I configured incremental backups in a test node which already has some
data on it,
and I found that backups are not created for SSTables created by a compaction:
mddione@life:~/src/works/orange/Cassandra$ sudo find
/var/lib/cassandra/data/one_cf
/var/lib/cassandra/data/one_cf
Someone did a search with Lucene, but for very fresh data they build the search
index in memory, so data becomes available for search without delays.
On 3 September 2012 22:25, Oleg Dulin oleg.du...@gmail.com wrote:
Dear Distinguished Colleagues:
Is there any way I can configure the KeyCache to use non-heap memory?
We have large-memory nodes: ~96 GB of memory per node, but effectively only
8 GB is configured for the heap (to avoid GC issues caused by a large heap).
We have a constraint with respect to :
1. Row cache models don't reflect
What version are you on ?
Check the result of your major compaction by looking for log lines such as
"Compacted to…". They will say how much smaller the new file is.
After a major compaction there should be a single SSTable, the ks-cf-he-1234
part with multiple components such as -Data.db. How
There are several log lines associated with each minor compaction. Grep your
logs for "Compacting".
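As a hedged illustration of that grep (the sample log lines below are approximations of Cassandra 1.1 output, and the sample file path is invented; on a real node you would grep the system log, typically /var/log/cassandra/system.log on packaged installs):

```shell
# Write a simulated log excerpt to a temp file for demonstration.
cat > /tmp/system.log.sample <<'EOF'
 INFO [CompactionExecutor:1] 2012-09-03 10:00:00,000 CompactionTask.java Compacting [SSTableReader(path='/var/lib/cassandra/data/ks/cf-he-1-Data.db')]
 INFO [CompactionExecutor:1] 2012-09-03 10:00:05,000 CompactionTask.java Compacted to [/var/lib/cassandra/data/ks/cf-he-5-Data.db,].
EOF
# The timestamps on the matching lines tell you when each minor
# compaction started ("Compacting") and finished ("Compacted to").
grep -E 'Compacting|Compacted to' /tmp/system.log.sample
```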
On Mon, Sep 3, 2012 at 7:41 AM, Satoshi Yamada
bigtvioletb...@yahoo.co.jp wrote:
hi,
i know minor compaction is caused when the num of SSTable
exceeds the thresholds.
then, is there anyway to find
Incremental backups are only triggered when new data is written to disk,
such as a memtable being flushed or data being streamed in from a repair or
move. Compaction does not create any new data, so there's no need to back
up the result.
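To make that concrete, here is a simulated data-directory layout (the directory structure is my assumption of the Cassandra 1.1 convention, and all paths below are invented for the demo): a flush hard-links the new SSTable into a backups/ subdirectory, while a compaction result is written only to the column family directory itself.

```shell
# Build a toy layout under /tmp to illustrate which files get backed up.
base=/tmp/cassandra_demo/data/ks1/one_cf
mkdir -p "$base/backups"
# SSTable from a memtable flush: present in the CF dir AND in backups/.
touch "$base/one_cf-he-1-Data.db" "$base/backups/one_cf-he-1-Data.db"
# SSTable produced by compaction: present only in the CF dir.
touch "$base/one_cf-he-2-Data.db"
# Only the flushed SSTable appears in the incremental-backup directory.
ls "$base/backups"
```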
On Mon, Sep 3, 2012 at 8:45 AM, mdione@orange.com