It's fixed in 0.8.6. For 0.8.5 you would have to build it from source
with the patch applied, yes.
(Actually, in my opinion this bugfix is a good reason to release 0.8.6.)
Turns out I had managed to miss the fact that a 0.8.6 release is being
voted on, so I'd expect it to happen soonish.
Thanks! Is the load info also a bug? node1 is supposed to have 80 MB.
bash-3.2$ bin/nodetool -h localhost ring
Address  DC           Rack   Status  State   Load      Owns  Token
                                                             93798607613553124915572813490354413064
node2    datacenter1  rack1  Up      Normal  86.03 MB
There is no direct way to do that, but reading a CSV and inserting
rows in Java is really easy.
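For example, here is a rough sketch with the Hector client (the host, keyspace, column family, and CSV layout below are assumptions for illustration, not anything from this thread):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class CsvLoader {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; point these at your cluster.
            Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
            Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
            Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

            // Assumes each CSV line looks like: rowKey,columnName,columnValue
            BufferedReader reader = new BufferedReader(new FileReader(args[0]));
            String line;
            int batched = 0;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(",");
                mutator.addInsertion(parts[0], "MyColumnFamily",
                        HFactory.createStringColumn(parts[1], parts[2]));
                if (++batched % 1000 == 0) {
                    mutator.execute(); // flush periodically so the batch stays small
                }
            }
            mutator.execute(); // flush the remainder
            reader.close();
        }
    }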
But you may want to have a look at the new bulk loading tool,
sstableloader, described here:
http://www.datastax.com/dev/blog/bulk-loading
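That post also covers the SSTableSimpleUnsortedWriter class for writing sstables directly from your own code; a minimal sketch along those lines (keyspace, column family, and values are made up here):

    import java.io.File;
    import org.apache.cassandra.db.marshal.AsciiType;
    import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
    import static org.apache.cassandra.utils.ByteBufferUtil.bytes;

    // Writes sstables into the given directory, buffering 64 MB at a time.
    File dir = new File("/tmp/MyKeyspace");
    SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
            dir, "MyKeyspace", "MyColumnFamily", AsciiType.instance, null, 64);

    long timestamp = System.currentTimeMillis() * 1000; // microseconds
    writer.newRow(bytes("rowKey"));
    writer.addColumn(bytes("columnName"), bytes("columnValue"), timestamp);
    writer.close();
    // Afterwards, stream the generated files in with bin/sstableloader.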
Small detail: it seems you are still sending mail to the incubator address.
Hi everyone,
I noticed this line in the API docs:
The method is not O(1). It takes all the columns from disk to calculate the
answer. The only benefit of the method is that you do not need to pull all
the columns over Thrift interface to count them.
Does this mean that if a row has a large number of columns, counting them will be slow?
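(For reference, this is the Thrift get_count call; a rough sketch of the client side in Java, with the keyspace, column family, and row key made up for illustration:)

    import java.nio.ByteBuffer;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;

    public class CountExample {
        public static void main(String[] args) throws Exception {
            TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("MyKeyspace");

            // Count every column in one row. The server still reads all of
            // them from disk; the only saving is that the columns themselves
            // are not shipped back over the wire.
            ColumnParent parent = new ColumnParent("MyColumnFamily");
            SlicePredicate predicate = new SlicePredicate();
            predicate.setSlice_range(new SliceRange(
                    ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, Integer.MAX_VALUE));
            ByteBuffer key = ByteBuffer.wrap("rowKey".getBytes("UTF-8"));
            int count = client.get_count(key, parent, predicate, ConsistencyLevel.ONE);
            System.out.println(count);
        }
    }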
There is no real performance difference between the two partitions.
Yes and no. Yes: each replica will have the same data load if you set RF to the
same number of nodes. No: it's still not a good idea to have an unbalanced key
range; you can still have throughput hot spots.
Cheers
yes.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 19/09/2011, at 7:16 AM, Tharindu Mathew wrote:
Hi everyone,
I noticed this line in the API docs,
The method is not O(1). It takes all the columns from disk to calculate the
This is fixed in 1.0
https://issues.apache.org/jira/browse/CASSANDRA-2894
On Sun, Sep 18, 2011 at 2:16 PM, Tharindu Mathew mcclou...@gmail.com wrote:
Hi everyone,
I noticed this line in the API docs,
The method is not O(1). It takes all the columns from disk to calculate the
answer. The
While doing repair on node3, the Load kept increasing; suddenly Cassandra
hit an OOM, and the Load stopped at 140 GB. After Cassandra came back,
I tried nodetool cleanup but it seems not to be working.
Does repair generate many temp sstables? How do I get rid of them?
thanks!
this comment in JIRA mentions it
https://issues.apache.org/jira/browse/CASSANDRA-1969?focusedCommentId=12985038&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12985038
but in the end, it's not immediately clear. Could someone give a
summary of its advantages?
In my tests I have seen repair sometimes take a lot of space (2-3 times);
cleanup did not clean it up, and the only way I could reclaim the space
was a major compaction.
On Sun, Sep 18, 2011 at 6:51 PM, Yan Chunlu springri...@gmail.com wrote:
While doing repair on node3, the Load kept increasing;
So does major compaction actually clean it up, or just merge it? I am afraid
it will give me a single large file.
On Mon, Sep 19, 2011 at 10:26 AM, Anand Somani meatfor...@gmail.com wrote:
In my tests I have seen repair sometimes take a lot of space (2-3 times),
cleanup did not clean it up, and the only way I
Thanks Aaron and Jake for the replies.
Any chance of a possible workaround to use for Cassandra 0.7?
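(One workaround sometimes used on pre-1.0 clusters, offered here as an assumption rather than anything confirmed in this thread: page through the row with get_slice and count client-side, so no single call has to materialize every column at once. A rough sketch, reusing the client, parent, and key from the get_count snippet earlier; it also needs java.util.List and org.apache.cassandra.thrift.ColumnOrSuperColumn on top of those imports:)

    // Hypothetical client-side paged count; not an official fix for 0.7.
    int pageSize = 1000;
    int total = 0;
    boolean first = true;
    ByteBuffer start = ByteBuffer.allocate(0); // empty = start of row
    while (true) {
        SlicePredicate pred = new SlicePredicate();
        pred.setSlice_range(new SliceRange(
                start, ByteBuffer.allocate(0), false, pageSize));
        List<ColumnOrSuperColumn> page =
                client.get_slice(key, parent, pred, ConsistencyLevel.ONE);
        if (page.isEmpty()) {
            break;
        }
        // Every page after the first starts at the last column already seen,
        // so that column would be counted twice; subtract the overlap.
        total += first ? page.size() : page.size() - 1;
        if (page.size() < pageSize) {
            break; // short page means we reached the end of the row
        }
        start = page.get(page.size() - 1).getColumn().bufferForName();
        first = false;
    }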
On Mon, Sep 19, 2011 at 3:48 AM, aaron morton aa...@thelastpickle.com wrote:
Cool
Thanks, A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton