thrift (0.2.0.4)
thrift_client (0.4.6, 0.4.3)
On Mon, Aug 16, 2010 at 8:51 PM, Mark static.void@gmail.com wrote:
On 8/16/10 6:19 PM, Benjamin Black wrote:
client = Cassandra.new('system', '127.0.0.1:9160')
Brand new download of beta-0.7.0-beta1
http://gist.github.com/528357
Which
I'm testing with the default cassandra.yaml.
I cannot reproduce the output in that gist, however:
thrift_client = client.instance_variable_get(:@client)
=> nil
Also, the Thrift version for 0.7 is 11.0.0, according to the code I
have. Can someone comment on whether 0.7 beta1 is at Thrift
Using the beta, I made a describe_version() call and got 10.0.0 as the reply. I
ain't using the gem though, just Thrift from Java.
/Justus
-----Original Message-----
From: Benjamin Black [mailto:b...@b3k.us]
Sent: 17 August 2010 08:37
To: user@cassandra.apache.org
Subject: Re: Cassandra gem
I'm testing
Then this may be the issue. I'll see if I can regenerate something
with 10.0.0 version tomorrow.
On Mon, Aug 16, 2010 at 11:45 PM, Thorvaldsson Justus
justus.thorvalds...@svenskaspel.se wrote:
Using beta, made a describe_version(), got 10.0.0 as reply, aint using gem
though, just thrift from
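The compatibility check being discussed comes down to comparing the major component of the server's describe_version() against the version the client was generated for. A minimal sketch of that idea (the function name and logic here are my own illustration, not the gem's actual code):

```python
# Sketch only: API versions are "major.minor.patch"; a major-version
# mismatch signals an incompatible Thrift API, so the client should bail.

def compatible(client_version: str, server_version: str) -> bool:
    """True when the major components match; minor/patch drift is tolerated."""
    return client_version.split(".")[0] == server_version.split(".")[0]

print(compatible("10.0.0", "10.0.0"))  # True: safe to proceed
print(compatible("11.0.0", "10.0.0"))  # False: client generated for 11.x
```

This is why a client built against 11.0.0 fails against a beta1 server reporting 10.0.0.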
I figured out some of it, but I'm stuck; I would really appreciate help
understanding how to use secondary indices.
Create a column family and define the secondary indices:
CfDef cdef = new CfDef();
cdef.setColumn_type(columntype);
cdef.setComment(comment);
cdef.setComparator_type(comparatortype);
// Hypothetical continuation: index a "state" column (KEYS index, 0.7 Thrift API)
ColumnDef cd = new ColumnDef(ByteBuffer.wrap("state".getBytes()), "UTF8Type");
cd.setIndex_type(IndexType.KEYS);
cdef.setColumn_metadata(Collections.singletonList(cd));
Hi All,
We have a strange issue here.
We have 10 nodes across 5 datacenters. Today I found a strange thing:
on one node, some deleted data came back after 8-9 days.
The data was saved on one node and retrieved/deleted on another node in a
remote datacenter. The CF is a super column family.
What is
Cassandra version is 0.6.3
On Aug 17, 2010, at 11:39 AM, Zhong Li wrote:
Hi All,
We have a strange issue here.
We have 10 nodes across 5 datacenters. Today I found a strange
thing: on one node, some deleted data came back after 8-9 days.
The data was saved on one node and retrieved/deleted on
We have 10 nodes across 5 datacenters. Today I found a strange thing: on
one node, some deleted data came back after 8-9 days.
The data was saved on one node and retrieved/deleted on another node in a remote
datacenter. The CF is a super column family.
What is possible causing this?
What is your GC
<GCGraceSeconds>864000</GCGraceSeconds>
That's the default, 10 days.
I checked every system.log; all nodes are connected. Not all the
time, but they reconnected after a few minutes. No node was
disconnected for more than GC grace seconds.
Best,
On Aug 17, 2010, at 11:53 AM, Peter Schuller
We are testing bulk data loads using Thrift. About 5% of operations
are failing with the following exception. It appears that it is not
getting any response (end of file) on the batch_mutate response. I'll
try to create a test case to demonstrate the behavior.
Caused by:
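While the root cause is being tracked down, one generic way to separate real errors from momentary overload or dropped connections during a bulk load is to retry failed batches with backoff. A minimal sketch (illustrative only; the helper names and the flaky stand-in are mine, not Thrift API):

```python
import time

# Retry a failed operation a few times with exponential backoff.
def with_retries(op, attempts: int = 3, base_delay: float = 0.0):
    for i in range(attempts):
        try:
            return op()
        except IOError:                       # e.g. "end of file" on the response
            if i == attempts - 1:
                raise                         # out of retries: surface the error
            time.sleep(base_delay * 2 ** i)   # back off before retrying

# Stand-in for a batch_mutate call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_batch_mutate():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("connection reset")
    return "ok"

result = with_retries(flaky_batch_mutate)
print(result)  # "ok" after two failed attempts
```

If a batch still fails after several retries, it is much more likely to be a genuine server-side problem worth a log capture.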
The videos of the cassandra summit are starting to be posted, just fyi for
those who were unable to make it out to SF.
http://www.riptano.com/blog/slides-and-videos-cassandra-summit-2010
What is the best way to move data between clusters? We currently have a 4-node
prod cluster with 80G of data and want to move it to a dev env with 3
nodes. We have plenty of disk. We were looking into nodetool snapshot, but it
looks like that won't work because of the system tables. sstable2json
If I set a key cache size of 100%, the way I understand how that works is:
- the cache is not write-through, but read-through
- a key gets added to the cache on the first read, if not already present
- the size of the cache will always increase for every item read. so if you
have 100mil items
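The read-through semantics described above can be sketched in a few lines (an illustration of the behavior, not Cassandra's implementation; the class and method names are mine):

```python
# Minimal read-through key cache: keys enter on first read, never on write.
class ReadThroughKeyCache:
    def __init__(self, backing_store):
        self.store = backing_store   # stand-in for the SSTable index lookup
        self.cache = {}              # unbounded here, i.e. "100%" capacity

    def write(self, key, value):
        self.store[key] = value      # writes bypass the cache entirely

    def read(self, key):
        if key not in self.cache:    # first read populates the cache
            self.cache[key] = self.store[key]
        return self.cache[key]

cache = ReadThroughKeyCache({})
cache.write("row1", "data")
assert "row1" not in cache.cache     # not cached by the write
cache.read("row1")
assert "row1" in cache.cache         # cached after the first read
```

So with a 100% setting the cache only grows as keys are actually read, but it never shrinks, which is the concern with 100M+ items.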
without answering your whole question, just fyi: there is a matching
json2sstable command for going the other direction.
On Tue, Aug 17, 2010 at 10:48 AM, Artie Copeland yeslinux@gmail.com wrote:
what is the best way to move data between clusters. we currently have a 4
node prod cluster
On Tue, Aug 17, 2010 at 1:55 PM, Artie Copeland yeslinux@gmail.com wrote:
if i set a key cache size of 100% the way i understand how that works is:
- the cache is not write through, but read through
- a key gets added to the cache on the first read if not already available
- the size of
On Tue, Aug 17, 2010 at 10:55 AM, Artie Copeland yeslinux@gmail.com wrote:
if i set a key cache size of 100% the way i understand how that works is:
- the cache is not write through, but read through
- a key gets added to the cache on the first read if not already available
- the size of
So when using Redis, how do you go about updating the index?
Do you serialize changes to the index, i.e. when someone votes, you then
update the index?
I'm a little confused as to how to go about updating a huge index.
Say you have 1 million stores, and you want to order by the top votes, how
would
Are there any errors in your server logs?
On Tue, Aug 17, 2010 at 11:46 AM, Andres March ama...@qualcomm.com wrote:
We are testing bulk data loads using thrift. About 5% of operations are
failing on the following exception. It appears that it is not getting any
response (end of file) on the
you can either use get_range_slices to scan through all your rows and
batch_mutate them into the 2nd cluster, or you can start a test
cluster with the same number of nodes as the live one and just scp
everything over, 1 to 1.
it's possible but highly error-prone to manually slice and dice data
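The scan-and-copy approach amounts to paging through the source by key range and batch-writing each page into the destination. A toy sketch of the paging loop (plain dicts standing in for the clusters; the helper is hypothetical, not the real Thrift calls, and note that with RandomPartitioner the real get_range_slices returns rows in token order, not key order):

```python
# Page through the source's rows and batch-write each page to the destination,
# resuming each page from the last key returned (like get_range_slices paging).
def copy_cluster(source: dict, dest: dict, page_size: int = 2) -> None:
    start = ""                       # empty start key = beginning of the range
    while True:
        # stand-in for get_range_slices(start_key=start, count=page_size)
        keys = sorted(k for k in source if k > start)[:page_size]
        if not keys:
            break                    # no more rows past the last key
        for k in keys:               # stand-in for batch_mutate on the target
            dest[k] = source[k]
        start = keys[-1]             # resume strictly after the last key seen

src = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}
dst = {}
copy_cluster(src, dst)
assert dst == src
```

The scp route avoids all of this paging logic, which is why it's attractive when the node counts match.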
It doesn't have to be disconnected more than GC grace seconds to cause
what you are seeing, it just has to be disconnected at all (thus
missing delete commands).
Thus you need to be running repair more often than GCGraceSeconds, or be
confident that read repair will handle it for you (which clearly is
not
On Tue, Aug 17, 2010 at 2:49 PM, Jonathan Ellis jbel...@gmail.com wrote:
It doesn't have to be disconnected more than GC grace seconds to cause
what you are seeing, it just has to be disconnected at all (thus
missing delete commands).
Thus you need to be running repair more often than
(gurus, please check my logic here... I'm trying to validate my
understanding of this situation.)
Isn't the issue that while a server was disconnected, a delete could have
occurred, and thus the disconnected server never got the 'tombstone'?
(http://wiki.apache.org/cassandra/DistributedDeletes)
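The resurrection scenario on that wiki page can be simulated in a few lines (a toy model for illustration, not Cassandra code; the replica dicts and day counters are my own stand-ins):

```python
# One replica misses the delete; once the others purge their tombstones after
# GCGraceSeconds, reconciliation only sees the stale live column and spreads
# the "deleted" data back.
GC_GRACE = 10  # days, matching the default of 864000 seconds

replica_a = {"col": ("deleted", 0)}   # tombstone written on day 0
replica_b = {"col": ("live", -5)}     # disconnected node: missed the delete

def compact(replica, today):
    """Drop tombstones older than GCGraceSeconds, as compaction does."""
    for key, (state, day) in list(replica.items()):
        if state == "deleted" and today - day > GC_GRACE:
            del replica[key]

compact(replica_a, today=11)          # day 11: tombstone purged on A
# Read repair now reconciles: only B's live column exists, so it wins.
if "col" not in replica_a and "col" in replica_b:
    replica_a["col"] = replica_b["col"]
assert replica_a["col"][0] == "live"  # the deleted data is back
```

Running repair before the tombstone's grace period expires would have pushed the tombstone to B instead, which is the fix being recommended.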
No errors in server logs. Let me know if you have any debug recommendations.
I'm just starting to set it up.
- Andres
From: Jonathan Ellis [jbel...@gmail.com]
Sent: Tuesday, August 17, 2010 12:44 PM
To: user@cassandra.apache.org
Subject: Re:
http://code.google.com/p/redis/wiki/SortedSets
On Tue, Aug 17, 2010 at 12:33 PM, S Ahmed sahmed1...@gmail.com wrote:
So when using Redis, how do you go about updating the index?
Do you serialize changes to the index i.e. when someone votes, you then
update the index?
Little confused as to how
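The reason sorted sets answer the "huge index" worry: each vote is a single incremental score update and the "top N" is a read over the scored members, so there is no bulk index rebuild. A rough sketch with plain Python standing in for Redis's ZINCRBY / ZREVRANGE (the function names here are mine):

```python
# Each vote increments one member's score; "top N" reads the scores sorted
# descending. Nothing is ever rebuilt wholesale.
votes = {}

def cast_vote(store_id: str) -> None:
    votes[store_id] = votes.get(store_id, 0) + 1   # like ZINCRBY votes 1 id

def top(n: int):
    # like ZREVRANGE votes 0 n-1: highest scores first
    return sorted(votes, key=votes.get, reverse=True)[:n]

for store in ["a", "b", "a", "c", "a", "b"]:
    cast_vote(store)
print(top(2))  # ['a', 'b']  (a has 3 votes, b has 2)
```

With a million stores, the per-vote cost stays one score update; Redis keeps the members ordered for you.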
I'm finding that once I add an index to a column family that I start getting
exceptions as I try to add rows to it. It works fine if I don't define the
column metadata. Any ideas what would cause this?
ERROR 12:44:21,477 Error in ThreadPoolExecutor
java.lang.RuntimeException:
On Tue, 2010-08-17 at 14:04 -0700, Ed Anuff wrote:
I'm finding that once I add an index to a column family that I start getting
exceptions as I try to add rows to it. It works fine if I don't define the
column metadata. Any ideas what would cause this?
ERROR 12:44:21,477 Error in
Yup, that's it, r986486 on Table.java made the problem go away, talk about
great timing :)
On Tue, Aug 17, 2010 at 2:38 PM, Eric Evans eev...@rackspace.com wrote:
On Tue, 2010-08-17 at 14:04 -0700, Ed Anuff wrote:
I'm finding that once I add an index to a column family that I start
Hi All
How performant is M/R on Cassandra when compared to running it on HDFS?
Anyone have any numbers they can share? Specifically, how much data was the
M/R job run against and what was the throughput, etc.? Any information
would be very helpful.
--
Cheers
Bill
Updated code is now in my master branch, with the reversion to 10.0.0.
Please let me know of further trouble.
b
On Tue, Aug 17, 2010 at 8:31 AM, Mark static.void@gmail.com wrote:
On 8/16/10 11:37 PM, Benjamin Black wrote:
I'm testing with the default cassandra.yaml.
I cannot
Those data were inserted on one node, then deleted on a remote node in
less than 2 seconds. So it is very possible some node lost a tombstone
when the connection was lost.
My question: can a ConsistencyLevel.ALL read retrieve a lost tombstone,
instead of running repair?
On Aug 17, 2010, at 4:11 PM, Ned
Hi,
We are going to use Cassandra for search purposes, like inbox search.
The read qps is very high, so we'd like to use ConsistencyLevel.ONE for
reads and disable read repair at the same time.
For read consistency in this condition, writes should use
ConsistencyLevel.ALL. But the
On Tue, Aug 17, 2010 at 10:55 PM, Chen Xinli chen.d...@gmail.com wrote:
Hi,
We are going to use Cassandra for search purposes, like inbox search.
The read qps is very high, so we'd like to use ConsistencyLevel.ONE for
reads and disable read repair at the same time.
For reading consistency
I'm using Cassandra 0.6.4; there's a configuration option
DoConsistencyChecksBoolean in storage-conf.xml.
Isn't that for read repair?
I will run a test with writes at QUORUM and reads at ONE to see if that meets our requirements.
2010/8/18 Edward Capriolo edlinuxg...@gmail.com
On Tue, Aug 17, 2010 at 10:55 PM,
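The arithmetic behind these choices is the general quorum-overlap rule (not a Cassandra API, just the math): a read is guaranteed to see the latest write when the replicas read plus the replicas written exceed the replication factor.

```python
# R + W > N guarantees the read set overlaps the most recent write set.
def consistent(r: int, w: int, n: int) -> bool:
    return r + w > n

N = 3  # example replication factor (an assumption for illustration)
print(consistent(r=1, w=N, n=N))   # True:  write ALL, read ONE
print(consistent(r=2, w=2, n=N))   # True:  QUORUM writes + QUORUM reads
print(consistent(r=1, w=2, n=N))   # False: QUORUM writes + ONE reads
```

So QUORUM writes with ONE reads do not, by themselves, guarantee consistent reads; that combination leans on read repair or anti-entropy repair to close the gap.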
On Tue, Aug 17, 2010 at 7:55 PM, Chen Xinli chen.d...@gmail.com wrote:
Hi,
We are going to use Cassandra for search purposes, like inbox search.
The read qps is very high, so we'd like to use ConsistencyLevel.ONE for
reads and disable read repair at the same time.
In 0.7 you can set a
On Tue, Aug 17, 2010 at 7:49 PM, Zhong Li z...@voxeo.com wrote:
Those data were inserted on one node, then deleted on a remote node in less
than 2 seconds. So it is very possible some node lost a tombstone when the
connection was lost.
My question: can a ConsistencyLevel.ALL read retrieve a lost tombstone