I'm running the latest lazyboy against apache-cassandra-incubating-0.4.0, with
python 2.6.2 on ubuntu 9.04. I have modified the cassandra file
storage-conf.xml to add the Keyspace that lazyboy wants:
<Keyspace Name="UserData">
  <ColumnFamily CompareWith="BytesType" Name="Users"/>
</Keyspace>
Hello all,
I am running into problems with get_key_range. I have
OrderPreservingPartitioner defined in storage-conf.xml and I am using
a columnfamily that looks like
<ColumnFamily CompareWith="BytesType"
  Name="DatastoreDeletionSchedule"/>
My command is
You should check the other nodes for potential exceptions keeping them
from replying.
Without seeing that it's hard to say if this is caused by an old bug,
but you should definitely upgrade to 0.4.1 either way :)
On Mon, Oct 19, 2009 at 5:51 PM, Ramzi Rabah rra...@playdom.com wrote:
Hello all,
Whenever I try to do a quorum read on a row with a particularly large
supercolumn with get_slice under high load, cassandra throws timeouts.
The reads for that row repeatedly fail until load decreases, but
smaller reads still succeed during that time. bin/nodeprobe info
shows that the read
I re-read my response, and in case it was unclear, I meant:
I applied the patch but wasn't able to reuse the old tables. The
patch seems to be working fine after I nuked the data though.
Edmond
On Mon, Oct 19, 2009 at 5:16 PM, Jonathan Ellis jbel...@gmail.com wrote:
Thanks for following up!
Usually I'm trying to read 500 columns (~250KB) out of the 30K columns
(~15MB) of the supercolumn. But the same issues happen when I drop
down to 100 (~50KB) columns. The columns I request from get_slice()
are named, i.e. I'm not reading 500 consecutive columns.
Edmond
On Mon, Oct 19, 2009 at
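A rough sanity check of the sizes quoted above: ~15MB spread across 30K columns implies roughly half a KB per column, which is consistent with a 500-column slice coming to ~250KB. A quick sketch of the arithmetic (all figures are from the thread; the per-column estimate is derived):

```python
# Back-of-the-envelope check of the column sizes quoted above.
KB = 1024

total_columns = 30000
total_bytes = 15 * KB * KB                 # ~15MB supercolumn
per_column = total_bytes / total_columns   # ~524 bytes per column

slice_columns = 500
slice_bytes = slice_columns * per_column   # ~256KB per 500-column read

print(round(per_column))   # roughly half a KB per column
```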
are there many rows like this?
did you check the logs on the other nodes for exceptions?
On Mon, Oct 19, 2009 at 7:40 PM, Edmond Lau edm...@ooyala.com wrote:
Usually I'm trying to read 500 columns (~250KB) out of the 30K columns
(~15MB) of the supercolumn. But the same issues happen when I
Hi Jonathan, I updated to 0.4.1 and I still get the same exception when I
call get_key_range.
I checked all the server logs, and there is only one exception being
thrown by whichever server I am connecting to.
Thanks
Ray
On Mon, Oct 19, 2009 at 4:52 PM, Jonathan Ellis jbel...@gmail.com wrote:
No,
On Mon, Oct 19, 2009 at 8:20 PM, Edmond Lau edm...@ooyala.com wrote:
On Mon, Oct 19, 2009 at 6:01 PM, Jonathan Ellis jbel...@gmail.com wrote:
are there many rows like this?
No - just a handful. I'm able to repro by just launching 5 or 6
threads that all read the same key.
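The repro described above (5 or 6 threads all reading the same key) can be sketched like this; `read_key` below is a hypothetical stand-in for the actual Thrift `get_slice` call against the cluster, not the real client API:

```python
# Minimal sketch of the repro: several threads hammering the same key
# concurrently. read_key is a placeholder for the real get_slice call.
import threading

results = []
lock = threading.Lock()

def read_key(key):
    # stand-in for client.get_slice(...) against the cluster
    return "value-for-" + key

def worker(key):
    value = read_key(key)
    with lock:
        results.append(value)

threads = [threading.Thread(target=worker, args=("hot-row",))
           for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # all 6 concurrent reads of the same key completed
```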
Does it work if
How many sstable files are in the data directories for the
columnfamily you are querying?
How many are there after you restart and it is happy?
Are you doing system stat collection with munin or ganglia or some such?
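One way to answer the sstable-count question above: in 0.4-era Cassandra each sstable is (to my understanding, an assumption here) a small set of files per column family named like `<CF>-<N>-Data.db` plus matching Index/Filter files, so counting the `-Data.db` files gives the sstable count. A sketch using a scratch directory with made-up file names:

```python
# Hedged sketch: count sstables for one column family by counting
# -Data.db files. Directory contents here are fabricated for illustration.
import os, tempfile

data_dir = tempfile.mkdtemp()  # stand-in for the real data directory
for name in ("DatastoreDeletionSchedule-1-Data.db",
             "DatastoreDeletionSchedule-1-Index.db",
             "DatastoreDeletionSchedule-2-Data.db"):
    open(os.path.join(data_dir, name), "w").close()

sstables = [f for f in os.listdir(data_dir)
            if f.startswith("DatastoreDeletionSchedule-")
            and f.endswith("-Data.db")]
print(len(sstables))  # 2 sstables for this column family
```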
On Mon, Oct 19, 2009 at 8:25 PM, Ramzi Rabah rra...@playdom.com wrote:
Hi
Hey,
I think I've fixed these before, let me know how the attached patch works
for you. The first issue, which caused the second True, is that it's
not undirtying itself correctly, and there's also a bug in
examples/record.py which makes it not run because the variable key
doesn't exist.
Comments inline.
On Mon, Oct 19, 2009 at 6:33 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Mon, Oct 19, 2009 at 8:20 PM, Edmond Lau edm...@ooyala.com wrote:
On Mon, Oct 19, 2009 at 6:01 PM, Jonathan Ellis jbel...@gmail.com wrote:
are there many rows like this?
No - just a handful. I'm
Hi Jonathan
I actually spoke too early. Now even if I restart the servers it still
gives a timeout exception.
As far as the sstable files go, I'm not sure which ones are the sstables,
but here is the list of files in the data directory that are prefixed
with the column family name:
Can you tell me anything about the nature of your rows? Many/few
columns? Large/small column values?
On Mon, Oct 19, 2009 at 9:17 PM, Ramzi Rabah rra...@playdom.com wrote:
Hi Jonathan
I actually spoke too early. Now even if I restart the servers it still
gives a timeout exception.
As far as
The rows are very small. There are a handful of columns per row
(about 4-5).
Each column has a name which is a String (20-30 characters long), and
the value is an empty array of bytes (new byte[0]).
I just use the names of the columns, and don't need to store any
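The row layout described above (a few 20-30 character column names, each with an empty byte-array value) effectively uses the columns as a small set of names. A sketch with made-up column names:

```python
# Sketch of the row shape described above: ~4-5 columns per row whose
# names carry all the information, with empty values (new byte[0] in
# the Java client). The names below are hypothetical.
row = {
    "deletion-schedule-entry-001": b"",
    "deletion-schedule-entry-002": b"",
    "deletion-schedule-entry-003": b"",
    "deletion-schedule-entry-004": b"",
}

# Only the names matter; the row behaves like a small set of keys.
scheduled = sorted(row.keys())
print(len(scheduled))  # 4 columns, all with empty values
```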
That's really strange... Can you reproduce on a single-node cluster?
On Mon, Oct 19, 2009 at 9:34 PM, Ramzi Rabah rra...@playdom.com wrote:
The rows are very small. There are a handful of columns per row
(about 4-5).
Each column has a name which is a String
On Mon, Oct 19, 2009 at 9:48 PM, Edmond Lau edm...@ooyala.com wrote:
8 data files total. 3 nodes have 1, 1 has 2, the 3rd has 3.
Does it still take ~8s if you direct a CL.ONE query at one of the
nodes you know has the data (i.e., a local read)?
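The contrast above comes down to how many replicas must answer: a CL.ONE read can be served by the first (often local) replica, while a quorum read blocks until a majority of replicas respond, so one slow or overloaded replica stalls it. The usual majority arithmetic:

```python
# Quorum size for a given replication factor: floor(RF/2) + 1 replicas
# must answer before a quorum read returns; CL.ONE needs only one.
def quorum(replication_factor):
    return replication_factor // 2 + 1

for rf in (1, 2, 3, 4, 5):
    print(rf, quorum(rf))
# e.g. RF=3 needs 2 replicas to answer; RF=4 needs 3
```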
Local reads return quickly, but if you look at
So my cluster has 4 nodes node6, node8, node9 and node10. I turned
them all off.
1- I started node6 by itself and still got the problem.
2- I started node8 by itself and it ran fine (returned no keys)
3- I started node9 by itself and still got the problem.
4- I started node10 by itself and still
Is the data on 6, 9, or 10 small enough that you could tar.gz it up
for me to use to reproduce over here?
On Mon, Oct 19, 2009 at 10:17 PM, Ramzi Rabah rra...@playdom.com wrote:
So my cluster has 4 nodes node6, node8, node9 and node10. I turned
them all off.
1- I started node6 by itself and
Patch works great, thanks!
Now that record.py is working, I tried running the other example file view.py
and found multiple problems - incorrect names and typos:
- UserRecord class doesn't exist - rename to User to match record.py
- _usrs variable is a typo, change to _users
- Digg keyspace name is
Hi Jonathan the data is about 60 MB. Would you like me to send it to you?
On Mon, Oct 19, 2009 at 8:20 PM, Jonathan Ellis jbel...@gmail.com wrote:
Is the data on 6, 9, or 10 small enough that you could tar.gz it up
for me to use to reproduce over here?
On Mon, Oct 19, 2009 at 10:17 PM, Ramzi
Yes, please. You'll probably have to use something like
http://www.getdropbox.com/ if you don't have a public web server to
stash it temporarily.
On Mon, Oct 19, 2009 at 10:28 PM, Ramzi Rabah rra...@playdom.com wrote:
Hi Jonathan the data is about 60 MB. Would you like me to send it to you?
Hi Jonathan:
Here is the storage-conf.xml for one of the servers
http://email.slicezero.com/storage-conf.xml
and here is the zipped data:
http://email.slicezero.com/datastoreDeletion.tgz
Thanks
Ray
On Mon, Oct 19, 2009 at 8:30 PM, Jonathan Ellis jbel...@gmail.com wrote:
Yes, please.
Got it. I will have a look tomorrow.
On Mon, Oct 19, 2009 at 10:45 PM, Ramzi Rabah rra...@playdom.com wrote:
Hi Jonathan:
Here is the storage-conf.xml for one of the servers
http://email.slicezero.com/storage-conf.xml
and here is the zipped data: