Hi Techy,
We are using Astyanax with Cassandra 1.2.4.
Benefits:
* It is very easy to configure and use.
* Good wiki
* Maintained by Netflix
* A solution for storing large files (more than 15 MB)
* A solution for reading all rows efficiently
Problems:
* It consumes more memory
2013/4/16
Thanks, Everton, for the suggestion. A couple of questions:
1) Does Astyanax client have any problem with previous version of Cassandra?
2) You said one problem is that it will consume more memory? Can you
elaborate on that slightly? What do you mean by that?
3) Does Astyanax support async capabilities?
1) Does Astyanax client have any problem with previous version of Cassandra?
We have used it with 1.1.8, but for that version we do not use the latest
version of Astyanax. I think the latest version of Astyanax will work with
Cassandra 1.2.*, though.
2) You said one problem, that it will consume more
Hi
We are using Cassandra 1.6 at this moment. We started to work with Hector
because it is the first recommendation you find in a simple Google search
for Cassandra Java clients.
We started using Hector, but when we began to have non-dynamic column
families, which can be managed using
You're right, it's probably hard. I should have provided more data.
I'm running Ubuntu 10.04 LTS with JNA installed. I believe this line in the
log indicates that JNA is working, please correct me if I'm wrong:
CLibrary.java (line 111) JNA mlockall successful
Total amount of RAM is 4GB.
My
We would like to map multiple keys to a single token in cassandra. I
believe this should be possible now with CASSANDRA-1034
Ex:
Key1 -- 123/IMAGE
Key2 -- 123/DOCUMENTS
Key3 -- 123/MULTIMEDIA
I would like all keys with 123 as prefix to be mapped to a single token.
Is this possible? What should
Hi,
I am getting an exception when I run Hadoop with Cassandra that follows:
WARN org.apache.hadoop.mapred.Child (main): Error running child
java.lang.RuntimeException: InvalidRequestException(why:Start key's token
sorts after end token)
at
I literally just replied to your Stack Overflow comment, then saw this email. I
need the whole stack trace. My guess is the ColFamily is configured for one
sort method where map/reduce is using another, or something similar when querying, but
that's just a guess.
Dean
From: Andre Tavares
Dean,
sorry, but I saw your comments on Stackoverflow (
http://stackoverflow.com/questions/16041727/operationtimeoutexception-cassandra-cluster-aws-emr)
just after I sent this message ...
and I think you may be right about the sort method, but Priam sets
Cassandra partitioner with
Hi,
When I am trying to insert the data into a table using Java with JDBC, I
am getting the error
InvalidRequestException(why:cannot parse 'Jo' as hex bytes)
My insert query is:
insert into temp(id,name,value,url_id) VALUES(108, 'Aa','Jo',10);
This insert query runs successfully
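If the value column's validator is BytesType (which the "cannot parse 'Jo' as hex bytes" error suggests), the literal has to be hex rather than a plain string. A minimal sketch of getting the hex form (Python here, purely illustrative):

```python
# A BytesType column expects a hex literal, not a raw string.
value = "Jo"
hex_literal = value.encode("ascii").hex()
print(hex_literal)  # 4a6f  ('J' = 0x4a, 'o' = 0x6f)
```

With that, the VALUES clause would carry '4a6f' instead of 'Jo', although as Aaron suggests later in the thread, fixing the driver/validator mismatch is the cleaner route.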
What's the stack trace you see? At the time, I was thinking column scan, not
row scan, as perhaps your code or Priam's code was doing a column slice within a
row set and the columns are sorted by Integer while Priam is passing in UTF8, or
vice-versa. I.e., do we know if this is a column sorting
Is cassandra-thrift-1.1.1.jar the generated code? I see a send() and recv(),
but I don't see a send(Callback cb) that is typical of true asynchronous
platforms. I.e., I obviously don't know when to call recv() myself if I am trying
to make Astyanax truly asynchronous.
The reason I ask is we have
Hello,
Can anyone provide any help on this?
Thanks in advance.
*Raihan Jamal*
On Tue, Apr 16, 2013 at 6:50 PM, Raihan Jamal jamalrai...@gmail.com wrote:
Hello,
I installed a single-node cluster on my local dev box, which is running
Windows 7, and it was working fine. Due to some reason,
On Tue, Apr 16, 2013 at 10:29 PM, Kuldeep Mishra
kuld.cs.mis...@gmail.com wrote:
cassandra 1.2.0
Is it a bug in 1.2.0 ?
While I can't speak to this specific issue, 1.2.0 has meaningful known
issues. I suggest upgrading to 1.2.3(/4) ASAP.
=Rob
That was our first thought. Using Maven's dependency tree info, we verified
that we're using the expected (cass 1.2.3) jars:
$ mvn dependency:tree | grep thrift
[INFO] | +- org.apache.thrift:libthrift:jar:0.7.0:compile
[INFO] | \- org.apache.cassandra:cassandra-thrift:jar:1.2.3:compile
I've
How many threads / processes do you have performing the writes?
How big are the mutations ?
Where are you measuring the latency ?
Look at the nodetool cfhistograms to see the time it takes for a single node to
perform a write.
Look at the nodetool proxyhistograms to see the end to end
It's the same as the Apache version, but DSC comes with samples and the free
version of Ops Centre.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 17/04/2013, at 6:36 PM, Francisco Trujillo
One node on the native binary protocol, AFAIK it's still considered beta in 1.2
Also +1 for Astyanax
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 17/04/2013, at 6:50 PM, Francisco Trujillo
INFO [ScheduledTasks:1] 2013-04-15 14:00:02,749 GCInspector.java (line 122)
GC for ParNew: 338798 ms for 1 collections, 592212416 used; max is 1046937600
This does not say that the heap is full.
ParNew is GC activity for the new heap, which is typically a smaller part of
the overall heap.
CASSANDRA-1034
That ticket is about removing an assumption which was not correct.
I would like all keys with 123 as prefix to be mapped to a single token.
Why?
It's neither possible nor desirable, IMHO. Tokens are used to identify a single
row internally.
Cheers
-
Aaron Morton
What version are you using ?
And what JDBC driver ?
Sounds like the driver is not converting the value to bytes for you.
I guess the problem may be because of undefined
key_validation_class, default_validation_class, comparator, etc.
If you are using CQL these are not relevant.
Cheers
Hello,
My test setup consist of two datacenters DC1 and DC2.
DC2 has an offset of 10, as you can see in the following ring command.
I have two questions:
1) Let's say in this case I insert a key at DC2 and its token is, let's
say 85070591730234615865843651857942052874, in this case will it
I have a working 3 node cluster in a single ec2 region and I need to hit it
from our datacenter. As you'd expect, the client gets the internal
addresses of the nodes back.
Someone on irc mentioned using the public IP for rpc and binding that
address to the box. I see that mentioned in an old list
Thanks, Aaron, for the suggestion. I am not sure I was able to understand
the "one node" thing you mentioned regarding the native binary protocol. Can
you please elaborate on that?
On Wed, Apr 17, 2013 at 11:21 AM, aaron morton aa...@thelastpickle.com wrote:
One node on the native binary protocol,
Is your Hadoop task supplying both a start and finish key for the slice? You
probably only want the start.
Provide the full call stack and the code in your hadoop task.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
Here's an example I did in python a long time ago
http://www.mail-archive.com/user@cassandra.apache.org/msg04775.html
Call send() then select on the file handle, when it's ready to read call
recv().
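A minimal sketch of that pattern, with a plain local socket pair standing in for the Thrift connection (illustrative only; names and buffer sizes are arbitrary):

```python
import select
import socket

# A local pair stands in for the client <-> server connection.
client, server = socket.socketpair()

client.sendall(b"request")            # analogous to thrift send()
server.sendall(server.recv(16))       # peer echoes a response back

# Select on the file handle; once it is readable, it is safe to recv().
readable, _, _ = select.select([client], [], [], 5.0)
response = client.recv(16) if readable else None
print(response)  # b'request'
```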
Or just add more threads on your side :)
Cheers
-
Aaron Morton
Freelance
On Wed, Apr 17, 2013 at 11:19 AM, aaron morton aa...@thelastpickle.com wrote:
It's the same as the Apache version, but DSC comes with samples and the
free version of Ops Centre.
DSE also comes with Solr special sauce and CDFS.
=Rob
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port
already in use: 7199; nested exception is:
java.net.BindException: Address already in use: JVM_Bind
The process is already running. Is it installed as a service, and was it
automatically started when the
On Wed, Apr 17, 2013 at 12:07 PM, maillis...@gmail.com wrote:
I have a working 3 node cluster in a single ec2 region and I need to hit
it from our datacenter. As you'd expect, the client gets the internal
addresses of the nodes back.
Someone on irc mentioned using the public IP for rpc and
Can you reproduce this in a simple way ?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/04/2013, at 5:50 AM, Lanny Ripple la...@spotright.com wrote:
That was our first thought. Using maven's dependency tree
1) Let’s say in this case I insert a key at DC2 and its token is, let’s
say 85070591730234615865843651857942052874, in this case will it be owned by
DC2 ? and then replicated on DC1 ? i.e. who owns it.
We don't think in terms of owning the token.
The token range in the local DC that
That was a typo; it should have been "One note on".
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/04/2013, at 7:23 AM, Techy Teck comptechge...@gmail.com wrote:
Thanks Aaron for the suggestion. I am not sure, I was able
Hi Team,
I have high write traffic to my Cassandra cluster. I experience a very
high number of pending compactions. As I expect higher writes, the pending
compactions keep increasing. Even when I stop my writes, it takes several
hours to finish the pending compactions.
My CF is configured
Three things:
1) compaction throughput is fairly low (yaml / nodetool)
2) concurrent compactions is fairly low (yaml)
3) multithreaded compaction might be off in your version
Try raising these things. Otherwise consider option 4.
4) $$$ RAID, RAM, CPU $$$
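For reference, the three knobs above map to settings roughly like these (a sketch; option names as of Cassandra 1.2, values purely illustrative):

```yaml
# cassandra.yaml -- illustrative values, tune against available disk headroom
compaction_throughput_mb_per_sec: 64   # default is 16; 0 disables throttling
concurrent_compactors: 4               # defaults to the number of cores
multithreaded_compaction: true         # off by default
```

Throughput can also be raised on a live node with nodetool setcompactionthroughput, no restart required.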
On Wed, Apr
:D
Jay, check if your disk(s) utilization allows you to change the
configuration the way Edward suggests. iostat -xkcd 1 will show you how much
of your disk(s) are in use.
On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
three things:
1) compaction throughput is
Finding the time to do so is slow going, but I'm working on that.
We do have another table that has one or sometimes two columns per row. We can
run jobs on it without issue. I looked through the org.apache.cassandra.hadoop
code and don't see anything that's really changed since 1.1.5 (which was
Depending on your client, disable automatic client discovery and just specify a
list of all your nodes in your client configuration.
For more details check out
http://xzheng.net/blogs/problem-when-connecting-to-cassandra-with-ruby/ ,
obviously this deals specifically with a ruby client but it
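Concretely, the "public IP for rpc" approach mentioned earlier amounts to something like this in cassandra.yaml (addresses are placeholders, and the public IP must actually be bound on the box; the EC2 multi-region snitch plus broadcast_address is the other common route):

```yaml
# cassandra.yaml -- illustrative addresses only
listen_address: 10.0.0.5      # private IP for inter-node traffic
rpc_address: 203.0.113.5      # public IP, so clients are handed a reachable address
```

Disabling discovery and listing the nodes statically, as suggested above, avoids touching the server config at all.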
I run Cassandra on a single Win 8 machine for development needs. Everything
has been working fine for several months, but just today I saw this error
message in the Cassandra logs: all host pools were marked down.
ERROR 08:40:42,684 Error occurred during processing of message.
I had a situation earlier where my shuffle failed after a hard disk drive
filled up. I went through and disabled shuffle on the machines while
trying to get the situation resolved. Now, while I can re-enable shuffle
on the machines, when trying to do an ls, I get a timeout.
Looking at the
On 04/18/2013 12:06 AM, aaron morton wrote:
What version are you using ?
And what JDBC driver ?
Sounds like the driver is not converting the value to bytes for you.
I guess the problem may be because of undefined
key_validation_class, default_validation_class, comparator, etc.
If you are using
Hi Aaron,
Thank you for your feedback. I have also installed DataStax OpsCenter,
and it shows no progress for the repair. Previously, every repair's
progress was shown in OpsCenter, and once it reached 100%, the repair
was also complete on the nodes. But now a repair is in progress on a
node, but OpsCenter