Hello, everybody!
I'm thinking about running Hadoop jobs on top of the Cassandra
cluster. My understanding is that Hadoop jobs read data from local nodes
only. Does that mean the consistency level is always ONE?
Thank you,
Andrey
Why don't you look into Brisk:
http://www.datastax.com/docs/0.8/brisk/about_brisk
On Thu, Oct 18, 2012 at 2:46 PM, Andrey Ilinykh ailin...@gmail.com wrote:
A recent thread made it sound like Brisk is no longer a DataStax-supported
thing (it's DataStax Enterprise, or DSE, now):
http://www.mail-archive.com/user@cassandra.apache.org/msg24921.html
In particular this response:
http://www.mail-archive.com/user@cassandra.apache.org/msg25061.html
On Thu, Oct 18, 2012 at 12:00 PM, Michael Kjellman
mkjell...@barracuda.com wrote:
Unless you have Brisk (as far as I know there was one fork that got it
working on 1.0, but nothing for 1.1, and it is not being actively maintained
by DataStax) or go with CFS (which comes with DSE) you are not
Well there is *some* data locality, it's just not guaranteed. My
understanding (and someone correct me if I'm wrong) is that
ColumnFamilyInputFormat implements InputSplit and the getLocations()
method.
http://hadoop.apache.org/docs/mapreduce/current/api/org/apache/hadoop/mapre
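To illustrate the point above, here is a minimal, self-contained sketch (the class name and fields are mine, not the real Hadoop or Cassandra types) of how a split can advertise the replica nodes holding its token range, which is what ColumnFamilyInputFormat's splits do through InputSplit's getLocations() method:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a Cassandra-aware split; no Hadoop dependency.
class TokenRangeSplit {
    private final long startToken;
    private final long endToken;
    private final List<String> replicaHosts; // nodes owning this token range

    TokenRangeSplit(long startToken, long endToken, List<String> replicaHosts) {
        this.startToken = startToken;
        this.endToken = endToken;
        this.replicaHosts = replicaHosts;
    }

    // Analogous to InputSplit.getLength(): a rough size hint for scheduling.
    long getLength() {
        return endToken - startToken;
    }

    // Analogous to InputSplit.getLocations(): hosts where the data is local.
    // The scheduler treats these as a placement preference, not a guarantee,
    // which is why locality is "some" rather than certain.
    String[] getLocations() {
        return replicaHosts.toArray(new String[0]);
    }
}
```

If the JobTracker can't get a slot on any of the returned hosts, the task still runs, just remotely; that is the gap between "some" locality and guaranteed locality.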
Not sure I understand your question (if there is one..)
You are more than welcome to do CL ONE, and assuming you have Hadoop nodes
in the right places on your ring, things could work out very nicely. If you
need to guarantee that you have all the data in your job, then you'll need
to use QUORUM.
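The advice above follows from the standard Dynamo-style consistency arithmetic: with replication factor N, a read at consistency level R is guaranteed to overlap the most recent write at level W whenever R + W > N, and QUORUM is floor(N/2) + 1. A small sketch of that arithmetic (my helper names, not a Cassandra API):

```java
// Consistency-level arithmetic for a cluster with the given replication factor.
class ConsistencyMath {
    // QUORUM = floor(replicationFactor / 2) + 1
    static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // A read is guaranteed to see the latest write when the read and write
    // replica sets must intersect: R + W > N.
    static boolean readSeesLatestWrite(int readReplicas, int writeReplicas,
                                       int replicationFactor) {
        return readReplicas + writeReplicas > replicationFactor;
    }
}
```

So with RF=3 and writes at QUORUM (2 replicas), reads at ONE (1 replica) give 1 + 2 = 3, which is not greater than 3: a CL ONE read can miss the latest write, which is why QUORUM reads are needed for the guarantee.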
I believe that reading with CL.ONE will still cause read repair to be run
(in the background) 'read_repair_chance' of the time.
-Bryan
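The semantics Bryan describes can be sketched as a small simulation (a toy model, not Cassandra code): each CL.ONE read independently triggers a background read repair with probability read_repair_chance.

```java
import java.util.Random;

// Toy model of read_repair_chance: each read rolls the dice independently.
class ReadRepairSim {
    static int repairsTriggered(int reads, double readRepairChance, long seed) {
        Random rng = new Random(seed);
        int repairs = 0;
        for (int i = 0; i < reads; i++) {
            if (rng.nextDouble() < readRepairChance) {
                repairs++; // background check against the other replicas
            }
        }
        return repairs;
    }
}
```

With the default chance of 0.1, roughly one read in ten also queries the other replicas in the background and repairs any stale ones, so CL.ONE reads do gradually converge the replicas even though each individual read only waits for one.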