sqlContext.setConf("host","localhost")
sqlContext.setConf("port","9043")
Thanks
Andy
From: Saurabh Bajaj <bajaj.onl...@gmail.com>
Date: Tuesday, March 8, 2016 at 9:13 PM
To: Andrew Davidson <a...@santacruzintegration.com>
Hi Ted
I believe by default Cassandra listens on 9042.
From: Ted Yu <yuzhih...@gmail.com>
Date: Tuesday, March 8, 2016 at 6:11 PM
To: Andrew Davidson <a...@santacruzintegration.com>
Cc: "user @spark" <user@spark.apache.org>
Subject: Re: pyspark spark-cassandra-connector java.io.IOException:
Failed to open native connection to Cassandra at {192.168.1.126}:9042
Have you contacted spark-cassandra-connector related mailing list ?
I wonder where the port 9042 came from.
Cheers
On Tue, Mar 8, 2016 at 6:02 PM, Andy Davidson wrote:
I am using spark-1.6.0-bin-hadoop2.6. I am trying to write a Python notebook
that reads a data frame from Cassandra.
I connect to Cassandra using an SSH tunnel running on port 9043. CQLSH works,
however I cannot figure out how to configure my notebook. I have tried
various hacks; any idea what I am doing wrong?
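A minimal sketch of the kind of notebook configuration being attempted is below. It assumes the DataStax spark-cassandra-connector package is on the driver's classpath and uses the connector's spark.cassandra.connection.* properties; the keyspace and table names are hypothetical.

```python
# Sketch: point the spark-cassandra-connector at an SSH tunnel on
# localhost:9043 instead of the default native-protocol port 9042.
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = (SparkConf()
        .setAppName("cassandra-notebook")
        .set("spark.cassandra.connection.host", "localhost")
        .set("spark.cassandra.connection.port", "9043"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# Read a Cassandra table as a DataFrame (keyspace/table are hypothetical).
df = (sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="my_keyspace", table="my_table")
      .load())
```

Note that the connector discovers the remaining cluster nodes from the contact point, so a single local tunnel may still fail when the driver tries to reach the peers' advertised native addresses directly.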
Hi,
I'm trying to connect to Cassandra through PySpark using the
spark-cassandra-connector from datastax based on the work of Mike
Sukmanowsky.
I can use Spark and Cassandra through the datastax connector in Scala just
fine. Where things fail in PySpark is that an exception is raised.
In the meanwhile, I've basically updated the cassandra_inputformat.py and
cassandra_outputformat.py examples that come with Spark:
https://github.com/Parsely/pyspark-cassandra
The new examples show reading and writing to Cassandra, including proper
handling of CQL 3.1 collections: lists, sets, and maps. I think it also
clarifies the format RDDs are required to be in to write data.
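As an illustration of the write path in the style of Spark's cassandra_outputformat.py example, a sketch follows; the keyspace, table, and column names here are hypothetical, and the Hadoop property names are those used by the Spark 1.x example.

```python
# Sketch: write (key, value) pairs to a CQL 3 table via the Hadoop
# CqlOutputFormat, mirroring examples/src/main/python/cassandra_outputformat.py.
# Keyspace "test_ks", table "users", and the columns are hypothetical.
conf = {
    "mapreduce.outputformat.class":
        "org.apache.cassandra.hadoop.cql3.CqlOutputFormat",
    "mapreduce.job.output.key.class": "java.util.Map",
    "mapreduce.job.output.value.class": "java.util.List",
    "cassandra.output.thrift.address": "localhost",
    "cassandra.output.thrift.port": "9160",
    "cassandra.output.keyspace": "test_ks",
    "cassandra.output.partitioner.class": "Murmur3Partitioner",
    "cassandra.output.cql": "UPDATE test_ks.users SET fname = ?, lname = ?",
    "mapreduce.output.basename": "users",
}
key = {"user_id": 1}                       # partition-key columns as a dict
value = ["Anna", "Smith"]                  # bound values for the CQL UPDATE
sc.parallelize([(key, value)]).saveAsNewAPIHadoopDataset(
    conf=conf,
    keyConverter="org.apache.spark.examples.pythonconverters."
                 "ToCassandraCQLKeyConverter",
    valueConverter="org.apache.spark.examples.pythonconverters."
                   "ToCassandraCQLValueConverter")
```

The RDD must be in (dict-of-key-columns, list-of-bound-values) form, which is the shape the updated examples aim to clarify.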
Hi,
I am trying to evaluate different options for Spark + Cassandra and I have a
couple of additional questions.
My aim is to use Cassandra only, without Hadoop:
1) Is it possible to use only Cassandra as the input/output source for
PySpark?
2) In case I'll use Spark (Java, Scala), is it possible to
Thanks for the clarification, Yadid. By Hadoop jobs, I meant Spark jobs
that use Hadoop inputformats (as shown in the cassandra_inputformat.py
example).
A future possibility of accessing Cassandra from PySpark is when SparkSQL
supports Cassandra as a data source.
On Wed, Sep 10, 2014 at 11:37
Hi All,
Is it possible to have Cassandra as input data for PySpark? I found an
example for Java -
http://java.dzone.com/articles/sparkcassandra-stack-perform?page=0,0 - and I
am looking for something similar for Python.
Thanks
Oleg.
In Spark 1.1, it is possible to read from Cassandra using Hadoop jobs. See
examples/src/main/python/cassandra_inputformat.py for an example. You may
need to write your own key/value converters.
On Tue, Sep 2, 2014 at 11:10 AM, Oleg Ruchovets oruchov...@gmail.com
wrote:
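The cassandra_inputformat.py example referenced above follows roughly this shape (a sketch; the keyspace and column-family names are hypothetical, and the Thrift port 9160 is the Hadoop input format's default):

```python
# Sketch: read a CQL 3 table as an RDD via the Hadoop CqlPagingInputFormat,
# mirroring examples/src/main/python/cassandra_inputformat.py.
# Keyspace "test_ks" and column family "users" are hypothetical.
conf = {
    "cassandra.input.thrift.address": "localhost",
    "cassandra.input.thrift.port": "9160",
    "cassandra.input.keyspace": "test_ks",
    "cassandra.input.columnfamily": "users",
    "cassandra.input.partitioner.class": "Murmur3Partitioner",
    "cassandra.input.page.row.size": "3",
}
rdd = sc.newAPIHadoopRDD(
    "org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat",
    "java.util.Map",
    "java.util.Map",
    keyConverter="org.apache.spark.examples.pythonconverters."
                 "CassandraCQLKeyConverter",
    valueConverter="org.apache.spark.examples.pythonconverters."
                   "CassandraCQLValueConverter",
    conf=conf)
# Each element is a (key-columns dict, value-columns dict) pair.
```

For a schema beyond simple key/value maps, custom key/value converters would be needed, as the reply notes.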