2015, Addanki, Santosh Kumar <santosh.kumar.adda...@sap.com> wrote:
Hi Colleagues,
We need to call a Scala class from PySpark in an IPython notebook.
We tried something like this:
from py4j.java_gateway import java_import
java_import(sparkContext._jvm, '')
myScalaClass = sparkContext._jvm.SimpleScalaClass()
myScalaClass.sayHello("World") works fine.
But
When
Hi Colleagues,
Currently we have implemented the External Data Source API and are able to push down filters and projections.
Could you provide some info on how joins could also be pushed down to the original data source if both data sources are from the same database?
Briefly looked at DataSourceStra
Hi,
We implemented an external data source by extending TableScan and added the classes to the classpath.
The data source works fine when run in the Spark shell.
But currently we are unable to use this same data source in the Python environment.
So when we execute the following below in an IPython
Hi,
When we try to call saveAsParquetFile on a SchemaRDD we get the following error:
Py4JJavaError: An error occurred while calling o384.saveAsParquetFile.
: java.lang.NoClassDefFoundError:
org/apache/hadoop/mapreduce/lib/output/DirectFileOutputCommitter
at
org.apache.spark.sql.parqu
Hi,
I have a SchemaRDD created like below:
schemaTransactions = sqlContext.applySchema(transactions, schema)
When I try to save the SchemaRDD as a table using:
schemaTransactions.saveAsTable("transactions")
I get the error below:
Py4JJavaError: An error occurred while calling o70.saveAsTabl
Hi,
We are currently using the MapR distribution.
To read files from the file system we specify:
test = sc.textFile("mapr/mycluster/user/mapr/test.csv")
This works fine from the Spark context.
But ...
Currently we are trying to create a table in Hive using the HiveContext from Spark, and we get the error:
No FileSystem for scheme: maprfs
Best Regards
Santosh
From: Vladimir Rodionov [mailto:vrodio...@splicemachine.com]
Sent: Wednesday, October 01, 2014 3:59 PM
To: Addanki, Santosh Kumar
Cc: user@spark.apache.org
Subject: Re: Spark And Mapr
There is doc on MapR:
http://doc.mapr.com/display/
Hi,
We were using Hortonworks 2.4.1 as our Hadoop distribution and have now switched to MapR.
Previously, to read a text file we would use:
test = sc.textFile("hdfs://10.48.101.111:8020/user/hdfs/test")
What would be the equivalent of the same for MapR?
Best Regards
Santosh
@gmail.com]
Sent: Wednesday, September 17, 2014 10:14 PM
To: user@spark.apache.org; Addanki, Santosh Kumar
Subject: Re: SchemaRDD and RegisterAsTable
The registered table is stored within the Spark context itself. To make the table available to the Thrift server, you can save the
Hi,
We built Spark 1.1.0 with Maven using:
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive clean package
The Thrift server has been configured to use the Hive metastore.
When a SchemaRDD is registered as a table, where does the metadata of this table get stored? Can it be stor
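A sketch of the distinction the reply above draws, under the assumption of a Spark 1.x build with -Phive and an existing HiveContext named hiveCtx (hypothetical names):

```python
# registerTempTable records the table only in the context's in-memory
# catalog: it vanishes with the context and is invisible to the Thrift
# server. saveAsTable through a HiveContext writes the table's metadata
# into the Hive metastore, which the Thrift server reads.
schemaRDD = hiveCtx.sql("SELECT 1 AS id")
schemaRDD.registerTempTable("session_only")  # in-memory catalog only
schemaRDD.saveAsTable("persisted")           # metadata in the Hive metastore
```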