ib/py4j-0.8.2.1-src.zip:/data/hadoop_local/usercache/hdptest/filecache/38/spark-assembly-1.4.1-hadoop2.5.2.jar:pyspark.zip:py4j-0.8.2.1-src.zip
It seems that PYSPARK_PYTHON doesn't take effect in the Spark workers. Can someone please
help me solve it?
Thanks~
guoqing0...@yahoo.com.hk
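On YARN the executors launch their Python workers from their own environment, so exporting PYSPARK_PYTHON on the client machine is not enough; it has to reach the executor (and, in yarn-cluster mode, the application master) environment. A minimal Scala sketch, assuming a hypothetical python path (PySpark's own SparkConf exposes the same calls):

import org.apache.spark.SparkConf

// Assumed interpreter path -- adjust to whatever exists on the NodeManagers.
val conf = new SparkConf()
  // driver side when running in yarn-cluster mode
  .set("spark.yarn.appMasterEnv.PYSPARK_PYTHON", "/usr/local/bin/python2.7")
  // becomes spark.executorEnv.PYSPARK_PYTHON, read by the executors
  .setExecutorEnv("PYSPARK_PYTHON", "/usr/local/bin/python2.7")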
From: Josh Rosen
Date: 2015-09-17 11:42
To: guoqing0...@yahoo.com.hk
CC: Ted Yu; user
Subject: Re: Re: Table is modified by DataFrameWriter
What are your JDBC properties configured to? Do you have overwrite mode enabled?
On Wed, Sep 16, 2015 at 7:39 PM, guoqing0
Spark-1.4.1
From: Ted Yu
Date: 2015-09-17 10:29
To: guoqing0...@yahoo.com.hk
CC: user
Subject: Re: Table is modified by DataFrameWriter
Can you tell us which release you were using?
Thanks
On Sep 16, 2015, at 7:11 PM, "guoqing0...@yahoo.com.hk"
wrote:
Hi all,
I found that the table structure was modified when using DataFrameWriter.jdbc to save
the content of a DataFrame:
sqlContext.sql("select '2015-09-17',count(1) from
test").write.jdbc(url,test,properties)
Table structure before saving:
app_key text
t_amount bigint(20)
After saving:
_c0 text
_c1 bigint(20)
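The _c0/_c1 names are the auto-generated aliases for the unaliased select expressions; in overwrite mode DataFrameWriter.jdbc drops and recreates the target table with the DataFrame's schema, so those generated names replace the original columns. A hedged sketch of one way to keep the structure, aliasing the expressions and appending instead of overwriting (quoting "test" as the literal table name is an assumption; url and properties are as in the thread):

import org.apache.spark.sql.SaveMode

// Alias the expressions so the written schema matches the existing table.
val df = sqlContext.sql(
  "select '2015-09-17' as app_key, count(1) as t_amount from test")
// Append rows instead of letting overwrite mode recreate the table.
df.write.mode(SaveMode.Append).jdbc(url, "test", properties)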
Thank you very much! When will Spark 1.5.1 come out?
guoqing0...@yahoo.com.hk
From: Yin Huai
Date: 2015-09-12 04:49
To: guoqing0...@yahoo.com.hk
CC: user
Subject: Re: java.util.NoSuchElementException: key not found
Looks like you hit https://issues.apache.org/jira/browse/SPARK-10422; it is fixed in the upcoming 1.5.1 release.
(RDD.scala:262)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
guoqing0...@yahoo.com.hk
Hi all ,
Does the DataFrame support the insert operation, like sqlContext.sql("insert
into table1 xxx select xxx from table2")?
guoqing0...@yahoo.com.hk
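For what it's worth, the same insert can also be expressed through the DataFrame API; a minimal sketch using DataFrameWriter.insertInto (available in 1.4.x), with table1/table2/xxx kept as the placeholders from the question:

// Select from the source table and insert into the existing target table.
sqlContext.table("table2")
  .select("xxx")
  .write
  .insertInto("table1")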
Hi,
I got an error when running Spark Streaming, as below.
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invok
Thanks very much. Which versions will be supported in the upcoming 1.4? I hope it
will support more versions.
guoqing0...@yahoo.com.hk
From: Cheng, Hao
Date: 2015-05-21 11:20
To: Ted Yu; guoqing0...@yahoo.com.hk
CC: user
Subject: RE: Spark build with Hive
Yes, ONLY Hive 0.12.0 and 0.13.1 are supported. For Hadoop 2.4.X with Hive 12 support:
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-0.12.0
-Phive-thriftserver -DskipTests clean package
guoqing0...@yahoo.com.hk
Between Hive on Spark and Spark SQL, which should be better, and what are the key
characteristics, advantages, and disadvantages of each?
guoqing0...@yahoo.com.hk
Assume that I have several machines with 8 cores each: 1 core per worker with 8 workers,
or 8 cores per worker with 1 worker, which one is better?
Appreciate your help, it works. I'm curious why the enclosing class cannot be
serialized; does it need to extend java.io.Serializable? If the object is
never serialized, how does it work in the task? Is there any association with
spark.closure.serializer?
guoqing0...@yahoo.com.hk
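A hedged sketch of the capture problem, with illustrative class and method names (not the thread's actual code): calling a method of the enclosing class inside the lambda makes the closure capture `this`, so the closure serializer (spark.closure.serializer, JavaSerializer by default) has to serialize the whole instance with each task.

import org.apache.spark.rdd.RDD

class BadParser { // not Serializable => "Task not serializable"
  def tag(s: String): String = s.toUpperCase
  def run(rdd: RDD[String]): RDD[String] = rdd.map(s => tag(s)) // captures `this`
}

// Fix 1: make the enclosing class Serializable, so the instance can ship.
class SerializableParser extends Serializable {
  def tag(s: String): String = s.toUpperCase
  def run(rdd: RDD[String]): RDD[String] = rdd.map(s => tag(s))
}

// Fix 2: move the logic into an object; objects are initialized per JVM,
// so nothing from the driver has to be serialized at all.
object Tagger {
  def tag(s: String): String = s.toUpperCase
}
class ObjectBasedRunner {
  def run(rdd: RDD[String]): RDD[String] = rdd.map(Tagger.tag) // no `this` captured
}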
From
Thank you for your pointers, they are very helpful to me. In this scenario, how
can I use the implicit def in the enclosing class?
From: Tathagata Das
Date: 2015-04-30 07:00
To: guoqing0...@yahoo.com.hk
CC: user
Subject: Re: implicit function in SparkStreaming
I believe that the implicit def is
1))
  else if (message.length == 1) {
    (message(0), "")
  }
  else
    ("", "")
}

def filter(stream: DStream[String]): DStream[String] = {
  stream.filter(s => {
    (s._1 == "Action" && s._2 == "TRUE")
  })
Could you please give me some pointers? Thank you.
guoqing0...@yahoo.com.hk
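A hedged reconstruction of the pattern the thread seems to discuss (the parsing logic follows the fragments above; object and function names are illustrative): an implicit view from the raw line to a (key, value) pair is what lets s._1 / s._2 compile inside a DStream[String] filter, and defining it in a top-level object instead of the enclosing class keeps the closure from dragging in a non-serializable outer instance.

import org.apache.spark.streaming.dstream.DStream

object LogParsing {
  // Implicit view: a comma-separated line becomes a (key, value) pair.
  implicit def lineToPair(line: String): (String, String) = {
    val message = line.split(",")
    if (message.length >= 2) (message(0), message(1))
    else if (message.length == 1) (message(0), "")
    else ("", "")
  }
}

import LogParsing._ // bring the implicit into scope at the call site

def filter(stream: DStream[String]): DStream[String] = {
  // s is a String; s._1 / s._2 go through the implicit conversion above.
  stream.filter(s => s._1 == "Action" && s._2 == "TRUE")
}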
Thank you very much for your suggestion.
Regards,
From: madhu phatak
Date: 2015-04-24 13:06
To: guoqing0...@yahoo.com.hk
CC: user
Subject: Re: Is the Spark-1.3.1 support build with scala 2.8 ?
Hi,
AFAIK it's only built with 2.10 and 2.11. You should integrate
kafka_2.10.0-0.8.0 to ma
Does Spark-1.3.1 support building with Scala 2.8? Can it be integrated
with kafka_2.8.0-0.8.0 if built with Scala 2.10?
Thanks.
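A minimal build.sbt sketch for the suggested pairing (version numbers are assumptions): %% appends the project's Scala binary version to the artifact name, so a Scala 2.10 build resolves Kafka client jars compiled for 2.10; the Scala version baked into the broker artifact (kafka_2.8.0-0.8.0) does not need to match the client side.

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming"       % "1.3.1" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.3.1" // pulls kafka _2.10 clients
)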
:40
To: guoqing0...@yahoo.com.hk
CC: user
Subject: Re: problem with spark thrift server
Hi
What do you mean by "disable the driver"? What are you trying to achieve?
Thanks
Arush
On Thu, Apr 23, 2015 at 12:29 PM, guoqing0...@yahoo.com.hk
wrote:
Hi,
I have a question about the spark thrift server, i
Hi all,
My understanding of this problem is that SQLConf will be overwritten by the
Hive config during the initialization phase, when setConf(key: String, value: String)
is called for the first time (as in the code snippets below), so it behaves correctly
afterwards. I'm not sure whether this is right; any points are welcome.
Hi,
I have a question about the spark thrift server. I deployed Spark on YARN and
found that if the Spark driver is disabled, the Spark application will crash on
YARN. I appreciate any suggestions and ideas.
Thank you!
Hi all,
I am a beginner with Spark and have some problems. I deployed Spark on YARN using
"start-thriftserver.sh --master yarn", which should be yarn-client mode, and I
found a SPOF in the driver process (SparkSubmit): the SparkSQL application on
YARN will crash if the Spark driver goes down, so ho