Can I enable Spark to use the dfs.client.read.shortcircuit property to improve
performance and read natively on local nodes instead of going through the HDFS API?
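Spark forwards any spark.hadoop.* configuration to its Hadoop client, so one way to try this is a sketch along these lines (the socket path and jar name are placeholders, and short-circuit reads must also be enabled on the DataNodes themselves):

```shell
# Sketch: pass HDFS short-circuit read settings through Spark's
# spark.hadoop.* prefix. Socket path and jar name are placeholders.
spark-submit \
  --conf spark.hadoop.dfs.client.read.shortcircuit=true \
  --conf spark.hadoop.dfs.domain.socket.path=/var/lib/hadoop-hdfs/dn_socket \
  my-app.jar
```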
From: Akhil Das [ak...@sigmoidanalytics.com]
Sent: Monday, October 06, 2014 1:20 PM
To: Jahagirdar, Madhu
Cc: user
Subject: Re: Dstream Transformations
AFAIK Spark doesn't restart worker nodes itself. You can have multiple worker
nodes, and in that case, if one worker node goes down, then Spark
To: Jahagirdar, Madhu
Cc: Akhil Das; user
Subject: Re: Dstream Transformations
From the Spark Streaming Programming Guide
(http://spark.apache.org/docs/latest/streaming-programming-guide.html#failure-of-a-worker-node):
...output operations (like foreachRDD) have at-least once semantics
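Because at-least-once semantics mean an output operation may run more than once on recovery, a common mitigation is to make the write idempotent. A minimal sketch in plain Scala (the keyed in-memory store is hypothetical, standing in for an external database):

```scala
import scala.collection.mutable

// Hypothetical keyed store standing in for an external database.
val store = mutable.Map[String, Int]()

// An idempotent write: an upsert keyed by record key, so replaying
// the same batch (at-least-once delivery) leaves the store unchanged.
def writeBatch(batch: Seq[(String, Int)]): Unit =
  batch.foreach { case (k, v) => store(k) = v }

val batch = Seq("a" -> 1, "b" -> 2)
writeBatch(batch)
writeBatch(batch) // duplicate delivery on recovery: same final state
```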
All,
We are using Spark Streaming to receive data from the Twitter stream. This is
running behind a proxy. We have done the following configurations inside Spark
Streaming for twitter4j to work behind the proxy.
def main(args: Array[String]) {
  val filters = Array("Modi")
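The proxy configuration itself is not shown in the thread; one common approach (the host, port, and credentials below are placeholders) is to set twitter4j's HTTP proxy system properties before the stream is created:

```scala
// Placeholder proxy values: twitter4j reads these JVM system properties
// when opening its HTTP connections.
System.setProperty("twitter4j.http.proxyHost", "proxy.example.com")
System.setProperty("twitter4j.http.proxyPort", "8080")
System.setProperty("twitter4j.http.proxyUser", "user")       // if required
System.setProperty("twitter4j.http.proxyPassword", "secret") // if required
```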
Currently the createParquetFile method needs a bean class as one of its parameters.
javahiveContext.createParquetFile(XBean.class,
  IMPALA_TABLE_LOC, true, new Configuration())
When we enable checkpointing and use JsonRDD we get the following error. Is this
a bug?
Exception in thread "main" java.lang.NullPointerException
    at org.apache.spark.rdd.RDD.&lt;init&gt;(RDD.scala:125)
    at org.apache.spark.sql.SchemaRDD.&lt;init&gt;(SchemaRDD.scala:103)
Michael, any idea on this?
From: Jahagirdar, Madhu
Sent: Thursday, November 06, 2014 2:36 PM
To: mich...@databricks.com; user
Subject: CheckPoint Issue with JsonRDD
When we enable checkpointing and use JsonRDD we get the following error. Is this
a bug?
Foreach iterates through the partitions in the RDD and executes the operations
for each partition, I guess.
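As a rough analogue in plain Scala collections (no Spark here; `grouped` stands in for RDD partitions), partition-wise iteration lets per-partition setup, such as opening a connection, happen once per partition rather than once per element:

```scala
val data = (1 to 10).toList
val partitions = data.grouped(5).toList // stand-in for RDD partitions

var connectionsOpened = 0
var processed = 0

// foreachPartition-style: one "connection" per partition,
// then element-wise work inside it.
partitions.foreach { part =>
  connectionsOpened += 1
  part.foreach { _ => processed += 1 }
}
```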
On 29-Dec-2014, at 10:19 pm, SamyaMaiti samya.maiti2...@gmail.com wrote:
Hi All,
Please clarify.
Can we say that 1 RDD is generated every batch interval?
If the above is true, then is
All,
We are getting the below error when using the Drill JDBC driver with Spark;
please let us know what the issue could be.
java.lang.IllegalAccessError: class io.netty.buffer.UnsafeDirectLittleEndian
cannot access its superclass io.netty.buffer.WrappedByteBuf
at
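This IllegalAccessError typically indicates that two different Netty builds ended up on the classpath (Spark ships its own Netty, and Drill depends on a modified one), so UnsafeDirectLittleEndian is resolved against a different Netty version than its superclass. One hedged workaround sketch, assuming an sbt build and an illustrative Drill version, is to exclude the conflicting Netty artifact from one side:

```scala
// build.sbt sketch: the Drill coordinates and version here are
// illustrative, not taken from the thread.
libraryDependencies += ("org.apache.drill.exec" % "drill-jdbc" % "1.1.0")
  .exclude("io.netty", "netty-all")
```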
All,
Can we run different versions of Spark using the same Mesos Dispatcher? For
example, can we run drivers with Spark 1.3 and Spark 1.4 at the same time?
Regards,
Madhu Jahagirdar
To: Jahagirdar, Madhu
Cc: user; d...@spark.apache.org
Subject: Re: Spark Mesos Dispatcher
Yes.
Sent from my iPhone
On 19 Jul, 2015, at 10:52 pm, Jahagirdar, Madhu
madhu.jahagir...@philips.com wrote:
All,
Can we run different versions of Spark using the same Mesos Dispatcher
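The "Yes." above works because each driver submitted to the dispatcher can ship its own Spark distribution. A sketch of such a submission (host, URI, class, and jar names are placeholders):

```shell
# Placeholders throughout: each submission points spark.executor.uri at
# its own Spark build, so 1.3 and 1.4 drivers can share one dispatcher.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.executor.uri=hdfs:///dists/spark-1.3.1-bin-hadoop2.4.tgz \
  --class com.example.MyApp \
  my-app.jar
```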