Now I am getting a different error, as below:
com.datastax.spark.connector.types.TypeConversionException: Cannot convert object []
of type class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
to com.datastax.driver.core.LocalDate.
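This error usually means a whole Row (a struct) is being bound to a Cassandra `date` column, which expects a single date value — the fix is to select the date field itself rather than the enclosing struct. Cassandra's `date` type is a day count since the Unix epoch; a minimal pure-`java.time` sketch of that representation (the driver-side target `com.datastax.driver.core.LocalDate.fromDaysSinceEpoch` is assumed from the 3.x Java driver and is not exercised here):

```scala
import java.time.LocalDate

// Cassandra's `date` type stores days since 1970-01-01; the Java driver's
// LocalDate.fromDaysSinceEpoch takes exactly this value.
def daysSinceEpoch(d: LocalDate): Long = d.toEpochDay

println(daysSinceEpoch(LocalDate.of(1970, 1, 2)))  // 1
println(daysSinceEpoch(LocalDate.of(2016, 11, 4)))
```

In a DataFrame, that means writing a column whose type is already `date` (or a value convertible to one), never the surrounding struct.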
What are you trying to do? It looks like you are mixing multiple
SparkContexts together.
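Mixing SparkContexts is normally avoided by reusing one active instance per JVM, which is what `SparkSession.builder.getOrCreate()` does. A minimal pure-Scala sketch of that get-or-create pattern, with a hypothetical `ContextRegistry` standing in for Spark's builder (this illustrates the pattern only, not Spark's implementation):

```scala
// Reuse one active "context" per JVM instead of constructing a second one —
// the shape of SparkSession.builder.getOrCreate(). Names here are hypothetical.
object ContextRegistry {
  @volatile private var active: Option[String] = None

  def getOrCreate(create: => String): String = synchronized {
    active.getOrElse {
      val c = create      // only evaluated if nothing is active yet
      active = Some(c)
      c
    }
  }
}

val a = ContextRegistry.getOrCreate("ctx-1")
val b = ContextRegistry.getOrCreate("ctx-2") // reuses the first, "ctx-2" ignored
println(s"$a / $b")  // ctx-1 / ctx-1
```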
On Fri, Nov 4, 2016 at 5:15 PM, Lev Tsentsiper
wrote:
> My code throws an exception when I am trying to create a new DataSet from
> within a StreamWriter sink
>
> Simplified version
My code throws an exception when I am trying to create a new DataSet from within
a StreamWriter sink.
Simplified version of the code:
val df = sparkSession.readStream
  .format("json")
  .option("nullValue", " ")
  .option("headerFlag", "true")
  .option("spark.sql.shuffle.partitions", 1)
at …rify.platform.pipeline.TableWriter$$anonfun$close$5.apply(TableWriter.scala:109)
This code works when run locally, but fails in cluster deployment.
Can anyone suggest a better way to handle creation and processing of a DataSet
within a ForeachWriter?
Thank you
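A `ForeachWriter` runs on the executors, where the driver's SparkSession is not available, so constructing a new DataSet inside it fails once the job leaves local mode; the usual approach is to buffer values in `process()` and write them out in `close()`. A minimal sketch of that pattern, assuming a stand-in trait with the same open/process/close contract as Spark's `ForeachWriter` (no Spark dependency; all names here are hypothetical):

```scala
import scala.collection.mutable.ArrayBuffer

// Stand-in for org.apache.spark.sql.ForeachWriter: same contract, no Spark.
trait SimpleForeachWriter[T] {
  def open(partitionId: Long): Boolean
  def process(value: T): Unit
  def close(): Unit
}

// Buffer rows per partition and flush one batch in close(), instead of
// trying to build a new DataSet (which needs the driver's SparkSession).
class BufferedWriter extends SimpleForeachWriter[String] {
  private val buffer = ArrayBuffer.empty[String]
  var flushed: Seq[String] = Nil

  def open(partitionId: Long): Boolean = { buffer.clear(); true }
  def process(value: String): Unit = buffer += value
  def close(): Unit = { flushed = buffer.toSeq } // write the batch to the real sink here
}

val w = new BufferedWriter
assert(w.open(0))
Seq("a", "b", "c").foreach(w.process)
w.close()
println(w.flushed)  // List(a, b, c)
```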
Hi All,
I'm running into an error that's not making a lot of sense to me, and
couldn't find sufficient info on the web to answer it myself.
BTW, you can also reply on Stack Overflow:
http://stackoverflow.com/questions/36254005/nosuchelementexception-in-chisqselector-fit-method-version-1-6-0
Hi,
I was working on a simple task (running locally): just reading a 35 MB file
with about 200 features and building a random forest with 5 trees of depth 5.
It fails while saving the output with:
predictions.select("VisitNumber", "probability")
  .write.format("json") // tried different formats
Any suggestions, anyone?
Using version 1.5.1.
Regards
Ankush Khanna
On Nov 10, 2015, at 11:37 AM, Ankush Khanna wrote:
Hi,
I got a NoSuchElementException when I tried to iterate through a Map which
contains some elements (not null, not empty). When I debug my code (below),
it seems the first part of the code, which fills the Map, is executed after
the second part that iterates the Map.
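"Filled after iterated" is the classic symptom of the filling code running asynchronously (for example inside a scheduled Spark action or a Future) while the iteration runs immediately on the calling thread. A minimal pure-Scala sketch of the symptom and the fix, using a plain `Future` as a stand-in for whatever schedules the filling step:

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val m = TrieMap.empty[String, Int]

// "Part 1": fills the map, but asynchronously — it may run AFTER the code below.
val fill = Future { Thread.sleep(100); m.put("answer", 42) }

// "Part 2": iterating here races with part 1 and will usually see an empty map.
val sizeBeforeWait = m.size

// Fix: explicitly wait for the filling step to finish before iterating.
Await.ready(fill, 5.seconds)
println(s"before wait: $sizeBeforeWait entries, after wait: ${m.size} entries")
```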
Hi
I get exactly the same problem here; have you found the cause?
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/NoSuchElementException-key-not-found-when-changing-the-window-lenght-and-interval-in-Spark-Streaming-tp9010p9283.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I think I know what is going on! This is probably a race condition in the
DAGScheduler. I have added a JIRA for this. The fix is not trivial though.
https://issues.apache.org/jira/browse/SPARK-2002
A not-so-good workaround for now would be to not use coalesced RDDs, which
avoids the race condition.
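As an illustration of the failure class (not of the DAGScheduler internals): a race condition arises when several threads mutate shared state without coordination, and disappears once the updates are serialized, e.g. through a concurrent map. A minimal pure-Scala sketch, assuming nothing beyond the JDK:

```scala
import java.util.concurrent.ConcurrentHashMap

// Shared state updated from several threads at once. A plain HashMap here
// could lose updates or corrupt itself; ConcurrentHashMap serializes them.
val safe = new ConcurrentHashMap[Int, Int]()

val threads = (0 until 4).map { t =>
  new Thread(() => (0 until 1000).foreach(i => safe.put(t * 1000 + i, i)))
}
threads.foreach(_.start())
threads.foreach(_.join())

println(s"entries: ${safe.size}")  // 4000 — no updates lost
```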
Hi Tathagata,
Thanks for your help! By not using coalesced RDDs, do you mean not
repartitioning my DStream?
Thanks,
Mike
On Tue, Jun 3, 2014 at 12:03 PM, Tathagata Das tathagata.das1...@gmail.com
wrote:
I am not sure what DStream operations you are using, but some operation is
internally creating CoalescedRDDs. That is causing the race condition. I
might be able to help if you can tell me what DStream operations you are using.
TD
On Tue, Jun 3, 2014 at 4:54 PM, Michael Chang m...@tellapart.com
Hi all,
I'm seeing a random exception kill my Spark Streaming job. Here's a stack
trace:
java.util.NoSuchElementException: key not found: 32855
    at scala.collection.MapLike$class.default(MapLike.scala:228)
    at scala.collection.AbstractMap.default(Map.scala:58)
    ...
Do you have the info-level logs of the application? Can you grep for the
value 32855 to find any references to it? Also, what version of Spark are you
using? (So that I can match the stack trace; it does not seem to match Spark
1.0.)
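The `key not found` trace comes from `MapLike.default`, which `Map.apply` calls when a key is missing; application code normally guards against this with `get`/`getOrElse` or a default value. A quick pure-Scala illustration:

```scala
val counts = Map("a" -> 1)

// counts("b") would throw java.util.NoSuchElementException: key not found: b
println(counts.get("b"))             // None — Option-based lookup, no throw
println(counts.getOrElse("b", 0))    // 0 — explicit fallback

val withDefault = counts.withDefaultValue(0)
println(withDefault("b"))            // 0 — apply() now falls back to the default
```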
TD
On Mon, Jun 2, 2014 at 3:27 PM, Michael Chang