0 can not be represented as java.sql.Timestamp error
On Wed, Jun 5, 2019 at 6:29 PM Anthony May wrote:
Hi,
We have a legacy process of scraping a MySQL database. The Spark job uses
the DataFrame API and the MySQL JDBC driver to read the tables and save them
as JSON files. One table has DateTime columns that contain values invalid for
java.sql.Timestamp, so it's throwing the exception:
java.sql.SQLException...
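If these are MySQL "zero date" values ('0000-00-00 00:00:00'), one common
workaround is Connector/J's zeroDateTimeBehavior URL option, which makes the
driver return NULL instead of throwing (the value is spelled convertToNull in
Connector/J 5.x and CONVERT_TO_NULL in 8.x). A minimal sketch of the read
side, with placeholder host, database, table and credentials:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("mysql-scrape-sketch").getOrCreate()

// Ask the MySQL driver to map zero dates to NULL instead of raising SQLException.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://host:3306/mydb?zeroDateTimeBehavior=convertToNull")
  .option("dbtable", "legacy_table")
  .option("user", "user")
  .option("password", "password")
  .load()

df.write.json("/output/legacy_table_json")  // save the table as JSON files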
We use sbt for easy cross-project dependencies with multiple Scala versions
in a mono-repo, for which it is pretty good, albeit with some quirks. As our
projects have matured and change less, we have moved away from cross-project
dependencies, but it was extremely useful early in the projects. We knew
that a l
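For readers unfamiliar with that setup, a minimal build.sbt along those lines
looks roughly like the following sketch (module names and Scala versions are
made up for illustration):

lazy val core = (project in file("core"))
  .settings(crossScalaVersions := Seq("2.11.12", "2.12.8"))

lazy val jobs = (project in file("jobs"))
  .dependsOn(core)                // cross-project dependency within the mono-repo
  .settings(crossScalaVersions := Seq("2.11.12", "2.12.8"))

// "sbt +jobs/compile" then builds jobs (and core) against every listed Scala version.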
A sensible default strategy is to use the same language in which a system
was developed, or a highly compatible language. That would be Scala for
Spark; however, I assume you don't currently know Scala to the same degree
as Python, or at all. In that case, to help you make the decision you should
also
> e. For example, when I start a Kafka
> cluster, the prompt is not returned and the debug log is printed to the
> terminal. I want that set up with my Spark server.
>
> I hope that explains my retrograde requirement :)
>
>
>
> On 11-Jul-2016, at 6:49 PM, Anthony May wrote:
Starting the Spark Shell gives you a Spark Context to play with straight
away. The output is printed to the console.
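For example, with the shell open you can use the pre-bound context directly
and the results appear in the same terminal:

// `sc` is created by spark-shell (newer releases also bind a SparkSession as `spark`).
val counts = sc.parallelize(Seq("a", "b", "a"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.collect().foreach(println)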
On Mon, 11 Jul 2016 at 11:47 Sivakumaran S wrote:
> Hello,
>
> Is there a way to start the Spark server with the log output piped to
> the screen? I am currently running Spark in the
Hi Andrés,
What error are you seeing? Can you paste the stack trace?
Anthony
On Fri, 27 May 2016 at 08:37 Andrés Ivaldi wrote:
> Hello, yesterday I updated Spark 1.6.0 to 1.6.1 and my tests started to
> fail because it is not possible to create new tables in SQL Server; I'm
> using SaveMode.Overwrite
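For reference, a minimal sketch of the write path being described, assuming an
existing DataFrame df and placeholder SQL Server connection details:

import java.util.Properties
import org.apache.spark.sql.SaveMode

val props = new Properties()
props.setProperty("user", "user")          // placeholder credentials
props.setProperty("password", "password")

// Overwrite drops and recreates the target table before inserting the rows.
df.write
  .mode(SaveMode.Overwrite)
  .jdbc("jdbc:sqlserver://host:1433;databaseName=mydb", "dbo.my_table", props)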
It's on the 1.6 branch
On Thu, May 26, 2016 at 4:43 PM Andrés Ivaldi wrote:
> I see, I'm using Spark 1.6.0 and that change seems to be for 2.0, or maybe
> it's in 1.6.1, looking at the history.
> Thanks, I'll see about updating Spark to 1.6.1
>
> On Thu, May 26,
It doesn't appear to be configurable, but it is inserting by column name:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
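To illustrate what inserting by column name means, deriving the statement from
the DataFrame schema's field names looks roughly like this sketch (illustrative
only, not the actual JdbcUtils code at the link):

import org.apache.spark.sql.types.StructType

// Build an INSERT that lists the schema's field names explicitly.
def insertStatement(table: String, schema: StructType): String = {
  val columns = schema.fieldNames.mkString(", ")
  val placeholders = schema.fieldNames.map(_ => "?").mkString(", ")
  s"INSERT INTO $table ($columns) VALUES ($placeholders)"
}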
On Thu, 26 May 2016 at 16:02 Andrés Ivaldi wrote:
> Hello,
> I realize that when dat
It looks like it might only be available via REST,
http://spark.apache.org/docs/latest/monitoring.html#rest-api
On Fri, 13 May 2016 at 11:24 Dood@ODDO wrote:
> On 5/13/2016 10:16 AM, Anthony May wrote:
> >
> http://spark.apache.org/docs/latest/api/scal
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker
Might be useful
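A quick sketch of polling it, assuming an existing SparkContext sc:

val tracker = sc.statusTracker
for (jobId <- tracker.getActiveJobIds()) {
  tracker.getJobInfo(jobId).foreach { info =>
    println(s"Job $jobId: ${info.status}, stages ${info.stageIds.mkString(", ")}")
  }
}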
On Fri, 13 May 2016 at 11:11 Ted Yu wrote:
> Have you looked
> at core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ?
>
> Cheers
>
> On Fri, May 13, 2016 at 10:05 AM
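JobProgressListener is an internal (private[spark]) class, so if you go that
route the public SparkListener hook is the safer surface; a rough sketch,
assuming an existing SparkContext sc:

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}

sc.addSparkListener(new SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    println(s"Job ${jobStart.jobId} started with ${jobStart.stageIds.length} stages")
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
    println(s"Job ${jobEnd.jobId} finished: ${jobEnd.jobResult}")
})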
rk.eventLog.enabled=true
>
> --verbose
>
> pi.py
>
>
> I am able to run the job successfully. I just want to get it killed
> automatically whenever I kill my application.
>
>
> On Fri, May 6, 2016 at 11:58 AM, Anthony May wrote:
>
>>
Greetings Satish,
What are the arguments you're passing in?
On Fri, 6 May 2016 at 12:50 satish saley wrote:
> Hello,
>
> I am submitting a spark job using SparkSubmit. When I kill my application,
> it does not kill the corresponding spark job. How would I kill the
> corresponding spark job? I k
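One option (an assumption on my part, not something settled in this thread) is
to launch through SparkLauncher and kill via its handle when the parent
application exits; SparkAppHandle needs Spark 1.6+:

import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

val handle: SparkAppHandle = new SparkLauncher()
  .setAppResource("pi.py")
  .setMaster("yarn")                          // assumption: the cluster manager isn't shown in the thread
  .setConf("spark.eventLog.enabled", "true")
  .startApplication()

sys.addShutdownHook {
  if (!handle.getState.isFinal) handle.kill() // take the Spark job down with the launching app
}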
Yeah, there isn't even an RC yet and no documentation, but you can work off
the code base and test suites:
https://github.com/apache/spark
And this might help:
https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/streaming/DataFrameReaderWriterSuite.scala
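For orientation, the API that suite exercises ended up looking roughly like
this in the released 2.0 builds (a sketch; the pre-RC surface may differ):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("streaming-sketch").getOrCreate()

// Read a stream of lines from a local socket (demo source) and echo it to the console.
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", "9999")
  .load()

val query = lines.writeStream
  .format("console")
  .start()

query.awaitTermination()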
On Fri,