On Wed, Jun 5, 2019 at 6:29 PM Anthony May wrote:
Hi,
We have a legacy process of scraping a MySQL Database. The Spark job uses
the DataFrame API and MySQL JDBC driver to read the tables and save them as
JSON files. One table has DateTime columns containing values that are invalid
for java.sql.Timestamp, so it throws the exception:
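Since the stack trace is cut off above, an assumption: the usual culprit is MySQL's "zero" DATETIME value, which has no valid java.sql.Timestamp (or Python datetime) representation. A minimal illustration outside Spark:

```python
from datetime import datetime

# MySQL permits "zero" DATETIME values; year 0000 and month 00 are
# out of range for both java.sql.Timestamp and Python's datetime.
zero_date = "0000-00-00 00:00:00"

try:
    datetime.strptime(zero_date, "%Y-%m-%d %H:%M:%S")
    parsed = True
except ValueError:
    parsed = False

print(parsed)  # False
```

With MySQL Connector/J the common workaround is the JDBC URL option `zeroDateTimeBehavior=convertToNull` (newer driver versions spell the value `CONVERT_TO_NULL`), which maps such values to NULL instead of throwing.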
We use sbt for easy cross-project dependencies with multiple Scala versions
in a mono-repo, for which it is pretty good, albeit with some quirks. As our
projects have matured and change less, we have moved away from cross-project
dependencies, but they were extremely useful early in the projects. We knew
that a
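For reference, cross-building in sbt is typically declared along these lines (a sketch of a build.sbt fragment; the version numbers are illustrative):

```scala
// build.sbt (fragment) -- build the project against several Scala versions
crossScalaVersions := Seq("2.10.6", "2.11.8")

// From the sbt shell, prefix a task with '+' to run it for every
// listed version, e.g. `+compile` or `+test`.
```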
A sensible default strategy is to use the same language in which a system
was developed, or a highly compatible one. For Spark that would be Scala;
however, I assume you don't currently know Scala to the same degree as
Python, or at all. In that case, to help you make the decision you should
> start the master in the console. For example, when I start a kafka
> cluster, the prompt is not returned and the debug log is printed to the
> terminal. I want that setup with my spark server.
>
> I hope that explains my retrograde requirement :)
>
>
>
> On 11-Jul-2016,
Starting the Spark Shell gives you a Spark Context to play with straight
away. The output is printed to the console.
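A sketch of that workflow (paths assume a standard Spark distribution layout):

```shell
# Launch the interactive shell; REPL output and Spark's logs
# share the same terminal.
./bin/spark-shell

# Inside the REPL a SparkContext is pre-bound as `sc`:
#   scala> sc.parallelize(1 to 100).sum()
```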
On Mon, 11 Jul 2016 at 11:47 Sivakumaran S wrote:
> Hello,
>
> Is there a way to start the spark server with the log output piped to
> screen? I am currently
Hi Andrés,
What error are you seeing? Can you paste the stack trace?
Anthony
On Fri, 27 May 2016 at 08:37 Andrés Ivaldi wrote:
> Hello, yesterday I updated Spark 1.6.0 to 1.6.1 and my tests started to
> fail because it is not possible to create new tables in SQLServer. I'm using
On Thu, May 26, 2016 at 3:33 PM, Anthony May <anthony...@gmail.com> wrote:
It doesn't appear to be configurable, but it is inserting by column name:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
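In spirit, the statement-building in JdbcUtils works like the following (a hypothetical Python re-sketch, not Spark's actual Scala code), which is why the insert is matched by column name rather than by position:

```python
# Hypothetical sketch: the INSERT is assembled from the DataFrame's
# schema, so column names are written out explicitly.
def insert_statement(table, columns):
    cols = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    return f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"

print(insert_statement("people", ["name", "age"]))
# -> INSERT INTO people (name, age) VALUES (?, ?)
```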
On Thu, 26 May 2016 at 16:02 Andrés Ivaldi wrote:
> Hello,
>
It looks like it might only be available via REST,
http://spark.apache.org/docs/latest/monitoring.html#rest-api
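For example, the applications endpoint can be queried like this (a sketch; it assumes a driver web UI on the default port 4040 and fails soft when none is running):

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

# Monitoring REST API root on the driver's web UI (default port 4040)
url = "http://localhost:4040/api/v1/applications"

try:
    with urlopen(url, timeout=2) as resp:
        apps = json.load(resp)            # JSON list of applications
        print([app["id"] for app in apps])
except (URLError, OSError):
    apps = None                           # no local driver running
```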
On Fri, 13 May 2016 at 11:24 Dood@ODDO <oddodao...@gmail.com> wrote:
> On 5/13/2016 10:16 AM, Anthony May wrote:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker
Might be useful
On Fri, 13 May 2016 at 11:11 Ted Yu wrote:
> Have you looked
> at core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ?
>
> Cheers
>
> On Fri,
> --conf
> spark.eventLog.enabled=true
>
> --verbose
>
> pi.py
>
>
> I am able to run the job successfully. I just want to get it killed
> automatically whenever I kill my application.
>
>
> On Fri, May 6, 2016 at 11:58 AM, Anthony May wrote:
Greetings Satish,
What are the arguments you're passing in?
On Fri, 6 May 2016 at 12:50 satish saley wrote:
> Hello,
>
> I am submitting a spark job using SparkSubmit. When I kill my application,
> it does not kill the corresponding spark job. How would I kill the
>
Yeah, there isn't even an RC yet and no documentation, but you can work off
the code base and test suites:
https://github.com/apache/spark
And this might help:
https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/streaming/DataFrameReaderWriterSuite.scala
On Fri,