Re: Insert into JDBC

2016-05-26 Thread Andrés Ivaldi
Done, version 1.6.1 has the fix. I updated and it works fine now. Thanks.

Re: Insert into JDBC

2016-05-26 Thread Anthony May
It's on the 1.6 branch.

Re: Insert into JDBC

2016-05-26 Thread Andrés Ivaldi
I see, I'm using Spark 1.6.0, and that change seems to be for 2.0, or maybe it's in 1.6.1 looking at the history. Thanks, I'll see about updating Spark to 1.6.1.

Re: Insert into JDBC

2016-05-26 Thread Anthony May
It doesn't appear to be configurable, but it is inserting by column name: https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
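
For context, a minimal sketch of what the insertStatement helper referenced above does (simplified from the linked file; the exact signature and details vary by Spark version): it lists the DataFrame's column names explicitly instead of relying on the target table's column order.

    import java.sql.{Connection, PreparedStatement}
    import org.apache.spark.sql.types.StructType

    // Simplified sketch: build an INSERT statement that names each column
    // from the DataFrame schema, then bind values positionally via "?".
    def insertStatement(conn: Connection, table: String, rddSchema: StructType): PreparedStatement = {
      val columns = rddSchema.fields.map(_.name).mkString(", ")
      val placeholders = rddSchema.fields.map(_ => "?").mkString(", ")
      conn.prepareStatement(s"INSERT INTO $table ($columns) VALUES ($placeholders)")
    }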

Insert into JDBC

2016-05-26 Thread Andrés Ivaldi
Hello, I've realized that when a DataFrame executes an insert it inserts by schema column order instead of by column name, i.e. dataframe.write.mode(saveMode).jdbc(url, table, properties). Watching the profiler, the executed statement is INSERT INTO TableName VALUES (a, b, c, ...), but what I need is INSERT INTO TableName (colA, colB, colC, ...) VALUES (...).
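
A minimal sketch of the write path being described, using the Spark 1.6 SQLContext API (the JDBC URL, credentials, table, and column names here are placeholders, not taken from the thread):

    import java.util.Properties
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{SQLContext, SaveMode}

    val sc = new SparkContext(new SparkConf().setAppName("jdbc-insert-example"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Placeholder connection details, for illustration only.
    val url = "jdbc:postgresql://localhost/testdb"
    val props = new Properties()
    props.setProperty("user", "spark")
    props.setProperty("password", "secret")

    // A small DataFrame whose schema matches the target table's columns.
    val df = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")

    // On Spark 1.6.0 the generated statement is positional:
    //   INSERT INTO people VALUES (?, ?)
    // From 1.6.1 on (per the rest of this thread), columns are listed by name:
    //   INSERT INTO people (name, age) VALUES (?, ?)
    df.write
      .mode(SaveMode.Append)
      .jdbc(url, "people", props)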