Done, version 1.6.1 has the fix. I updated and it works fine.

Thanks.
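
For anyone hitting the same issue, here is a minimal sketch of the write call on 1.6.1 (URL, table name, and credentials are placeholders), which now generates a column-qualified INSERT:

  import java.util.Properties
  import org.apache.spark.sql.SaveMode

  val props = new Properties()
  props.setProperty("user", "user")         // placeholder credentials
  props.setProperty("password", "password")

  // df is an existing DataFrame whose column names match the target table.
  // On 1.6.1+ the JDBC writer emits:
  //   INSERT INTO TableName (colA, colB, ...) VALUES (?, ?, ...)
  df.write.mode(SaveMode.Append)
    .jdbc("jdbc:mysql://host:3306/db", "TableName", props)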

On Thu, May 26, 2016 at 4:15 PM, Anthony May <anthony...@gmail.com> wrote:

> It's on the 1.6 branch
>
> On Thu, May 26, 2016 at 4:43 PM Andrés Ivaldi <iaiva...@gmail.com> wrote:
>
>> I see. I'm using Spark 1.6.0, and that change seems to be for 2.0, or
>> maybe it's in 1.6.1, looking at the history.
>> Thanks, I'll see if I can update Spark to 1.6.1.
>>
>> On Thu, May 26, 2016 at 3:33 PM, Anthony May <anthony...@gmail.com>
>> wrote:
>>
>>> It doesn't appear to be configurable, but it is inserting by column name:
>>>
>>> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
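>>>
>>> Roughly, the linked code builds the statement from the DataFrame's
>>> schema field names (a paraphrased sketch of the idea, not the exact
>>> source):
>>>
>>>   import org.apache.spark.sql.types.StructType
>>>
>>>   // Name every column explicitly so values bind by name, not by the
>>>   // ordinal position of columns in the target table.
>>>   def insertStatement(table: String, schema: StructType): String = {
>>>     val columns = schema.fields.map(_.name).mkString(", ")
>>>     val placeholders = schema.fields.map(_ => "?").mkString(", ")
>>>     s"INSERT INTO $table ($columns) VALUES ($placeholders)"
>>>   }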
>>>
>>> On Thu, 26 May 2016 at 16:02 Andrés Ivaldi <iaiva...@gmail.com> wrote:
>>>
>>>> Hello,
>>>> I realized that when the DataFrame executes an insert, it inserts by
>>>> schema column order instead of by column name, i.e.
>>>>
>>>> dataframe.write.mode(saveMode).jdbc(url, table, properties)
>>>>
>>>> Looking at the profiler, the executed statement is
>>>>
>>>> insert into TableName values(a,b,c..)
>>>>
>>>> What I need is
>>>> insert into TableName (colA, colB, colC) values(a,b,c)
>>>>
>>>> Is there some configuration for this?
>>>>
>>>> Regards,
>>>>
>>>> --
>>>> Ing. Ivaldi Andres
>>>>
>>>
>>
>>
>> --
>> Ing. Ivaldi Andres
>>
>


-- 
Ing. Ivaldi Andres
