Github user CK50 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10066#discussion_r46405509
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/CassandraDialect.scala ---
    @@ -0,0 +1,53 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.jdbc
    +
    +import java.sql.Types
    +
    +import org.apache.spark.sql.types._
    +
    +
    +private case object CassandraDialect extends JdbcDialect {
    +
    +  override def canHandle(url: String): Boolean =
    +    url.startsWith("jdbc:datadirect:cassandra") ||
    +    url.startsWith("jdbc:weblogic:cassandra")
    +
    +  override def getInsertStatement(table: String, rddSchema: StructType): String = {
    +    val sql = new StringBuilder(s"INSERT INTO $table ( ")
    +    var fieldsLeft = rddSchema.fields.length
    +    var i = 0
    +    // Build list of column names
    +    while (fieldsLeft > 0) {
    +      sql.append(rddSchema.fields(i).name)
    +      if (fieldsLeft > 1) sql.append(", ")
    --- End diff --
    
    Yes, this is exactly the problem encountered.
    
    What if we drop the new Dialect and instead add a new signature on 
    DataFrameWriter (with a new columnMapping param):
    
    def jdbc(url: String, table: String, connectionProperties: Properties, 
        columnMapping: Map[String, String]): Unit
    
    The old signature then continues to use the column-name-free INSERT 
    syntax, while advanced use-cases (or technologies that do not support 
    the column-name-free syntax) can use the new API.
    
    This ensures full backwards compatibility for all technologies.
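    
    For illustration, here is a minimal standalone sketch of how such a 
    columnMapping could drive explicit column names in the generated 
    INSERT statement (the helper name and the defaulting behaviour are 
    just illustrative, not existing Spark API):
    
    import org.apache.spark.sql.types._
    
    // Map each DataFrame field to its DB column name, falling back to
    // the field name itself when no override is given.
    def insertStatement(
        table: String,
        rddSchema: StructType,
        columnMapping: Map[String, String] = Map.empty): String = {
      val columns =
        rddSchema.fields.map(f => columnMapping.getOrElse(f.name, f.name))
      val placeholders = Seq.fill(columns.length)("?")
      s"INSERT INTO $table (${columns.mkString(", ")})" +
        s" VALUES (${placeholders.mkString(", ")})"
    }
    
    val schema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("fullName", StringType)))
    insertStatement("people", schema, Map("fullName" -> "full_name"))
    // => INSERT INTO people (id, full_name) VALUES (?, ?)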
    
    If this is the way to go, should I start a new PR?
    
    My preference would still be to keep the refactoring that moves 
    generation of the INSERT statement into the Dialect (instead of in 
    JDBCUtils). Does this make sense?
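    
    To make that concrete, roughly something like the following (a rough 
    sketch only; getInsertStatement is not part of the current 
    JdbcDialect API, so a standalone trait stands in for it here):
    
    import org.apache.spark.sql.types._
    
    trait InsertGeneratingDialect {
      def canHandle(url: String): Boolean
    
      // Default: keep today's column-name-free INSERT syntax.
      def getInsertStatement(table: String, rddSchema: StructType): String = {
        val placeholders =
          Seq.fill(rddSchema.fields.length)("?").mkString(", ")
        s"INSERT INTO $table VALUES ($placeholders)"
      }
    }
    
    // A dialect whose backend requires explicit column names simply
    // overrides the default.
    case object CassandraLikeDialect extends InsertGeneratingDialect {
      override def canHandle(url: String): Boolean =
        url.startsWith("jdbc:datadirect:cassandra") ||
        url.startsWith("jdbc:weblogic:cassandra")
    
      override def getInsertStatement(
          table: String, rddSchema: StructType): String = {
        val columns = rddSchema.fields.map(_.name).mkString(", ")
        val placeholders =
          Seq.fill(rddSchema.fields.length)("?").mkString(", ")
        s"INSERT INTO $table ($columns) VALUES ($placeholders)"
      }
    }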
    
    
    On 02.12.2015 11:46, Sean Owen wrote:
    >
    > In 
    > sql/core/src/main/scala/org/apache/spark/sql/jdbc/CassandraDialect.scala 
    > <https://github.com/apache/spark/pull/10066#discussion_r46399876>:
    >
    > > +
    > > +
    > > +private case object CassandraDialect extends JdbcDialect {
    > > +
    > > +  override def canHandle(url: String): Boolean =
    > > +    url.startsWith("jdbc:datadirect:cassandra") ||
    > > +    url.startsWith("jdbc:weblogic:cassandra")
    > > +
    > > +  override def getInsertStatement(table: String, rddSchema: StructType): String = {
    > > +    val sql = new StringBuilder(s"INSERT INTO $table ( ")
    > > +    var fieldsLeft = rddSchema.fields.length
    > > +    var i = 0
    > > +    // Build list of column names
    > > +    while (fieldsLeft > 0) {
    > > +      sql.append(rddSchema.fields(i).name)
    > > +      if (fieldsLeft > 1) sql.append(", ")
    >
    > You're just saying that inserting a DataFrame of m columns into a 
    > table of n > m columns doesn't work, right? Yes without column name 
    > mappings, I expect this to fail anytime m != n, for any database. 
    > Right now this assumes m = n implicitly.
    >
    > You're right that adding names requires a mapping from data frame 
    > column names to DB column names. Hm, I wonder if this needs an 
    > optional |Map| allowing for overrides.
    >
    > I don't think the regression tests cover all databases, no. I also 
    > don't think this can be specific to Oracle anyway.
    >
    > My workflow for squashing N of the last commits is:
    >
    >   * |git rebase -i HEAD~N|
    >   * Change all but the first "pick" to "squash" in the editor and save
    >   * Edit the commit message down to just 1 logical message and save
    >   * |git push --force origin [your branch]|
    >
    
    