[GitHub] spark pull request #20057: [SPARK-22880][SQL] Add cascadeTruncate option to ...

2018-02-20 Thread klinvill
Github user klinvill commented on a diff in the pull request:

https://github.com/apache/spark/pull/20057#discussion_r169526767
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/TeradataDialect.scala ---
@@ -31,4 +31,19 @@ private case object TeradataDialect extends JdbcDialect {
     case BooleanType => Option(JdbcType("CHAR(1)", java.sql.Types.CHAR))
     case _ => None
   }
+
+  override def isCascadingTruncateTable(): Option[Boolean] = Some(false)
+
+  /**
+   * The SQL query used to truncate a table.
+   * @param table The table to truncate.
+   * @param cascade Whether or not to cascade the truncation. Default value is the
+   *                value of isCascadingTruncateTable(). Ignored for Teradata as it is unsupported.
+   * @return The SQL query to use for truncating a table
+   */
+  override def getTruncateQuery(
+      table: String,
+      cascade: Option[Boolean] = isCascadingTruncateTable): String = {
+    s"TRUNCATE TABLE $table"
--- End diff --

Hi @dongjoon-hyun, I was the original author of the TeradataDialect and @gatorsmile reviewed and committed it. You are correct: Teradata does not support the TRUNCATE statement. Instead, Teradata uses a DELETE statement, so you should be able to use `DELETE FROM $table ALL` in place of `TRUNCATE TABLE $table`.
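A minimal sketch of how the overrides above could be revised inside `TeradataDialect`, assuming `DELETE FROM $table ALL` behaves as described (I haven't re-verified this against a live Teradata instance):

```scala
  override def isCascadingTruncateTable(): Option[Boolean] = Some(false)

  // Teradata has no TRUNCATE statement; DELETE FROM <table> ALL is its equivalent.
  // The cascade flag is ignored because Teradata does not support cascading truncation.
  override def getTruncateQuery(
      table: String,
      cascade: Option[Boolean] = isCascadingTruncateTable): String = {
    s"DELETE FROM $table ALL"
  }
```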


---




[GitHub] spark issue #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC connecti...

2017-05-23 Thread klinvill
Github user klinvill commented on the issue:

https://github.com/apache/spark/pull/16746
  
Thanks for the help and review!


---



[GitHub] spark issue #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC connecti...

2017-03-28 Thread klinvill
Github user klinvill commented on the issue:

https://github.com/apache/spark/pull/16746
  
Hi @dongjoon-hyun @gatorsmile, just circling back. Would it be impractical to check the PR against a VM rather than against a Docker image?


---



[GitHub] spark issue #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC connecti...

2017-01-31 Thread klinvill
Github user klinvill commented on the issue:

https://github.com/apache/spark/pull/16746
  
@dongjoon-hyun Yup, I was using a real instance for testing. The best way to test without a real instance is probably the Teradata Express VM: http://downloads.teradata.com/download/database/teradata-express-for-vmware-player. You can also build an instance from an AMI, but that's fairly expensive, so I'd recommend the Express VM instead. Unfortunately, there's currently no dockerized version available.


---



[GitHub] spark pull request #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC c...

2017-01-31 Thread klinvill
Github user klinvill commented on a diff in the pull request:

https://github.com/apache/spark/pull/16746#discussion_r98814211
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/TeradataDialect.scala ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.jdbc
+
+import java.sql.Types
+import org.apache.spark.sql.types._
+
+
+private case object TeradataDialect extends JdbcDialect {
+
+  override def canHandle(url: String): Boolean = { url.startsWith("jdbc:teradata") }
+
+  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
+    case StringType => Some(JdbcType("VARCHAR(255)", java.sql.Types.VARCHAR))
+    case BooleanType => Option(JdbcType("CHAR(1)", java.sql.Types.CHAR))
+    case _ => None
+  }
--- End diff --

`quoteIdentifier` and `getTableExistsQuery` will both work for Teradata. Teradata does not cascade by default, but it also doesn't have a TRUNCATE TABLE command (DELETE is used instead), so any commands that use TRUNCATE TABLE will fail.
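To illustrate at the JDBC level, a hedged sketch (the table name and connection details are placeholders, and the Teradata JDBC driver must be on the classpath):

```scala
import java.sql.DriverManager

val conn = DriverManager.getConnection(
  "jdbc:teradata://HOSTNAME/DATABASE=mydb", "USER", "PASSWORD")
val stmt = conn.createStatement()

// stmt.executeUpdate("TRUNCATE TABLE test_table") // would fail: Teradata has no TRUNCATE
stmt.executeUpdate("DELETE FROM test_table")       // Teradata's way to empty a table

stmt.close()
conn.close()
```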


---



[GitHub] spark pull request #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC c...

2017-01-31 Thread klinvill
Github user klinvill commented on a diff in the pull request:

https://github.com/apache/spark/pull/16746#discussion_r98710451
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/TeradataDialect.scala ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.jdbc
+
+import java.sql.Types
--- End diff --

Thanks! Fixed in latest commit.


---



[GitHub] spark pull request #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC c...

2017-01-31 Thread klinvill
Github user klinvill commented on a diff in the pull request:

https://github.com/apache/spark/pull/16746#discussion_r98706364
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/TeradataDialect.scala ---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.jdbc
+
+import java.sql.Types
+import org.apache.spark.sql.types._
+
+
+private case object TeradataDialect extends JdbcDialect {
+
+  override def canHandle(url: String): Boolean = { url.startsWith("jdbc:teradata") }
+
+  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
+    case StringType => Some(JdbcType("VARCHAR(255)", java.sql.Types.VARCHAR))
+    case BooleanType => Option(JdbcType("CHAR(1)", java.sql.Types.CHAR))
+    case _ => None
+  }
--- End diff --

Hi @dongjoon-hyun,
Teradata still doesn't support LIMIT (it uses TOP instead), but the Spark code that was originally using LIMIT has been changed to use "WHERE 1=0" instead.

```scala
  /**
   * Get the SQL query that should be used to find if the given table exists. Dialects can
   * override this method to return a query that works best in a particular database.
   * @param table  The name of the table.
   * @return The SQL query to use for checking the table.
   */
  def getTableExistsQuery(table: String): String = {
    s"SELECT * FROM $table WHERE 1=0"
  }

  /**
   * The SQL query that should be used to discover the schema of a table. It only needs to
   * ensure that the result set has the same schema as the table, such as by calling
   * "SELECT * ...". Dialects can override this method to return a query that works best in a
   * particular database.
   * @param table The name of the table.
   * @return The SQL query to use for discovering the schema.
   */
  @Since("2.1.0")
  def getSchemaQuery(table: String): String = {
    s"SELECT * FROM $table WHERE 1=0"
  }
```
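Since Teradata accepts the `WHERE 1=0` form, no override is needed for these two methods. Purely for illustration, a hypothetical dialect that wanted an explicit single-row probe could use TOP, Teradata's analogue of LIMIT (the object name below is made up):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

// Hypothetical sketch only: probe with TOP 1 instead of the default WHERE 1=0.
// "SELECT * FROM t LIMIT 1" would fail on Teradata, which supports TOP, not LIMIT.
private case object TopProbeDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:teradata")

  override def getTableExistsQuery(table: String): String =
    s"SELECT TOP 1 * FROM $table"
}
```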


---



[GitHub] spark issue #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC connecti...

2017-01-30 Thread klinvill
Github user klinvill commented on the issue:

https://github.com/apache/spark/pull/16746
  
I just tested it manually with a Teradata instance I have running. I didn't test it extensively beyond making sure that a write to a Teradata table using a string datatype worked correctly for smaller strings (<255 characters).
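For anyone who wants to reproduce it, a hedged sketch of that manual test (host, database, and credentials are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("teradata-write-test").getOrCreate()
import spark.implicits._

// A small table with a string column well under the 255-character limit.
val df = Seq((1, "short string"), (2, "another short string")).toDF("id", "value")

df.write
  .format("jdbc")
  .option("url", "jdbc:teradata://HOSTNAME/DATABASE=mydb") // placeholder URL
  .option("dbtable", "test_table")
  .option("user", "USER")
  .option("password", "PASSWORD")
  .save()
```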


---



[GitHub] spark issue #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC connecti...

2017-01-30 Thread klinvill
Github user klinvill commented on the issue:

https://github.com/apache/spark/pull/16746
  
Unfortunately, I don't think there's a Docker image for Teradata available yet. They do have the VM version and an AMI. Would either of those be sufficient?


---



[GitHub] spark pull request #16746: [SPARK-15648][SQL] Add teradataDialect for JDBC c...

2017-01-30 Thread klinvill
GitHub user klinvill opened a pull request:

https://github.com/apache/spark/pull/16746

[SPARK-15648][SQL] Add teradataDialect for JDBC connection to Teradata

The contribution is my original work and I license the work to the project 
under the project’s open source license.

Note: the Teradata JDBC connector limits the row size to 64K. The default string datatype equivalent I used is a 255 character/byte VARCHAR. This effectively limits the max number of string columns to about 250 when using the Teradata JDBC connector (64K divided by 255 bytes per column leaves room for roughly 250 such columns once row overhead is accounted for).

## What changes were proposed in this pull request?

Added a TeradataDialect for JDBC connections to Teradata. The Teradata dialect uses VARCHAR(255) in place of TEXT for string datatypes, and CHAR(1) in place of BIT(1) for boolean datatypes.

## How was this patch tested?


I added two unit tests to double-check that the types get set correctly for a Teradata JDBC URL. I also ran a couple of manual tests to make sure the JDBC connector worked with Teradata and that an error was thrown if a row could potentially exceed 64K (this error comes from the Teradata JDBC connector, not from the Spark code). I did not check how string columns longer than 255 characters are handled.
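A hedged sketch of what those unit tests check (modeled on the style of Spark's JDBCSuite; the URL is a placeholder):

```scala
import org.apache.spark.sql.jdbc.JdbcDialects
import org.apache.spark.sql.types.{BooleanType, StringType}

// Verify that a Teradata JDBC URL resolves to the new dialect's type mappings.
val teradataDialect = JdbcDialects.get("jdbc:teradata://127.0.0.1/database=test")
assert(teradataDialect.getJDBCType(StringType).map(_.databaseTypeDefinition) ==
  Some("VARCHAR(255)"))
assert(teradataDialect.getJDBCType(BooleanType).map(_.databaseTypeDefinition) ==
  Some("CHAR(1)"))
```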

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/klinvill/spark master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16746.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16746


commit 4b8d9a6d6856ed88963950921cfc64978ee2388a
Author: Kirby Linvill <kirby.linv...@teradata.com>
Date:   2017-01-26T17:47:04Z

SPARK-15648: Added teradataDialect for JDBC connection

Note: the Teradata JDBC connector limits the row size to 64K. The default 
string datatype equivalent is a 255 character/byte length varchar.




---