[
https://issues.apache.org/jira/browse/SPARK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942090#comment-16942090
]
Paul Wu edited comment on SPARK-20427 at 10/1/19 4:13 PM:
--
Someone asked me
[
https://issues.apache.org/jira/browse/SPARK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942090#comment-16942090
]
Paul Wu edited comment on SPARK-20427 at 10/1/19 3:49 PM:
--
Someone asked me
[
https://issues.apache.org/jira/browse/SPARK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942090#comment-16942090
]
Paul Wu commented on SPARK-20427:
-
Someone asked me about this problem months ago and I found a solution for
[
https://issues.apache.org/jira/browse/SPARK-27077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-27077:
Description:
I am not sure whether this is a Spark core issue or a Vertica issue; however, I
intended to
Paul Wu created SPARK-27077:
---
Summary: DataFrameReader and Number of Connection Limitation
Key: SPARK-27077
URL: https://issues.apache.org/jira/browse/SPARK-27077
Project: Spark
Issue Type: Bug
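SPARK-27077 concerns how many connections a partitioned JDBC read opens: when partitioning options are supplied, the reader fans the read out into one query, and hence one connection, per partition. The sketch below is a simplified, illustrative reconstruction of that predicate split, not Spark's actual code; the function name is hypothetical.

```python
# Illustrative sketch (not Spark's code) of splitting a JDBC read into one
# WHERE clause per partition, which is why a read can open num_partitions
# concurrent connections and hit a database's per-user connection limit.
def partition_predicates(column, lower, upper, num_partitions):
    """Split [lower, upper) into one WHERE clause per partition."""
    if num_partitions <= 1:
        return [""]  # a single partition needs no predicate
    stride = (upper - lower) // num_partitions
    predicates = []
    bound = lower
    for i in range(num_partitions):
        lo = None if i == 0 else bound
        bound += stride
        hi = None if i == num_partitions - 1 else bound
        if lo is None:
            # first partition also picks up NULLs
            predicates.append(f"{column} < {hi} OR {column} IS NULL")
        elif hi is None:
            predicates.append(f"{column} >= {lo}")
        else:
            predicates.append(f"{column} >= {lo} AND {column} < {hi}")
    return predicates

# Four partitions means four concurrent queries -- and four connections.
print(partition_predicates("id", 0, 100, 4))
```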
[
https://issues.apache.org/jira/browse/SPARK-22371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466099#comment-16466099
]
Paul Wu commented on SPARK-22371:
-
Got the same problem with 2.3, and the program also stalled:
[
https://issues.apache.org/jira/browse/SPARK-23617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu resolved SPARK-23617.
-
Resolution: Duplicate
Fix Version/s: 2.3.0
As commented by Hyukjin Kwon, the issue is duplicated
[
https://issues.apache.org/jira/browse/SPARK-23617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-23617:
Description:
One can register a function using Scala:
spark.udf.register("uuid",
[
https://issues.apache.org/jira/browse/SPARK-23617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-23617:
Description:
One can register a function using Scala:
spark.udf.register("uuid",
Paul Wu created SPARK-23617:
---
Summary: Register a Function without params with Spark SQL Java API
Key: SPARK-23617
URL: https://issues.apache.org/jira/browse/SPARK-23617
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-23193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-23193:
Summary: Insert into Spark Table statement cannot specify column names
(was: Insert into Spark Table
Paul Wu created SPARK-23193:
---
Summary: Insert into Spark Table cannot specify column names
Key: SPARK-23193
URL: https://issues.apache.org/jira/browse/SPARK-23193
Project: Spark
Issue Type: Bug
Paul Wu created SPARK-21740:
---
Summary: DataFrame.write does not work with Phoenix JDBC Driver
Key: SPARK-21740
URL: https://issues.apache.org/jira/browse/SPARK-21740
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105439#comment-16105439
]
Paul Wu commented on SPARK-17614:
-
Oh, sorry. I thought I could use a query here as I do with other
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105425#comment-16105425
]
Paul Wu commented on SPARK-17614:
-
So should I create a new issue? Or is this not an issue to you?
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105402#comment-16105402
]
Paul Wu edited comment on SPARK-17614 at 7/28/17 6:14 PM:
--
The fix does not
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105402#comment-16105402
]
Paul Wu edited comment on SPARK-17614 at 7/28/17 6:09 PM:
--
The fix does not
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu reopened SPARK-17614:
-
The fix does not support syntax like this:
.jdbc(JDBC_URL, "(select * from emp)",
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105402#comment-16105402
]
Paul Wu edited comment on SPARK-17614 at 7/28/17 6:08 PM:
--
The fix does not
[
https://issues.apache.org/jira/browse/SPARK-19296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831033#comment-15831033
]
Paul Wu edited comment on SPARK-19296 at 1/20/17 9:52 PM:
--
We found this Util is
[
https://issues.apache.org/jira/browse/SPARK-19296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831033#comment-15831033
]
Paul Wu commented on SPARK-19296:
-
We found this Util is very useful in general (much, much better than
Paul Wu created SPARK-19296:
---
Summary: Awkward changes for JdbcUtils.saveTable in Spark 2.1.0
Key: SPARK-19296
URL: https://issues.apache.org/jira/browse/SPARK-19296
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-18123:
Description:
Blindly quoting every field name for inserting is the issue (Lines 110-119,
[
https://issues.apache.org/jira/browse/SPARK-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610739#comment-15610739
]
Paul Wu commented on SPARK-18123:
-
I just tried it to see if it worked for the issue. But after I found the code, I
Paul Wu created SPARK-18123:
---
Summary:
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable case
sensitivity issue
Key: SPARK-18123
URL: https://issues.apache.org/jira/browse/SPARK-18123
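SPARK-18123 is about JdbcUtils.saveTable quoting every column name in its generated INSERT; databases such as Oracle treat quoted identifiers as case-sensitive, so a quoted lower-case name no longer matches a column created unquoted. This is an illustrative sketch of the two statement shapes, not Spark's actual code; the helper name is hypothetical.

```python
# Sketch of building an INSERT with and without identifier quoting.
# Quoted identifiers are case-sensitive in databases like Oracle, so
# quoting "id" breaks against a column the database folded to ID.
def insert_statement(table, columns, quote=None):
    cols = ", ".join(f"{quote}{c}{quote}" if quote else c for c in columns)
    params = ", ".join(["?"] * len(columns))
    return f"INSERT INTO {table} ({cols}) VALUES ({params})"

quoted = insert_statement("EMP", ["id", "name"], quote='"')
plain = insert_statement("EMP", ["id", "name"])
print(quoted)  # INSERT INTO EMP ("id", "name") VALUES (?, ?)
print(plain)   # INSERT INTO EMP (id, name) VALUES (?, ?)
```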
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-17614:
Comment: was deleted
(was: Create pull request: https://github.com/apache/spark/pull/15183)
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510709#comment-15510709
]
Paul Wu commented on SPARK-17614:
-
Create pull request: https://github.com/apache/spark/pull/15183
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510626#comment-15510626
]
Paul Wu edited comment on SPARK-17614 at 9/21/16 5:42 PM:
--
No, Custom
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-17614:
Priority: Major (was: Minor)
> sparkSession.read().jdbc(***) uses the sql syntax "where 1=0" that
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510626#comment-15510626
]
Paul Wu commented on SPARK-17614:
-
No, Custom JdbcDialect won't resolve the problem since DataFrameReader
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15510525#comment-15510525
]
Paul Wu commented on SPARK-17614:
-
Thanks. I tried to register my custom dialect as follows, but it
[
https://issues.apache.org/jira/browse/SPARK-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507950#comment-15507950
]
Paul Wu commented on SPARK-17614:
-
Workaround: Rebuild the Cassandra JDBC wrapper by modifying
Paul Wu created SPARK-17614:
---
Summary: sparkSession.read().jdbc(***) uses the SQL syntax "where
1=0", which Cassandra does not support
Key: SPARK-17614
URL: https://issues.apache.org/jira/browse/SPARK-17614
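SPARK-17614 stems from the zero-row schema probe Spark's JDBC reader issues before reading any data, a `SELECT * ... WHERE 1=0` query that Cassandra's SQL layer rejects. Below is a minimal stdlib sketch of the trick, with sqlite3 standing in for the JDBC source; the function name is hypothetical.

```python
import sqlite3

# Resolve column metadata without fetching rows, using the same
# "WHERE 1=0" probe that Spark's JDBC reader issues against a table
# or a parenthesized subquery.
def probe_schema(conn, table_or_subquery):
    cur = conn.execute(f"SELECT * FROM {table_or_subquery} WHERE 1=0")
    return [d[0] for d in cur.description]  # column names, zero rows read

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
print(probe_schema(conn, "emp"))                  # ['id', 'name']
print(probe_schema(conn, "(select * from emp)"))  # subquery form also works
```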
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-9255:
---
Attachment: (was: timestamp_bug.zip)
Timestamp handling incorrect for Spark 1.4.1 on Linux
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641175#comment-14641175
]
Paul Wu commented on SPARK-9255:
Related https://issues.apache.org/jira/browse/SPARK-9058,
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14639949#comment-14639949
]
Paul Wu commented on SPARK-9255:
[~srowen] I don't think it is due to version
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-9255:
---
Attachment: timestamp_bug.zip
The project can run without issues. But when it is deployed to the
Timestamp
Paul Wu created SPARK-9255:
--
Summary: Timestamp handling incorrect for Spark 1.4.1 on Linux
Key: SPARK-9255
URL: https://issues.apache.org/jira/browse/SPARK-9255
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-9255:
---
Description:
This is a very strange case involving timestamps. I can run the program on
Windows using dev
[
https://issues.apache.org/jira/browse/SPARK-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14637336#comment-14637336
]
Paul Wu edited comment on SPARK-9255 at 7/22/15 6:52 PM:
-
The
[
https://issues.apache.org/jira/browse/SPARK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu closed SPARK-9087.
--
Resolution: Fixed
Fix Version/s: 1.4.1
1.4.1 fixed the issue.
Broken SQL on where condition involving
Paul Wu created SPARK-9087:
--
Summary: Broken SQL on where condition involving timestamp and
time string.
Key: SPARK-9087
URL: https://issues.apache.org/jira/browse/SPARK-9087
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555680#comment-14555680
]
Paul Wu commented on SPARK-7804:
Unfortunately, JdbcRDD was poorly designed since the
[
https://issues.apache.org/jira/browse/SPARK-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555680#comment-14555680
]
Paul Wu edited comment on SPARK-7804 at 5/22/15 12:00 PM:
--
Paul Wu created SPARK-7804:
--
Summary: Incorrect results from JDBCRDD -- one record repeated and an
incorrect field value
Key: SPARK-7804
URL: https://issues.apache.org/jira/browse/SPARK-7804
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-7804:
---
Description:
Getting only one record repeated in the RDD and a repeated field value:
I have a table like:
[
https://issues.apache.org/jira/browse/SPARK-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555602#comment-14555602
]
Paul Wu commented on SPARK-7804:
Thanks -- you are right. The cache() was a problem and
Paul Wu created SPARK-7746:
--
Summary: SetFetchSize for JDBCRDD's prepareStatement
Key: SPARK-7746
URL: https://issues.apache.org/jira/browse/SPARK-7746
Project: Spark
Issue Type: New Feature
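SPARK-7746 asks for setFetchSize(...) on JDBCRDD's prepared statement so the driver streams results in batches rather than buffering the whole result set. Python's DB-API has no exact fetch-size hint, but `cursor.arraysize` plus `fetchmany()` gives a rough stdlib analogy; this is illustrative only, and the function name is hypothetical.

```python
import sqlite3

# Rough analogy to JDBC's setFetchSize: pull rows in fixed-size chunks
# instead of materializing the whole result at once.
def iter_in_batches(cursor, batch_size):
    cursor.arraysize = batch_size  # analogous to the JDBC fetch-size hint
    while True:
        rows = cursor.fetchmany()  # defaults to arraysize rows per call
        if not rows:
            break
        yield rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
batches = list(iter_in_batches(conn.execute("SELECT x FROM t ORDER BY x"), 4))
print([len(b) for b in batches])  # batch sizes: 4, 4, 2
```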
[
https://issues.apache.org/jira/browse/SPARK-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14498305#comment-14498305
]
Paul Wu commented on SPARK-6936:
You are right: I used spark-hive_2.10 instead of
[
https://issues.apache.org/jira/browse/SPARK-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14498247#comment-14498247
]
Paul Wu commented on SPARK-6936:
Not sure about HiveContext. I tried to do the following
[
https://issues.apache.org/jira/browse/SPARK-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Wu updated SPARK-6936:
---
Comment: was deleted
(was: Not sure about HiveContext. I tried to do the following program and I got
Paul Wu created SPARK-6936:
--
Summary: SQLContext.sql() caused deadlock in multi-thread env
Key: SPARK-6936
URL: https://issues.apache.org/jira/browse/SPARK-6936
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14496933#comment-14496933
]
Paul Wu commented on SPARK-6936:
1. The query is something like this (sorry since the data
[
https://issues.apache.org/jira/browse/SPARK-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14497267#comment-14497267
]
Paul Wu commented on SPARK-6936:
Currently, I synchronized the part, which seems to be
56 matches