Github user deanchen commented on the issue:
https://github.com/apache/spark/pull/14279
@srowen lgtm, thanks for chiming in!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user deanchen commented on the issue:
https://github.com/apache/spark/pull/14279
Would be great to get this in too. Currently using a hack where we iterate
through all the date columns at the end of our run and manually convert the
values to string values formatted as
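A minimal sketch of that workaround outside Spark (plain Python, not the actual code; the function name and row shape are illustrative only): walk every date column at the end of the run and replace each value with its formatted string.

```python
from datetime import date

def stringify_date_columns(rows, date_columns, fmt="%Y-%m-%d"):
    """Return a copy of rows with every date-typed column rendered as a string.

    Illustrative stand-in for the post-run pass described above; in Spark this
    would be done per-column on the DataFrame instead.
    """
    out = []
    for row in rows:
        row = dict(row)  # copy so the input rows are left untouched
        for col in date_columns:
            if row.get(col) is not None:
                row[col] = row[col].strftime(fmt)
        out.append(row)
    return out
```

For example, `stringify_date_columns([{"id": 1, "d": date(2016, 7, 1)}], ["d"])` yields `[{"id": 1, "d": "2016-07-01"}]`, leaving null dates as `None`.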
Github user deanchen commented on the issue:
https://github.com/apache/spark/pull/14118
Would be great to get a resolution to this. We're running into issues in
production attempting to parse CSVs with nullable dates. I personally prefer
option b for our use case.
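The parsing problem described above can be sketched outside Spark (plain Python; the helper name and date format are assumptions, not the PR's code): a nullable date column should yield null for empty fields instead of failing.

```python
from datetime import datetime, date

def parse_nullable_date(field, fmt="%Y-%m-%d"):
    """Parse a CSV date field, treating None or an empty string as null.

    Illustrative only; Spark's CSV reader handles this via its own options.
    """
    if field is None or field.strip() == "":
        return None
    return datetime.strptime(field, fmt).date()
```

Here `parse_nullable_date("")` returns `None` while `parse_nullable_date("2016-07-21")` returns `date(2016, 7, 21)`.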
Github user deanchen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13988#discussion_r71100480
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityGenerator.scala
---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to
Github user deanchen commented on the issue:
https://github.com/apache/spark/pull/13912
@srowen @rxin Would love to see this get merged as this has been a pain
point for us. Not a fan of timezoneless dates as an engineer, but the need to
pass through or write timezoneless dates to
Github user deanchen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13988#discussion_r71097167
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityGenerator.scala
---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-103317088
@XuTingjun Have you tried authenticating to your hbase server without
Spark? Looks like a failure caused by a misconfiguration.
---
GitHub user deanchen opened a pull request:
https://github.com/apache/spark/pull/5866
Fix typo in Dataframes.py introduced in [SPARK-3444]
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/deanchen/spark patch-1
Alternatively you
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-97659943
@XuTingjun This looks like a generic Spark driver error when an executor
crashes. Can you please dig up the executor stack trace containing the root
cause?
---
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-96842084
Yes, including the HBase jars on the driver and/or executor (e.g.
/usr/lib/hbase/lib/hbase-client.jar:/usr/lib/hbase/lib/hbase-common.jar:/usr/lib/hbase/lib/hbase
Github user deanchen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5275#discussion_r29117639
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/package.scala ---
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-94471741
The HBaseConfiguration object will read from hbase-default.xml or
hbase-site.xml in the classpath. Do you have hbase config in another file? The
zookeeper configs are
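For reference, a minimal hbase-site.xml carrying the ZooKeeper settings mentioned above might look like the following (host names are placeholders; only the two property keys are real HBase settings):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

HBaseConfiguration picks this file up automatically when it is on the classpath.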
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-94352574
@XuTingjun noticed you were also interested in this feature on
https://github.com/apache/spark/pull/5031
---
GitHub user deanchen opened a pull request:
https://github.com/apache/spark/pull/5586
[SPARK-6918][YARN] Secure HBase support.
Obtain HBase security token with Kerberos credentials locally to be sent to
executors.
Similar to obtainTokenForNamenodes. Fails gracefully if
Github user deanchen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5477#discussion_r28204600
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnable.scala ---
@@ -290,10 +290,19 @@ class ExecutorRunnable
GitHub user deanchen opened a pull request:
https://github.com/apache/spark/pull/5477
[SPARK-6868][YARN] Fix broken container log link on executor page when
HTTPS_ONLY.
Correct the http scheme in the YARN container log link in the Spark UI when
YARN is configured to be
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5193#issuecomment-87067449
Thanks!
---
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5193#issuecomment-87023940
@pwendell this is a blocker for any jobs using Avro and Kryo that pass
GenericData.Records between stages, so I think you should consider merging it
into 1.3 so it
Github user deanchen commented on the pull request:
https://github.com/apache/spark/pull/5193#issuecomment-86630774
Here are the 3 issues resolved in 1.7.7 compared to 1.7.6
https://issues.apache.org/jira/browse/AVRO/fixforversion/12326041/?selectedTab=com.atlassian.jira.jira
GitHub user deanchen opened a pull request:
https://github.com/apache/spark/pull/5193
[SPARK-6544][build] Increment Avro version from 1.7.6 to 1.7.7
Fixes a bug causing Kryo serialization to fail with Avro files between
stages.
https://issues.apache.org/jira/browse/AVRO