Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-186440716
@marmbrus I think that will be alright. Users who want this can either
use ArrayType(StringType) and take the serialization hit until 2.0, or build
their own Spark.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-184465890
@marmbrus Updated description and code. Hopefully Good Enough :tm:.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-183982018
@marmbrus Can we get this merged? Was my description ok?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-180629766
@marmbrus Tests are good to go.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-180631463
Is that alright?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-179001497
Can we merge this before 1.6.1? ping @marmbrus @rxin
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-178055015
Moved the enum column to `c14` to keep real and double arrays next to each
other.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-177720072
ping @JoshRosen What do you think?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-176822999
Do I need to rebase to merge cleanly?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10936#issuecomment-176321781
I had an unused import. My apologies!
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10936#issuecomment-176338205
@rxin can you start the test again?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-176441148
Not the most ideal test, but decoupled from all Spark types other than
DecimalType.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10936#issuecomment-176435017
Sweet. Anything else?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-176417265
In general, `createArrayOf` doesn't use SQL data type
length/precision/scale parameters (e.g. the `(n)` in `VARCHAR(n)`), but the
CREATE TABLE statement does. I'm already
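The asymmetry described above can be sketched with plain JDBC (the `conn` and `stmt` objects, the table, and the values are hypothetical):

```scala
// DDL carries precision/scale inside the type declaration...
stmt.executeUpdate("CREATE TABLE t (c1 NUMERIC(10, 2)[])")

// ...but JDBC's createArrayOf only accepts the base type name, so there is
// no place to pass precision/scale at array-creation time
val arr: java.sql.Array =
  conn.createArrayOf("numeric", Array[AnyRef](new java.math.BigDecimal("1.23")))
```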
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-176382091
There is a corner case in this during `DataFrame.jdbc.write` with
`SaveMode.OVERWRITE`. Thanks to @maropu for pointing it out. Do not merge this
yet.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10898#issuecomment-176380347
@maropu I see a corner case in `schemaString`. I believe I can get the
right behavior now.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10898#issuecomment-176232874
@maropu I have submitted an implementation of this in #10928 and am not
getting the error you describe.
Github user blbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/10936#discussion_r51051135
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/InferSchema.scala
---
@@ -135,7 +136,10 @@ private[json] object
GitHub user blbradley opened a pull request:
https://github.com/apache/spark/pull/10928
[SPARK-12966][SQL] ArrayType(DecimalType) support in Postgres JDBC
The write-path tests in `PostgresIntegrationSuite` are lacking; I may improve
them soon.
You can merge this pull request into a Git
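A minimal sketch of the usage this PR targets (the JDBC URL, table name, and data are made up):

```scala
import java.util.Properties

// A DataFrame with an ArrayType(DecimalType) column, written out through the
// Postgres JDBC dialect
val df = sqlContext.sql("SELECT array(CAST(1.23 AS DECIMAL(10, 2))) AS decs")
df.write.jdbc("jdbc:postgresql://localhost/test", "decimal_arrays", new Properties)
```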
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10928#issuecomment-175314078
Sweet. Anything else?
GitHub user blbradley opened a pull request:
https://github.com/apache/spark/pull/10936
[SPARK-12749][SQL] add json option to parse floating-point types as
DecimalType
I tried to add this via `USE_BIG_DECIMAL_FOR_FLOATS` option from Jackson
with no success.
Added test
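A sketch of the intended read path; the option name below is assumed from this PR's description and may differ in the version that lands:

```scala
// Hypothetical: infer floating-point JSON values as DecimalType rather than
// DoubleType ("floatAsBigDecimal" is this PR's proposed option name)
val df = sqlContext.read
  .option("floatAsBigDecimal", "true")
  .json("numbers.json")
```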
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10898#issuecomment-174528746
You should not be converting to doubles when testing BigDecimal or
DecimalType.
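The precision-loss point is easy to demonstrate in plain Scala, with no Spark involved:

```scala
// Constructing from strings keeps decimal values exact
val exact = BigDecimal("0.1") + BigDecimal("0.2")   // 0.3

// Round-tripping through Double picks up binary floating-point error
val viaDouble = BigDecimal(0.1 + 0.2)               // 0.30000000000000004

// exact == viaDouble is false, so double-based comparisons can mask
// (or invent) DecimalType bugs in tests
```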
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10898#issuecomment-174529393
Also, we should be handling the precision and scale returned from Postgres.
I've looked deep enough to see that this is possible.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10898#issuecomment-174574309
@maropu Indeed, but they are not available in the metadata passed to
`dialect.getCatalystType`. They probably need to be added to the metadata and
logic added
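For illustration, a custom dialect override along the lines discussed (the `getCatalystType` signature follows Spark's `JdbcDialect` API; the fixed scale of 2 is a placeholder for what would come from the metadata once it is plumbed through):

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.JdbcDialect
import org.apache.spark.sql.types._

object PgDecimalDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:postgresql")

  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
    // `size` carries precision, but scale is not yet in the metadata passed
    // here; that is the gap this thread is about. The scale below is a
    // placeholder, not the real fix.
    if (sqlType == Types.NUMERIC && size > 0) Some(DecimalType(size, 2)) else None
  }
}
```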
Github user blbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/10898#discussion_r50717619
--- Diff:
docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
---
@@ -82,6 +83,10 @@ class
Github user blbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/10898#discussion_r50716977
--- Diff:
docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
---
@@ -82,6 +83,10 @@ class
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10887#issuecomment-174300376
I don't know if it breaks anything yet. I ran the tests locally, but I'm
not sure how to tell if/how many tests failed.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10887#issuecomment-174302616
@srowen [SPARK-11796](https://issues.apache.org/jira/browse/SPARK-11796)
brought the Docker integration tests to httpclient 4.5. They still break
depending on how they are run
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10887#issuecomment-174321435
I didn't know there was a dependency test. That helps!
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10887#issuecomment-174309171
Tests on `core` sub-project passed. Can we give Jenkins a roll?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/9876#issuecomment-174248297
@srowen httpclient 4.5.* depends on versions of httpcore 4.4.*
GitHub user blbradley opened a pull request:
https://github.com/apache/spark/pull/10887
[SPARK-12972][CORE] update httpclient to 4.5.1
looking to fix some dependency issues
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10695#issuecomment-173436739
This is blocking me. Can we get it merged soon?
I'm waiting to submit another PR to fix DecimalType also.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10695#issuecomment-172248501
Looks good, but you should squash your commits into one.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/10708#issuecomment-170700621
This contribution is my original work and I license the work to the project
under the project's open source license.
GitHub user blbradley opened a pull request:
https://github.com/apache/spark/pull/10708
[SPARK-12758] [SQL] add note to Spark SQL Migration guide about
TimestampType casting
Warning users about casting changes.
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/9027#issuecomment-170614673
@dragos Link?
Github user blbradley commented on the pull request:
https://github.com/apache/spark/pull/9027#issuecomment-167848940
@dragos Where can you see that fine-grained mode is slated for removal? All
I see is #9795.