Apache Spark reassigned SPARK-26538:
------------------------------------
Assignee: Apache Spark
> Postgres numeric array support
> ------------------------------
>
> Key: SPARK-26538
> URL: https://issues.apache.org/jira/browse/SPARK-26538
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.2, 2.3.2, 2.4.1
> Environment: PostgreSQL 10.4, 9.6.9.
> Reporter: Oleksii
> Assignee: Apache Spark
> Priority: Minor
>
> Consider the following table definition:
> {code:sql}
> create table test1
> (
>     v numeric[],  -- unconstrained numeric array: no precision/scale specified
>     d numeric     -- unconstrained numeric scalar, for comparison
> );
> insert into test1 values ('{1111.222,2222.332}', 222.4555);
> {code}
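> For reference, the table was read through the plain JDBC data source; a minimal sketch (the SparkSession, connection URL, and credentials are placeholders, not part of the original setup):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder().appName("numeric-array-repro").getOrCreate()
>
> // Read test1 over JDBC; the connection details below are assumed for illustration.
> val df = spark.read
>   .format("jdbc")
>   .option("url", "jdbc:postgresql://localhost:5432/testdb") // assumed URL
>   .option("dbtable", "test1")
>   .option("user", "postgres")     // placeholder credentials
>   .option("password", "postgres") // placeholder credentials
>   .load()
> {code}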
> When reading the table into a DataFrame this way, I get the following schema:
> {noformat}
> root
>  |-- v: array (nullable = true)
>  |    |-- element: decimal(0,0) (containsNull = true)
>  |-- d: decimal(38,18) (nullable = true)
> {noformat}
> Notice that precision and scale were specified for neither column, yet the array element type comes back as decimal(0,0), while the scalar column falls back to the default decimal(38,18). Presumably the JDBC metadata reports precision and scale of 0 for an unconstrained numeric, and the dialect uses those values verbatim for array elements instead of falling back to the default the way the scalar path does.
> Later, when I try to materialize the DataFrame (e.g. with df.collect()), I get the following error:
> {noformat}
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 4 exceeds max precision 0
>   at scala.Predef$.require(Predef.scala:224)
>   at org.apache.spark.sql.types.Decimal.set(Decimal.scala:114)
>   at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:453)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$16$$anonfun$apply$6$$anonfun$apply$7.apply(JdbcUtils.scala:474)
>   ...
> {noformat}
> I would expect the array elements to come back as decimal(38,18), matching the scalar column, and no error when reading in this case.
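> A possible workaround in the meantime (a sketch only; the cast-based subquery is an assumption, not something verified above): cast the array column on the PostgreSQL side so that the JDBC driver reports a concrete precision and scale:
> {code:scala}
> // Same assumed connection options as in the sketch above; the pushed-down
> // subquery casts the unconstrained numeric[] to numeric(38,18)[].
> val fixed = spark.read
>   .format("jdbc")
>   .option("url", "jdbc:postgresql://localhost:5432/testdb")
>   .option("dbtable", "(select v::numeric(38,18)[] as v, d from test1) as t")
>   .option("user", "postgres")
>   .option("password", "postgres")
>   .load()
>
> fixed.printSchema() // expected: v: array<decimal(38,18)>, d: decimal(38,18)
> {code}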