[ https://issues.apache.org/jira/browse/PHOENIX-6667 ]


    Istvan Toth deleted comment on PHOENIX-6667:
    --------------------------------------------

was (Author: stoty):
The failures are all unrelated; all tests have passed.
Unfortunately, we don't even run the relevant tests here, but they have passed
on my machine.

> Spark3 connector requires that all columns are specified when writing
> ---------------------------------------------------------------------
>
>                 Key: PHOENIX-6667
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6667
>             Project: Phoenix
>          Issue Type: Bug
>          Components: connectors, spark-connector
>    Affects Versions: connectors-6.0.0
>            Reporter: Istvan Toth
>            Assignee: Attila Zsolt Piros
>            Priority: Major
>
> For Spark 2, it was possible to omit some columns from the DataFrame, the 
> same way it is not mandatory to specify all columns when upserting via SQL.
> Spark 3 has added new checks which require that EVERY SQL column is specified 
> in the DataFrame.
> Consequently, when using the current API, writing will fail unless you 
> specify all columns.
> This is a loss of functionality with respect to Phoenix (and other SQL 
> datastores) compared to Spark 2.
> I don't think we can do anything from the Phoenix side; I'm just documenting 
> the regression here.
> Maybe future Spark versions will make this configurable.
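> A minimal sketch of the write path in question, assuming a hypothetical 
> table TABLE1 with columns ID, COL1, and COL2 (the table name, columns, and 
> zkUrl below are illustrative, not taken from this issue):
> {code:scala}
> import org.apache.spark.sql.{SaveMode, SparkSession}
>
> val spark = SparkSession.builder().appName("phoenix-partial-write").getOrCreate()
>
> // DataFrame that supplies only ID and COL1, omitting COL2.
> val df = spark.createDataFrame(Seq((1, "v1"))).toDF("ID", "COL1")
>
> // With the Spark 2 connector this write succeeded, behaving like an UPSERT
> // that lists only ID and COL1. With Spark 3, the same call fails the new
> // schema check because COL2 is absent from the DataFrame.
> df.write
>   .format("phoenix")
>   .option("table", "TABLE1")          // hypothetical target table
>   .option("zkUrl", "localhost:2181")  // hypothetical ZooKeeper quorum
>   .mode(SaveMode.Overwrite)
>   .save()
> {code}
> One possible workaround under Spark 3 is to include every table column in 
> the DataFrame, filling the omitted ones with explicit nulls, before calling 
> save().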



--
This message was sent by Atlassian Jira
(v8.20.10#820010)