Istvan Toth created PHOENIX-6667:
------------------------------------

             Summary: Spark3 connector requires that all columns are specified 
when writing
                 Key: PHOENIX-6667
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6667
             Project: Phoenix
          Issue Type: Bug
          Components: connectors, spark-connector
    Affects Versions: connectors-6.0.0
            Reporter: Istvan Toth


With Spark 2, it was possible to omit some columns from the DataFrame, just as 
it is not mandatory to specify every column when upserting via SQL.
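The SQL side of that analogy can be sketched as follows. This is a minimal illustration using sqlite3 as a stand-in (the table and column names are hypothetical); Phoenix's UPSERT VALUES similarly accepts a subset of columns, leaving the omitted ones unset:

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, note TEXT)")

# Only a subset of the columns is listed; NOTE is omitted and defaults to NULL.
conn.execute("INSERT INTO t (id, name) VALUES (1, 'a')")

row = conn.execute("SELECT id, name, note FROM t").fetchone()
print(row)  # -> (1, 'a', None)
```

It is this "list only the columns you are writing" pattern that the Spark 3 write path no longer permits through the connector.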

Spark 3 has added new checks that require EVERY SQL column to be specified in 
the DataFrame.

Consequently, when using the current API, writes will fail unless all columns 
are specified.

This is a loss of functionality with respect to Phoenix (and other SQL 
datastores) compared to Spark 2.

I don't think that we can do anything on the Phoenix side; I'm just documenting 
the regression here.

Maybe future Spark versions will make this configurable.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
