Github user robbyki commented on the issue:
https://github.com/apache/spark/pull/5618
How can I create a table schema outside of Spark containing both varchar
and nvarchar columns, then save a DataFrame with truncate = true and avoid
the invalid datatype error for TEXT in Netezza by registering a new dialect?
My current dialect maps StringType to either varchar or nvarchar, but it
can't produce both, and I don't see how to customize and persist table
schemas without my dialect overwriting everything. Using Spark 2.1.
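
For reference, here is a minimal sketch of roughly what my dialect registration looks like (the `jdbc:netezza` URL prefix and the `VARCHAR(255)` length are just placeholders, not the exact values I use):

```scala
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types.{DataType, StringType}

// Custom dialect sketch: map Spark's StringType to VARCHAR instead of TEXT.
// Note the mapping is driven only by the Catalyst type, so every string
// column gets the same database type -- I can't pick varchar for some
// columns and nvarchar for others here.
object NetezzaDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:netezza")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("VARCHAR(255)", Types.VARCHAR))
    case _          => None
  }
}

// Registered in the driver (e.g. spark-shell) before any JDBC writes.
JdbcDialects.registerDialect(NetezzaDialect)
```

As far as I can tell, the only way to keep the mixed varchar/nvarchar layout is to create the Netezza table up front with the exact column types and then write with `.mode(SaveMode.Overwrite).option("truncate", "true")`, so Spark truncates the existing table instead of dropping and recreating it with the dialect's single mapping. Is that the intended approach?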