Greetings Arrow Community,

I am working on a project that drops data into Parquet files on an Amazon 
S3 bucket and reads from there into Redshift. From what I can tell, AWS 
Redshift will not properly read a Parquet file that contains a VARCHAR 
column with no max_length specification. 

Is there any way to pass that type information through to the Parquet 
serializer? I searched through the documentation, but nothing stood out to me.

For reference, the best documentation I could find on the AWS side about a 
blanket VARCHAR without max_length not being supported is the "Invalid 
Column Type Error" section of this page:

https://aws.amazon.com/premiumsupport/knowledge-center/redshift-spectrum-data-errors/

Thanks,
Will
