Hi Team,

We have implemented a PySpark streaming application on Databricks that
reads data from S3 and writes to a Snowflake table using the Snowflake
connector jars (net.snowflake:snowflake-jdbc v3.14.5 and
net.snowflake:spark-snowflake v2.12:2.14.0-spark_3.3).
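
For context, below is a minimal sketch of roughly how our write path is
set up (the connection option values, paths, column names, and the
foreachBatch structure are illustrative placeholders, not our exact job):

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("s3-to-snowflake").getOrCreate()

# Illustrative schema; the real table has far more columns
input_schema = StructType([
    StructField("col_1", StringType()),
    StructField("col_2", StringType()),
    # ... many more columns in the real job
])

# Snowflake connection options (placeholder values)
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
    "dbtable": "<target_table>",
}

# Stream the incoming files from S3 (path and file format are placeholders)
source_df = (
    spark.readStream
    .format("parquet")
    .schema(input_schema)
    .load("s3://<bucket>/<prefix>/")
)

# The Spark-Snowflake connector writes in batch mode, so each
# micro-batch is pushed through foreachBatch
def write_to_snowflake(batch_df, batch_id):
    (batch_df.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .mode("append")
        .save())

query = (
    source_df.writeStream
    .foreachBatch(write_to_snowflake)
    .option("checkpointLocation", "s3://<bucket>/checkpoints/")
    .start()
)

Each micro-batch is written as a batch through the connector, which
stages the data and then issues a COPY statement into the target table.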

We are currently facing an issue where, if we write a large number of
columns, the data is trimmed in the generated COPY statement, so the
write to Snowflake fails because of the resulting data mismatch.

We are using Databricks 11.3 LTS with Spark 3.3.0 and Scala 2.12.

Can you please help me understand how to resolve this issue? I searched
online but could not find any articles covering it.

Looking forward to hearing from you.

Regards,
Varun Shah
