You can use the map datatype in the Hive table for the columns that are
uncertain:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTypes

However, perhaps you can share more concrete details, because there could
also be other solutions.
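As a minimal sketch of the idea (table and column names here are
illustrative, not from your setup): keep the columns you know are stable as
typed columns, and fold the uncertain or late-arriving fields into a single
MAP column, so the target schema never has to change when the feed grows.

```sql
-- Hypothetical target table: fixed columns stay typed, variable
-- fields go into one MAP<STRING,STRING> column.
CREATE TABLE events (
  id        STRING,
  load_date DATE,
  attrs     MAP<STRING, STRING>
)
STORED AS ORC;

-- Day 1: the feed has 5 columns; the two non-key fields go into the map.
INSERT INTO events
SELECT id, load_date,
       map('col4', col4, 'col5', col5)
FROM staging_day1;

-- Day 2: the same feed has 8 columns; the same target schema absorbs
-- the three additional fields without any ALTER TABLE.
INSERT INTO events
SELECT id, load_date,
       map('col4', col4, 'col5', col5,
           'col6', col6, 'col7', col7, 'col8', col8)
FROM staging_day2;

-- Individual values are read back with map indexing; rows loaded before
-- a field existed simply return NULL for it.
SELECT id, attrs['col6'] FROM events;
```

The trade-off is that everything in the map is stored as strings (or one
common type), so you lose per-column typing and some predicate pushdown on
those fields.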

> Am 07.08.2019 um 20:40 schrieb anbutech <anbutec...@outlook.com>:
> 
> Hi All,
> 
> I have a scenario in (Spark scala/Hive):
> 
> Day 1:
> 
> I have a file with 5 columns that needs to be processed and loaded into
> Hive tables.
> 
> Day 2:
> 
> The next day, the same feed (file) has 8 columns (3 additional fields)
> that need to be processed and loaded into Hive tables.
> 
> How do we approach this problem without changing the target table schema?
> Is there any way we can achieve this?
> 
> Thanks
> Anbu
> 
> 
> 
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
> 