Hi,

I am trying to read data from Hive into a DataFrame and then write that
DataFrame to an Oracle database. The date column in Hive has type
VARCHAR(20), but the corresponding column in Oracle has type DATE. The
Hive table names are decided dynamically at run time (read from another
table) based on some job condition (e.g. Job1). There are multiple tables
like this, so the table and column names are only known at run time, and I
can't do the type conversion explicitly when reading from Hive.

Is there any utility/API available in Spark to handle this conversion?
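To make the problem concrete, this is roughly the kind of run-time casting I have in mind: look up the target (Oracle) column types, then build the cast expressions dynamically instead of hard-coding them per table. This is only a sketch; the column names, the type mapping, and the date format are made-up examples, and the real mapping would come from JDBC metadata.

```python
# Sketch: build per-column Spark SQL cast expressions at run time,
# assuming the target Oracle column types can be looked up dynamically
# (e.g. from JDBC metadata). Names and the date format are hypothetical.

def build_select_exprs(columns, target_types, date_format="yyyy-MM-dd"):
    """Return Spark SQL expressions casting each column to its target type.

    columns      -- column names of the Hive DataFrame, in order
    target_types -- dict of column name -> target type ("date", "int", ...)
    date_format  -- the format the VARCHAR date strings are stored in
    """
    exprs = []
    for c in columns:
        t = target_types.get(c, "string").lower()
        if t == "date":
            # VARCHAR -> DATE: to_date needs the string's actual format
            exprs.append(f"to_date({c}, '{date_format}') AS {c}")
        else:
            exprs.append(f"CAST({c} AS {t}) AS {c}")
    return exprs

# The resulting expressions could then be applied generically with
# df.selectExpr(*exprs) before writing the DataFrame out via JDBC.
exprs = build_select_exprs(
    ["id", "created_dt"],
    {"id": "int", "created_dt": "date"},
)
```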


Thanks,
Guru
