Asmath,
Why is upperBound set to 300? How many cores do you have?
Check how the data is distributed in the Teradata table:
SELECT itm_bloon_seq_no, COUNT(*) AS cc
FROM TABLE
GROUP BY itm_bloon_seq_no
ORDER BY itm_bloon_seq_no DESC;
Is this column "itm_bloon_seq_no" already in the table, or did you derive it in Spark?
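To make the upperBound question concrete: Spark's JDBC source splits the lowerBound..upperBound range of partitionColumn into numPartitions WHERE clauses, roughly as in the simplified sketch below (the column name and the bounds 0 and 300 here are illustrative assumptions, not taken from the actual job). Note that only the first and last partitions are open-ended, so if the real maximum of the partition column is far above upperBound, almost all rows land in the last partition and one task ends up pulling nearly the whole 600 GB, which would explain a multi-hour run.

```scala
// Simplified illustration (not Spark's exact internal code) of how the JDBC
// source derives per-partition predicates from partitionColumn bounds.
object JdbcPartitionSketch {
  def partitionPredicates(column: String,
                          lowerBound: Long,
                          upperBound: Long,
                          numPartitions: Int): Seq[String] = {
    val stride = (upperBound - lowerBound) / numPartitions
    (0 until numPartitions).map { i =>
      val lo = lowerBound + i * stride
      val hi = lo + stride
      if (i == 0) s"$column < $hi OR $column IS NULL"   // first partition is open below
      else if (i == numPartitions - 1) s"$column >= $lo" // last partition is open above
      else s"$column >= $lo AND $column < $hi"
    }
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical bounds: 0..300 split across 3 partitions.
    partitionPredicates("itm_bloon_seq_no", 0L, 300L, 3).foreach(println)
  }
}
```

So if itm_bloon_seq_no actually ranges into the millions, everything above 300 is read by the single last-partition task; upperBound should reflect the real MIN/MAX of the column (which the query above helps establish).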
Hi,
I have a Teradata table with more than 2.5 billion records, and the data
size is around 600 GB. I am not able to pull it efficiently using Spark
SQL; the job has been running for more than 11 hours. Here is my code.
val df2 = sparkSession.read.format("jdbc")
.option("url",