Hello,

I am reading a very large number of Parquet files in Spark 3.2. There are no
errors in the logs, but after the spark.read.parquet call the job does not
proceed any further.
Can anyone please suggest a property to improve the parallelism of the reads?
I am reading more than 25,000 files.
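
For reference, here is a minimal sketch of what I am doing (the path is
illustrative, and the two parallelPartitionDiscovery settings are just the
knobs I have found so far; I am not sure they are the right ones):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ManyParquetFiles")
  // Spark lists input files on the driver; these settings control when and
  // how that listing is parallelized across the cluster (illustrative values).
  .config("spark.sql.sources.parallelPartitionDiscovery.threshold", "32")
  .config("spark.sql.sources.parallelPartitionDiscovery.parallelism", "100")
  .getOrCreate()

// Reads more than 25,000 files; the job appears to hang here with no errors.
val df = spark.read.parquet("s3a://my-bucket/events/")  // hypothetical path
println(df.count())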

Kind Regards,
Sachit Murarka
