liupc opened a new pull request #27: [KUDU-3054][SPARK] Init kudu.write_duration accumulator lazily
URL: https://github.com/apache/kudu/pull/27

Currently, we are hitting an issue in kudu-spark that causes the following Spark SQL query failure:
```
Job aborted due to stage failure: Total size of serialized results of 942 tasks (2.0 GB) is bigger than spark.driver.maxResultSize (2.0 GB)
```
After careful debugging, we found that the kudu.write_duration accumulator makes each serialized Spark task larger than 2 MB, so the total size of all tasks in the stage exceeds the limit. However, this stage only reads a Kudu table and performs a shuffle exchange; it does not write to any Kudu table at all.

So I propose initializing this accumulator lazily in KuduContext to avoid such issues.
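For illustration, here is a minimal sketch of the lazy-initialization pattern being proposed. The class and method names are hypothetical, and Spark's built-in `LongAccumulator` stands in for Kudu's actual write-duration accumulator; this is not the exact patch in the PR.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.util.LongAccumulator

// Hypothetical stand-in for KuduContext, showing only the accumulator handling.
class KuduContextSketch(@transient val sc: SparkContext) extends Serializable {

  // Eager version (the problem): created in the constructor, so the
  // accumulator is registered and carried along for every job, even
  // read-only ones that never write to Kudu.
  // private val writeDuration = sc.longAccumulator("kudu.write_duration")

  // Lazy version (the proposal): the accumulator is not created or
  // registered until the first write path actually touches it, so
  // read-only stages never pay the serialization cost.
  private lazy val writeDuration: LongAccumulator =
    sc.longAccumulator("kudu.write_duration")

  // Hypothetical write hook that records how long a write took.
  def recordWriteDuration(millis: Long): Unit = writeDuration.add(millis)
}
```

With this pattern, a query that only reads a Kudu table and shuffles never forces `writeDuration` to materialize, so its per-task serialized size stays small.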
