Github user squito commented on the issue:
https://github.com/apache/spark/pull/17854
> It took 3~4 minutes to start an executor on an NM (most of the time was
> spent on container localization: downloading the spark jar, application jar,
> etc. from the hdfs staging folder).
I think the biggest improvement might be in your cluster setup. I'd ensure
that the spark jars (and all their dependencies) are already on the local
filesystems of each node, and keep the application jar as small as possible
by also pushing your application's dependencies onto the local filesystems
of each node. That usually keeps the part of your application jar that needs
to be shipped around pretty small. Even with it on hdfs, one of the copies is
probably on the driver, which will still put a lot of pressure on that node.
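
As a rough sketch of what that can look like (the paths, jar names, and class
name below are hypothetical, and the exact behavior of spark.yarn.jars and the
local: scheme depends on your Spark version, so check the YARN deployment
docs), something along these lines avoids re-uploading the Spark jars and
pre-installed dependencies to the HDFS staging dir on every submission:

    # Hypothetical example: Spark is pre-installed at /opt/spark and the
    # application's dependency jars at /opt/myapp/lib on every NodeManager.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.jars="local:/opt/spark/jars/*" \
      --jars local:/opt/myapp/lib/dep1.jar,local:/opt/myapp/lib/dep2.jar \
      --class com.example.MyApp \
      myapp-thin.jar

With local: URIs, containers read those jars directly from each node's
filesystem instead of localizing them from HDFS, so only the thin application
jar itself still has to be distributed.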