GitHub user srowen commented on the pull request:

    https://github.com/apache/spark/pull/30#issuecomment-41997425
  
    Yes, 65536 is the magic number, of course (past that many entries the 
assembly jar can no longer be read on Java 6). I know the previous change to 
remove fastutil knocked out about 10K files and brought the count back under 
control, and excluding jruby probably helped too. 
    
    I wonder why it's back over the limit. Is it a recent new dependency, or 
does it only happen with certain Hadoop/YARN dependency profiles?
    
    The file count isn't showing any obvious culprits, but my command is 
pretty simplistic and may be missing the issue. I think it's worth examining 
the jar's contents in more detail to find the culprit; a sketch of one way to 
do that follows.
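
    For example, a minimal Scala sketch along these lines could break the 
entry count down by top-level path prefix, which should make a heavy 
dependency stand out. The jar path argument and the prefix depth of two 
path segments are just placeholders, not anything in the build itself:

        import java.util.zip.ZipFile
        import scala.collection.JavaConverters._

        // Sketch: count entries in an assembly jar and group them by
        // top-level path prefix to see which dependencies add the most files.
        object JarEntryCount {
          def main(args: Array[String]): Unit = {
            val jar = new ZipFile(args(0))  // placeholder: path to the assembly jar
            try {
              val names = jar.entries().asScala.map(_.getName).toList
              println(s"Total entries: ${names.size}")  // 65536 is where trouble starts

              val byPrefix = names
                .map(_.split("/").take(2).mkString("/"))  // e.g. "org/apache", "it/unimi"
                .groupBy(identity)
                .map { case (prefix, es) => (prefix, es.size) }
                .toSeq
                .sortBy(-_._2)

              // Print the 20 largest contributors, biggest first.
              byPrefix.take(20).foreach { case (prefix, n) => println(f"$n%8d  $prefix") }
            } finally {
              jar.close()
            }
          }
        }

    Sorting the per-prefix counts in descending order should put whatever 
dependency pushed the jar back over the limit near the top of the list.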

