My observation is the opposite: when my job runs under the default
spark.shuffle.manager, I don't see this exception, but when it runs with
the SORT-based manager, I start seeing this error. How is that possible?
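
For reference, this is roughly how I am switching between the two on
spark-submit (the app class and jar names here are just placeholders):

    # default hash-based shuffle -- no exception
    spark-submit --conf spark.shuffle.manager=hash --class MyApp my-app.jar
    # sort-based shuffle -- "too many open files" shows up
    spark-submit --conf spark.shuffle.manager=sort --class MyApp my-app.jar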

I am running my job on YARN, and I noticed that the YARN process limits
(cat /proc/$PID/limits) are not consistent with the limits I see in my
shell (shown by ulimit -a); I don't know how that happened. Is there a way
to have the Spark driver propagate this setting (ulimit -n <number>) to the
Spark executors before they start up?
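
This is how I am comparing the two, with $PID being an executor's
container process on a NodeManager host:

    ulimit -n                                  # limit in my login shell
    cat /proc/$PID/limits | grep 'open files'  # limit the running executor actually got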

On Tue, Oct 7, 2014 at 11:53 PM, Andrew Ash <and...@andrewash.com> wrote:

> You will need to restart your Mesos workers to pick up the new limits as
> well.
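> For example, assuming the stock init scripts on each slave:
>
>     sudo service mesos-slave restart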
>
> On Tue, Oct 7, 2014 at 4:02 PM, Sunny Khatri <sunny.k...@gmail.com> wrote:
>
>> @SK:
>> Make sure ulimit has taken effect as Todd mentioned. You can verify via
>> ulimit -a. Also make sure you have proper kernel parameters set in
>> /etc/sysctl.conf (MacOSX)
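>>
>> For example (assuming Linux; on MacOSX the corresponding sysctl key is
>> kern.maxfiles):
>>
>>     ulimit -a | grep 'open files'   # per-process limit in effect
>>     sysctl fs.file-max              # system-wide file handle limit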
>>
>> On Tue, Oct 7, 2014 at 3:57 PM, Lisonbee, Todd <todd.lison...@intel.com>
>> wrote:
>>
>>>
>>> Are you sure the new ulimit has taken effect?
>>>
>>> How many cores are you using?  How many reducers?
>>>
>>>         "In general if a node in your cluster has C assigned cores and
>>>         you run a job with X reducers then Spark will open C*X files in
>>>         parallel and start writing. Shuffle consolidation will help
>>>         decrease the total number of files created but the number of
>>>         file handles open at any time doesn't change so it won't help
>>>         the ulimit problem."
>>>
>>> Quoted from Patrick at:
>>>
>>> http://apache-spark-user-list.1001560.n3.nabble.com/quot-Too-many-open-files-quot-exception-on-reduceByKey-td2462.html
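>>>
>>> As a rough worked example: with C = 16 cores on a node and X = 2000
>>> reducers, that is 16 * 2000 = 32,000 file handles open at once on that
>>> node, far beyond a typical default ulimit -n of 1024.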
>>>
>>> Thanks,
>>>
>>> Todd
>>>
>>> -----Original Message-----
>>> From: SK [mailto:skrishna...@gmail.com]
>>> Sent: Tuesday, October 7, 2014 2:12 PM
>>> To: u...@spark.incubator.apache.org
>>> Subject: Re: Shuffle files
>>>
>>> - We set ulimit -n to 500000, but I still get the same "too many open
>>> files" warning.
>>>
>>> - I tried setting spark.shuffle.consolidateFiles to true, but that did
>>> not help either.
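>>>
>>> For reference, I passed it on spark-submit as:
>>>
>>>     --conf spark.shuffle.consolidateFiles=true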
>>>
>>> I am using a Mesos cluster. Does Mesos have any limit on the number of
>>> open files?
>>>
>>> thanks
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/Shuffle-files-tp15185p15869.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>
>


-- 
Chen Song
