It's almost surely the workers, not the driver (shell), that have too
many open files. You can raise their ulimit. But it's probably better
to look at why it happened -- a very big shuffle? -- and repartition or
redesign the job to avoid it. The new sort-based shuffle might help in
this regard.
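
For example, a rough sketch of what that could look like in the job
itself (assuming Spark 1.1, a standalone ETL app submitted with
spark-submit, and that the shuffle really is the culprit; the partition
count 64 is just illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    // Master URL and deploy options come from spark-submit.
    val conf = new SparkConf()
      .setAppName("etl-job")
      // The sort-based shuffle writes one sorted data file (plus an
      // index) per map task, instead of one file per (map, reducer) pair.
      .set("spark.shuffle.manager", "sort")
      // Spark SQL defaults to 200 shuffle partitions; fewer partitions
      // means fewer shuffle files open at once on each executor.
      .set("spark.sql.shuffle.partitions", "64")
      // If you stay on the hash shuffle, consolidating its output files
      // also reduces the file count:
      // .set("spark.shuffle.consolidateFiles", "true")

    val sc = new SparkContext(conf)

    // For plain RDDs, coalescing before a wide operation has a similar
    // effect: fewer partitions, fewer simultaneous shuffle files.
    // (bigRdd is a stand-in for whatever RDD feeds the wide operation.)
    // val smaller = bigRdd.coalesce(64)

And note the executor-side limit has to be raised where the workers
actually run (e.g. /etc/security/limits.conf on the worker machines),
not in the shell that launches spark-shell -- the driver's ulimit
doesn't propagate to the executors.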

On Fri, Oct 31, 2014 at 3:25 PM, Bill Q <bill.q....@gmail.com> wrote:
> Hi,
> I am trying to make Spark SQL 1.1 to work to replace part of our ETL
> processes that are currently done by Hive 0.12.
>
> A common problem that I have encountered is the "Too many files open" error.
> Once that happened, the query just failed. I started the spark-shell by
> using "ulimit -n 4096 & spark-shell". And it still pops the same error.
>
> Any solutions?
>
> Many thanks.
>
>
> Bill
>
>
>
> --
> Many thanks.
>
>
> Bill
>

