Jingsong Lee created FLINK-29345:
Summary: Too many open files in table store orc writer
Key: FLINK-29345
URL: https://issues.apache.org/jira/browse/FLINK-29345
Project: Flink
Issue Type
Hi Chesnay,

Thanks, and you were right - it wasn’t a case of too many memory segments
triggering too many open files. It was a configuration issue with Elasticsearch
clients being used by a custom function, which just happened to start executing
at the same time as the leftOuterJoin.
The slaves in my YARN cluster (each with 48 slots and 320gb memory) are
currently set up with a limit of 32767, so I really don’t want to crank this up
much higher.
In perusing the code, I assume the issue is that SpillingBuffer.nextSegment()
can open a writer per

…-719a95fa-eca4-4ac4-b2c5-7799315b626d/87cb5c578a889080d681cc00fc11023b.01.channel (Too many open files)
    at java.io.RandomAccessFile.open0(Native Method) ~[?:1.8.0_252]
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316) ~[?:1.8.0_252]
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243) ~[?:1.8.0_252]
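If SpillingBuffer.nextSegment() really does open a writer per spilled segment, descriptor usage grows with the volume of data spilled rather than with parallelism, which makes even a 32767-per-process limit easy to exhaust on a 48-slot machine. A standalone sketch of that pattern (the SpillFdDemo class and segment count are illustrative, not Flink code):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class SpillFdDemo {

    // Opens one RandomAccessFile per "segment" - the pattern suspected in
    // SpillingBuffer.nextSegment() - and returns how many descriptors were
    // held open simultaneously before everything is closed again.
    static int openSegmentWriters(int segments) throws IOException {
        List<RandomAccessFile> writers = new ArrayList<>();
        try {
            for (int i = 0; i < segments; i++) {
                File f = File.createTempFile("spill", ".channel");
                f.deleteOnExit();
                writers.add(new RandomAccessFile(f, "rw")); // one fd per segment
            }
            return writers.size(); // peak simultaneous descriptor count
        } finally {
            // Until the spill is released, every descriptor stays open
            // and counts against ulimit -n.
            for (RandomAccessFile w : writers) {
                w.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // 48 slots each spilling hundreds of segments adds up quickly.
        System.out.println("simultaneously open: " + openSegmentWriters(100));
    }
}
```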
Hi all,
When I do a leftOuterJoin(stream, JoinHint.REPARTITION_SORT_MERGE), I’m running
into an IOException caused by too many open files.
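Before raising a per-process limit, it can help to see how many descriptors the TaskManager JVM actually holds while the join runs. On HotSpot/OpenJDK on Unix, the platform MXBean exposes this; the sketch below degrades gracefully where the `com.sun.management` interface is unavailable:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdLimitCheck {

    // Returns {open, max} file-descriptor counts, or null when the
    // JVM/OS does not expose them (e.g. on Windows).
    static long[] fdCounts() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            return new long[] {
                unix.getOpenFileDescriptorCount(),
                unix.getMaxFileDescriptorCount()
            };
        }
        return null;
    }

    public static void main(String[] args) {
        long[] c = fdCounts();
        if (c == null) {
            System.out.println("fd counts not exposed on this platform");
        } else {
            System.out.println("open fds: " + c[0] + " / limit: " + c[1]);
        }
    }
}
```

Logging these counts from within the job (or sampling `/proc/<pid>/fd` externally) shows whether the JVM is genuinely approaching the 32767 limit or whether, as in this thread, something else is leaking descriptors.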
Sayat Satybaldiyev created FLINK-9831:
Summary: Too many open files for RocksDB
Key: FLINK-9831
URL: https://issues.apache.org/jira/browse/FLINK-9831
Project: Flink
Issue Type: Bug
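One common cause behind this class of report: RocksDB's `max_open_files` defaults to -1 (unlimited), so it keeps a handle open for every SST file, and state-heavy jobs can walk into the process fd limit. Flink exposes a cap for this; the option below is from Flink's RocksDB state backend configuration and should be checked against the version actually deployed (it may not exist in the release this ticket was filed against), and 4096 is only an example value:

```yaml
# flink-conf.yaml - cap the table files RocksDB keeps open.
# The default of -1 means unlimited; a positive value trades
# descriptor usage for table-cache churn.
state.backend.rocksdb.files.open: 4096
```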
> …this is expected or is it an issue. Thanks.
>
> java.io.FileNotFoundException:
> /tmp/flink-io-b3043cd6-50c8-446a-8c25-fade1b1862c0/cb317fc2578db72b3046468948fa00f2f17039b6104e72fb8c58938e5869cfbc.0.buffer
> (Too many open files)
(Too many open files)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at org.apache.flink.streaming.runtime.io.BufferSpiller.createSpillingChannel