[jira] [Created] (FLINK-29345) Too many open files in table store orc writer

2022-09-19 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-29345: Summary: Too many open files in table store orc writer Key: FLINK-29345 URL: https://issues.apache.org/jira/browse/FLINK-29345 Project: Flink Issue Type

Re: Avoiding "too many open files" during leftOuterJoin with Flink 1.11/batch

2020-09-25 Thread Ken Krugler
Hi Chesnay, Thanks, and you were right - it wasn’t a case of too many memory segments triggering too many open files. It was a configuration issue with Elasticsearch clients being used by a custom function. This just happened to start being executed at the same time as the leftOuterJoin
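
The fix described above amounts to tying the client to the operator lifecycle rather than creating one per call. A minimal sketch of that pattern, assuming the Elasticsearch high-level REST client and a made-up enrichment function (the thread does not show the actual code):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;

    // Hypothetical enrichment function: one client per parallel subtask,
    // created in open() and released in close(), never per element.
    public class EnrichWithEsLookup extends RichMapFunction<String, String> {

        private transient RestHighLevelClient client;

        @Override
        public void open(Configuration parameters) {
            // One client (and its connection pool / file descriptors) per subtask.
            client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("es-host", 9200, "http")));
        }

        @Override
        public String map(String value) throws Exception {
            // Use 'client' for the lookup here; do NOT build a new client per record.
            return value;
        }

        @Override
        public void close() throws Exception {
            if (client != null) {
                client.close(); // releases sockets; otherwise descriptors accumulate
            }
        }
    }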

Re: Avoiding "too many open files" during leftOuterJoin with Flink 1.11/batch

2020-09-18 Thread Chesnay Schepler
caused by too many open files. The slaves in my YARN cluster (each with 48 slots and 320gb memory) are currently set up with a limit of 32767, so I really don’t want to crank this up much higher. In perusing the code, I assume the issue is that SpillingBuffer.nextSegment() can open a writer per
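
One way to check whether the 32767 limit is actually being approached is to read the JVM's own descriptor counters. A minimal diagnostic sketch, assuming a HotSpot/OpenJDK JVM on Linux (this is not code from the thread):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    // Diagnostic sketch: print how many file descriptors this JVM currently
    // holds versus its hard limit, to distinguish a genuine descriptor leak
    // from a limit that is simply too low for the job's parallelism.
    public class FdUsage {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                com.sun.management.UnixOperatingSystemMXBean unix =
                        (com.sun.management.UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max: " + unix.getMaxFileDescriptorCount());
            }
        }
    }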

Avoiding "too many open files" during leftOuterJoin with Flink 1.11/batch - now with stack trace

2020-09-17 Thread Ken Krugler
-719a95fa-eca4-4ac4-b2c5-7799315b626d/87cb5c578a889080d681cc00fc11023b.01.channel (Too many open files) at java.io.RandomAccessFile.open0(Native Method) ~[?:1.8.0_252] at java.io.RandomAccessFile.open(RandomAccessFile.java:316) ~[?:1.8.0_252] at java.io.RandomAccessFile

Avoiding "too many open files" during leftOuterJoin with Flink 1.11/batch

2020-09-17 Thread Ken Krugler
Hi all, When I do a leftOuterJoin(stream, JoinHint.REPARTITION_SORT_MERGE), I’m running into an IOException caused by too many open files. The slaves in my YARN cluster (each with 48 slots and 320gb memory) are currently set up with a limit of 32767, so I really don’t want to crank this up
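
For reference, the call shape in question is the DataSet outer join with an explicit strategy hint; REPARTITION_SORT_MERGE ships and sorts both inputs, which is where the spilled sort files come from. A minimal sketch with made-up tuple datasets (the thread does not include the actual job code):

    import org.apache.flink.api.common.functions.JoinFunction;
    import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    // Minimal reproduction of the join shape being discussed, with made-up data.
    public class LeftOuterJoinSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<Tuple2<Long, String>> left =
                    env.fromElements(Tuple2.of(1L, "a"), Tuple2.of(2L, "b"));
            DataSet<Tuple2<Long, String>> right =
                    env.fromElements(Tuple2.of(1L, "x"));

            left.leftOuterJoin(right, JoinHint.REPARTITION_SORT_MERGE)
                    .where(0)
                    .equalTo(0)
                    .with(new JoinFunction<Tuple2<Long, String>, Tuple2<Long, String>,
                            Tuple2<Long, String>>() {
                        @Override
                        public Tuple2<Long, String> join(Tuple2<Long, String> l,
                                                         Tuple2<Long, String> r) {
                            // For a left outer join, r is null when there is no match.
                            return Tuple2.of(l.f0, r == null ? "none" : r.f1);
                        }
                    })
                    .print();
        }
    }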

[jira] [Created] (FLINK-9831) Too many open files for RocksDB

2018-07-12 Thread Sayat Satybaldiyev (JIRA)
Sayat Satybaldiyev created FLINK-9831: Summary: Too many open files for RocksDB Key: FLINK-9831 URL: https://issues.apache.org/jira/browse/FLINK-9831 Project: Flink Issue Type: Bug

Re: Too many open files

2018-03-20 Thread Ted Yu
s is expected or is it an issue. Thanks. java.io.FileNotFoundException: /tmp/flink-io-b3043cd6-50c8-446a-8c25-fade1b1862c0/cb317fc2578db72b3046468948fa00f2f17039b6104e72fb8c58938e5869cfbc.0.buffer (Too many open files)

Too many open files

2018-03-20 Thread Govindarajan Srinivasaraghavan
(Too many open files) at java.io.RandomAccessFile.open0(Native Method) at java.io.RandomAccessFile.open(RandomAccessFile.java:316) at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:243) at org.apache.flink.streaming.runtime.io.BufferSpiller.createSpillingChannel