Actually, there are too many hyperparameters to experiment with; that is why I'm trying to understand whether there is any particular way in which a cluster could be benchmarked.
Another strange behaviour I am observing: delaying operator creation (by distributing the operators across jobs and submitting multiple jobs to the same cluster, instead of one) is helping in creating more operators. Any ideas on why that is happening?

Shailesh

On Sun, Feb 18, 2018 at 11:16 PM, Pawel Bartoszek <pawelbartosze...@gmail.com> wrote:

> Hi,
>
> You could definitely try to find a formula for the heap size, but isn't it
> easier just to try out different memory settings and see which works best
> for you?
>
> Thanks,
> Pawel
>
> On 17 Feb 2018 at 12:26, "Shailesh Jain" <shailesh.j...@stellapps.com> wrote:
>
> Oops, hit send by mistake.
>
> In the configuration section, it is mentioned that for "many operators" the
> heap size should be increased:
>
> "JVM heap size (in megabytes) for the JobManager. You may have to increase
> the heap size for the JobManager if you are running very large applications
> (with many operators), or if you are keeping a long history of them."
>
> Is there any recommendation on the heap space required when there are
> around 200 CEP operators and close to 80 Filter operators?
>
> Any other leads on calculating the expected heap space allocation to start
> the job would be really helpful.
>
> Thanks,
> Shailesh
>
> On Sat, Feb 17, 2018 at 5:53 PM, Shailesh Jain <shailesh.j...@stellapps.com> wrote:
>
>> Hi,
>>
>> I have a Flink job with almost 300 operators, and every time I try to
>> submit the job, the cluster crashes with an OutOfMemory exception.
>>
>> I have 1 job manager and 1 task manager, with 2 GB heap space allocated to
>> both.
>>
>> In the configuration section of the documentation
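For readers following along: the memory settings being discussed in this thread live in `flink-conf.yaml`. A minimal sketch of the relevant keys is below, assuming a Flink 1.4-era configuration (the key names changed in later releases); the values are illustrative starting points to experiment with, not recommendations:

```yaml
# flink-conf.yaml — heap sizes are placeholders to tune, not recommendations.
# With many operators (~300 in this thread), the JobManager heap may need to
# grow beyond the default, per the documentation quoted above.
jobmanager.heap.mb: 2048

# The TaskManager heap holds the actual operator state and buffers, so it is
# usually the first thing to raise when jobs fail with OutOfMemory on deploy.
taskmanager.heap.mb: 4096
```

As Pawel suggests, in the absence of a formula it is common to bisect on these values: double the heap until the job deploys cleanly, then trim back.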