Hi Paulo!

Hmm, interesting. A large discrepancy between virtual and physical memory
usually means that the process either maps large files into memory, or that
it pre-allocates a lot of memory without immediately using it.
Neither of these things is done by Flink.

Could this be an effect of either the Docker environment (mapping certain
kernel spaces / libraries / whatever) or a result of one of the libraries
(gRPC or so)?
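
As a quick way to compare the two numbers on Linux (a sketch; you would
substitute the TaskManager's PID for `self`):

```shell
# Compare virtual address space (VmSize) with resident physical memory
# (VmRSS) for a process; here we inspect the shell's own /proc entry.
grep -E '^Vm(Size|RSS):' /proc/self/status
```

A large VmSize with a much smaller VmRSS is exactly the mapped-but-unused
pattern described above.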

Stephan


On Mon, Dec 19, 2016 at 12:32 PM, Paulo Cezar <paulo.ce...@gogeo.io> wrote:

>   - Are you using RocksDB?
>
> No.
>
>
>   - What is your flink configuration, especially around memory settings?
>
> I'm using the default config with 2GB for the jobmanager and 5GB for the
> taskmanagers. I'm starting Flink via "./bin/yarn-session.sh -d -n 5 -jm
> 2048 -tm 5120 -s 4 -nm 'Flink'"
>
>   - What do you use for TaskManager heap size? Any manual value, or do you
> let Flink/Yarn set it automatically based on container size?
>
> No manual values here. YARN config is pretty much default, with a maximum
> allocation of 12GB of physical memory and a virtual-to-physical memory
> ratio of 2.1 (via yarn.nodemanager.vmem-pmem-ratio).
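>
> For reference, that ratio is set in yarn-site.xml (a sketch showing the
> default value):
>
> ```xml
> <property>
>   <name>yarn.nodemanager.vmem-pmem-ratio</name>
>   <value>2.1</value>
> </property>
> ```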
>
>
>   - Do you use any libraries or connectors in your program?
>
> I'm using flink-connector-kafka-0.10_2.11, a MongoDB client, a gRPC
> client, and some HTTP libraries like Unirest and Apache HttpClient.
>
>   - Also, can you tell us what OS you are running on?
>
> My YARN cluster runs in Docker containers (Docker 1.12) with images
> based on Ubuntu 14.04. The host OS is Ubuntu 14.04.4 LTS (GNU/Linux
> 3.19.0-65-generic x86_64).
>
>
