Hi all,

We have an Apache Spark cluster of three nodes: one acts as both master
and worker, the other two are workers. When the Spark worker is started
with "systemctl start spark-worker" and we then run our apps, it
sometimes (but not always) produces a "java.lang.OutOfMemoryError:
unable to create new native thread" error in the Spark worker logs.

If we instead start the Spark worker directly
(/opt/spark/sbin/start-slave.sh spark://masterIP:7077), the error never
occurs.

We tried tweaking ulimit and Java options, but had no luck so far.
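
In case it helps with diagnosis, the effective limits of the running
worker can be compared between the two start methods like this (a
sketch; <workerPID> is a placeholder for the worker JVM's PID, and the
pgrep pattern may need adjusting for your deployment):

  # find the worker JVM's PID (placeholder pattern)
  pgrep -f org.apache.spark.deploy.worker.Worker

  # effective per-process limits as the kernel sees them
  cat /proc/<workerPID>/limits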

The unit file (spark-worker.service) looks like this:
[Unit]
Description=Spark Worker
After=network.target

[Service]
Type=forking
ExecStart=/opt/spark/sbin/start-slave.sh spark://masterIP:7077
ExecStop=/opt/spark/sbin/stop-slave.sh
StandardOutput=journal
StandardError=journal
LimitNOFILE=infinity
LimitMEMLOCK=infinity
LimitNPROC=infinity
LimitAS=infinity
CPUAccounting=true
CPUShares=100
Restart=always

[Install]
WantedBy=multi-user.target
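
For completeness, the limits systemd actually applies to the unit can
be inspected like this (a sketch; exact property names and the "Tasks:"
line depend on the systemd version):

  systemctl show spark-worker.service | grep -i -E 'limit|tasks'
  # on newer systemd, status also reports "Tasks: N (limit: M)"
  systemctl status spark-worker.service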

Any help is appreciated.

Thanks,
Rao