Dear All,

I've got a question about Hadoop Streaming and its memory management.
Does Hadoop Streaming have a mechanism to prevent its subprocesses (the
map or reduce executables) from over-using memory?

Say a binary used in the reduce phase allocates more and more memory, to
the point that it starves other important processes on the node, such as
the DataNode or TaskTracker. Does Hadoop Streaming prevent such cases?
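For concreteness, here is a minimal sketch (hypothetical, in Python) of the kind of reducer I have in mind: it buffers its entire input partition in memory before emitting anything, so its footprint grows with the partition size rather than staying bounded.

```python
import sys


def reduce_all(stream):
    # Hypothetical streaming reducer: buffers every input record in
    # memory before producing output. On a large partition this list
    # grows without bound, which is exactly the memory pressure that
    # could starve co-located daemons of RAM.
    buffered = [line.rstrip("\n") for line in stream]
    # Emit a single summary record (here, just the record count).
    return len(buffered)


if __name__ == "__main__":
    print(reduce_all(sys.stdin))
```

The question is whether the framework imposes any limit on such a subprocess, or whether the operator is expected to cap it externally.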

Thank you in advance,

Taeho
