Earlier this week I posted the following comment for tomorrow's YARN
meetup. I just realized that most folks may have missed that post, so I'm
sending it to the alias.

We've been working on getting Impala running on YARN and wanted to share
Llama, a system that mediates between Impala and YARN.

Our design, some of the rationale behind it, and the code are available
at:

Docs: http://cloudera.github.io/llama...
Code: https://github.com/cloudera/llam...

We think our approach will be applicable to similar frameworks: those with
low-latency requirements that need to run work in processes outside the
typical container lifecycle.

Thanks for taking a look and for any feedback!

-Alex, Eli, Henry, Karthik, Sandy, and Tucu
