Hi Radek,

I thought about this, but in that case I would have to scale the number of containers manually, whereas I would like to leverage all the resources on the Mesos slaves.

That way, all I would need to do to increase throughput is add slave nodes to the cluster, and that's it.

Regards,
Uladzimir
On Mar 27, 2017 11:58 PM, Radek Gruchalski <ra...@gruchalski.com> wrote:
Well, you are storing the incoming events in Kafka, so no data is lost. That’s one. Instead of scheduling a container per image, start a number of containers, make each of them a member of a consumer group, and you’re sorted. By building a framework, you’re making your life really difficult.
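
Roughly like this for each worker (a rough sketch, assuming a Go client such as segmentio/kafka-go; the broker address, topic and group names are placeholders):

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func processImage(payload []byte) {
	// Placeholder for the actual work: metadata extraction, resizing, etc.
	log.Printf("processing %d bytes", len(payload))
}

func main() {
	// Every worker container runs this; because they share a GroupID,
	// Kafka balances the upload events across however many are running.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"kafka:9092"}, // placeholder broker address
		GroupID: "image-processors",     // shared by all worker containers
		Topic:   "image-uploads",        // placeholder topic name
	})
	defer r.Close()

	for {
		// Blocks until an event is available; with a GroupID set the
		// offsets are committed automatically, so a crashed worker just
		// leaves unprocessed events in the topic for the others.
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatalf("kafka read: %v", err)
		}
		processImage(msg.Value)
	}
}

Run as many copies of that container as you need (Marathon can keep N instances alive for you); because they share the group, Kafka rebalances partitions whenever workers come and go, so scaling is just changing the instance count.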


Best regards,

Radek Gruchalski
ra...@gruchalski.com


On March 27, 2017 at 10:41:50 PM, vvshvv (vvs...@gmail.com) wrote:

Sorry, I did not reply to the whole mailing list.


Regards,
Uladzimir
On Mar 27, 2017 11:39 PM, vvshvv <vvs...@gmail.com> wrote:
When a user uploads an image to a REST endpoint, it should be processed in a separate Docker container. I could implement a custom Mesos framework that consumes upload events from a Kafka topic and runs tasks on slaves (in containers), but that is somewhat difficult: when a message arrives, the cluster may have no resources available to process it, so the message has to wait in a queue until Mesos offers resources. And while messages are waiting in an in-memory queue, the process can crash and I would lose all pending events.
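
The upload side just publishes the event to Kafka, roughly like this (a rough Go sketch, assuming segmentio/kafka-go; the broker, topic and route are placeholders), and it stays the same whichever way the events are consumed:

package main

import (
	"io/ioutil"
	"log"
	"net/http"

	"github.com/segmentio/kafka-go"
)

func main() {
	// The endpoint only persists the event to Kafka; nothing waits in
	// process memory on this side.
	writer := kafka.NewWriter(kafka.WriterConfig{
		Brokers: []string{"kafka:9092"}, // placeholder broker address
		Topic:   "image-uploads",        // same topic the workers consume
	})
	defer writer.Close()

	http.HandleFunc("/images", func(w http.ResponseWriter, r *http.Request) {
		body, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// The event is durable once WriteMessages returns.
		if err := writer.WriteMessages(r.Context(), kafka.Message{Value: body}); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

So the uploaded events themselves are durable; my worry is only about what happens between a framework consuming an event and Mesos actually offering resources for it.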

At the same time, I am aware of Marathon, but it seems suitable only for long-running tasks, and I am not sure it can handle such a large number of jobs.


Regards,
Uladzimir
On Mar 27, 2017 11:28 PM, vvshvv <vvs...@gmail.com> wrote:
It seems you did not understand what I meant. My processing includes metadata extraction and resizing, and each job should have an isolated environment (limited CPU and memory). The processing will be in either Golang or even C++.
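
For illustration, the per-image step would look roughly like this in Go (a rough sketch, assuming golang.org/x/image/draw for the resize; the file paths are placeholders); the CPU/memory limits would come from the container the job runs in, not from the code itself:

package main

import (
	"image"
	"image/jpeg"
	_ "image/png" // register the PNG decoder as well
	"log"
	"os"

	"golang.org/x/image/draw"
)

func main() {
	in, err := os.Open("input.jpg") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	src, format, err := image.Decode(in)
	if err != nil {
		log.Fatal(err)
	}
	// "Metadata" here is just format and dimensions; EXIF parsing
	// would need an extra library.
	b := src.Bounds()
	log.Printf("format=%s width=%d height=%d", format, b.Dx(), b.Dy())

	// Resize to a fixed-width thumbnail, preserving the aspect ratio.
	w := 320
	h := b.Dy() * w / b.Dx()
	dst := image.NewRGBA(image.Rect(0, 0, w, h))
	draw.CatmullRom.Scale(dst, dst.Bounds(), src, b, draw.Over, nil)

	out, err := os.Create("thumb.jpg") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := jpeg.Encode(out, dst, nil); err != nil {
		log.Fatal(err)
	}
}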



On Mar 27, 2017 11:20 PM, Vaibhav Khanduja <vaibhavkhand...@gmail.com> wrote:
Try deeplearning4j with spark extensions ..




On Mon, Mar 27, 2017 at 1:09 PM, daemeon reiydelle <daeme...@gmail.com> wrote:
Spark cached RDDs


.......


Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872


On Mon, Mar 27, 2017 at 11:52 AM, vvshvv <vvs...@gmail.com> wrote:
Hi,

I want to run some image-processing tasks on a Mesos cluster, and there will be a lot of tasks at the same time (1k-10k).

What framework would you suggest for such a case?

Regards,
Uladzimir

