Another approach (a CLI-based, short-lived framework) for running multiple
arbitrary one-off tasks:
https://github.com/asamerh4/mesos-batch
Cheers,
Hubert
From: vvshvv [mailto:vvs...@gmail.com]
Sent: Monday, 27 March 2017 20:53
To: user@mesos.apache.org
Subject: framework for short living tasks
Makes sense. It might be difficult to store the offsets correctly in Kafka.
Your framework would be the one consuming Kafka.
I guess you could, in case of a failure, submit the message back to Kafka
so you do not have to manage the offsets. There’s an interesting challenge
there to find out when…
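A minimal sketch of that re-submit-on-failure idea. The Kafka producer/consumer calls are stubbed out with plain callables, and the `handle_record` and `flaky_process` names are illustrative, not from any real client library:

```python
# Sketch of the "re-submit on failure" pattern: instead of rewinding
# offsets after a failed message, push the message back onto the topic
# and always commit, so the consumer's offset only moves forward.
# The produce/commit callables stand in for real Kafka client calls.

def handle_record(record, process, produce, commit):
    """Process one record; on failure, re-queue it instead of
    holding back the offset."""
    try:
        process(record)
    except Exception:
        produce(record)   # send the message back onto the topic
    commit()              # offset advances either way

# Tiny in-memory demo with the Kafka calls stubbed.
requeued, committed = [], []

def flaky_process(record):
    if record == "bad":
        raise RuntimeError("processing failed")

for rec in ["ok-1", "bad", "ok-2"]:
    handle_record(rec, flaky_process, requeued.append,
                  lambda: committed.append(True))

print(requeued)        # ['bad']
print(len(committed))  # 3 -- offsets always advance
```

Note the open question the thread raises: a message that always fails will loop forever, so in practice you would need some way to cap re-submissions.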
Hi Radek,
I thought about this, but in that case I have to scale the number of containers manually, whereas I would like to leverage all the resources on the Mesos slaves.
That way, all I need to do to increase throughput is add slave nodes to the cluster, and that's it.
Regards,
Uladzimir

On Mar 27, 2017
Well, you are storing the incoming events in Kafka. So data is not lost.
That’s one. Instead of scheduling a container per image, start a number of
containers, make each of them a member of a consumer group, and you’re sorted.
By building a framework, you’re making your life really difficult.
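The scaling argument here can be made concrete: a topic's partitions are divided among the members of a consumer group, so adding containers (up to the partition count) adds throughput without any custom framework. A rough simulation of that behavior, where `assign_partitions` and the round-robin scheme are illustrative, not Kafka's actual assignor code:

```python
# Simulate how a consumer group splits a topic's partitions across
# members: each extra container picks up a share of the partitions.

def assign_partitions(num_partitions, members):
    """Round-robin partitions over group members (illustrative only)."""
    assignment = {m: [] for m in members}
    for p in range(num_partitions):
        assignment[members[p % len(members)]].append(p)
    return assignment

print(assign_partitions(6, ["c1", "c2"]))
# {'c1': [0, 2, 4], 'c2': [1, 3, 5]}

print(assign_partitions(6, ["c1", "c2", "c3"]))
# {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

Adding a third container shrinks each member's share; members beyond the partition count would sit idle, which is why partition count bounds the useful degree of scaling.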
–
Best
Sorry, I did not reply to the whole mailing list.
Regards,
Uladzimir

On Mar 27, 2017 11:39 PM, vvshvv wrote:
When a user uploads an image to a REST endpoint, it should be processed in a separate Docker container. Actually, I can implement my custom Mesos framework that will consume
Try deeplearning4j with Spark extensions.
On Mon, Mar 27, 2017 at 1:09 PM, daemeon reiydelle wrote:
> Spark cached RDDs
>
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872