The sleep shouldn't be needed - you can instead add another order
constraint. Rather than combining the two tasks, you can use Task
directly with constraints:

combined_nimbus = Task(
  processes = [nimbus, nimbus_ui, fetch_storm],
  constraints = order(fetch_storm, nimbus) + order(fetch_storm, nimbus_ui),
  ...)

Then the nimbus_ui and nimbus processes will run together in parallel, but
neither will execute until the package has finished downloading and
extracting.
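
For completeness, the whole thing would look roughly like this (untested
sketch; I've just summed the resources of your two tasks, so adjust those as
needed):

fetch_storm = Process(
  name = 'fetch_storm',
  cmdline = "wget http://yourhttpserver/storm-mesos-0.9.3.tgz && tar zxf ./storm-mesos-0.9.3.tgz && chmod +x ./storm-mesos-0.9.3/bin/storm-mesos && chmod +x ./storm-mesos-0.9.3/bin/storm")

nimbus = Process(
  name = 'nimbus',
  cmdline = "cd ./storm-mesos-0.9.3 && ./bin/storm-mesos nimbus")

# No sleep needed: the order constraints below keep this process from
# starting before fetch_storm has finished.
nimbus_ui = Process(
  name = 'nimbus_ui',
  cmdline = "cd ./storm-mesos-0.9.3 && ./bin/storm ui")

combined_nimbus = Task(
  processes = [fetch_storm, nimbus, nimbus_ui],
  constraints = order(fetch_storm, nimbus) + order(fetch_storm, nimbus_ui),
  # resources are simply the sum of your two original tasks - tune as needed
  resources = Resources(cpu = 6.0, ram = 10240*MB, disk = 20480*MB))

jobs = [Service(
  task = combined_nimbus, cluster = 'Meep', role = 'infra',
  environment = 'test', name = 'nimbus',
  contact = '[email protected]', instances = 1)]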

On Fri, Oct 16, 2015 at 2:08 AM Rogier Dikkes <[email protected]>
wrote:

> Hi Stephan,
>
> Since I recently started with Aurora, I understand the need for examples.
>
> The best examples I found were at:
> http://aurora.apache.org/documentation/latest/configuration-reference/
>
> And I found an example that no longer works at:
>
> http://www.livewyer.com/blog/2015/04/13/deploying-docker-containers-using-apache-aurora
>
> I have seen discussions in documents, chats, and other places about how to
> make Aurora more friendly for new users. I think the Marathon community is
> doing an excellent job by providing JSON files for Docker deployments.
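>
> For what it's worth, a simple Docker-based job in an Aurora config can look
> roughly like this (untested sketch; the image name is just a placeholder):
>
> hello = Process(name = 'hello', cmdline = 'echo hello && sleep 60')
>
> hello_task = Task(
>   processes = [hello],
>   resources = Resources(cpu = 1.0, ram = 256*MB, disk = 512*MB))
>
> jobs = [Service(
>   task = hello_task, cluster = 'Meep', role = 'infra', environment = 'test',
>   name = 'hello_docker', instances = 1,
>   # assumption: an image the Mesos agents can pull from a registry
>   container = Container(docker = Docker(image = 'python:2.7')))]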
>
> To help a bit more, I am providing a Nimbus deployment I made two days ago
> for the Storm Mesos framework. All you need to do is package the Storm
> deployment with the right configurations and host the tgz at a central
> location, such as a webserver or HDFS, so it can be fetched:
>
> nimbus = Process(
>   name = 'nimbus',
>   cmdline = "cd ./storm-mesos-0.9.3 && ./bin/storm-mesos nimbus")
>
> nimbus_ui = Process(
>   name = 'nimbus_ui',
>   cmdline = "sleep 240 &&  cd ./storm-mesos-0.9.3 && ./bin/storm ui")
>
> fetch_storm = Process(
>   name = 'fetch_storm',
>   cmdline = "wget http://yourhttpserver/storm-mesos-0.9.3.tgz && tar zxf 
> ./storm-mesos-0.9.3.tgz && chmod +x ./storm-mesos-0.9.3/bin/storm-mesos && 
> chmod +x ./storm-mesos-0.9.3/bin/storm")
>
> task_nimbus = SequentialTask(
>   processes = [fetch_storm, nimbus],
>   resources = Resources(cpu = 4.0, ram = 8192*MB, disk = 10240*MB))
>
> task_nimbus_ui = Task(
>   processes = [nimbus_ui],
>   resources = Resources(cpu = 2.0, ram = 2048*MB, disk = 10240*MB))
>
> combined_nimbus = Tasks.combine(task_nimbus, task_nimbus_ui)
>
> jobs = [Service(
>   task = combined_nimbus, cluster = 'Meep', role = 'infra',
>   environment = 'test', name = 'nimbus',
>   contact = '[email protected]', instances = 1)]
>
>
> Please understand I am also new to this and it's probably not the correct
> or best way (especially the sleep), but maybe it helps to get the discussion
> going or to gather advice from others.
>
> Rogier
>
>
>
>
> On 10/16/15 9:49 AM, Erb, Stephan wrote:
>
> I have just come across this one here:
> https://issues.apache.org/jira/browse/AURORA-215
>
> I guess that is what I am looking for :-)
>
> ________________________________________
> From: Erb, Stephan <[email protected]>
> Sent: Friday, October 16, 2015 8:48 AM
> To: [email protected]
> Subject: Continuous Deployment with Aurora
>
> Hi Aurora users,
>
> I am interested in how you use the Aurora client or the Aurora API in your
> daily business of releasing and deploying code:
>
> The Aurora client is rather generic. So, what have you built around it to
> enable concepts like continuous deployment, canary releases, etc.? I'd imagine
> that most of you have somehow scripted the process from a user performing a
> git commit to actually running this code in production.
>
> We are basically looking for some inspiration on what works great with Aurora 
> and what doesn't.
>
> A little background: We have used Aurora to replace the backend of an
> existing in-house PaaS without changing the external PaaS API. This has been
> working great for us. However, we also see that Aurora offers some
> interesting features that we would like to use. We could either continue to
> invest in our own API wrapper to support those features, or we could try to
> move in a similar direction as the rest of the community [1]. The latter
> approach sounds somewhat more sane.
>
> [1] for example, as seen in https://github.com/wickman/sacker
>
> Thanks for your input. Much appreciated.
> Stephan
>
>
> PS: Was great to meet some of you in Dublin at MesosCon Europe!
>
>
> --
> Rogier Dikkes
> System Programmer Hadoop & HPC Cloud
> e-mail: [email protected] | M: +31 6 47 48 93 28
> SURFsara | Science Park 140 | 1098 XG Amsterdam
>
>
