I tried your example in the Aurora vagrant box. The ports were properly 
replaced and supervisord was started with a command line such as:

env APPLICATION_PORT=31610 /usr/bin/supervisord -n


However, I skipped the Docker part as I don't have a usable image at hand.
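
For reference, this is roughly the job I used for the check: your process and 
task unchanged, with the container section dropped and the cluster/name values 
taken from my local vagrant setup (so treat those as illustrative):

supervisord = Process(
  name = 'supervisord',
  cmdline = "env APPLICATION_PORT={{thermos.ports[http]}} /usr/bin/supervisord -n"
)

launch_supervisord = Task(
  name = 'start supervisord',
  processes = [supervisord],
  resources = Resources(cpu = 4, ram = 4096*MB, disk = 800*MB)
)

jobs = [
  Service(
    cluster = 'devcluster',
    environment = 'devel',
    role = 'www-data',
    name = 'hello_no_docker',
    instances = 1,
    task = launch_supervisord
  )
]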

Are you using the Aurora command line client to perform your experiments?

Best Regards,
Stephan

PS: There is a stale review request for the process-less Docker images you 
asked about. Feel free to comment on it to reactivate the discussion: 
https://reviews.apache.org/r/44745/




________________________________
From: Bharath Ravi Kumar <[email protected]>
Sent: Thursday, May 26, 2016 12:41
To: [email protected]
Subject: Port allocation in Aurora

Hi,

Based on the documentation, my understanding is that referring to named ports 
in the thermos namespace (e.g. as {{thermos.ports[http]}}) will cause Aurora to 
request a port from Mesos and make it available through the named port. I'm 
assuming this does not require any additional arguments to be passed explicitly 
to the thermos executor. However, in the following example, the environment 
variable APPLICATION_PORT is not substituted with an allocated port. Instead, 
when inspected from within the Docker container, the env var turns out to be 
the literal {{thermos.ports[http]}} rather than the actual value. I tried 
referring to thermos.ports both in the process definition and through the 
Docker env variable, with the same result in either case. Is there a reason 
the pystachio template substitution is failing?



supervisord = Process(
  name = 'supervisord',
  cmdline = "env APPLICATION_PORT={{thermos.ports[http]}} /usr/bin/supervisord -n"
)

launch_supervisord = Task(
  name = 'start supervisord',
  processes = [supervisord],
  resources = Resources(cpu = 4, ram = 4096*MB, disk = 800*MB)
)

env_var_param = Parameter(
  name = "env",
  value = 'APPLICATION_PORT={{thermos.ports[http]}}'
)

jobs = [
  Service(
    cluster = 'example',
    environment = 'devel',
    role = 'www-data',
    name = 'hello_docker',
    instances = 4,
    health_check_config = curl_health_checker_config,
    task = launch_supervisord,
    container = Container(docker = Docker(
      image = 'docker-python-demoapp',
      parameters = [env_var_param]
    ))
  )
]

On a related note, while I understand the Aurora Job/Task model, for Docker 
images one shouldn't be required to define a task: the entrypoint or cmd 
declared in the Docker image could be inferred by Aurora as the implicit "task" 
to run. Is this already supported, or is there a plan to support this 
capability in an upcoming release?
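
For illustration, the kind of definition I have in mind would be something like 
the following (purely hypothetical, not syntax Aurora accepts today as far as I 
know):

jobs = [
  Service(
    cluster = 'example',
    environment = 'devel',
    role = 'www-data',
    name = 'hello_docker',
    instances = 4,
    # No task/processes here; Aurora would run the image's ENTRYPOINT/CMD directly.
    container = Container(docker = Docker(image = 'docker-python-demoapp'))
  )
]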

Thanks,
Bharath
