GitHub user aljoscha commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2176#discussion_r69304962
  
    --- Diff: flink-contrib/docker-flink/README.md ---
    @@ -1,80 +1,75 @@
    -#Apache Flink cluster deployment on Docker using Docker-Compose
    +Apache Flink cluster deployment on docker using docker-compose
     
    -##Installation
    -###Install Docker
    +# Installation
     
    +Install the most recent stable version of docker
     https://docs.docker.com/installation/
     
    -if you have issues with Docker-Compose versions incompatible with your version of Docker try
    +Install the most recent stable version of docker-compose
    +https://docs.docker.com/compose/install/
     
    -`curl -sSL https://get.docker.com/ubuntu/ | sudo sh`
    +# Build
     
    -###Install Docker-Compose
    +Images are based on the official Java Alpine (OpenJDK 8) image and run
    +supervisord to stay alive when running containers. If you want to build the
    +flink image run:
     
    -```
    -curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    +    sh build.sh
     
    -chmod +x /usr/local/bin/docker-compose
    -```
    -
    -###Get the repo
    -
    -###Build the images
    +or
     
    -Images are based on Ubuntu Trusty 14.04 and run Supervisord to stay alive when running containers.
    +    docker build -t flink .
     
    -The base image installs Oracle Java JDK 1.7 and SSH client & server. You can change the SSH password there or add your own key and adjust SSH config.
    +If you want to build the container for a specific version of flink/hadoop/scala
    +you can configure it in the respective args:
     
    -- Run `./build.sh`
    +    docker build --build-arg FLINK_VERSION=1.0.3 --build-arg HADOOP_VERSION=26 --build-arg SCALA_VERSION=2.10 -t "flink:1.0.3-hadoop2.6-scala_2.10" flink
     
    -###Deploy
    +# Deploy
     
     - Deploy cluster and see config/setup log output (best run in a screen session)
     
    -`docker-compose up`
    +        docker-compose up
     
     - Deploy as a daemon (and return)
     
    -`docker-compose up -d`
    +        docker-compose up -d
     
     - Scale the cluster up or down to *N* TaskManagers
     
    -`docker-compose scale taskmanager=<N>`
    -
    -- Access the JobManager node with SSH (exposed on Port 220)
    +        docker-compose scale taskmanager=<N>
     
    -`ssh root@localhost -p 220`
    +- Access the Job Manager container
     
    -or on Mac OS X with boot2docker
    -
    -`ssh root@$(boot2docker ip) -p 220`
    -
    -The password is 'secret'
    +        docker exec -it $(docker ps --filter name=flink_jobmanager --format={{.ID}}) /bin/sh
     
     - Kill the cluster
     
    -`docker-compose kill`
    +        docker-compose kill
     
     - Upload a jar to the cluster
     
    -`scp -P 220 <your_jar> root@localhost:/<your_path>`
    +        for i in $(docker ps --filter name=flink --format={{.ID}}); do
    +            docker cp <your_jar> $i:/<your_path>
    +        done
     
     - Run a topology
     
    -`ssh -p 220 root@localhost /usr/local/flink/bin/flink run -c <your_class> <your_jar> <your_params>`
    +        docker run -it --rm flink:latest flink run -m <jobmanager:port> -c <your_class> <your_jar> <your_params>
    --- End diff ---
    
    AFAIK `docker run` starts a new container to run the command. This should probably be something like:
    ```
    docker exec -it $(docker ps --filter name=flink_jobmanager --format={{.ID}}) flink run -m <jobmanager:port> -c <your_class> <your_jar> <your_params>
    ```
    
    This would run the command on the existing JobManager container where the jar was previously uploaded.
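    
    For completeness, here is a minimal sketch of how the upload and run steps could fit together with that change (assuming the JobManager container matches the `name=flink_jobmanager` filter and the `flink` binary is on the PATH inside the image, as the snippets above already assume; `<your_jar>`, `<your_path>`, etc. are the same placeholders used in the README):
    ```
    # resolve the ID of the running JobManager container
    JM=$(docker ps --filter name=flink_jobmanager --format={{.ID}})
    
    # copy the job jar into that container (replaces the old scp-over-SSH step)
    docker cp <your_jar> $JM:/<your_path>
    
    # submit the job from inside the same container, where the jar now lives
    docker exec -it $JM flink run -m <jobmanager:port> -c <your_class> /<your_path>/<your_jar> <your_params>
    ```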

