Myself, I use Digital Ocean for hosting, with docker-compose. I keep the 
source code (without credentials) for each site in a separate private repo and 
put the build output in a separate repo for the Docker instance.

Essentially, each website is written in C and compiled to an executable. I 
write a move.sh bash script to move only the important files, including the 
executable, to the distribution repo.
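
A minimal sketch of what such a move.sh might look like (the site name, target 
path, and file list here are just placeholders):
    
    
    #!/usr/bin/env bash
    # move.sh -- copy the deployable files for one site into the distribution
    # repo. The site name, target path, and file list are illustrative.
    set -euo pipefail
    
    SITE=ngo                            # this site's subdirectory name
    DIST=../distribution/$SITE          # per-site directory in the distribution repo
    
    mkdir -p "$DIST"
    cp webapp "$DIST/"                  # the compiled executable
    cp Dockerfile "$DIST/"              # the per-site Dockerfile
    cp -r templates "$DIST/"            # html templates and other support files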

So, to publish changes from any of the websites:

  1. After saving my work in a commit and pushing it to GitHub, I run the 
move.sh script.
  2. I go to the distribution repo directory, commit the new changes and push 
to GitHub.
  3. I shell into the DigitalOcean host(s).
  4. I take the containers down, pull from GitHub, rebuild, and bring them 
back up (roughly the commands sketched below). Done.
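
Steps 2 through 4 amount to only a handful of commands. A rough sketch, 
assuming the distribution repo is at ~/distribution locally and at 
/srv/distribution on the droplet (paths and host name are placeholders):
    
    
    # step 2: commit and push the distribution repo
    cd ~/distribution
    git add -A
    git commit -m "deploy latest webapp builds"
    git push
    
    # step 3: shell into the droplet
    ssh user@droplet-host
    
    # step 4: inside the ssh session, refresh the code and rebuild
    cd /srv/distribution
    docker-compose down
    git pull
    docker-compose build
    docker-compose up -d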



The distribution directory has a subdirectory for each website (see the layout 
sketched after this list), containing:

  * the website executable (which I call webapp);
  * any support files, such as html templates, and any subdirectories; and
  * a Dockerfile.
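
Put together, the distribution repo ends up looking roughly like this (file 
names are illustrative):
    
    
    distribution/
        docker-compose.yaml
        ngo/
            Dockerfile
            webapp              # compiled executable
            templates/          # html templates and other support files
        tr/
            ...                 # same pattern as ngo/
        ps/
            ...
        nginx/
            Dockerfile
            ngo.conf            # one conf file per website
            tr.conf
            ps.conf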



An example Dockerfile:
    
    
    # the base image is just a convenient starting point; nothing Python-specific is used
    FROM python:3.7
    WORKDIR /ngosite
    
    # copy the compiled webapp, templates, and support files into the image
    ADD . /ngosite
    
    

(Ignore the word "python" up there. I'm not running any python; it is just a 
convenient starting image for me.)

In the "nginx" subdirectory, I have a single conf file for each website similar 
to:
    
    
    # ngo.conf
    server {
        listen 80;
        server_name nimgame.online.localtest.me;
        location / {
            proxy_pass http://ngo:5000/;
            proxy_set_header Host "nimgame.online";
        }
    }
    
    server {
        listen 80;
        server_name nimgame.online;
        return 301 https://nimgame.online$request_uri;
    }
    
    server {
        listen 443 ssl;
        server_name nimgame.online;
        ssl_certificate nimgame.online.crt;
        ssl_certificate_key nimgame.online.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        
        location / {
            proxy_pass http://ngo:5000/;
            proxy_set_header Host "nimgame.online";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    
    

My overall docker-compose.yaml file looks something like this:
    
    
    version: '3.3'
    
    services:
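        # each webapp listens on port 5000 inside the compose network;
        # nginx reaches it by service name, e.g. http://ngo:5000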
        ngo:
            restart: always
            build: ./ngo
            ports:
                - "5000"
            volumes:
                - "./ngo:/ngosite"
            command: /ngosite/webapp
        tr:
            restart: always
            build: ./tr
            ports:
                - "5000"
            volumes:
                - "./tr:/trsite"
            command: /trsite/webapp
        ps:
            restart: always
            build: ./ps
            ports:
                - "5000"
            volumes:
                - "./ps:/pssite"
            command: /pssite/webapp
        nginx:
            restart: always
            build: ./nginx/
            ports:
                - "80:80"
                - "443:443"
            links:
                - ngo
                - tr
                - ps
    
    volumes:
        datavolume:
    
    

I don't run any database myself. IMO, that is a good way to lose data unless 
you are a skilled DB admin and have built a full cluster. Instead I have a 
subscription with ScaleGrid for shared databases. In fact, I choose database 
instances on ScaleGrid that are on the same AWS network as my Digital Ocean 
instances. There is no point in doing database queries across the open 
Internet backbone.

In general, I never store important new data inside the Docker containers. I 
do store cache data locally; for example, IP geolocation lookup caches. It's 
okay if those get wiped out from time to time.

I'm not claiming my setups are ideal, but this can be a starting point.
