Docker-compose + tt-rss

@meyca
How are updates from fox’s master branch applied in your fpm container?

I see that the entrypoint.sh script runs rsync to update files from /usr/src/tt-rss/ to /var/www/html/, but how is the /usr/src/tt-rss/ directory updated?
Should a git pull origin master command be issued before the rsync command?
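For context, a hedged sketch of what that combination might look like (illustrative only — the paths and the `sync_sources` name are assumptions, not the actual entrypoint.sh):

```shell
#!/bin/sh
# Illustrative sketch only -- not the real entrypoint.sh.
# sync_sources() pulls the latest master into the build-time copy
# first, then rsyncs it over the web root -- the ordering the
# question proposes.
sync_sources() {
    src="$1"    # e.g. /usr/src/tt-rss
    dst="$2"    # e.g. /var/www/html
    git -C "$src" pull origin master     # proposed extra step
    rsync -a --delete "$src/" "$dst/"    # existing rsync step
}
```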

The update is done during build time of the container:

https://gogs.meyca.de/carstenmeyer/tt-rss-docker/src/master/tt-rss-fpm/Dockerfile#L87

The (not yet realized) idea is to (automatically) build and push a new container on each push to the master repo of TT-RSS on git.tt-rss.org. I will experiment with my local Gogs installation to see how things (webhooks, git hooks, etc.) turn out.

Unless I’m missing something, it appears that the container won’t receive regular updates as new commits are pushed to the master branch. With the current Dockerfile, the container is built with the latest commit, but to update in the future, the container would need to be stopped, removed, and built from source again.

Fox’s startup.sh script avoids this by running git pull origin master on container restart, making sure an existing container gets updated to the latest commit.

https://git.tt-rss.org/fox/ttrss-docker-compose/src/master/app/startup.sh#L17-L24
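In outline, the clone-or-pull pattern looks like this (a sketch under assumptions — the repository URL, paths, and the `update_tree` name are illustrative; see the linked script for the real logic):

```shell
#!/bin/sh
# Sketch of the clone-or-pull pattern: the first start populates
# the empty volume, later restarts fast-forward the existing checkout.
update_tree() {
    dst="$1"    # e.g. /var/www/html/tt-rss
    if [ ! -e "$dst/index.php" ]; then
        # volume is empty: fetch a fresh checkout
        git clone https://git.tt-rss.org/git/tt-rss.git "$dst"
    else
        # checkout exists: update it to the latest master commit
        git -C "$dst" pull origin master
    fi
}
```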

@milkman671, you are absolutely right and not missing anything. :wink:

It is part of the container philosophy to think of a container as a complete piece of software (or service) including all of its dependencies. With that in mind, you want a single place to update this “monolithic” container. If the container also updates itself, you suddenly have two places where updates can occur. This can become a versioning nightmare, because different combinations of service and dependency versions become possible.

So, in general, it is not a good idea to have a container update itself. @fox’s setup does not use multiple containers for the tt-rss service, so there is only one version combination. That reduces the problem, but it can still be difficult to support, or become a security problem, if you forget to update the container image itself.

To avoid this, I want to use docker’s infrastructure to build the container and push it to the public registry. But this is, as I mentioned, not yet realized. :neutral_face:

I forgot to mention Watchtower, which can automatically update running containers.
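For example, a watchtower service could be added alongside the others in docker-compose.yml (a minimal sketch — the image name and the interval value here are assumptions, check watchtower’s own documentation):

```yaml
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      # watchtower talks to the docker daemon to restart
      # containers whose images have newer versions
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 86400   # poll for new images once a day
```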

Instead of using the 15min update cron script, I wrote a basic Dockerfile using the fpm-alpine image to run a separate container for the update_daemon2.php script.
The changes below will remove the 15min update cron script and build/run a separate feed updater container. A weekly reboot script is also included to automatically update the existing app container to the latest commits in the master branch.

Edit: Based on the feedback, a simpler solution is described in the post below
https://community.tt-rss.org/t/docker-compose-tt-rss/2894/91?u=milkman671

a better way would be basing both the app and daemon containers on one common ancestor and going from there, instead of using both alpine:3.9 (web) and that fpm-alpine image for the daemon.

i’m not saying using the daemon is a bad idea in itself but your implementation is.

Agreed. It looks like the app Dockerfile can be updated with a couple of packages and then re-used for the updater service.

Add the following packages to app/Dockerfile
php7-pcntl php7-posix
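In Dockerfile terms that is a one-line change (a sketch — the actual apk add line in app/Dockerfile lists many more packages, abbreviated here):

```dockerfile
# pcntl/posix let update_daemon2.php fork its worker processes
RUN apk add --no-cache php7-pcntl php7-posix
```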

Edit: The master branch has been updated to include an updater service container

yeah, this is a much better approach, and i think this is what @meyca was doing upthread.

if you can file a PR with a clean modification for this, i’ll merge it.

also, why cron and reboot scripts? you don’t need cron at all, i think. if the main daemon process exits, the container will restart.
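That is, the compose restart policy already covers the daemon dying (a fragment of the updater service definition; the restart behaviour is standard compose semantics):

```yaml
  updater:
    # if update_daemon2.php exits, the container stops
    # and compose restarts it -- no cron needed
    restart: unless-stopped
    command: "php /var/www/html/tt-rss/update_daemon2.php"
```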

I can’t fork the repository or create a new pull request for some reason, otherwise I would be happy to submit the PR.

post your gogs username.

also check the stickies.

I just toyed around with fox’s “official” docker container.

Everything works as expected except for terminating SSL with caddy. The reason for this is that the container doesn’t agree to the Let’s Encrypt TOS.

I fixed this by adding ENV ACME_AGREE=true to this Dockerfile.

Everything works as expected now.

Do you want me to fix that upstream?

yeah, a PR would be nice. i’m glad that it works even though personally i would put it behind nginx anyway.

this is one of those things that i wasn’t able to test because my test VMs don’t have a public IP.

nameless on Gogs. Please give me rights to fork the repo.

done.

/20charrrrrrrrRRR

e: i think i’m going to remove the “WIP” warning for the scripts soon, everything seems to work as intended (one notable exception is mail).

Thanks for merging this.

Are you going to push an Image of this container to Docker Hub or is this not going to happen?

i probably should even though i dislike all these *hubs (while actively using them, which makes me a hypocrite).

this is a compose solution though so i’m not sure what should be published exactly. app container? is there any point in it if you’re going to need to check out docker-compose.yml anyway?

i would understand if the image was self-contained with source baked in the container but this setup works differently.

ideas?

I did some research tonight and from what I can tell pushing to docker hub would only make sense if we bundle ttrss source code with the container.

As you pointed out before, this setup is different.

I could imagine pushing a new container to docker hub every time the ttrss source changes. Gogs has webhooks for that, if I am not totally mistaken. However, that would require a fundamentally different setup and I don’t know if it is worth the hassle.

Are you sure you want to provide an “official” ttrss image in the first place?

Why does tt-rss source code need to be bundled? The startup.sh script will automatically clone the master repo if the files are not present in the volume. If the files are present in the volume, startup.sh will automatically pull the latest commit from the master branch, ensuring the container is up-to-date.

The only time a new image would need to be pushed to Docker Hub would be if a component of this ttrss-docker-compose repo (like the startup.sh script) needed to be updated in the image on Docker Hub.

The docker-compose.yml could be published as the README for the container on Docker Hub, and a default Caddyfile could also be posted, which is then saved on a persistent volume and referenced in the volumes section of the web service.

The only change required for a successful build on Docker Hub is to set default values for the ARGs in app/Dockerfile:

ARG OWNER_UID=1000
ARG OWNER_GID=1000

Example docker-compose.yml

version: '3'

# set database password in .env
# please don't use quote (') or (") symbols in variables

services:
  db:
    image: postgres:12-alpine
    restart: unless-stopped
    volumes:
      - db:/var/lib/postgresql/data
    environment:    
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}

  app:
    image: <docker_hub_username>/<docker_hub_reponame>
    restart: unless-stopped
    environment:
      - DB_TYPE=pgsql
      - DB_HOST=db
      - DB_NAME=${POSTGRES_USER}
      - DB_USER=${POSTGRES_USER}
      - DB_PASS=${POSTGRES_PASSWORD}
      - OWNER_UID=${OWNER_UID}
      - OWNER_GID=${OWNER_GID}
      - SELF_URL_PATH=${SELF_URL_PATH}
    volumes:
      - app:/var/www/html
    depends_on:
      - db

  updater:
    image: <docker_hub_username>/<docker_hub_reponame>
    restart: unless-stopped
    volumes:
      - app:/var/www/html
    depends_on:
      - app
    user: app
    command: "php /var/www/html/tt-rss/update_daemon2.php"

  web:
    image: abiosoft/caddy:no-stats
    restart: unless-stopped
    ports:
      - ${HTTP_PORT}:2015
    volumes:
      - app:/var/www/html:ro
      - path/to/Caddyfile:/etc/Caddyfile
    depends_on:
      - app

#  web-ssl:
#    image: abiosoft/caddy:no-stats
#    restart: unless-stopped
#    environment:
#      - CADDYPATH=/certs
#      - HTTP_HOST=${HTTP_HOST}
#    ports:
#      - 80:80
#      - 443:443
#    volumes:
#      - app:/var/www/html:ro
#      - path/to/Caddyfile:/etc/Caddyfile
#      - certs:/certs
#    depends_on:
#      - app

volumes:
  db:
  app:  
  certs:
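The variables referenced above (`POSTGRES_USER`, `HTTP_PORT`, etc.) would come from an `.env` file next to the compose file. Example values only (pick your own password):

```shell
# .env -- example values only
POSTGRES_USER=ttrss
POSTGRES_PASSWORD=changeme
OWNER_UID=1000
OWNER_GID=1000
SELF_URL_PATH=http://example.com/tt-rss
HTTP_PORT=8280
```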

so, app container.

but what’s the point? you won’t be able to simply pull and run this image because it relies on other containers (i.e. postgres) to work.

if the app container changes there are likely also changes in the compose yml; it makes sense to me to keep everything together and update through git. it just seems easier than keeping yet another thing up to date somewhere else.

users would have to get the compose file somewhere anyway, might as well use git to get the whole thing at once.

instead of pushing things to the git repo i’d need to update things on docker hub. i don’t really see what that would give anyone other than a dependency on a third-party service for no reason.

instead of simply git pull-ing from the scripts repo, users would need to know that there are compose changes and update manually. it’s worse for everyone.

e: i ran into similar issues with updating the weblate compose setup, where images and compose scripts are independent, so docker-compose pull can fetch an image which doesn’t work with the current compose script, which you then need to update independently.

i mean i understand hosting an OS image or something like that on docker hub. it makes sense to use them in that situation: they likely have a CDN, etc., rather than dealing with incoming traffic / origin load yourself.

or, if you don’t have the ability (or simply don’t want) to host anything yourself and already use github and all those other *hubs, you might as well publish on dockerhub.

none of this applies here.