Docker-compose + tt-rss

I’m a newbie to both Docker and tt-rss. I’m sorry that I didn’t know the precise syntax to use.

I was confused because the commands above are in copy & paste style, and that one isn’t.

I’m happy to edit the wiki to make it clearer, but I don’t have permission.

Are there any plans for a migration guide or tool?

My server’s not riced out or anything, but I’m not familiar with Docker, so I’ve definitely got a learning curve ahead of me.

Some specific questions I hope to figure out…

  1. I assume that the container(s) will have the following:
  • Web server configured to use included PHP binaries
  • DB infrastructure
  • PHP code for the app
    Is this correct?
  2. Where can I find information about pointing my existing, bare-metal web server to the container’s web server? It’s Apache, and I know you prefer nginx, but I assume there’s something out there somewhere. My goal is to use port 443 on the host machine to access the container’s web server so I can get my RSS fix.
  • My search-fu is failing me. I have things other than TT-RSS running on my bare metal, but most of the results I get are “How do I view a web server that’s running in a container?” and I would like it to be clean instead of relying on port-forwarding for a specific app.
  3. Is there a way to migrate my existing setup, or would this be like configuring it anew, exporting and then importing an OPML to migrate my feeds?

Thanks!

I gave you the benefit of the doubt and decided to read through the wiki. I don’t think you should be given edit permissions for it. It’s pretty clear.

I think the issue here is that you’re really new with this stuff. That’s okay, everyone starts somewhere. Just keep in mind that you are new to this stuff and asking to edit installation instructions when you’re still learning probably isn’t prudent.

i moved your post to a more appropriate place.

yes. also: web server that you reverse proxy to.

i dunno, google “docker reverse proxy”? the FAQ includes an example snippet.

i don’t know what this means.

a common pattern is one reverse proxy which terminates SSL and connects to individual containerized services, which are listening on separate ports, likely bound to localhost. in case of tt-rss it boils down to forwarding one location (/tt-rss/).
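
e.g. for nginx, forwarding that one location is a few lines. a minimal sketch, assuming the container publishes plain http on 127.0.0.1:8280 (adjust to whatever your compose setup actually binds):

location /tt-rss/ {
  proxy_pass http://127.0.0.1:8280/tt-rss/;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Proto $scheme;
}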

YMMV but that’s how i did it:

  • dumped existing host tt-rss database
  • started container setup to create storage volumes, init database, etc
  • imported database dump into container database server like this: zcat ttrss_stable.sql.gz | docker exec --user postgres -i ttrss-docker-static_db_1 psql
  • moved plugins.local, themes.local to app docker volume
  • copied some extra stuff to config.php on the same volume

that’s it. there’s really no need to deal with OPML unless you also need to migrate from mysql; in that case there’s a separate tool, though it’s not perfect.

i’m using the docker hub version which is auto updated by watchtower.
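
for reference, a watchtower service is only a few lines of yml. this is a sketch, not something that ships with the tt-rss compose file (the image is upstream’s containrrr/watchtower; it needs the docker socket to do its job):

services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock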

if you have any further questions i refer you to this thread and docker documentation, which is quite extensive while being easy to understand.

Thank you, fox!

By “clean” I meant where I could, from a different machine, use a url like https://server/tt-rss instead of having to go to the dockerized web server using something like “https://server:8380/tt-rss”. So it looks like a reverse proxy is the way to go.

So again, thanks!

Hi all, I was trying to get the dockerized version to run on a Raspberry Pi 4. It didn’t immediately work, so I wanted to share how I got it running.

The problem was that the alpine and caddy base images were pulling down as x64 rather than ARM. As best I can tell, Docker Hub is supposed to figure this out for you when there are images available for your architecture, but the Pi reports its CPU strangely and that doesn’t work.

To get this working, I followed the dynamic instructions rather than the static ones. Then I edited the Dockerfile for app to use “arm32v7/alpine:3.12” as a base image, and the one for web to use “detroitenglish/docker-caddy-rpi:latest”. After doing that, all 5 containers start and the web UI basically seems to work. I’m still poking at it, but it’s a fresh install, I can log in as admin, and it just pulled articles for the default feed setup, so I’m going to call it a success in terms of setup.

YMMV, of course; in particular, that latter image doesn’t seem authoritative judging by the name alone, but for me this is not a critical app, and if I had to reinstall my entire RPi I wouldn’t care that much. Hope this is of some use to others! All my Googling just led me to extremely old instructions with a fairly different container setup.
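
For reference, the two base-image edits amount to just the FROM lines; everything else in each Dockerfile stays as shipped:

# app/Dockerfile: ARM variant of the stock alpine base
FROM arm32v7/alpine:3.12

# web/Dockerfile: an ARM build of Caddy
FROM detroitenglish/docker-caddy-rpi:latest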

Just now I converted from direct hosting tt-rss to the docker version. Here are a few notes, which I hope will be helpful to other people. Some of the things I’ve done may not be optimal, so I welcome comments.

I started with the docker-compose.yml and instructions at https://git.tt-rss.org/fox/ttrss-docker-compose/src/static-dockerhub/README.md, but don’t bring it up yet.

Copy .env-dist to .env

Create a user and group for tt-rss. I called mine “ttrss”. I used these commands:

addgroup --system ttrss
adduser --system --no-create-home --group ttrss

Note what the gid and uid are, or run something like getent passwd ttrss. Edit the .env file and put in the appropriate ids.

Then use pwgen or something to generate a password for postgres, and put that in there.

For my setup, the SELF_URL_PATH is http://my.host.name/tt-rss with http, not https, even though I am using a secure connection. The connection between my reverse proxy (non-dockerized) and TT-RSS is not encrypted.
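
Putting those pieces together, the relevant .env lines end up looking roughly like this. The variable names are from the .env-dist I copied, so check yours if they differ; the ids and password are placeholders:

# uid/gid of the ttrss user and group, from getent passwd ttrss
OWNER_UID=117
OWNER_GID=121
# whatever pwgen generated
POSTGRES_PASSWORD=changeme
SELF_URL_PATH=http://my.host.name/tt-rss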

Then I stopped my old tt-rss updater with systemctl stop tt-rss-local. The method of stopping the updater will be different on different installs.

Then I exported the old postgres database: sudo -u postgres pg_dump --clean tt-rss | zstdmt > backup/tt-rss.db.zst (of course, use the appropriate database name for your install).

At this point I was finally ready to run docker-compose up -d in the ttrss-docker directory.

Then I imported the database to the docker postgres with zstdcat ~/backup/tt-rss.db.zst | sudo docker exec -i ttrss-docker_db_1 psql -U postgres

Once the dockers were all running, I copied over my old plugins.local and themes.local to those directories in /var/lib/docker/volumes/ttrss-docker_app/_data/tt-rss and ran chown -R ttrss: on them.

At this point things mostly work, and a connection to http://localhost:8280 should give a SELF_URL_PATH error.

I’m using Apache as my reverse proxy, because my setup predates widespread use of nginx.

<Location /tt-rss>
  ProxyPass        http://localhost:8280/tt-rss
  ProxyPassReverse http://localhost:8280/tt-rss
  ProxyPreserveHost On
</Location>
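
If Apache complains about the ProxyPass directives, the proxy modules may need to be enabled first; on a Debian-style Apache that’s:

sudo a2enmod proxy proxy_http
sudo systemctl restart apache2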

It didn’t immediately work, because I had to do some firewall tweaking so that the different tt-rss dockers could talk to each other. Depending on your setup, this might not be necessary.

After that, I was able to connect to https://my.host.name/tt-rss and it worked. I logged in with my username, and all of my stuff (except some favicons) was there. A bunch of my feeds were red though, because the tt-rss docker couldn’t reach my rss-bridge docker.

There are probably lots of ways to make this work, but the easiest for me was to just put tt-rss on the same network as rss-bridge. Those feeds were already subscribed with http://172.29.0.2/?action=display&bridge=... so all I needed was to let tt-rss see that IP. To do that, I edited the docker-compose.yml file and added

networks:
  default:
    external:
      name:  my-docker-network-name

At the top. Now tt-rss can talk to my other containers.
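
Note that external: means compose expects the network to exist already. These commands help find and verify it (the name is whatever network rss-bridge is actually attached to):

# list networks to find the one rss-bridge is on
docker network ls
# confirm the rss-bridge container (and its IP) shows up
docker network inspect my-docker-network-name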

docker-compose down && docker-compose rm && docker-compose pull && docker-compose up -d, and the rss-bridge feeds were working again.

Much of that stuff has been covered in other places, but some of the steps weren’t immediately obvious. None of it is magic, and this list is not a tested recipe, so don’t blindly copy and paste, but hopefully it will give people a general series of steps to follow.

There are different ways to do lots of these things, for example it might be better to add rss-bridge to the docker-compose.yml file, or to bridge the two networks, but this is the way I chose to do it. I welcome criticism if there are better ways to do any of this.

If it doesn’t work, in my case all I have to do is revert the Apache changes, and I can go back to using my hosted version. If it does work, run systemctl disable tt-rss-local.service or equivalent. Then in a week or so, rm -rf the old hosted install and DROP the database.

ETA: Oh yeah, don’t forget to exclude /var/lib/docker/volumes/ttrss-docker_app/_data/tt-rss/cache from your backups.
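
For instance, with a plain tar backup that could look like this (just a sketch; adapt to whatever backup tool you actually use):

tar --exclude=/var/lib/docker/volumes/ttrss-docker_app/_data/tt-rss/cache \
    -czf ttrss-app-volume.tar.gz /var/lib/docker/volumes/ttrss-docker_app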

i’m just going to note that all this stuff about making a host user and customizing UID/GID is not necessary.

e: also i’d use an internal hostname for rss-bridge feeds instead of an IP address.
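
on a shared user-defined network docker’s embedded dns resolves compose service names, so assuming the service is literally called rss-bridge in its compose file, the subscription url could be:

http://rss-bridge/?action=display&bridge=...

and it keeps working if the container IP ever changes.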

The permissions on /var/lib/docker/volumes will keep a user from accessing the files, even if they share a UID with them, but it would really mess with filesystem quotas for that user. Also, it just looks ugly to have a bunch of files from a container owned by some random user. I have no idea what will happen if somebody runs deluser --remove-all-files on the user with that UID. Certainly those things are not a big deal on a single user system, but I work on a lot of multi-user systems, so I tend to plan that users come and go, and they’ll do stupid things like download terabytes of data without knowing where to put it.

Yes, it is probably better to use names for the different dockers, and be able to address them that way. I went down the path of using static IP addresses for my containers, and I’m going to stick with it even if it makes things harder for me!

i don’t think you should go in there directly, at least not during regular container operation. storage volumes are kinda abstracted away for a reason.

if this was a virtual machine, this would be like editing the machine’s filesystem image directly while complaining that the UIDs inside don’t correspond to anything on the host. they aren’t supposed to.

i don’t think i ever used filesystem quotas in my life (other than on dedicated storage devices) so i can’t comment here.

e: me pointing out the unnecessary parts was largely because someone is obviously going to follow the most convoluted post on this forum for his rpi “homelab” because people somehow always make things harder on themselves for no reason.

Why does the default (non-ssl) setup still use Caddy 1 instead of nginx? Caddy 1 is not supported anymore, and I read somewhere in this gigantic thread that fox doesn’t want to upgrade to Caddy 2 yet (can’t remember why). Also, web-nginx already exists and works fine, so it’s not more effort.

i think caddy is only there because of letsencrypt, which is a niche use case. we could switch to nginx being the default and keep caddy optional.

this would also remove an unmaintained (?) third party dependency.

e: at some point nginx wasn’t there, so using caddy would save on configuration files and stuff, but since web-nginx is there anyway, yeah, it doesn’t make a lot of sense to use caddy for http.

i’m going to migrate web to web-nginx as the default, basically comment out the former and uncomment the latter in the default yml.

we’re going to keep caddy as an optional ssl-aware frontend.

relevant commits:

https://git.tt-rss.org/fox/ttrss-docker-compose/commit/105edb31490d0d6c970afd02373173ca179ff8b0

https://git.tt-rss.org/fox/ttrss-docker-compose/commit/c447753a50dcc6f6bb1c56b923825e89a8d2bd12

happy new year, etc.

e: this may be entirely subjective but i think it works faster with nginx too, especially noticeable on larger cached media files not slowing down the rest.

I’ve been looking at migrating to Docker as well; it looks like a really pleasant setup. I am aware that you don’t much care about the /tt-rss/ subfolder being there, @fox, and I understand your reasoning. But I don’t want to confuse a handful of not especially tech-savvy users here, so I wanted to get rid of it.

So I went and changed nginx.conf to use root /var/www/html/tt-rss; and removed the tt-rss part of the two location statements, then loaded my old database as described in the forum, and have been running this as a test for a few days now. It seems to work nicely.
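
For reference, the relevant part of my nginx.conf now reads roughly like this. It’s a paraphrased sketch rather than a verbatim copy of the stock file, and the fastcgi_pass target (the app service on port 9000) is my assumption about the stock compose layout, so double-check against your checkout:

root /var/www/html/tt-rss;

location / {
  try_files $uri $uri/ /index.php;
}

location ~ \.php$ {
  fastcgi_pass app:9000;
  fastcgi_index index.php;
  include fastcgi.conf;
}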

Is this an acceptable modification or will it break in unexpected ways?


nope, it shouldn’t break anything.

I’ve updated the wiki on the subject of connectivity errors:

https://git.tt-rss.org/fox/ttrss-docker-compose/wiki/Home#im-running-into-502-errors-andor-other-connectivity-issues

I think I mentioned the important parts, but suggestions are welcome for anything related I’ve missed that could help troubleshoot these issues.

Thanks for the work on providing tt-rss in docker compose containers.

I was wondering, what are your thoughts on providing & maintaining configuration information separately from the application code?

Typically, with docker containers, the application code & containers should be disposable, with state (the db, etc.) kept outside the container in a datastore or persistent volume.

Currently the application is stored, along with the config, in the ‘app’ persistent volume.

When starting a new container, some parameters can be provided through environment variables; then on first startup, defaults taken from config.php-dist are displayed to the user, validated, and saved to config.php.

There’s also a bunch of options in config.php that can’t be configured through the interface and must be manually edited, e.g. SINGLE_USER_MODE, SIMPLE_UPDATE_MODE, LOCK_DIRECTORY, CACHE_DIR, ENABLE_REGISTRATION, REG_MAX_USERS, and so on. A whole lot of config is possible.

One solution some alternative Docker configurations have used is reading environment variables and using sed to replace some of these configuration items on startup.

This effectively means there has to be a one-to-one mapping/mechanism between each configurable item and an environment variable.

It’s manageable, but the use of sed to modify config.php is a bit hacky.
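
To illustrate the shape of that approach, the startup snippet typically looks something like this. The TTRSS_ENABLE_REGISTRATION variable and the config.php path here are hypothetical, purely for illustration:

#!/bin/sh
# rewrite one define() in config.php from an environment variable at startup;
# illustrative only, not any image's actual entrypoint
sed -i \
  "s/define('ENABLE_REGISTRATION',.*/define('ENABLE_REGISTRATION', ${TTRSS_ENABLE_REGISTRATION:-false});/" \
  /var/www/html/tt-rss/config.php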

Another option is to put config.php on its own docker volume, separate from the app volume, which would persist it and allow the app volume/code to be rebuilt separately. But this doesn’t allow for easy configuration through environment variables, and deployment using configuration/architecture as code.

I think ideally, it’d be great if nearly everything in either config.php or config.php-dist was configurable through environment variables, then stored in a persistent data store. However, this needs to be designed into the installation process.

Do you have any plans or thoughts on how you want configuration to be initialized and persisted long term?

having everything in config.php configurable via environment is a good idea but i’m not sure how to do this properly given its format (a php source file). the setup is currently using sed for SELF_URL_PATH etc but it’s obviously not the best solution.

2 posts were split to a new topic: Docker: environment-based configuration

Thanks to everyone who posted here, and obviously to fox specifically for the work and documentation. Thanks to you, I successfully migrated from a hosted install (on an ARM64 platform) to a docker install (also on an ARM64 platform, though a different device).

To be able to get the containers up and running on ARM64 I used the dynamic installation. I understand that to update the application I just have to restart the containers, but I have some extra questions:

  1. Can I see somewhere whether there is an update available, so that I’m not restarting the containers when it’s not necessary? (Either in the app itself, or by comparing the version number to a published number on GitHub or somewhere else.)
  2. Do I always need to restart all containers, or only the app container, or …

Thanks!