Docker-compose + tt-rss

If you’re suspecting SELinux, have you tried setting it to permissive mode?

Also, if you’re having connection issues, check if firewalld is blocking them.

Hello - thanks for setting this up using Docker Compose. Not only was it simpler than the old way of doing things, it was also my first introduction to docker.

I have this working well behind a caddy (version 2) reverse proxy, installed directly (via sudo apt install) on Ubuntu Server 20.04. In case anyone comes googling for this, I’ve pasted the Caddyfile below, with a few tips “for dummies” (like me).

As just indicated - I run my caddy reverse proxy directly on the host (“bare metal”?), and would like to Dockerise it too. This won’t work if you just mount the same Caddyfile (pasted below) into the dockerised version of caddy. I think that’s because once it’s inside the container, the addresses “localhost” or “127.0.0.1” (i.e. what the reverse proxy sends traffic to) will refer to the container itself - not your physical host machine. My (n00b) understanding is that I should somehow link the reverse proxy and tt-rss containers, or create a shared (virtual) network that bridges them.

Assuming that’s right, my question is: what’s best practice here - do I edit tt-rss’s docker-compose.yml file to define a specific network, and then also pass that network as an option when spinning up the reverse proxy container? And if so, do all four of the containers created by tt-rss (web, app, db, updater) need that custom network (or some other, internal network) defined for them? Also, is there anything to be done about the ports exposed by the various containers?
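To make the question concrete, here’s a sketch of what I imagine (completely untested - the service names, the network name, and the caddy setup here are my own guesses, not taken from the real tt-rss compose file):

```yaml
# docker-compose.override.yml - hypothetical sketch; names are guesses
version: "3"

services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks:
      - ttrss_net

  web:
    networks:
      - ttrss_net

networks:
  ttrss_net:
```

My understanding is that, inside that shared network, the Caddyfile would then point reverse_proxy at the web service by name (e.g. web:80) instead of 127.0.0.1:8280 - but I may well be wrong.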


Notes for people wondering how to set up caddy as a reverse proxy to tt-rss, without using Docker:

It was as simple (on ubuntu) as:

  • adding the repository to apt and then running sudo apt install caddy (follow the instructions on the Caddy website),
  • using the Caddyfile below (which should live at /etc/caddy/Caddyfile), and then
  • enabling / reloading the service via sudo systemctl enable caddy (or “restart” rather than “enable”).

Make sure your ufw (firewall) rules also allow TCP traffic on those ports. Finally, if your server is behind your own router/gateway (e.g. at home), make sure the router/gateway forwards ports 80 and 443 to the server your reverse proxy lives on.
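On Ubuntu, opening those ports with ufw is typically just:

```
# allow HTTP and HTTPS through the firewall
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status   # confirm the rules took effect
```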

Caddyfile:

example.org # Add your domain name here, including the subdomain if appropriate. If you omit http:// or a port, caddy will set up HTTPS for you automatically - including fetching certificates for the domain/subdomain.

reverse_proxy /tt-rss/* 127.0.0.1:8280

Turns out podman and podman-compose seem to be a perfect drop-in replacement for docker on Fedora 32.

I’ve read the README, but I’m a bit confused on the way to define SELF_URL_PATH. I don’t want the trailing /tt-rss/ at the end since I have a dedicated subdomain for tt-rss. How can I do that?

I am using nginx. I tried to reverse_proxy from my subdomain to http://localhost:8280/tt-rss/ but I get the error “Startup failed: Please set SELF_URL_PATH to the correct value detected for your server: http://localhost:8280/tt-rss/”. So I changed SELF_URL_PATH directly to my subdomain, but I get the same result.

Here’s my .env:

# Copy this file to .env before building the container.
# Put any local modifications here.

BUILD_TAG=latest

POSTGRES_USER=postgres
POSTGRES_PASSWORD=long_password

OWNER_UID=1000
OWNER_GID=1000

# You can keep this as localhost unless you want to use the ssl sidecar 
# container (I suggest terminating ssl on the reverse proxy instead).
HTTP_HOST=localhost

# You will likely need to set this to the correct value, see README.md
# for more information.
SELF_URL_PATH=http://feed.mydomain.com/

# bind exposed port to 127.0.0.1 by default in case reverse proxy is used.
# if you plan to run the container standalone and need origin port exposed
# use next HTTP_PORT definition (or remove "127.0.0.1:").
HTTP_PORT=127.0.0.1:8280
#HTTP_PORT=8280

And here’s my nginx conf:

server {
    listen 80;
    listen [::]:80;
    server_name feed.foolstep.com;

    location / {
        proxy_pass http://127.0.0.1:8280/;
    }
}

PS: Since I successfully ran tt-rss on Fedora, I am now trying on my server which is on Ubuntu Server 18.04.

you can’t with my containers. use something else if that’s that important to you. this has been discussed before in this thread.

also i’m sure there’s enough documentation around to figure out SELF_URL_PATH without any further spoonfeeding on our part.

Can you allow randompherret to fork? I would like to add an example override file and add to the documentation.

sure, done. /20charRRRR

docker volume rm ttrss_*

Not that hard to type that out. I hope that people new to self-hosting who come here looking for help aren’t scared off by your horrible attitude. I’ll enjoy the ban that I’m sure is coming, given that you suspended someone who was merely confused about why the system running a Docker container wasn’t considered supported.

Respectfully, it’s more than your contribution. You created an account just so you could complain about the owner of a forum? :roll_eyes:

itt: a wannabe martyr :thinking:

Hello! Any intention to update this to use Caddy v2, now that it’s released and v1 is no longer being worked on?

if someone wants to file a PR (and there’s a telemetry-less docker repository), sure.

Happy to - I have it working on my local machine, and v2 of caddy dropped telemetry entirely. It’s also TLS by default, as it can do local TLS certificate issuance (without Let’s Encrypt or an external domain), so that even localhost traffic can be encrypted. I think that means you no longer need to have different /web and /web-ssl directories, and maybe have everything depend on a single environment variable (which the user could set as either localhost or a web domain name) - but I want to play around a bit to test all that.
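For instance, something as small as this (an untested sketch - the upstream port is just tt-rss’s default from the .env) gets you locally-issued TLS:

```
# Hypothetical minimal Caddyfile: caddy v2 issues a certificate from its
# own local CA for "localhost" automatically - no Let's Encrypt needed.
localhost {
    reverse_proxy /tt-rss/* 127.0.0.1:8280
}
```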

I’ve signed up an account on git.tt-rss.org, but can’t create or fork the repo. Do I need special permissions? My username is dante.

Second question - how important to you are the following three directives in the current Caddyfile? I’d need to figure out how to convert them to v2 format - I just don’t know whether they’re especially important in this scenario.

log stdout
errors stderr
internal /tt-rss/cache

the first two are niceties so that you can see log output via docker-compose logs, i’d like to keep those if possible.

the last one is actually important: it’s related to nginx-compatible X-Accel support. it marks the cache directory so that static files are sent the fast way, without going through php.
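for context, the nginx side of x-accel looks roughly like this (paths here are illustrative, not necessarily the exact config): php replies with an X-Accel-Redirect header pointing into an internal location, and nginx then streams the file itself.

```nginx
# illustrative sketch - an internal-only location that nginx serves
# directly when php responds with "X-Accel-Redirect: /tt-rss/cache/...",
# so static cache files never stream through php
location /tt-rss/cache {
    internal;                 # not reachable by direct client requests
    root /var/www/html;       # hypothetical document root
}
```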

if there’s no X-Accel support i’d rather go back to nginx.

sure, you should be able to fork and make repositories now.

OK - logging should be doable, see log (Caddyfile directive) — Caddy Documentation
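Something along these lines, I think (an untested sketch based on the v2 docs - the upstream address is carried over from the v1 setup):

```
example.org {
    # access log to stdout so docker-compose logs picks it up;
    # caddy already writes its runtime/error output to stderr by default
    log {
        output stdout
    }
    reverse_proxy /tt-rss/* 127.0.0.1:8280
}
```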

Less sure about X-Accel, though. I think it might be better to wait for caddy v2.1 for that. You’re currently using the http.internal module for that directive (in caddy v1), as I understand it. Git issue commentary from the dev suggests he’s working on something in v2.1 that’s related to this (this time leveraging the reverse_proxy directive) - see here:

Second thought - I wonder if we can use caddy’s route{} directive for this, to serve static files from /cache directly, sending everything else to php? See php_fastcgi (Caddyfile directive) — Caddy Documentation ?

See also: Example: configure WordPress with a static cache - Wiki - Caddy Community

Is it just the cache directory that needs to be served as static files? I might have a try.
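Roughly what I have in mind - an untested sketch, where the php upstream address (app:9000) and the web root are assumptions on my part:

```
example.org {
    root * /var/www/html

    # if a request under /tt-rss/cache matches an existing file,
    # serve it directly; everything else falls through to php
    @cached {
        path /tt-rss/cache/*
        file
    }

    route {
        file_server @cached
        php_fastcgi app:9000
    }
}
```

Note this only bypasses php when the file exists - it doesn’t replicate any auth-gating that x-accel might be doing.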

For what it’s worth, my single-user install is using caddy2 sending everything to PHP (including calls to tt-rss’s cache, I guess), and performance is fine - but that’s on an Intel i3 6th-gen machine with 8 GB RAM; I guess a raspi or similar might be a different story.

Out of the box, PHP isn’t multi-threaded: each process handles one request at a time. This means that as requests increase (more users or more interaction), PHP gets tied up handling them and can’t respond. Basically, PHP itself is a bottleneck. This becomes readily apparent if you start using it to serve media files, whereas modern web servers tend to handle that sort of thing easily. By having PHP pass that type of request back to the web server, it’s freed up to handle the work it actually needs to do (loading article content, marking things read, starred, etc.).

Gotcha.

There seems to be quite a lot of possibility within caddy v2 to do this; e.g. the examples here are specifically to bypass PHP if something exists in cache: try_files (Caddyfile directive) — Caddy Documentation

Would that approach suffice, or do we need to be a bit more sophisticated (e.g. do we need to authenticate requests to cached resources, i.e. only serve something from cache if the user has a logged-in session cookie)?

i think we might as well wait until there’s proper x-accel support. it does make a difference and i would definitely prefer to not lose it.

if caddy didn’t support it in the first place, my docker setup would use nginx instead.

Sure. Just so I’m clear on what you’re looking for - is it because simply bypassing the php engine when requesting cached static files isn’t sufficient - e.g. are you relying on x-accel for security as well?