Tiny Tiny RSS: Community

Docker-compose + tt-rss

We’ll have to differ on that one. :slight_smile:

Definitely don’t use Apple products then… :slight_smile:

This morning I updated my docker setup, which was about 10 days old (I'm using a docker-compose override to redirect the web container to an nginx-web folder that is a copy of the pre-Caddy scripts).

It rebuilt the app container, created the updater, and everything is still completely operational.

I agree, I think it’s ready!

The only suggestion I can think of at this point concerns the FAQ entry about nginx. I suggest using docker-compose.override.yml and directing the web container to build from a new folder instead of overwriting the contents of /web. I find this works better with git pull for keeping up to date with the other scripts.
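For illustration, such an override might look roughly like this (a sketch only: the `web` service name and the `./nginx-web` folder are assumptions based on the setup described above, not a verbatim copy of the stock files):

```yaml
# docker-compose.override.yml -- sketch; service and folder names are
# assumptions, check them against your docker-compose.yml
version: "3"

services:
  web:
    build: ./nginx-web   # a copy of the pre-caddy nginx scripts, kept
                         # outside /web so git pull stays conflict-free
```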

oh yeah this makes sense, thanks.

i’ll add a sample configuration for an nginx frontend container to the repo; it doesn’t hurt anything and it would be easier for non-x86 users, etc.

Excellent, thank you!

Hi there, can anyone help me with SSL? The installation guide only has the hint “optional SSL support via Caddy”. I changed the port in .env to 80, but now I want to use 443 with SSL. How should I configure .env to use HTTPS over port 443? Should I create the certificates manually?

you should bind web-ssl to both 80 and 443, which should be externally accessible via a valid DNS hostname (i.e. SELF_HOST_NAME).

then on startup caddy should acquire certificates from letsencrypt.
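for example, something along these lines in .env (SELF_HOST_NAME as mentioned above; a sketch only, check your docker-compose.yml for the exact variable names):

```shell
# .env -- sketch; verify variable names against your checkout
SELF_HOST_NAME=rss.example.com   # must be publicly resolvable for letsencrypt
```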

I have no problem with the hostname since I’m connected through Cloudflare. So, should I add an entry to .env? Right now I only have HTTP_PORT=80.
The new configuration would be:

HTTP_PORT=80
HTTPS_PORT=443 ?

I’d prefer to use only https, not both http and https.

have you even read through docker-compose.yml? the default configuration for the web-ssl container binds to 80 and 443; this is hardcoded because that’s the only way the letsencrypt setup would work.

http is needed for certificate verification.
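roughly, the relevant part of the stock docker-compose.yml looks like this (a sketch based on this thread, not a verbatim copy; check your own file):

```yaml
web-ssl:
  ports:
    - "80:80"     # kept open for the letsencrypt http challenge
    - "443:443"   # https
```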

if this stock barebones automatic setup doesn’t fit you, you should use your own SSL termination setup.

this would allow you to, among other things, validate letsencrypt certificates via the cloudflare API or a DNS challenge and disable http entirely.

I get the following error:

ERROR: for ttrssdocker_web-ssl_1 Cannot start service web-ssl: driver failed programming external connectivity on endpoint ttrssdocker_web-ssl_1 (6d8260fa647d7c7949ba0b62c48740cbb62471cbe586f64c9a845bec9e3be883): Bind for 0.0.0.0:80 failed: port is already allocated
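(The message means something else on the host is already bound to port 80; commands along these lines can identify the culprit. Output will depend on your system.)

```shell
# see which process on the host holds port 80
sudo ss -ltnp 'sport = :80'

# or check whether another container already publishes it
docker ps --filter "publish=80"
```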

Thanks for working on this. I got it running easily with the README.

I did have a question though. I am migrating from a git clone’d install that sits behind a reverse proxy at https://rss.mydomain.com.

For this setup, I noticed it uses /tt-rss as the “qualifier” for where tt-rss actually lives. My question: @fox have you considered modifying this to allow for a subdomain config like my original setup? If not, would you be open to PRs if I can experiment with things to allow for that?

have you tried reading the error message?

it works just fine on a subdomain, the only difference is cosmetic - i.e. /tt-rss/ in the URL. i don’t really see the point in adding complexity there just because it is not “clean” enough or whatever.

remember, this setup is not meant to be a kitchen sink monstrosity like that shit linuxserver one that started this thread, with fifty idiotic options and a twenty page installation manual.

except if you are pointing the Android App to the short URL :wink:

Well, in the dockerized approach, you can let the fpm container respond on root, and then only introduce the path in nginx with a location /tt-rss block. As for me, I was really confused as to why my install, on a /reader path, would redirect to /tt-rss, a path my reverse proxy didn’t know what to do with.

It sounds kinda unusual for a dockerized app to respond with a path. As I’ve already made the modifications, should I send a pull request? (And you could decide later whether to merge it or not.)

i think URL translation is not as simple as adding a location block though.

yes, normally people assume you’re going to have a dedicated subdomain for every docker app. personally i find it annoying.

depends on what you actually did. if you simply changed the container to respond on root, then no.

There is nothing simple about nginx proxy_pass and proxy_redirect directives :wink:
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

I would too, but instead I expect to have a single frontend nginx doing stuff like this, without exposing the actual app containers themselves:

server {
  listen 443 ssl http2;
  include snippets/ssl;
  location = /app1 { rewrite ^ /app1/ redirect; }
  location /app1 {
    rewrite /app1/(.*) /$1 break;
    proxy_pass http://app1-web;
    proxy_redirect http://app1-web/path /app1;
    # where app1-web is the hostname of the app's web container 
    # on a non-default docker bridge network
  }
}

Yeah, I just deleted index.php and changed the root directive from /var/www/html to /var/www/html/tt-rss.
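i.e., roughly this kind of change inside the web container’s nginx config (a sketch; the exact file layout depends on the image):

```nginx
server {
    listen 80;
    # was: root /var/www/html; with tt-rss living under /tt-rss
    root /var/www/html/tt-rss;   # serve tt-rss at / instead of /tt-rss
    index index.php;
    # ... php-fpm location blocks unchanged ...
}
```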

ahh, i’ve entirely forgotten this exists. still, it’s a rather complicated setup, what with all the rewrites.

The first rewrite is there to support example.com/path instead of only working for example.com/path/

The second might be avoidable. As is proxy_redirect, which has a sensible default. :smiley:

i’ve implemented some fixes related to today’s mini-clusterfuck of git.tt-rss.org going down: if a local copy of the source is available on startup but the attempt to update it via git fails for some reason, the container shouldn’t abort now.