Docker-compose + tt-rss

I am trying out the suggested docker-compose solution and found that the standard setup, whether using Caddy or nginx, exposes the entire git repository without requiring authentication, including this URL:

http://host:port/tt-rss/.git/refs/heads/master

I didn’t find anything in the accompanying README or the FAQ about making sure to hide this. Is the docker-compose solution supposed to expose this information to the internet? Or am I supposed to make sure my reverse proxy is not forwarding these requests? Known vulnerabilities in past versions of tt-rss could be discovered by learning which git commit an instance is running.
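For anyone who would rather block this at the reverse proxy anyway, here is a minimal nginx sketch; this location block is my own suggestion, not part of the tt-rss scripts, and assumes nginx sits in front of the tt-rss container:

```nginx
# deny any request whose path contains /.git (refs, objects, config, ...)
location ~ /\.git {
    deny all;
}
```

Caddy and other proxies have equivalent path-matching directives.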

you don’t say. i don’t want to shock you or anything but the same exact thing is also “exposed” at https://git.tt-rss.org/.

that’s up to you. some people revel in the false sense of security provided by pointlessly trying to block all sorts of random things.

personally i don’t think much could be gained by trying to achieve security through obscurity.

if you bother to read this thread a bit you’d quickly notice how this situation (= people running old tt-rss code) is what i was specifically trying to prevent while making this docker setup.

I’ve had a long-running tt-rss install on a CentOS guest running under Synology Virtual Machine Manager. After some trial-and-error of interpreting the docker-compose files into Synology’s Docker GUI, I’ve got the static docker solution running.

A couple questions:

  • I followed this guide, but didn’t see a reference to the default username/password for the initial login (I guessed reasonable defaults).
  • Re: backups, the above guide and this FAQ say the backup job will run weekly. Does that run at a fixed time/day of the week?

yeah the default is admin/password, README should mention it. i’ll make a note to add it later.

backups happen when dcron decides to run cron.weekly inside the container. i’m honestly not sure when exactly this happens. :thinking:

-rw-r--r--    1 root     root     227123782 Oct  3 03:01 ttrss-backup-20201003.sql.gz
-rw-r--r--    1 root     root       1821449 Oct  3 03:01 ttrss-backup-20201003.tar.gz
-rw-r--r--    1 root     root     226514944 Oct 10 03:01 ttrss-backup-20201010.sql.gz
-rw-r--r--    1 root     root       1820420 Oct 10 03:01 ttrss-backup-20201010.tar.gz

so for my instance it’s saturday 3am for some reason.

After your reply, I did some more reading. Based on the dcron README, here are the contents of the example crontab file shipped with dcron. Your backup script is stored in /etc/periodic/weekly/.

# do daily/weekly/monthly maintenance                                           
# min   hour    day     month   weekday command                                 
*/15    *       *       *       *       run-parts /etc/periodic/15min           
0       *       *       *       *       run-parts /etc/periodic/hourly          
0       2       *       *       *       run-parts /etc/periodic/daily           
0       3       *       *       6       run-parts /etc/periodic/weekly          
0       5       1       *       *       run-parts /etc/periodic/monthly
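Reading the weekly line: minute 0, hour 3, any day of month, any month, weekday 6. Crontab weekday numbering starts with Sunday as 0, so 6 is Saturday, which matches the 03:01 Saturday timestamps on the backup files above. A quick sanity check with GNU date:

```shell
# crontab weekday 6 = Saturday (0 = Sunday); check the backup dates above
LC_ALL=C date -d 2020-10-03 +%A   # -> Saturday
LC_ALL=C date -d 2020-10-10 +%A   # -> Saturday
```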

ah well, it’s confirmed then: saturday 3am.

i’ve noticed that the default docker setup exposed config.php.bak because of a SELF_URL_PATH rewrite. for the vast majority of tt-rss docker users this file contains nothing of interest to a potential third party, but i suggest everyone update just in case.

for the docker hub configuration this is handled in the image, so you only need to pull a new image.

if you’re using the dynamic (sans docker hub) setup then i suggest doing a git pull on the docker scripts and rebuilding your containers.
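for the dynamic setup, that might look something like this (a sketch of the usual docker-compose workflow, not the exact commands from the README; run it from the directory holding your yml):

```shell
# refresh the docker scripts, then rebuild and restart the containers
git pull
docker-compose build --pull
docker-compose up -d
```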

For those of us without lots of resources to spare, can there be an option to use our existing database server and possibly our own web server? In particular, running multiple postgres database servers seems rather wasteful. If I already have a postgres server running for other aspects of my site, why should I start another instance of the database engine just for tt-rss?
The same is true of the web server. If I already have a web server set up with all of the modules for other aspects of my site, why start yet another one?

I’m not going to educate you on why using an isolated database is not a “waste”, if you don’t understand the point of containers, use google until you do.

in any case nobody is forcing you to use docker in the first place nor are you mandated to use a separate postgres server by some higher self hosting authority so the point of your post escapes me.

simply edit the yml until you’re satisfied with the results.

OK, thanks. I had thought that specifying the docker setup as the default meant docker would be required for future versions. If that’s not the case, then great.

The code for restoring the database is missing something. In the FAQ’s section on backing up the database, step 3 says:

/backups/ttrss-backup-yyyymmdd.sql.gz | psql -h db -U $DB_USER $DB_NAME

it should be:

gunzip < /backups/ttrss-backup-yyyymmdd.sql.gz | psql -h db -U $DB_USER $DB_NAME

updated, thanks.


This may work even better, as it should not ask for the db password:

zcat /backups/ttrss-backup-yyyymmdd.sql.gz | PGPASSWORD=$DB_PASS psql -h $DB_HOST -U $DB_USER $DB_NAME

typing in your database password serves as a confirmation, of sorts. it’s probably a good idea to have at least one safeguard. :slight_smile:
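Putting the variants above together, here is a small sketch for grabbing the newest dump before piping it to psql. The `latest_backup` helper is my own invention, not part of the tt-rss scripts; `/backups`, the `db` host, and the `$DB_*` variables are as in the FAQ examples above:

```shell
#!/bin/sh
# print the newest tt-rss SQL dump under the given directory (default /backups)
latest_backup() {
    ls -t "${1:-/backups}"/ttrss-backup-*.sql.gz 2>/dev/null | head -n 1
}

# usage, run from inside the app container:
#   gunzip < "$(latest_backup)" | psql -h db -U "$DB_USER" "$DB_NAME"
```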

The instructions say

Copy .env-dist to .env and edit any relevant variables you need changed.

I assume this means run cp .env-dist .env/ but it isn’t really clear. Could this be clarified?

Given there’s no .env directory to copy anything into, I’m struggling to see the ambiguity in how it should be interpreted:

cp .env-dist .env

(Ignoring the fact that cp foo-dist foo is a common idiom with software distributed in this fashion anyway…)

i hope for your own sake this was some kind of esoteric trolling because it’s just embarrassing otherwise.