I am trying out the suggested docker-compose solution and found that the entire git repository is exposed in the standard setup, whether using Caddy or nginx, without requiring authentication, including this url:
I didn’t find anything in the accompanying README or the FAQ about making sure to hide this. Is the docker-compose solution supposed to expose this information to the internet? Or am I supposed to make sure my reverse proxy is not forwarding these requests? Known vulnerabilities of past versions of tt-rss could be discovered by knowing which git commit the instance is running.
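If you do want your reverse proxy to drop these requests anyway, a minimal sketch for nginx might look like the following (untested, and assumes nginx is the proxy in front of the container):

```nginx
# hypothetical hardening snippet: refuse any request for git metadata
# before it is forwarded to the tt-rss container
location ~ /\.git {
    deny all;
    return 404;
}
```

Equivalent rules exist for Caddy and other proxies; the point is only that the block has to happen at the proxy layer if you want it at all.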
you don’t say. i don’t want to shock you or anything but the same exact thing is also “exposed” at https://git.tt-rss.org/.
that’s up to you. some people revel in the false sense of security provided by trying to pointlessly block all sorts of random things.
personally i don’t think much could be gained by trying to achieve security through obscurity.
if you bother to read this thread a bit you’d quickly notice how this situation (= people running old tt-rss code) is what i was specifically trying to prevent while making this docker setup.
I’ve had a long-running tt-rss install on a CentOS guest running under Synology Virtual Machine Manager. After some trial-and-error of interpreting the docker-compose files into Synology’s Docker GUI, I’ve got the static docker solution running.
A couple questions:
I followed this guide, but didn’t see a reference to the default username/password for the initial login (I guessed reasonable defaults).
Re: backups, the above guide and this FAQ says the backup job will run weekly. Does that run on a fixed time/day of week?
After your reply, I did some more reading. Based on the dcron README, here are the contents of the example crontab file shipped with dcron. Your backup script file is stored in /etc/periodic/weekly/.
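For reference, Alpine's stock root crontab (which dcron reads) drives the /etc/periodic directories on a fixed schedule. If the image keeps the upstream defaults, weekly jobs fire Saturday at 03:00 container time; this is the Alpine default, not something verified against this particular image:

```
# default /etc/crontabs/root as shipped with Alpine; dcron runs each
# /etc/periodic directory via run-parts on these fixed times
# min   hour    day     month   weekday command
*/15    *       *       *       *       run-parts /etc/periodic/15min
0       *       *       *       *       run-parts /etc/periodic/hourly
0       2       *       *       *       run-parts /etc/periodic/daily
0       3       *       *       6       run-parts /etc/periodic/weekly
0       5       1       *       *       run-parts /etc/periodic/monthly
```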
i’ve noticed that the default docker setup exposed config.php.bak because of a SELF_URL_PATH rewrite. for a vast majority of tt-rss docker users this file contains nothing of interest to a potential third party but i suggest everyone update just in case.
for the docker hub configuration this is handled in the image, so you only need to pull the new image.
if you’re using the dynamic (sans docker hub) setup then i suggest doing a git pull on the docker scripts and rebuilding your containers.
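For those fronting the containers with their own web server, a hedged nginx example (location syntax assumed to fit your layout) that refuses backup files outright; the image fix is what actually matters, this is only a belt-and-suspenders rule in case a stale copy lingers:

```nginx
# hypothetical rule for a custom front-end web server:
# deny anything ending in .bak so leftover backup files can't be fetched
location ~ \.bak$ {
    deny all;
    return 404;
}
```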
For those of us not having lots of resources to spare, can there be an option where we can use our existing database server and possibly our own web server? In particular running multiple postgres database servers seems rather wasteful of resources. If I already have a postgres server running for other aspects of my site, why should I start up another instance of the database engine just for tt-rss?
The same is true of the web server. If I already have a web server setup with all of the modules for other aspects of my site, why start yet another web server?
I’m not going to educate you on why using an isolated database is not a “waste”, if you don’t understand the point of containers, use google until you do.
in any case nobody is forcing you to use docker in the first place nor are you mandated to use a separate postgres server by some higher self hosting authority so the point of your post escapes me.
simply edit the yml until you’re satisfied with the results.
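A sketch of what such an edit might look like, assuming the `TTRSS_DB_*` environment variable names match those in the stock compose file and that an existing postgres is reachable at `db.example.com` (both are assumptions; check your own yml before copying):

```yaml
# hypothetical docker-compose.override.yml pointing the app at an
# external postgres instead of the bundled db container
services:
  app:
    environment:
      - TTRSS_DB_HOST=db.example.com   # your existing postgres host
      - TTRSS_DB_PORT=5432
      - TTRSS_DB_NAME=ttrss
      - TTRSS_DB_USER=ttrss
      - TTRSS_DB_PASS=change_me        # placeholder; use your real password
```

With the bundled `db` service removed (or simply never started), only the containers you actually need run.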
OK, thanks. I had thought that specifying the docker setup as the default would require docker for future versions. If that’s not required, then great.