"Mountpoint": "/var/lib/docker/volumes/ttrssdocker_app/_data"
The file update_daemon2.php exists in this folder. How should I call PHP through Docker to run the update process?
if you’re talking about the git.tt-rss.org container - yes, it runs a basic update cronjob every 15 minutes.
there’s a variety of ways to do that, to do it properly you’d have to modify app container startup script and run it from there, preferably using a supervisor.
e: a separate container could run the daemon but it’s probably Too Much Docker
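for reference, the separate-container approach could be sketched as a compose fragment like this (service, image, and volume names here are placeholders, not the project’s actual files):

```yaml
# hypothetical compose fragment: run update_daemon2.php in its own service,
# reusing the app image and the shared application volume.
services:
  updater:
    image: ttrss-app             # placeholder for whatever the app service uses
    restart: unless-stopped
    env_file: .env               # same DB credentials as the app container
    volumes:
      - app:/var/www/html        # same volume the app service mounts
    command: php /var/www/html/tt-rss/update_daemon2.php

volumes:
  app: {}
```

this keeps one process per container, which also makes `docker logs` for the daemon usable on its own.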
[quote=“fox, post:22, topic:2894”]
there’s a variety of ways to do that, to do it properly you’d have to modify app container startup script and run it from there, preferably using a supervisor.
[/quote]OK, I will try this. TT-RSS with Docker is SUPER!!! I installed it with Cloudflare as a proxy and it works perfectly!!!
So I was a user of the linuxserver tt-rss image; however, it seems I’m going to have to switch over to this one, as theirs was deprecated. Looking over the repo, it seems like the image is only compatible with PostgreSQL? I have an existing DB on a MariaDB host outside of my Docker environment. What work would be required to use MariaDB with your image? I’m not an expert, but if someone wants to point me in the right direction, I’m willing to try and test things out.
The linuxserver/ttrss image is a mess now, so this morning I moved to the Awesome-TTRSS image by HenryQW, and in the afternoon I saw @fox has his own “ttrss-docker-compose” and I quickly jumped on it because it has the latest build.
To answer your question: follow /fox/ttrss-docker-compose. It’s straightforward and it has Postgres included; if you need MariaDB, just change the pgsql settings (user/password/host) to your MariaDB ones in the docker-compose file.
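In practice that change amounts to editing the environment the app container gets. A rough sketch, with placeholder variable names (check the actual names in your .env / docker-compose.yml):

```yaml
# illustrative only - the real variable names depend on the compose files in use.
services:
  app:
    environment:
      - DB_TYPE=mysql              # instead of pgsql
      - DB_HOST=mariadb.example.lan
      - DB_USER=ttrss
      - DB_PASS=changeme
      - DB_NAME=ttrss
```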
love it, no more depending on third party images.
thank you @fox
Unfortunately it doesn’t seem that easy. I swapped the values out for MariaDB, but the startup.sh script that runs with the container only checks for PostgreSQL, and since it can’t find it, it never gets past the wait loop.
it should be trivially easy to update the scripts to use mysql instead of postgres.
i’m only going to support postgresql on my docker scripts though. consider it part of my evil plan to force as many people as possible to use a non-shit database server for once in their life.
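if you want to attempt it anyway, the relevant change is the database wait loop in startup.sh. a mariadb version might look roughly like this (a sketch: assumes the mariadb client is added to the app image and that `DB_HOST`/`DB_USER`/`DB_PASS` are set in the container environment):

```shell
# sketch only: replace the postgres readiness loop in startup.sh
# with a mariadb one.
while ! mysqladmin ping -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" --silent; do
	echo "waiting for mariadb on $DB_HOST..."
	sleep 3
done
```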
downloads .tar.gz but at least it’s not an ancient tag
proprietary shit included (mercury api)
nginx/fpm sharing a container for some reason, with s6 included (almost always a sign that the author doesn’t understand how docker containers should work - it’s not a fucking VM)
so much third-party stuff bundled i couldn’t count it all; could be a plus for someone, i guess
it’s better than linuxserver but i wouldn’t personally use it (or endorse it), ever.
`volumes: - ~/postgres/data/:/var/lib/postgresql/data` - doing stuff like this should get you fired into the sun, use actual volumes.
Sadly, I’m not an expert in bash, so while I know enough to tell what it’s doing, I don’t know enough to modify it to my needs.
I would migrate, but I have several other services running off the same MariaDB host anyway, and some don’t support Postgres. My MariaDB just works, so I don’t see much reason to switch at this time.
It seems that I’m going to have to switch to something else anyway. While building from source is great, the docker-compose `build` directive isn’t supported in a Docker Swarm environment, so I literally can’t use this without a lot of hassle.
If I might make a suggestion: it seems like the whole reason for building from source is to make sure users always have the latest code. How about using something like Drone for a pipeline? It would allow you to automatically publish an image to Docker Hub whenever you push code to the repo. I would love to use an official TT-RSS image, but I literally can’t at this time.
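For what it’s worth, a minimal Drone pipeline for that could look something like this (the repository name and secret names below are placeholders):

```yaml
# hypothetical .drone.yml: build the image and push it to Docker Hub
# on every push to master.
kind: pipeline
type: docker
name: publish

steps:
  - name: build-and-push
    image: plugins/docker
    settings:
      repo: example/tt-rss           # placeholder Docker Hub repository
      tags: latest
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
    when:
      branch:
        - master
```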
As far as I understand Docker and Compose, you should use a separate database for each of your applications, so that if one takes its database down, nothing happens to the others. Thus, what ttrss uses should not impact what anything else is using… Not an expert, so I could be wrong.
In any case, it should be easy to clone the repo and modify it to be able to get what you want out of it.
I’m following along with this and the discussions about the DB, and the main thing I’m wondering is: what is so important in the DB that it needs saving? I would have thought posts would be transient.
```
branch master -> FETCH_HEAD
Updating 76dd74e0d..1aeeed930
error: Your local changes to the following files would be overwritten by merge:
	.gitlab-ci.yml
	include/functions.php
	include/version.php
	tests/ApiTest.php
	utils/gitlab-ci/check-schema.sh
	utils/gitlab-ci/config-template.php
	utils/gitlab-ci/nginx-default
	utils/gitlab-ci/php-lint.sh
	utils/gitlab-ci/phpmd-ruleset.xml
	utils/gitlab-ci/phpmd.sh
Please commit your changes or stash them before you merge.
Aborting
```
I had to delete everything, create a new container, and restore from backup; now I am on v19.12-1aeeed930.
it’s very likely that `git config core.filemode false` in the tt-rss directory on the persistent volume would’ve been enough to fix everything.
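something like this, run in the tt-rss directory on the persistent volume, would probably have recovered it without deleting anything (a sketch, not tested against your setup):

```shell
git config core.filemode false   # stop permission bits showing as local changes
git stash                        # set aside whatever local edits remain
git pull origin master           # retry the update
```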
i’ll take a look at this, but in the future don’t just delete everything when you run into an issue - it makes it impossible to debug anything until it happens again.
the container binds to 127.0.0.1 by default in case you’re using a reverse proxy and don’t want the origin port exposed (because docker would silently override your iptables rules in its default configuration).
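in compose terms that default looks something like this (port numbers here are placeholders):

```yaml
services:
  web:
    ports:
      - 127.0.0.1:8280:80   # reachable only from localhost / a local reverse proxy
      # - 8280:80           # this form would publish on all interfaces and
      #                     # bypass host iptables rules via docker's own NAT
```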