I use Docker Desktop on Windows 10 to install ttrss-docker. When I run "docker-compose up --build", I get the following errors. This is my first time using Docker; can anyone help me solve this problem? Thanks.

updater_1 | Could not open input file: /var/www/html/update_daemon2.php
ttrss-docker_app_1 exited with code 2
ttrss-docker_updater_1 exited with code 1
ttrss-docker_app_1 exited with code 2
ttrss-docker_app_1 exited with code 2
ttrss-docker_updater_1 exited with code 1
ttrss-docker_updater_1 exited with code 1
Exception in thread Thread-16:
Traceback (most recent call last):
File "site-packages\docker\api\client.py", line 261, in _raise_for_status
File "site-packages\requests\models.py", line 940, in raise_for_status
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localnpipe/v1.25/containers/a8b3f2f9ed90071b50d6d1b040828c2f4d4c1206e7b249da109444254b864d6b/attach?logs=0&stdout=1&stderr=1&stream=1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "compose\cli\log_printer.py", line 233, in watch_events
File "compose\container.py", line 215, in attach_log_stream
File "compose\container.py", line 307, in attach
File "site-packages\docker\utils\decorators.py", line 19, in wrapped
File "site-packages\docker\api\container.py", line 61, in attach
File "site-packages\docker\api\client.py", line 400, in _read_from_socket
File "site-packages\docker\api\client.py", line 311, in _get_raw_response_socket
File "site-packages\docker\api\client.py", line 263, in _raise_for_status
File "site-packages\docker\errors.py", line 19, in create_api_error_from_http_exception
File "site-packages\requests\models.py", line 880, in json
File "site-packages\requests\models.py", line 828, in content
File "site-packages\requests\models.py", line 750, in generate
File "site-packages\urllib3\response.py", line 496, in stream
File "site-packages\urllib3\response.py", line 444, in read
File "http\client.py", line 449, in read
File "http\client.py", line 493, in readinto
File "site-packages\docker\transport\npipesocket.py", line 209, in readinto
File "site-packages\docker\transport\npipesocket.py", line 20, in wrapped
RuntimeError: Can not reuse socket after connection was closed.

on first startup the updater is going to fail a few times while the app container initializes (i.e. the source is checked out from git) but it should work from then on. it's a known issue.

Thanks.

When I browse to "localhost:8280", I get a 404 error. What's the problem?
The Docker log is:
web_1 | 172.18.0.1 - - [23/Jan/2020:08:57:18 +0000] "GET / HTTP/1.1" 404 14
web_1 | 172.18.0.1 - - [23/Jan/2020:08:57:18 +0000] "GET /favicon.ico HTTP/1.1" 404 14

the updater crashing on first startup should be fixed by this changeset:

https://git.tt-rss.org/fox/ttrss-docker-compose/commit/fa73a498a3c555f37fa27d087f7eec2355a9d5ea

it should redirect to /tt-rss/. post your .env file.

if you get a 404 this means app volume is not ready yet or you have some kind of trouble with checking out from git on git.tt-rss.org. it could be related to cloudflare. check the logs of the app container.
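To actually check those logs, something along these lines should work (the service names are assumptions based on the compose setup in this thread; the grep at the end runs against one of the quoted 404 lines purely as a demo):

```shell
# dump recent logs from the app container (the stack must be running):
#   docker-compose logs --no-color --tail=100 app > app.log
# then search app.log for failed git checkouts. the same style of filter
# pulls the 404s out of the web container's access log; demonstrated here
# on one of the lines quoted earlier in the thread:
grep '" 404 ' <<'EOF'
web_1 | 172.18.0.1 - - [23/Jan/2020:08:57:18 +0000] "GET / HTTP/1.1" 404 14
EOF
```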

I updated ttrss-docker, which solved the "Could not open input file: update_daemon2.php" problem.

I browsed to localhost:8280/tt-rss, but still get a 404.

.env is:

# Copy this file to .env before building the container.
# Put any local modifications here.

POSTGRES_USER=postgres
POSTGRES_PASSWORD=password

OWNER_UID=1000
OWNER_GID=1000

# You can keep this as localhost unless you want to use the ssl sidecar 
# container (I suggest terminating ssl on the reverse proxy instead).
HTTP_HOST=localhost

# You will likely need to set this to the correct value, see README.md
# for more information.
SELF_URL_PATH=http://localhost:8280/tt-rss

# bind exposed port to 127.0.0.1 by default in case reverse proxy is used.
# if you plan to run the container standalone and need origin port exposed
# use next HTTP_PORT definition (or remove "127.0.0.1:").
HTTP_PORT=127.0.0.1:8280
#HTTP_PORT=8280
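As context for the HTTP_PORT comment above: the value normally feeds the web service's ports: mapping, so the two variants differ only in the bind address. A sketch, assuming the service is called web and the container listens on port 80:

```yaml
services:
  web:
    ports:
      # HTTP_PORT=127.0.0.1:8280 -> host-only; a reverse proxy on the same
      #                             machine can reach it, other machines cannot
      # HTTP_PORT=8280           -> exposed on all interfaces
      - ${HTTP_PORT}:80
```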

without app container logs i can’t tell you anything more.

my guess would be git checkout failing because of cloudflare which is not something that i’ll be able to help you with, unfortunately.

here’s an experimental version of the docker setup which bakes tt-rss code into container on build and rsyncs over working copy on startup:

https://git.tt-rss.org/fox/ttrss-docker-compose/src/static

(same repository, static branch)
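Roughly, the static branch's approach (bake the code in at build time, rsync it over the shared volume at startup) can be sketched like this; the base image, paths and the php-fpm entrypoint are assumptions for illustration, not the branch's actual files:

```dockerfile
# build time: bake a full tt-rss checkout into the image
FROM alpine:3.12
RUN apk add --no-cache git rsync php7 php7-fpm \
    && git clone https://git.tt-rss.org/fox/tt-rss.git /src/tt-rss

# startup: sync the baked copy onto the shared app volume, then run fpm;
# rsync keeps restarts cheap because only changed files get copied
CMD ["sh", "-c", "rsync -a /src/tt-rss/ /var/www/html/tt-rss/ && exec php-fpm7"]
```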

e: here’s an EXPERIMENTAL version of the above which uses a dockerhub image:

https://git.tt-rss.org/fox/ttrss-docker-compose/src/static-dockerhub

don’t even think about using this in production.

images on docker hub are updated daily if there were code changes on master branch. technically it’s a post-merge hook.

what i liked:

  • using docker-compose push is straightforward
  • downloading a prebuilt image is obviously faster than git cloning every time from git.tt-rss.org, etc.

what i didn’t like:

  • no way to change OWNER_UID/OWNER_GID because they're baked into the image by adduser/addgroup (this isn't really a big deal; not sure why you'd want to customize these anyway, OR it could be handled on container startup)
  • compose prefers build: over image: (which makes sense i guess) but this means there should be two compose files - one for me to build things and another for users to pull things - am i missing something here?
  • web/web-ssl remain locally built containers, so you still need multiple files instead of a single compose file you can copy-paste, which means you might as well use git to update. i guess i could also publish web and web-ssl to dockerhub. idk.

i think only one of those above branches has a reason to exist and it’s probably the dockerhub one.

If I understand your workflow correctly, you are building the image locally and then pushing it to the Docker Hub repository. An alternative method would be to configure automated builds on the Docker Hub repo by connecting your git repository. When you push changes to the git repository, a new build would be triggered automatically on Docker Hub. This method only requires a single docker-compose.yml file configured for the end-user with the already built image specified in the service.
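A minimal end-user file of that kind might look something like this (the image name and tag are assumptions based on the hub repository mentioned later in the thread, and the extra settings are just illustrative):

```yaml
# sketch: pull-only compose file for end users, no build: sections
version: "3"
services:
  app:
    image: cthulhoo/ttrss-fpm-pgsql-static:latest
    env_file: .env
    restart: unless-stopped
```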

this sounds interesting. i glanced over the manual and i think it involves merging docker-related stuff into main repo which is something that i would prefer to avoid. not sure how that would work with multiple repositories where a secondary one takes precedence over one containing docker scripts.

currently, i have a post-merge hook on a tt-rss repo clone (a cron.hourly script invokes git fetch) which updates the docker scripts from git and then runs docker-compose build / push. the whole thing takes less than a minute to build, is entirely automated, and it's maybe 10 lines of shell code. i'm not sure there's any point in trying to improve this further, tbh, by figuring out the docker build system.

i could invoke the same thing using gogs web hook but, you know, too much effort.

well, except for .env, currently it’s pretty much the same, this is all you need:

https://git.tt-rss.org/fox/ttrss-docker-compose/src/static-dockerhub/docker-compose.yml

I’m not aware of any repo requirements, but I’ve only connected github to Docker Hub for automated builds. I cloned the master branch of the ttrss-docker-compose to a private github repo yesterday, and connected that clone to a private repo on Docker Hub with the automated build configured (previous screenshot), and it completed the build successfully (after setting the default values for OWNER_UID and OWNER_GID in the Dockerfile).

very nice, but which repo update triggers the build? the idea is to rebuild when source code payload changes, as opposed to the repository with the docker scripts.

If you want to trigger a new build on Docker Hub when a change is pushed to the master branch of the tt-rss git repo, you could configure a webhook on Docker Hub to trigger an automated build, and then call the webhook URL as part of the post-merge process you described earlier.

Edit: I believe this automated build process is only necessary if the tt-rss source is built in to the image. Using the git clone and git pull approach should not require any automated builds, other than times that the ttrss-docker-compose repo itself is updated.
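The "call the webhook URL as part of the post-merge process" step could be sketched like this; WEBHOOK_URL is a placeholder, not a real trigger format, so copy the actual URL from the Docker Hub repository's "Build triggers" page rather than constructing it:

```shell
#!/bin/sh
# hypothetical post-merge addition: after new commits are merged,
# poke the Docker Hub build trigger to kick off an automated build.
# WEBHOOK_URL below is a placeholder, not the real trigger URL format.
WEBHOOK_URL="${WEBHOOK_URL:-https://example.invalid/trigger}"
curl -fsS -X POST "$WEBHOOK_URL" || echo "trigger call failed: $WEBHOOK_URL"
```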

this is nice, if i can link this with gogs. thanks, i’ll look into it.

dockerhub builds include source code, otherwise there’s really no point in the whole thing, at least in my opinion.

am i missing something or does this really support only two "git" services and that's it?

because i’m not using either of those (and I have no plans to start doing so).

I am currently investigating this, too. My idea was to update the tt-rss container's Dockerfile with the latest commit tag. Currently I do this manually, using my version of the container with the tt-rss source code baked in, for the reasons I mentioned earlier (in short: "monolithic" updates that include all dependencies for a service).

Result so far:
https://hub.docker.com/repository/docker/meyca/tt-rss-fpm

An example docker-compose.yml is included in the description.

@fox correct, Docker hub does not support gogs, gitea et al.

i’m doing two builds from the git hook, like this:

#!/bin/sh -e

# tag in the form YYYYMMDD-<short commit hash>, e.g. 20200123-ab12cd3
_BUILD_TAG=$(date +%Y%m%d)-$(git --no-pager log --pretty='%h' -n1 HEAD)

cd ../scripts/src
git pull origin static-dockerhub

# first pass: BUILD_TAG unset, builds and pushes the default (latest) tag
docker-compose build && docker-compose push

# second pass: same images again under the date+commit tag
export BUILD_TAG=$_BUILD_TAG
docker-compose build && docker-compose push

it's not absolutely correct because the source might change in between the build starting and the git clone happening in the container, but it's good enough for now, i think.

the alternative to building twice is the extends syntax, but it's not available in compose v3, which i'm using.

(sample build compose file - static-dockerhub branch).
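For reference, the double build/push above works because compose substitutes BUILD_TAG into the image tag. A sketch of what the relevant service entry presumably looks like (the image name is taken from the dockerhub link in this thread; the :latest fallback and build path are assumptions):

```yaml
services:
  app:
    build: ./app
    # first pass: BUILD_TAG unset, so the image is tagged :latest
    # second pass: BUILD_TAG=YYYYMMDD-<commit>, pushing the dated tag
    image: cthulhoo/ttrss-fpm-pgsql-static:${BUILD_TAG:-latest}
```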

someone needs to tell all these people that git being decentralized doesn’t mean forking within github.

e: automated builds are working:

https://hub.docker.com/r/cthulhoo/ttrss-fpm-pgsql-static/tags

So how far away from being considered ready for production is that?

we’ll never know until more people try it and post feedback itt. :slight_smile:

this includes you!

I’m up and running!

Looks good so far.

try adding plugins/themes and see if anything gets broken on container restart, i haven’t tested this at all.

there's no reason for a stock install not to work, it's essentially the same as the master branch setup; the source being baked into the image (instead of checked out on startup) is the only difference.

it’s not like any of this is rocket science anyway, it’s a few shell scripts, that’s it.