Problems with resolver on podman

you need to figure out which internal resolver podman uses and put that into RESOLVER environment var.
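
e.g. something like this, assuming a compose project named ttrss (container name and nameserver address will differ per setup):

# ask a container on the compose network which nameserver podman injected
podman exec ttrss_web-nginx_1 cat /etc/resolv.conf | grep nameserver

# then hand that address to the web container, e.g. via .env
RESOLVER=10.89.0.1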

the supported setup is docker. you’re, however, using podman - a partially incompatible attempt to reimplement docker from redhat. do you see where this is going?

are you using k8s? no, you’re not. why would you want to post to that thread then? :confused:

Well, my first post here, though I have been using tt-rss for several years now.

I strongly suspect I am affected by the DNS issue with the latest images. My ttrss_web-nginx container is complaining that it cannot reach the app container: “send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53”. I am a relative newbie to this whole Docker and container stuff, so please excuse me if I use some of these terms incorrectly. Let me briefly explain my setup.

In summer this year I decided to move from my server-based installation to a container-based one, as that is the strongly advised setup. I read up on Docker and Podman and finally did a fresh install of tt-rss with Podman. This immediately succeeded and ran fine until last weekend, when I decided to update to the latest images by removing all containers, volumes and images and doing another fresh install using this docker-compose file: https://tt-rss.org/wiki/InstallationNotes

Installation was straightforward again and all containers started, but nginx could not be reached. The container logs revealed that the nginx container could not contact the app container. However, pinging other containers from the nginx container works just fine:

podman-compose exec web-nginx ping db

Due to my limited knowledge of Docker containers I am not sure what the issue is. From reading this thread I suppose the resolver statement in nginx.conf might be the cause. Could anybody please tell me how to get a working container-based setup of tt-rss back?

EDIT: Accidentally put my message in the wrong thread, sorry; it should have gone to Resolving issues with latest commit on k8s

oh well i screwed up moving around posts so my reply is above OP :slight_smile:

https://gitlab.tt-rss.org/tt-rss/tt-rss/-/wikis/InstallationNotes#im-using-podman-and

faq updated

I understand not wanting to support yet another environment (which was the whole point of moving to containers). However, there is already a standard way to specify the resolver (/etc/resolv.conf). I don't see the advantage of specifying the resolver through a completely different mechanism, an environment variable. What am I missing?

Found the answer myself. Because nginx is dumb: Nginx resolver address from /etc/resolv.conf - Server Fault

nginx with “upstream app” resolved the app hostname exactly once, on startup. if the app container changed its IP address for any reason (e.g. watchtower or k8s recreated it) you'd get 502 errors until you manually restarted the web container.

the rest is basically consequences of fixing the above + limitations of nginx.
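
roughly the difference, as an illustration (not the literal config shipped with the image):

# static upstream: "app" is resolved once, when nginx loads its config
upstream app {
    server app;
}

# resolver + variable (inside a server/location block): nginx re-resolves
# "app" at request time, using whatever RESOLVER was substituted in
resolver ${RESOLVER};
set $backend app;
proxy_pass http://$backend;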

Thanks, @fox, for making my post a separate topic. Due to my lack of experience with container stuff I was not sure whether my issue was a general DNS one or a Podman-specific one.

Thanks for pushing me in the right direction with the resolver env. I needed quite some time to understand why my “web-nginx” container is able to ping other containers on the “ttrss_default” network while at the same time complaining that the DNS server at 127.0.0.11:53 cannot be reached. For my understanding, the post by @imgx64 was very helpful.

Furthermore, it took quite some time to understand that the RESOLVER variable must be set in the .env file instead of “docker-compose.override.yml”. Some of these problems are surely related to my container newbie status. As there is no shell in the “web-nginx” container, the following two commands have been helpful for me to analyse the situation:

podman exec --interactive --tty ttrss_web-nginx_1 cat /etc/resolv.conf | grep nameserver
podman exec --interactive --tty ttrss_web-nginx_1 env | grep -i resolver

Maybe these commands are helpful to other Podman users as well.
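
For reference, my .env entry ended up looking like this (the address is simply what the first command reported on my network; yours will likely differ):

RESOLVER=10.89.0.1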

I do not have enough experience to judge whether Docker is better than Podman or vice versa, but in summer this year I decided to go with Podman because I had a lot of trouble with Docker (installed from the official Docker PPA) when it messed up my network, leaving my KVM guests unreachable. There are numerous posts on this topic, but none of them offered a quick solution, so I switched to Podman without any hassle. For my usage I have had no compatibility issues so far with any of my containers, until last weekend with tt-rss.

With this env tweak, tt-rss is up and running fine again.

As long as tt-rss and my other containers are running fine I will stay with Podman, but I will keep an eye on the container topic. Maybe the time will come when I switch to Docker.

fwiw you can set all subnets for docker to use in daemon.json. it uses 172.something by default.
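
i.e. something along these lines in /etc/docker/daemon.json (values are just an example):

{
  "default-address-pools": [
    { "base": "172.16.0.0/16", "size": 24 }
  ]
}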

I set it in docker-compose.override.yml:

version: '3'

networks:
  default:
    ipam:
      config:
        - subnet: 172.16.1.0/24

This way I don't have to fight with it when it comes to firewall setup, and it's configured per docker/compose setup.

For the record, setting RESOLVER wasn't enough on my end; I was getting “unexpected A record in DNS response” errors. I also had to disable IPv6, so in my case the environment variable to set was:

RESOLVER=10.89.1.1 ipv6=off
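
If I understand the image right, the whole value is substituted into nginx's resolver directive, so nginx ends up with something like:

resolver 10.89.1.1 ipv6=off;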

I did some further reading and the issue is not related to IP conflicts but to Docker modifying iptables. This affects not only KVM but also UFW; one example out of many similar reports:
https://bbs.archlinux.org/viewtopic.php?id=233727

It looks like the Docker option --iptables=false in the corresponding systemd unit file should prevent Docker from modifying iptables. I will give it a try as soon as I have some time left for testing.
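
From what I have read, the same can also be set in daemon.json instead of editing the unit file:

{
  "iptables": false
}

Apparently the trade-off is that Docker then no longer manages NAT for containers at all, so published ports need manual firewall rules.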

oh. i’m not using ufw or anything of that sort (just iptables) so.

Short feedback for completeness: I have meanwhile moved to stock Docker, did a fresh install yesterday, and tt-rss is running as fine as it did with Podman :wink:. However, there is one minor issue left; I will open a new thread tomorrow, as it is off-topic and I am currently out of office and cannot reach my tt-rss host.