Should non-docker install even be supported in $current_year?

docker solves pretty much all host-related idiocy (which is the usual cause of various hard-to-diagnose issues) and you can install it almost everywhere (one notable exception being OpenVZ-based VDSes i think — who even uses those? ew).

shared hosting is already unsupported, and i’m not particularly interested in feeling responsible for digging into some imbecile’s bespoke arch setup running the latest unstable everything.

also, it’s simply easier to deploy with compose. compare the host installation guide with the docker one: it’s about three times as long, and for what?
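for reference, the whole compose-based install is roughly this (sketch only; the repo URL and file names here are illustrative, follow the actual guide):

```sh
# sketch of a compose-based install -- repo URL and file names
# are illustrative, see the installation guide for the real ones
git clone https://git.tt-rss.org/fox/ttrss-docker-compose.git ttrss-docker
cd ttrss-docker
cp .env-dist .env       # then edit: base URL, database password, etc.
docker-compose up -d    # pulls images and starts the db, app and web containers
```

that’s it. now compare that to pages of apt/nginx/fpm/postgres setup.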

note that “unsupported” doesn’t mean “you are not going to get any help here”.

comments?

It’s not something I could see myself moving to. Running tt-rss on Debian stable’s (sometimes old) versions of things works for me. Moving over to Docker for anything would mean needing to run extra things in my use case (I hated doing this for GitLab when I used it for a while).

I can understand the incentive: reduce the support issues caused by people who don’t know what they’re doing with sysadmin tasks.

But so long as a sensible bug report is still responded to, no issue.

yeah, that’s primarily it. it’s a lot harder to screw up a docker-compose install.

Docker is a nice way to develop and try things. But it shouldn’t be used to deploy software in production. An ansible playbook (or whatever automation) is more appropriate.

Docker can grab a lot of binary dependencies from third parties that are untrusted, at least from your distribution’s point of view.

Of course, to check whether an issue is valid, you can always say: try the Docker version with your current data first.

And, finally, if you decide to only support Docker, I think I’ll be fine. I’m skilled enough to install it manually, as now.

:face_with_raised_eyebrow:

using configuration management and containerization is not an either/or thing, what the fuck

OpenVZ 6 didn’t support docker by default, but is now EOL. OpenVZ 7 should support it by default afaik.

https://wiki.openvz.org/Docker_inside_CT_vz7

OpenVZ VPS always used to be a lot cheaper than KVM or Xen, so as a tight bastard I’ve always used them, but prices seem to have leveled out now at the bottom end of the market.

I’m not sure I’d want the overhead of docker on a budget VPS.

what overhead? other than RAM usage by separate fpm and postgres processes i can’t really think of anything.

what you lose in RAM you gain in massively improved host security though (if nothing else, you won’t need to tinker with open_basedir), so i think the tradeoff is worth it.
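if you want actual numbers instead of guessing, docker will tell you:

```sh
# per-container CPU and RAM usage, one-shot snapshot
docker stats --no-stream
# disk space used by images, containers and volumes
docker system df
```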

Docker IMHO is fine when integrated into a CI/CD pipeline with frequent rebuilds and redeploys that also includes fetching updates from “upstream”.
Just deploying a docker container and keeping it running forever might lead to unexpected security risks, due to the base images, or the stuff added on top of them (PHP), becoming outdated.

It is a “pets vs cattle” discussion: do I throw away the (atomic) docker container at least weekly and rebuild it from scratch, hoping that my “upstream” image has all the latest patches included as well (=cattle),
or do I frequently log into my box, run an apt-get update and get all the latest stuff I need for PHP, the DB and the OS (=pets)?
-> Relying on Docker / docker-compose for deployment would require “ops” instructions / scripts on how to keep the containers current; see the sketch below. And it would “bet the cattle farm” on upstream images having their act together regarding updates/fixes.
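To be concrete, those “ops” instructions could be as small as this (sketch only; the path is illustrative and assumes a compose-managed stack):

```sh
# periodic refresh of a compose-managed stack, sketch only
cd /opt/ttrss-docker     # wherever the compose file lives (illustrative)
docker-compose pull      # fetch rebuilt upstream images
docker-compose up -d     # recreate containers on the new images
docker image prune -f    # drop the superseded image layers
```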

See also here: https://snyk.io/blog/10-docker-image-security-best-practices/ -> (1) popular upstream images contain “known vulnerabilities”, sometimes many of them. (I am not associated with them, just found this while searching for best practices on how it might be accomplished.)

i’m sorry but you have to choose one and only one - you either larp as a bigshot enterprise devops with jenkins and shit OR you “ssh into your box and run apt-get update”.

it simply can’t be both at the same time. also, your “box” doesn’t need jenkins, and neither does ~99.99999% of the tt-rss userbase.

To be fair, unattended updates and basic orchestration don’t require jenkins. It is reasonably simple to turn ubuntu systems into “barn cats” (if not proper cattle) without all that extra stuff; see the sketch below.
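A minimal sketch, assuming a Debian/Ubuntu host and a compose-managed stack (the cron path is illustrative):

```sh
# host side: enable unattended security updates
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# container side: a weekly cron entry to re-pull and recreate the stack
# 0 4 * * 0  cd /opt/ttrss-docker && docker-compose pull && docker-compose up -d
```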

I don’t use Docker, I run my stuff in Linux containers (LXC on Debian). I’ve always liked that type of environment best.

I don’t have anything to add other than that. Personally, as long as the git repository is available for me to pull from I’m pretty indifferent.

git is not going anywhere; tbh i think it’s enough for the installation guide to say that docker is the preferred method and that host installs are not recommended.
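updating a host install stays what it has always been, roughly (sketch; path and branch are illustrative):

```sh
# host install update, sketch only
cd /var/www/tt-rss
git pull origin master
php ./update.php --update-schema   # apply any pending schema migrations
```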

Personally I’m moving everything on my systems to docker; it solves too many issues and is too convenient to avoid any longer. As long as nothing ties the software to docker, causing lock-in issues down the road, why the heck not?

Not a great idea. I run tt-rss on a raspberry pi, with limited memory, as do quite a few other people, I suspect.

nobody was talking about toy computers (which are already unsupported so nothing changes for people like you).

Er, doesn’t the name TINY-TINY rss suggest it is low on resources, and so ok for toy computers? :slight_smile: Where are the requirements actually listed?

While I’m definitely going to look at docker, I agree, as long as git stays available I’m not objecting.

How about searching before asking: https://tt-rss.org/

i’m going to probate you for two weeks for asking low effort questions like this

For me, it does not matter. I don’t run Docker on my VPS, but since it will always be possible to run tt-rss without it, I don’t care much.

And for new users, it can be easier. People who know what they are doing will find their way.