Should non-docker install even be supported in $current_year?

Happy to do so if you could kindly provide me some spare time? :smiley:

Since you are both too lazy to do a simple Google search, let me point you to the “Get Started” page on Docker’s site… As an added bonus, it takes like ten minutes to read.

Go get a cup of coffee (or tea, or whatever poison you like to drink) and read all those pages. As I said, it doesn’t take any longer than eating lunch at your desk, and you can book that time as continuous improvement at work.

I currently run all my services in separate LXD containers on Debian, so I’m generally not opposed to the idea of containers. That said, running Docker within LXD, while supported, was finicky in oldstable Debian (AppArmor shenanigans) and I haven’t gotten around to trying it on Buster yet. Of course, that’s not really relevant to whether you should only support Docker.
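
For anyone curious, the non-AppArmor part mostly comes down to letting the container nest; a rough sketch, with `docker-host` standing in for whatever the container is actually called:

```
lxc config set docker-host security.nesting true
lxc restart docker-host
```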

I’ll probably stick with my current setup with the git master for as long as it stays working :slight_smile:

you’re running docker inside LXD? :face_with_raised_eyebrow:

I don’t usually hear of anyone doing a container within a container. (And if this is on a VPS, that’s a container within a container within a container.)

imagine doing all that in a nested KVM

Once - I wanted to run Guacamole and didn’t want to install Docker directly on the host when everything else was already LXD’d. Worked out okay after a bit of tweaking. I hear it’s better now and should Just Work™ but haven’t bothered trying yet.

Suppose I’ll do the same for tt-rss if the direct git master breaks in a way I can’t fix.

And yea the host is a dedi - I’m not quite that insane.

quietly purges KVM

Bare metal. Just because it works for me and I don’t need to change anything.

Bare metal works for me, and I have other small apps running on my Apache instance. My ultra-small/cheap VPS doesn’t have a lot of extra HDD space for me to add buckets of layers to things. I like getting my fingernails dirty doing things and learning things. Docker is a great option for plug-and-play type users; I am not one of those.

I am ok with whatever y’all choose to do, but please don’t make docker a requirement.

Bare metal is fine for some applications. I have a machine that’s basically just used for offsite backups; I don’t run anything on that as I just rsync files to it every night.
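
Nothing fancy there, just a nightly cron entry along these lines (host and paths made up for illustration):

```
# nightly offsite sync -- host and paths are placeholders
0 3 * * * rsync -a --delete /srv/data/ backup-box:/backups/data/
```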

But if I’m running multiple, unrelated applications, I’ll put things in their own containers. It’s better for a variety of reasons. Obviously security, since things are isolated. Backups are easier because I just back up the container. And if I’m upgrading or otherwise making major changes, I can just create a new container (or start with a copy), make the changes, and go from there. If the changes work, great; if not, I can just go back to the original container. I find all of these conveniences are worth the extra resources needed.
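
With LXD, for instance, a low-risk upgrade attempt can look something like this (container and snapshot names are made up):

```
# snapshot first, then test the upgrade on a copy made from that snapshot
lxc snapshot ttrss pre-upgrade
lxc copy ttrss/pre-upgrade ttrss-test
lxc start ttrss-test
# upgrade inside ttrss-test; keep it if it works, otherwise throw the
# copy away (`lxc delete --force ttrss-test`) and carry on with the original
```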

I’ve been running Docker Hub’s clue/ttrss on a Xen Docker instance inside my XenServer for years. Dead simple and pain-free. I’ve since moved to running directly on a Raspberry Pi to simplify my hosting; Google had all the answers for my simple configuration snafus getting onto the metal.

I’ve never used docker-compose, so I decided to test your recommended solution on an AWS EC2 instance at 5 cents an hour (curiosity, mostly). No joy (restarted and retried several times), and I don’t have the time or energy to troubleshoot, but I’m sure my failure could be reproduced with a plain EC2 spin-up, an AWS Docker install, a docker-compose install, a git pull, and your build command. I used a Docker LAMP instance to confirm my comms to and from the EC2 instance were OK.

Bottom line: clue/ttrss is painless and the Raspberry Pi is a no-go for docker-compose, so kudos for your work outside of docker-compose. Also, a hat tip to this effort, but it is currently not the path of least resistance for me to use your wonderful work.

As others said, as long as the base git repo is available it is fine for me.

As I would need to have more than one service running on the same VPS, I would have to tinker with docker-compose anyway, so in my use case there are fewer advantages.
Maybe it is just a classic case of the Douglas Adams principle…

Just FYI, you don’t need docker-compose for multiple services. It can be useful, but it isn’t necessary. I still have a (home lab) storage system running from a shell script that (poorly) handles coordination. Similarly, systemd/init scripts to start containers are not uncommon and make container services look a lot like everything else.
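
As a sketch of the systemd approach (unit name, container name, image and port below are all placeholders, nothing tt-rss-specific):

```
# /etc/systemd/system/ttrss-container.service
[Unit]
Description=tt-rss container
After=docker.service
Requires=docker.service

[Service]
# clear any stale container, then run in the foreground so systemd can track it
ExecStartPre=-/usr/bin/docker rm -f ttrss
ExecStart=/usr/bin/docker run --rm --name ttrss -p 127.0.0.1:8080:80 some/ttrss-image
ExecStop=/usr/bin/docker stop ttrss
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`systemctl enable --now ttrss-container` and it starts and stops with the box like any other service.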

If you’re still working with traditional services, it is reasonable to consider Docker as similar to any other application server binary package. “Be sure you have the correct version of (tomcat/docker), then drop the (war/container) in and go to http://…”
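
In Docker terms, the “drop it in” step is roughly this (image name and port made up for illustration):

```
# the container-world equivalent of dropping a .war into tomcat's webapps dir
docker pull some/ttrss-image
docker run -d --name ttrss -p 8080:80 some/ttrss-image
```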

Thanks for the input, much appreciated.

I need to study Docker better; I was not aware of this possibility.