Actually, this is tricky and very user specific… I don’t like the idea too much, but I completely understand.
The user should do most of the work to identify the problem first and be able to fully reproduce it “on a brand new distro installation”; otherwise they should dig on their own. But a blanket “fuck all non-container” policy isn’t a good idea, in my opinion.
Isn’t the whole point of docker to reduce the complexity?
wizard
36
Depends on your point of view.
From the articles I read about docker it reduces complexity for hosting/web application providers or IT departments with several applications supported by docker.
In this case: yes it does.
When you just run TT-RSS on a VPS and nothing else, and have no clue about Docker, its dependencies and functionality, or what is persistent and what can be updated, etc… I do not think it reduces complexity.
pcause1
37
Personally I am happy with the git pull, but I understand that others might want something simpler. I don’t want the extra overhead of another fpm, php, and postgresql running, but that’s me. I don’t see that the git pull and the docker container are at odds; we need the base source tree to create the container from. To me that means creating the container is just a later step in the build process, and that this isn’t an either/or…
fox
38
this is completely wrong.
installing via docker is orders of magnitude easier than going through all the motions to install fpm and required php packages, nginx, database server, git clone tt-rss into a correct location, fix permissions, etc etc etc.
let’s not even talk about all the stuff you need to know so this setup has some semblance of security and your host is not compromised in the next 5 minutes because you decided to put something like wordpress in /var/www next to tt-rss.
also, “having no clue” about containerization is not an excuse. go get a clue then. and stop deploying services directly on your host.
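To make the comparison concrete: a containerized setup can be as small as a single compose file. This is only an illustrative sketch of the pattern fox describes, not the project’s official configuration; the image names, volume paths, and environment variables here are placeholders.

```yaml
# docker-compose.yml -- illustrative sketch only; image names and
# variables are placeholders, not the official tt-rss compose file.
version: "3"

services:
  db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=ttrss
      - POSTGRES_PASSWORD=change-me    # set a real password
    volumes:
      - db-data:/var/lib/postgresql/data

  app:
    image: example/ttrss-fpm           # placeholder image name
    restart: unless-stopped
    depends_on:
      - db
    volumes:
      - app-data:/var/www/html

  web:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "8080:80"                      # only this port touches the host
    depends_on:
      - app
    volumes:
      - app-data:/var/www/html:ro

volumes:
  db-data:
  app-data:
```

One `docker-compose up -d` replaces the whole fpm/nginx/database/permissions dance, and nothing but port 8080 is exposed on the host.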
tom_cat
39
Please do me a favor and tell me how we keep the underlying Docker container base image, php, fpm, nginx, … up to date, since the process for doing it isn’t “apt-get update”.
fox
40
we’ve already established that you don’t know what you’re talking about in your previous post itt, you don’t really need to dig yourself further.
also, imagine i probated you for questions you could easily google answers to.
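For the record, the easily googled answer to tom_cat’s question is short: with a compose-based setup, the base image and everything baked into it (php, fpm, nginx, …) is updated by pulling newer images and recreating the containers. Something along these lines, assuming a compose file in the current directory:

```shell
# Pull newer versions of every image referenced in docker-compose.yml
docker-compose pull

# Recreate only the containers whose image changed; -d runs detached
docker-compose up -d

# Optionally remove the now-unreferenced old image layers
docker image prune -f
```

That is the whole update process; persistent data lives in volumes and survives the recreate.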
wizard
41
Happy to do so if you could kindly provide me some spare time? 
Kierun
42
Since you are both too lazy to do a simple Google search, let me point you to the “Get Started” page on Docker’s home page… As an added bonus, it takes like ten minutes to read.
Go get a cup of coffee (or tea, or whatever poison you like to drink) and read all those pages. As I said, it takes no longer than eating lunch at your desk, and you can book that time as continuous improvement at work.
BobVul
43
I currently run all my services in separate LXD containers on Debian, so I’m generally not opposed to the idea of containers. That said, running Docker within LXD, while supported, was finicky in oldstable Debian (AppArmor shenanigans) and I haven’t gotten around to trying it on Buster yet. Of course, that’s not really relevant to whether you should only support Docker.
I’ll probably stick with my current setup with the git master for as long as it stays working 
fox
44
you’re running docker inside LXD? 
I don’t usually hear of anyone doing a container within a container. (And if this is on a VPS, that’s a container within a container within a container.)
fox
46
imagine doing all that in a nested KVM
BobVul
47
Once - I wanted to run Guacamole and didn’t want to install Docker directly on the host when everything else was already LXD’d. Worked out okay after a bit of tweaking. I hear it’s better now and should Just Work™ but haven’t bothered trying yet.
Suppose I’ll do the same for tt-rss if the direct git master breaks in a way I can’t fix.
And yea the host is a dedi - I’m not quite that insane.
quietly purges KVM
mamil
48
Bare metal. Just because it works for me and I don’t need no change.
Bare metal works for me, and I have other small apps running on my apache instance. My ultra small/cheap VPS doesn’t have a lot of extra HDD space for me to add buckets of layers to things. I like getting my fingernails dirty doing things and learning things. Docker is a great option for plug-and-play type users; I am not one of those.
I am ok with whatever y’all choose to do, but please don’t make docker a requirement.
Bare metal is fine for some applications. I have a machine that’s basically just used for offsite backups; I don’t run anything on that as I just rsync files to it every night.
But if I’m running multiple, unrelated applications, I’ll put things in their own containers. It’s better for a variety of reasons. Obviously security, since things are isolated. Backups are easier because I just back up the container. And if I’m upgrading or otherwise making major changes, I can just create a new container (or start with a copy), make the changes, and go from there. If the changes work, great; if not, I can just go back to the original container. I find all of these conveniences are worth the extra resources needed.
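That clone-then-upgrade workflow can be sketched with LXD’s own tooling; the container and snapshot names here are made up for the example:

```shell
# Snapshot the running container before touching anything
lxc snapshot ttrss pre-upgrade

# Or work on a copy and leave the original alone
lxc copy ttrss ttrss-upgrade
lxc start ttrss-upgrade

# If the upgrade goes wrong, roll back to the snapshot
lxc restore ttrss pre-upgrade
```

The same snapshot/copy/restore pattern works whether the upgrade is to the application or to the distribution inside the container.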
I’ve been running dockerhub’s clue/ttrss on a Docker instance inside my XenServer for years. Dead simple and pain free. I moved to running directly on a Raspberry Pi to simplify my hosting, and Google had all the answers for my simple configuration snafus getting onto the metal.

I’d never used docker-compose, so I decided to test your recommended solution on an AWS EC2 instance at 5 cents an hour (curiosity, mostly). No joy (restarted and retried several times), and not enough time or energy to troubleshoot, but I’m sure my failure could be reproduced with a plain EC2 spin-up, AWS docker install, docker-compose install, git pull, and your build command. I used a docker lamp instance to confirm my comms to and from the EC2 instance were OK.

Bottom line: clue/ttrss is painless, and the Raspberry Pi is a no-go for docker-compose, so kudos for your work outside of docker-compose. Also a hat tip to this effort, but it is currently not the path of least resistance for me to use your wonderful work.
As others said, as long as the base git repo is available it is fine for me.
As I would need to have more than one service running on the same VPS, I would have to tinker with docker-compose anyway, so in my use case there are fewer advantages.
Maybe it is just a classic case of the Douglas Adams principle…
Just FYI, you don’t need docker-compose for multiple services. It can be useful, but it isn’t necessary. I still have a (home lab) storage system running from a shell script that (poorly) handles coordination. Similarly, systemd/init scripts to start containers are not uncommon and make container services look a lot like everything else.
If you’re still working with traditional services, it is reasonable to consider Docker as similar to any other application server binary package. “Be sure you have the correct version of (tomcat/docker), then drop the (war/container) in and go to http://…”
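As an example of the “containers look like everything else” point above: a minimal systemd unit that babysits a container. The container name, image, and port are placeholders, and this is a sketch of the pattern, not a recommended production unit:

```ini
# /etc/systemd/system/ttrss-container.service -- illustrative sketch;
# container name, image, and port are placeholders.
[Unit]
Description=tt-rss container
After=docker.service
Requires=docker.service

[Service]
Restart=always
# Remove any stale container left over from an unclean stop
ExecStartPre=-/usr/bin/docker rm -f ttrss
ExecStart=/usr/bin/docker run --name ttrss -p 8080:80 example/ttrss
ExecStop=/usr/bin/docker stop ttrss

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the service is started, stopped, and logged exactly like any native daemon.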