[improvement suggestion] Use Docker multi-architecture builds

You can use docker to build for multiple architectures all in one go. It really doesn’t take any more effort than building for a single architecture.

You can follow this tutorial:
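
For reference, the gist of it with buildx is usually just a couple of commands (the image name below is a made-up example, and this assumes a reasonably recent Docker):

```sh
# one-time setup: register QEMU emulators and create a buildx builder
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --name multiarch --use

# build for several architectures and push one multi-arch image
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t example/tt-rss:latest \
  --push .
```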

thanks for the suggestion, i’ll need to see how this would integrate into my current build process.

i have a few (somewhat disorganized) concerns, however:

  • the host for the tt-rss.org secondary VM is not particularly fast, i wonder how slow and resource intensive the build process would be under qemu.

  • if this is because of raspberry pi and the like, i’m not sure if i want to bless tt-rss usage on these sbc platforms by providing images: performance would suck for people who use sd cards (and possibly others), they would complain here, and i won’t be able to do anything about it.

  • are there any relevant architectures other than amd64, really? i doubt there’s strong demand for tt-rss on ppc64le or s390x, seriously. i’d rather not waste electricity building things that nobody would ever use.

BTW, this gets brought up here from time to time, and I still don’t understand what performance issues somebody can suffer from writing/reading a couple hundred (OK, even a couple thousand) text entries a day from an SD card.
I have an rpi4 with a media server and ttrss on it. I set it up back in February, just before you officially stopped supporting non-docker installations.
And I can say that I have performance issues converting movies (I know, such a surprise), but none with ttrss.
I’m not asking you to officially support rpi - you did a good job developing it, and I will be strong and deal with the things I break myself - I’m just saying that I’m not seeing the performance problems you are talking about.

main problem with sd cards has always been slow, high-latency random i/o, which is, as far as i know, inherent to the underlying sd/mmc technology.

which means the overall IOPS rating suffers a lot on workloads typical for database servers, and postgresql is essentially 100% of tt-rss disk i/o.

the fact that microsd write performance is not only slow but also inconsistent (so you get latency spikes), and that sd i/o devices are commonly placed on slow, high-latency buses, doesn’t help much either.

with a few feeds and a single user, sure, maybe. it’s going to hit the IOPS wall a lot faster than usual when you add more feeds and postgres starts choking on high %iowait.
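
(if you want to check what your card can actually do, something like this generic fio run roughly approximates database-style random i/o - just a sketch, not a tt-rss-specific benchmark, and note it leaves a test file behind:)

```sh
# 4k random writes with direct i/o - the access pattern sd cards are worst at
fio --name=sdtest --rw=randwrite --bs=4k --size=256m \
    --ioengine=libaio --iodepth=4 --direct=1
```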

i mean, i can’t (and have no desire to) stop anyone from doing what they want. whether it’s a sub-par experience for any given user is up to them and their personal tolerances.

i would prefer everyone having a great tt-rss experience that won’t suddenly turn bad when they add a bunch of feeds and their microsd card i/o gives out, thus i suggest using a VDS; even a cheap SSD-based one would provide much more consistent performance, in my opinion.

also, it’s not like using docker images is a requirement. other than easy rollback from a broken image (how often does this happen and does anyone ever bother rolling back?) it doesn’t really give you much of anything compared to the dynamic setup, for which i don’t have to build and push a bunch of images.

3). Probably just ARM is actually relevant, although I have heard that s390x demand is still shockingly high.

1). Docker will use QEMU without hardware acceleration to build the entire image. With most images this is totally fine, because it will basically just download and copy a bunch of files. But if you, for example, compile a program from git without hardware acceleration, it will probably take 10 or 20 times longer, because QEMU is just that bad when it comes to raw CPU power.
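
You can get a feel for the overhead yourself by timing the same CPU-bound loop natively and under emulation (a rough illustration; assumes the QEMU binfmt handlers are already registered and an amd64 host):

```sh
# native
time docker run --rm alpine sh -c 'i=0; while [ $i -lt 500000 ]; do i=$((i+1)); done'

# arm64, emulated through QEMU
time docker run --rm --platform linux/arm64 alpine sh -c 'i=0; while [ $i -lt 500000 ]; do i=$((i+1)); done'
```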

2). Your idea about Raspberry Pi performance is entirely outdated.

People are now rather hooking up their old, now-too-small SSDs via USB 3.0. The CPU processing power of the latest Raspberry Pi is about as high as the minimum system requirement for GTA 4. A lot of people even have Gigabit fiber connections. This totally makes a Raspberry Pi at home vastly superior to the average $100/mo server.

Granted, this isn’t everyone’s situation. But even if you ran Linux from an old fake Chinese Sandisk SD card (with 10x lower write speed), or a cheap USB stick, it would still be enough to make WordPress run well enough via Docker. This is even though, as you said, the IO does totally suck and the card would choke very easily in a desktop application. A PHP/MySQL webserver, however, does so little IO in comparison that virtually everything is automatically solved by caching from the OS.

A lot of people use modern, fast SD cards though, and this will almost always be enough for server applications. Still, it just makes so much sense to use an old SSD via USB 3.0 that if someone really needed the IO (e.g. to dual-use their box as a media center or desktop PC and a server at the same time), most people would have the perfect solution readily available to them.

As I see it, there are two major sources of Tiny Tiny RSS users: First you have people running LibreELEC, who have to use Docker because there is no package manager and the root partition is read-only Squashfs. Second there is Yunohost, which ships with TT-RSS. Yunohost is a sort of Linux distribution (or Docker package) that makes it very easy to run server apps at home, via an app store.

I think there will be a point in time when you will find the now-Android $30 TV box from Aliexpress running Ubuntu coupled with Yunohost. And then everyone will be running their own server, meta search engine, P2P networks, hosting their own data, etc. from home. It fills many gaps and demands.

to be honest this reads like apple people doing processing power comparisons with their toy processors using “geekbench” - “my iphone is as fast as a 10th gen i7”, etc, etc.

i doubt that wordpress generates a lot of random writes.

i’m just going to quote this gem for posterity without comment.

that’s blatantly untrue and i’m not going to continue reading your post any further, sorry, because you don’t know what you’re talking about. unless your server application is “my wordpress site which nobody ever visits”, which is pointless to discuss.

also, even though i’ve never recommended people running home servers for tt-rss, i must note i have no idea what “$100/mo home server” is supposed to mean.

my home server doesn’t cost this much per month in my local currency to run (maybe 1/50th of that, and i’m being generous) and it does vastly more than an SBC ever could, even if you plug an SSD into it (via USB, thus severely gimping its performance, while still having zero data redundancy - something that VPS hosting would deal with so you wouldn’t have to btw).

for example, it has a proper hardware raid array with a BBU, on a fast bus designed for this kind of usage (USB is not).

e: Raspberry Pi microSD card performance comparison - 2019 | Jeff Geerling (the interesting part is the 4k writes; i doubt sd cards or controllers have gotten much better since then. he has another article on newer A2 cards, which don’t fare much better at IOPS, and even if they suddenly became a lot faster you’d run into host controller speed/latency issues)

This might be the source of your confusion; my money is on most being run on a $5 USD VPS, second running on an amd64 stack of some sort at home, then the pi folk.

it would be interesting to see this distribution, at least for forum visitors. discourse should be able to make polls. :slight_smile:

e: i would also add shared hosting before SBCs

I’ve seen the interest in s390x increase in the past few years and I just don’t understand it. You need a $100k+ mainframe to run it, yet many mainstream distributions and open source projects have s390x support. Is IBM paying all these projects to care about s390x or something?

I was talking in the context of ARM installations.

Quite the contrary. People who would complain about this statement would be people coming from a vantage point of professional hosting, which almost always precludes anything but large-scale applications due to the high cost of hiring someone to set it up. This is, however, not average Joe’s situation, which by sheer numbers is the case to consider “almost always” true. Indeed, most websites are not visited by thousands of users a day and are served well enough by a small, inexpensive solution. Also don’t forget we were talking about server applications primarily in the context of Raspberry Pi hosts, which includes an array of offline and local-network server applications as well. TT-RSS is just one great example of a great many where very few resources are needed, and even then only rarely.

I said “$100/mo server”, i.e. renting a root server for $100 a month at some average place. I just checked server prices and discovered that this is no longer true. I was in the IT business back in 2016 and it was basically impossible to get unmetered 1/2GBit servers below $800 or so. Don’t nail me down on exact numbers here. But a lot of people, especially in the UK, could get symmetrical 1GBit fiber at home. Of course for $100 you would get lots of muscle, 16GB RAM etc., but you just wouldn’t get the bandwidth. For hosting files and videos, though, you always want all the bandwidth you can get - unlike muscle. Most people just don’t need 32GB RAM and super-fast Xeons. 100Mbit was standard back then; 250Mbit already cost you plenty extra, in the range of $100. Now 1Gbit is standard, and cheap as well without the muscle, like $20 VPSes. The Raspberry Pi can maybe keep up with those, but only barely. Of course you only pay $0.50 a month for power at most, which is still a huge difference. But it is no longer “vastly superior”, just because unmetered 1Gbit lines had such insane prices at data centers for so long. I correct my statement to: “probably still better than a $20 VPS”.

USB being a bottleneck is just a myth, plain and simple. I mean, I grew up with USB 1, I get where this is coming from. But it is just nonsense. USB 3 is almost on par with PCIe in every way. Check out the data. You can even run recent games (like Cyberpunk XYZ) over a video card plugged into a USB 3.0 PCIe adapter and it will only result in a very minor performance penalty.

RAID: Most people don’t need it. My PC has run without RAID for 20 years. We even once had a faulty drive mirror corrupt data onto the healthy one, destroying the data in a RAID 1. If anything ever happens, I will just use a backup. RAID sucks. Of course you could also do RAID via USB 3, just like anything. A 120GB SSD costs $15 on Aliexpress; a fast adapter costs $10, while the cheap $3 one will only do 45MB/s.

I only skimmed it, but don’t you see 40MB/s write speed for the expensive cards? That isn’t even much worse than plain hard drives. I don’t know if you know this or how the article covers it, but most mid-range cards will have values like 20MB/s write and 50MB/s read, or 15MB/s write and 80MB/s read. The cheap ones will sometimes have 80MB/s read but only 8MB/s write (and yes, there are also fake ones that do 1MB/s write and 15MB/s read). When you use the box for desktop applications, the drop in random IO write speed easily becomes a problem. But for server applications, you often get 80MB/s read speed from a cheap card and never even feel the 8MB/s write speed because of caching. It feels essentially the same as any plain hard drive would.

You really should stop while you’re ahead.

It’s true because you think it’s true, because most web sites (in your mind) are low volume? Whether you’re right or not is irrelevant; you’re presenting facts without citation to prove points.

In the grander scheme, it’s better for these low volume web sites to simply use shared hosting. Properly done, a lot of sites can be put on a single server to share resources and benefit from redundancy, notwithstanding the fact that someone (probably slightly more knowledgeable) will keep the whole thing up-to-date security-wise… That’s really the point of shared hosting and why it’s so cost-effective.

If you’re talking home PCs or laptops, sure, no RAID. But my home NAS has nearly 6 TB of data on it and has experienced three drive failures over the years. I’m really glad I had RAID set up, because restoring from backups would have been annoying.

Also consider that if you want something to be online most of the time (i.e. little to no downtime) you need RAID. Pretty much every other part of a computer can be replaced and the system turned back on to continue on its merry way, but a drive failure means re-installing and setting everything back up from scratch. Even with things like Docker to ease deployment you’re still looking at a notable amount of work. You need to have RAID.
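
For what it’s worth, setting up a basic software RAID 1 on Linux isn’t much work either (device names below are placeholders; double-check them before running anything like this):

```sh
# mirror two disks into /dev/md0 (this wipes whatever is on them)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1

# watch the initial sync, and check the array’s health afterwards
cat /proc/mdstat
```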

Saying, “RAID sucks,” effectively removes any credibility you have.

Really? I can’t even.

In a typical blog-style site running WordPress, most IO is going to be read (probably close to all of it, in fact). With an application like TT-RSS nearly everything is write. It’s writing ALL the time. When it updates, it writes. When it tries to update, it writes. When you read an article? It writes. It’s writing all the time because it needs to.

As you have noticed, prices for all servers (VPS, dedicated, etc.) have dropped quite a bit in the last few years. Not only can you get a reliable VPS for a fair price, even dedicated systems are priced fairly well these days. But my home NAS is running 4 drives 24/7 and while its effects on my electric bill are measurable, it is so far below $100 it’s barely a rounding error. In fact, I pay less to run my home NAS than one of my tiny VPS instances.

I’d like to add that I don’t care if people want to use Raspberry Pis for TT-RSS or anything else, but saying they’ve progressed to the point where they can effectively replace VPS or dedicated machines is simply ridiculous. It’s not like desktop, laptop, and server systems have stood still while Raspberry Pis have advanced. The entire industry continues to move forward and these larger systems have benefitted from that as well.

Well, we could certainly pull apart further how we both have developed different perspectives on the matter, but I don’t think it would be helpful. I am able to follow your rationale without trying to discredit your opinion; you could just do the same.

Hosting at home is not only about server performance, but mainly about having control over, and privacy for, your digital data. While servers may have progressed, they ran just fine 10 years back as well. This is where ARM SoCs and home internet connections now stand. They are powerful enough that they could become the backbone of the internet, in all but a few places. They consume virtually no power. ARM is the future of the internet. You can have it at home as well.