Heads up: several vulnerabilities fixed

Thank you very much for doing this. It is appreciated.

Any chance you, Daniel, or DigeeX requested a CVE for this?

It’s in progress; we have written the descriptions and will request them over the weekend.

CVE-2020-25787, CVE-2020-25788 and CVE-2020-25789 have been assigned to our findings

We have published our full report (PDF), together with a blog post. I’ll be here to answer further questions and we can all get this sorted out together.

Thank you to both of you for finding and fixing these issues in a positive way. It’s always a tricky problem when issues like this are reported. I’m very pleased that everything was resolved.

I know this might incur the wrath of fox … but to all the localhost and site:port whiners: there may still be a way for you to use the new, more secure tt-rss without really securing your install, if you don’t want to.

The CVEs were written against the base application and not the plugins; you can reopen your own vulnerabilities if you want to. I am not advising it, just pointing it out. AFAIK the plugin hooks run before the sanity checks.

You can write a plugin (hopefully with an approved URL list configured via prefs to limit the blast zone) that does its own curl_exec. Then you can do whatever the heck you want, including shooting yourself in the foot. But :man_shrugging:

I wouldn’t suggest asking anyone on the list to write it for you though.

Let the probation begin … 3 … 2 … #$%@ connection lost

yeah this was actually mentioned above (or in a different thread?). anyone could make an “unsafe_fetch” plugin which would hook on HOOK_FETCH_FEED or w/e and just do the thing.

(i don’t think there’s a way to hook on fetching other arbitrary URLs so this won’t enable caching and stuff like that.)
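for illustration only, such an unsafe_fetch plugin could look roughly like the sketch below. the hook_fetch_feed callback signature here is a guess (it has changed between versions, so check classes/pluginhost.php in your checkout for the real one), and the allow-list is hard-coded rather than wired into prefs:

```php
<?php
// illustrative sketch only -- not a supported or recommended plugin.
// re-fetches feed data with plain curl, bypassing the core URL sanity checks.
class Unsafe_Fetch extends Plugin {

	// hypothetical hard-coded allow-list; a real plugin should expose this via prefs
	private $allowed_hosts = ["127.0.0.1", "nas.lan"];

	function about() {
		return array(0.1, "fetch feeds from otherwise-blocked hosts (at your own risk)", "you");
	}

	function init($host) {
		$host->add_hook($host::HOOK_FETCH_FEED, $this);
	}

	function api_version() {
		return 2;
	}

	// assumed signature -- verify against classes/pluginhost.php for your version
	function hook_fetch_feed($feed_data, $fetch_url, $owner_uid, $feed) {
		$url_host = parse_url($fetch_url, PHP_URL_HOST);

		// only take over fetching for hosts you explicitly trust
		if (!in_array($url_host, $this->allowed_hosts))
			return $feed_data;

		$ch = curl_init($fetch_url);
		curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
		curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
		curl_setopt($ch, CURLOPT_TIMEOUT, 30);

		$data = curl_exec($ch);
		curl_close($ch);

		// fall back to whatever core fetched if curl failed
		return $data !== false ? $data : $feed_data;
	}
}
```

drop it into your plugins directory (plugins.local/unsafe_fetch/init.php or wherever your install keeps third-party plugins), enable it like any other plugin, and it’s your foot-gun to manage.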

to me, the important part is that i’m not the one responsible; i can’t (and won’t) try to stop anyone from shooting himself in the foot, if he so desires.

core code is open source anyway, anyone could simply patch the necessary code to prevent any URL checks, no plugin needed.

don’t be so dramatic :slight_smile:

Would you mind clarifying what you mean by this please? What is the better alternative for people (like me) who are not aware of the latest trends?
Does this mean everything has to be behind a reverse proxy nowadays, and the default entry point should be on the default ports? Does this mean local services have to be reached through an external IP?

As suggested in other threads (e.g. “regression-some-atom-feeds-broken”) I have set up a reverse proxy, but this means opening the default port to reach a non-default port.
And this also means I have to hit a local gateway (router) in order to access services on the same machine, which sounds a bit ridiculous… but is that the correct way to do it now?

Thanks for your time, thanks Daniel and his colleagues at DigeeX, and thanks fox.

“hi plz teach me basics of networking and how to set up multiple random web services on my host, properly, for free” goes way beyond the scope of this forum, i think.

that aside, here’s some brief answers:

standard ports, yes

reverse proxy, if necessary (one exception could be services provided by containers linked to the tt-rss docker network, however you still need to use standard ports); see the rough nginx sketch below

no

no
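to make the reverse proxy point slightly less abstract, a rough nginx sketch: it assumes nginx terminates https on the standard port 443 and tt-rss listens on an arbitrary internal port (8080 here). hostnames, paths and ports are examples only, not what your install actually uses.

```nginx
# rough example: standard https port on the outside,
# tt-rss on an arbitrary internal port (8080) on the inside
server {
    listen 443 ssl;
    server_name rss.example.com;

    ssl_certificate     /etc/nginx/ssl/rss.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/rss.example.com.key;

    location /tt-rss/ {
        proxy_pass http://127.0.0.1:8080/tt-rss/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

other services on the same box get their own location (or server) blocks behind the same 443, which is the whole point: one standard entry port, internal ports stay internal.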

if you have any other questions, i suggest starting on this long, perilous, and largely unrewarding road of becoming a junior linux sysadmin by using your favorite search engine. good luck.

Thanks fox, no worries. I should have worded my questions better. By “external IP” and “local gateway” I meant a LAN IP address such as “192.168.x.x”. My bad.
BRB getting a degree in system administration.