Heads up: several vulnerabilities fixed

I’m just going to comment out the loopback check. Using public IP addresses has its own set of security implications that are more concerning than allowing my private, single-user instance to loop back on itself.

actually, no. it works kinda like this:

your private instance → XSS from feed data or w/e → your local service bound to localhost is potentially exposed → your LAN is also exposed

good luck

Fox, you will as always do what you think makes sense/is right. I’ll just point out that a “valid URL” includes a port number, and people have reasons for this use case. For example, if your ISP blocks incoming traffic on specific ports, or imposes other limits or monitoring, you might want to use an alternative port. Yes, ISPs are jerks, but many folks don’t have a choice and only have a single broadband carrier available.

well, that much is obvious. valid in this situation is more like “we’re reasonably certain that we won’t be screwed over if we try to fetch and show data from this URL or even simply show it to the user”
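for illustration, a minimal sketch of what such a fetch-side check might look like. this is not tt-rss’s actual code: the function name is made up, and it ignores IPv6, redirects and DNS rebinding, all of which a real check has to worry about too.

```php
<?php
// hypothetical sketch: decide whether a URL is "reasonably safe" to fetch
// server-side, i.e. it doesn't point back at loopback or the LAN.
function url_looks_fetchable(string $url): bool {
    $parts = parse_url($url);

    if ($parts === false || empty($parts['host']))
        return false;

    // plain http(s) only; no file://, gopher://, php://, etc.
    if (!in_array(strtolower($parts['scheme'] ?? ''), ['http', 'https'], true))
        return false;

    // resolve the hostname: an attacker-controlled DNS name can point anywhere
    $ip = gethostbyname($parts['host']);

    // gethostbyname() returns its input unchanged on failure
    if (!filter_var($ip, FILTER_VALIDATE_IP))
        return false;

    // reject loopback (127.0.0.0/8) explicitly...
    if ((ip2long($ip) >> 24) === 127)
        return false;

    // ...and private / reserved ranges via PHP's built-in filter flags
    return (bool) filter_var($ip, FILTER_VALIDATE_IP,
        FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE);
}
```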

don’t get me started on the recent batch of changes that deal with redirects.

I use uMatrix for a reason: anyone running a browser that just executes any JavaScript handed to it has way bigger issues than loopback connections. For security reasons, I wouldn’t be using tt-rss at all without uMatrix.

tt-rss doesn’t work without javascript, so you’d have this origin whitelisted. one successful XSS via tt-rss and you’re screwed.

A potential exploit could lead to a potential exploit. Fortunately, there’s nothing on my LAN that’s accessible without additional credentials, even assuming a further exploit leverages the aforementioned one into arbitrary server-side TCP access.

Yeah, I’d rather not punch holes in my firewall and add additional points of failure; I’ve already unnecessarily bounced it through nginx, and that’s enough. Frankly, I might just remove the port “sanitization” as well and simplify things. Ports are part of the URI specification, so you’re the one breaking the specification, and now, with the loopback check, it makes even less sense.

Hello peeps!

I had to jump into this thread. I’m Daniel Neagaru, and it was my initiative to perform penetration testing on an open-source project I commonly use (TTRSS), both to test our services before we offer them to clients and to contribute to the security of this project.

My friend who’s helping me build the company and I have spent many days hacking on the application and writing a comprehensive report with accompanying proof-of-concept scripts. We intend to publish our findings on Monday, to give users time to update and to give us time to retest the fixes. It would be irresponsible for us to provide more details about the vulnerabilities until people have had a chance to apply updates.

When the report is made public, everyone will have the opportunity to take a look at the findings, and the conversation here will be more productive: we can all be on the same page and find a compromise, and fox won’t have to explain the vulnerabilities himself… we’ll publish the findings soon, and I’m here to discuss them if you have any questions.

In our opinion, fox has handled the vulnerabilities very well: he responded fast, and it was a pleasure going through the disclosure process with him. Many changes happened at once, so some things broke, but I’m sure they can be fixed, or safer alternatives found.

alright people, it seems that filtering was a bit too strict after all.

https://git.tt-rss.org/fox/tt-rss/commit/4efc3d7b3f6465a23d5e1c1415ec74e80cc7562d

filtering related to whitelisted ports and loopback is going to apply only to stuff actually downloaded by tt-rss, i.e. fetching of feeds and various caching / proxy functionality; for URLs simply displayed in the UI and for SELF_URL_PATH, nonstandard ports and such are going to be allowed.
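to make the distinction concrete, a hedged sketch (function names hypothetical, not the actual tt-rss code): URLs that are only displayed just get escaped, while anything fetched server-side goes through the network checks first.

```php
<?php
// hypothetical sketch of the split described above.

// display path: any syntactically valid URL is fine once it's HTML-escaped,
// nonstandard ports included.
function render_link(string $url): string {
    $safe = htmlspecialchars($url, ENT_QUOTES);
    return "<a href=\"$safe\">$safe</a>";
}

// fetch path: this is where the loopback / port whitelist checks belong.
function fetch_feed(string $url): ?string {
    if (!url_looks_fetchable($url)) // sketch from earlier in the thread
        return null;

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    $body = curl_exec($ch);
    curl_close($ch);

    return $body === false ? null : $body;
}
```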

Thank you very much for doing this. It is appreciated.

Any chance you, Daniel, or DigeeX requested a CVE for this?

It’s in progress; we have written the descriptions and will request them over the weekend.

CVE-2020-25787, CVE-2020-25788, and CVE-2020-25789 have been assigned to our findings.

We have published our full report (PDF), together with a blog post. I’ll be here to answer further questions and we can all get this sorted out together.

Thank you to both of you for finding and fixing these issues in a positive way. It’s always a tricky problem when issues like this are reported. I’m very pleased that everything was resolved.

I know this might incur the wrath of fox … but to all the localhost and site:port whiners: there may be a way for you to keep using the new, more secure tt-rss w/o really securing your install, if that’s what you want.

The CVEs were written against the base application and not the plugins; you can reopen your own vulnerabilities if you want to. I am not advising it, just pointing it out. AFAIK the plugin hooks occur before the sanity checks.

You can write a plugin (hopefully with an approved-URL list configured via prefs, to limit the blast zone) that does its own curl_exec. Then you can do whatever the heck you want, including shooting yourself in the foot. But :man_shrugging:

I wouldn’t suggest asking anyone on the list to write it for you though.

Let the probation begin … 3 … 2 … #$%@ connection lost

yeah this was actually mentioned above (or in a different thread?). anyone could make an “unsafe_fetch” plugin which would hook on HOOK_FETCH_FEED or w/e and just do the thing.

(i don’t think there’s a way to hook on fetching other arbitrary URLs so this won’t enable caching and stuff like that.)
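a rough sketch of what such a plugin could look like; the exact HOOK_FETCH_FEED signature differs between tt-rss versions, so check the Plugin base class in your tree before copying anything.

```php
<?php
// hypothetical "unsafe_fetch" plugin sketch -- not endorsed, obviously.
class Unsafe_Fetch extends Plugin {

    function about() {
        return array(0.1, "Fetch feeds without URL sanity checks (unsafe)", "nobody");
    }

    function init($host) {
        $host->add_hook($host::HOOK_FETCH_FEED, $this);
    }

    // if this hook returns non-empty data, tt-rss uses it as the feed body
    // and skips its own fetch, and therefore its own URL checks
    function hook_fetch_feed($feed_data, $fetch_url, $owner_uid, $feed, ...$rest) {
        $ch = curl_init($fetch_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

        $body = curl_exec($ch);
        curl_close($ch);

        return $body !== false ? $body : $feed_data;
    }
}
```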

to me, the important part is that i’m not the one responsible. i can’t (and won’t) try to stop anyone from shooting themselves in the foot, if they so desire.

core code is open source anyway; anyone could simply patch out the URL checks, no plugin needed.

don’t be so dramatic :slight_smile:

Would you mind clarifying what you mean by this, please? What is the better alternative for people (like me) who are not aware of the latest trends?
Does this mean everything has to be behind a reverse proxy nowadays, with the default ports as the entry point? Does this mean local services have to be reached through an external IP?

As suggested in other threads (e.g. “regression-some-atom-feeds-broken”) I have set up a reverse proxy, but this means opening the default port to reach a non-default port.
And it also means I have to hit a local gateway (router) in order to access services on the same machine, which sounds a bit ridiculous… but is that the correct way to do it now?

Thanks for your time, thanks Daniel and his colleagues at DigeeX, and thanks fox.

“hi plz teach me basics of networking and how to set up multiple random web services on my host, properly, for free” goes way beyond the scope of this forum, i think.

that aside, here’s some brief answers:

standard ports, yes

reverse proxy, if necessary; see the sketch below (one exception could be services provided by containers linked onto the tt-rss docker network, though you still need to use standard ports)

no

no

if you have any other questions, i suggest starting on this long, perilous, and largely unrewarding road of becoming a junior linux sysadmin by using your favorite search engine. good luck.
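fwiw, the reverse proxy part usually boils down to a few lines of server config. a minimal nginx sketch, with the hostname, certificate paths and backend port all made up for illustration:

```nginx
# hypothetical example: tt-rss answers on the standard HTTPS port, while the
# actual backend only listens on localhost. names and paths are placeholders.
server {
    listen 443 ssl;
    server_name ttrss.example.com;

    ssl_certificate     /etc/ssl/certs/ttrss.example.com.pem;
    ssl_certificate_key /etc/ssl/private/ttrss.example.com.key;

    location /tt-rss/ {
        # backend bound to loopback only; never exposed directly
        proxy_pass http://127.0.0.1:8280/tt-rss/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```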

Thanks fox, no worries. I should have worded my questions better. By “external IP” and “local gateway” I meant a LAN IP address such as “192.168.x.x”. My bad.
BRB getting a degree in system administration.