Heads up: several vulnerabilities fixed

also,

you’re not passing X-Forwarded-Proto properly to the container.
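for reference, with nginx in front of a container that usually means explicitly forwarding the scheme; a minimal sketch (the upstream address here is made up, adjust to your setup):

```nginx
location / {
    proxy_pass http://127.0.0.1:8280;           # wherever your tt-rss container listens
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme; # without this, tt-rss sees plain http
}
```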


since skipping the self URL path checks leads to hard-to-diagnose issues such as the above, should that option stay? if it should stay, what if tt-rss generated a single warning on login mentioning that a wrong origin address can potentially lead to various issues?

also,

doing this is a bad idea, unless it’s only a stopgap while you migrate away from your setup.

https://git.tt-rss.org/fox/tt-rss/commit/da5af2fae091041cca27b24b6f0e69e4a6d0dc60

etc

there are multiple reasons to not allow this, even as a hidden tweakable. why not use a separate subdomain instead?

I ended up using nginx to do this, but I’d prefer not having to do that. In my situation it was an easy change because it’s internal-only, but I could imagine a world where a third party could be running on a non-standard port. I understand there are security implications to this, but it should be an optional restriction for those who need non-standard port functionality.

an optional restriction is really not a restriction at all, i think.

main problem is services bound to loopback. tt-rss proxy opening access to fpm daemon or whatever else you might have “safely” bound to ::1 is bad news.

unfortunately, it’s somewhat hard to definitively prevent all connections to loopback specifically, so we also filter by “safe” ports. this does break external services on nonstandard ports, yes, but in my opinion, in this particular situation, it is worth it.
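roughly, the combined check works something like this — a minimal sketch in PHP, not the actual tt-rss code; `is_fetchable_url` and the port list are illustrative:

```php
<?php
// illustrative sketch of loopback + "safe" port filtering, not tt-rss code.
const ALLOWED_PORTS = [80, 443];

function is_fetchable_url(string $url): bool {
    $parts = parse_url($url);
    if (!$parts || empty($parts['host'])) return false;
    if (!in_array($parts['scheme'] ?? '', ['http', 'https'], true)) return false;

    // whitelist ports as a second layer of defense
    $port = $parts['port'] ?? ($parts['scheme'] === 'https' ? 443 : 80);
    if (!in_array($port, ALLOWED_PORTS, true)) return false;

    // resolve the host and reject loopback / private / reserved ranges.
    // note: gethostbynamel() is IPv4-only -- a real check would also need
    // AAAA records (dns_get_record) -- and is still racy vs. DNS rebinding,
    // hence the port whitelist above.
    $ips = gethostbynamel($parts['host']) ?: [];
    foreach ($ips as $ip) {
        if (filter_var($ip, FILTER_VALIDATE_IP,
                FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE) === false)
            return false;
    }
    return true;
}
```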

luckily, in the year of our lord 2020, an absolute minority uses anything but regular https, if only because browser vendors push it so hard and whine at everything else.

i’m going to go as far as saying that at this point any web-related service in production on a random nonstandard port might be considered questionable simply because of it.

also,

well, the code is open, you can change whatever you like. as long as you understand the consequences, one of those being that i’m not going to feel responsible if you get owned.

i don’t particularly like this “forbid everything aaaa” thing but in a world where some imbecile decided that adding fucking javascript to a vector image was a good idea, and nobody told him to go jump off a cliff instead of implementing this whole cancerous abortion into web browsers, can we really afford to do something else? :thinking:

FYI, my post got merged into this thread. I had created it as a thread of its own. How could I have read all this, then?

yes, imagine reading a bunch of recent posts before enriching us with a new thread. unthinkable, really.

:face_with_raised_eyebrow:

Thanks for pointing me in the right direction. Correcting this fixed the issue.

I also use a non-standard port. There really shouldn’t be an issue with leaving non-standard ports alone, and it adds no real risk. I’d ask that you keep this capability, as a “:port” is a completely valid part of a URL and shouldn’t be disallowed. I am also one who thinks relative URLs are fine, although I don’t use them. There are no security issues with this, and if someone is worried they can just not set that up in their tt-rss config. Fox can fix stuff that relates to CVEs, but he can’t prevent idiots from shooting themselves in the foot by misconfiguring their servers, using admin/123456 as username/password, and other stupid stuff. If you don’t have some modicum of real technical knowledge you shouldn’t be trying to run a web server.

tt-rss serving as a web proxy for services on random ports, internal or otherwise, absolutely does add risks.

I would prefer to keep validation consistent for all URLs instead of adding special handling for SELF_URL_PATH specifically to minimize confusion and potential for further exploits.

especially if the goal here is running stuff on http://myserver:1234. this kind of setup went out of fashion a decade ago.

I’m just going to comment out the loopback check. Using public IP addresses has its own set of security implications that are more concerning than allowing my private single-user instance to loopback on itself.

actually, no. it works kinda like this:

your private instance → XSS from feed data or w/e → your local service bound to localhost is potentially exposed → your LAN is also exposed

good luck
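the first link in that chain is why feed content gets sanitized at all; something in this spirit (illustrative sketch only — tt-rss’s actual sanitizer uses whitelists and is more thorough):

```php
<?php
// illustrative sketch of stripping active content from untrusted feed HTML;
// not tt-rss's actual sanitizer.
function strip_active_content(string $html): string {
    $doc = new DOMDocument();
    // @ suppresses warnings from real-world malformed feed markup
    @$doc->loadHTML('<?xml encoding="utf-8"?>' . $html,
        LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
    $xpath = new DOMXPath($doc);

    // drop script-capable elements outright
    foreach (iterator_to_array($xpath->query('//script|//iframe|//object|//embed')) as $node)
        $node->parentNode->removeChild($node);

    // drop inline event handlers (onclick, onerror, ...) and javascript: URLs
    foreach (iterator_to_array($xpath->query('//@*')) as $attr) {
        if (stripos($attr->nodeName, 'on') === 0 ||
            stripos(trim($attr->nodeValue), 'javascript:') === 0)
            $attr->ownerElement->removeAttribute($attr->nodeName);
    }
    return $doc->saveHTML();
}
```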

Fox, you will, as always, do what you think makes sense/is right. I just point out that a “valid URL” includes a port number, and people have reasons for this use case. For example, if your ISP blocks incoming traffic on specific ports or imposes other limits/monitoring, you might want to use an alternative. Yes, ISPs are jerks, but many folks don’t have a choice and have only a single broadband carrier available.

well, that much is obvious. “valid” in this situation is more like “we’re reasonably certain that we won’t be screwed over if we try to fetch and show data from this URL, or even simply show it to the user”.

don’t get me started on a recent bunch of changes that deals with redirects.
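the problem being that with auto-following enabled, a perfectly “valid” public URL can 30x you straight into localhost; the safer pattern is to follow hops manually and re-validate each one. a sketch (again illustrative, reusing the hypothetical is_fetchable_url() from above):

```php
<?php
// illustrative: follow redirects manually so every hop gets re-validated.
function fetch_with_safe_redirects(string $url, int $max_hops = 5): ?string {
    for ($hop = 0; $hop < $max_hops; $hop++) {
        if (!is_fetchable_url($url)) return null; // sketched earlier

        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_FOLLOWLOCATION => false, // never auto-follow
            CURLOPT_TIMEOUT        => 15,
        ]);
        $body = curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $next = curl_getinfo($ch, CURLINFO_REDIRECT_URL);
        curl_close($ch);

        if ($code >= 300 && $code < 400 && $next) {
            $url = $next; // re-validated at the top of the loop
            continue;
        }
        return $body !== false ? $body : null;
    }
    return null; // too many redirects
}
```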

I use uMatrix for a reason, anyone running a browser that just executes any JavaScript handed to it has way bigger issues than loopback connections. I wouldn’t be using tt-rss at all for security reasons without uMatrix.

tt-rss doesn’t work without javascript so you’d have this origin whitelisted. successful XSS via tt-rss and you’re screwed.

A potential exploit could lead to a potential exploit. Fortunately there’s nothing on my LAN that’s accessible without additional credentials, assuming that there’s an additional exploit where that aforementioned exploit is leveraged to gain arbitrary server-side TCP access.

Yeah, I’d rather not punch holes in my firewall and add additional points of failure; I’ve already unnecessarily bounced it through nginx, and that’s enough. Frankly, I might just remove the port “sanitization” as well and simplify things. Ports are part of the URI specification, you’re the one breaking the specification, and now with the loopback check it makes even less sense.

Hello peeps!

I had to jump in this thread. I’m Daniel Neagaru, and it was my initiative to perform penetration testing on an open source project I commonly use (TTRSS), both to test our services before we offer them to clients and to contribute to the security of this project.

My friend who’s helping me build the company and I have spent many days hacking on the application and writing a comprehensive report with accompanying proof-of-concept scripts. We intend to publish our findings on Monday, to give users time to update and for us to retest the fixes. It would be irresponsible for us to provide more details about the vulnerabilities until people have had a chance to apply updates.

When the report is made public, everyone will have the opportunity to take a look at the findings. The conversation here will then be more productive, since we can all be on the same page and find a compromise, and fox doesn’t have to explain the vulnerabilities himself… we’ll publish the findings soon, and I’m here to discuss them if you have any questions.

In our opinion, fox has handled the vulnerabilities very well, responded fast, and was a pleasure to work with during the disclosure process. Many changes happened at once, which led to some things breaking, but I’m sure they can be fixed or safer alternatives found.

alright people, it seems that filtering was a bit too strict after all.

https://git.tt-rss.org/fox/tt-rss/commit/4efc3d7b3f6465a23d5e1c1415ec74e80cc7562d

filtering related to whitelisted ports and loopback is going to only apply to stuff actually downloaded by tt-rss, i.e. fetching of feeds and various caching / proxy functionality; for URLs simply displayed in the UI, and for SELF_URL_PATH, nonstandard ports and such are going to be allowed.
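in sketch form, the distinction amounts to something like this (illustrative only — see the commit above for the real thing):

```php
<?php
// illustrative two-tier validation: strict filtering only where tt-rss
// itself fetches data; display-only URLs and SELF_URL_PATH get the basic check.
function validate_url(string $url, bool $extended_filtering = false): bool {
    $parts = parse_url($url);
    if (!$parts || empty($parts['host'])) return false;
    if (!in_array($parts['scheme'] ?? '', ['http', 'https'], true)) return false;

    if ($extended_filtering) {
        // feed fetching, caching, image proxy: whitelisted ports, no loopback
        return is_fetchable_url($url); // sketched earlier in the thread
    }
    // display-only / SELF_URL_PATH: any port is fine
    return true;
}
```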

Thank you very much for doing this. It is appreciated.

Any chance you, Daniel, or DigeeX requested a CVE for this?

It’s in progress; we have written the descriptions and will request them over the weekend.