

This, absolutely. Anyone who actually wants open registration will be configuring their own SSO or whatever backend anyway. The default should be safe for testing and/or hobbyists.


To anyone afraid of the above conclusion: a dedicated $5 VPS with automatic snapshots gets you a long way.


Any time you have a server willing to process random data uploaded by randos, just expect it to be compromised eventually and prepare for that eventuality by isolating it, backing it up religiously, and setting up good monitoring of some sort. It doesn't matter if it's a forge, a wiki, Nextcloud, or whatever. It will happen.


We also have CoW filesystems now. If you need large datasets in different places, used by different projects, etc., just copy them on Btrfs or ZFS or whatever. It won't take any extra space and it's safer. Git also has multiple ways of connecting external data artifacts. Git should reject symlinks by default.
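For example, here's a minimal sketch of a reflink copy from Python on Linux, assuming a filesystem with reflink support such as Btrfs or XFS (the paths are made up):

    import fcntl

    # FICLONE ioctl (_IOW(0x94, 9, int)): asks the filesystem to share extents
    # between the two files instead of copying data; raises OSError on
    # filesystems without reflink support.
    FICLONE = 0x40049409

    def reflink_copy(src: str, dst: str) -> None:
        """Clone src into dst; blocks are shared until either file is modified."""
        with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
            fcntl.ioctl(fdst.fileno(), FICLONE, fsrc.fileno())

    # Hypothetical paths: the second copy occupies no extra space on disk.
    reflink_copy("/srv/data/dataset.bin", "/srv/projects/b/dataset.bin")

This is the same thing GNU cp does with --reflink=always.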


There's a HUGE difference between hosting it essentially read-only to the world vs. allowing account creation, uploads, and processing of unknown files on the server.
I have thought of blocking access to the commit history pages at the reverse proxy to cut off 99% of the traffic from bots. If anyone wants to look at the history, it's just a git clone away.
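The matching rule is simple. Here's a rough sketch in Python of what I'd block, assuming a Gitea/Forgejo-style URL layout (history listings under /<owner>/<repo>/commits/... and single commits under /<owner>/<repo>/commit/<sha>); the real thing would be one location rule in the reverse proxy's config:

    import re

    # Assumed URL layout; adjust the pattern to whatever your forge actually serves.
    HISTORY_PAGE = re.compile(r"^/[^/]+/[^/]+/commits?(/|$)")

    def should_block(path: str) -> bool:
        """True for commit-history pages, which are mostly crawled by bots."""
        return bool(HISTORY_PAGE.match(path))

    assert should_block("/alice/project/commits/branch/main")
    assert should_block("/alice/project/commit/0a1b2c3")
    assert not should_block("/alice/project/releases")

Anything that matches gets a 403 at the proxy; anyone who actually cares can still clone.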


You can git clone a repo to your machine, make your changes, and then use git to submit a patch via email. It's not pretty, but it works. Hopefully federation gets built soon and you'll be able to submit a pull request from your own forge.
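Roughly like this, assuming git send-email is installed and your sendemail.* config is set up (the maintainer address and commit range are placeholders):

    import subprocess

    def mail_patch(to_addr: str, commit_range: str = "origin/main..HEAD") -> None:
        """Turn local commits into patch files and mail them to the maintainer."""
        # format-patch writes one .patch file per commit and prints the filenames.
        patches = subprocess.run(
            ["git", "format-patch", commit_range],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        # send-email mails them out (needs sendemail.* settings in your git config).
        subprocess.run(["git", "send-email", f"--to={to_addr}", *patches], check=True)

    mail_patch("maintainer@example.org")

Or skip the wrapper and just run those two git commands by hand.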


While good, network security isn't the issue. It's running a web service with open registration, allowing randos to upload content that gets processed by the server.
Throw this up on a dedicated $5 VPS and you still have a problem. The default should be manual registration by admins.


It's always code forges and wikis that are affected by this, because the scrapers spider down into every commit or edit in your entire history, then come back the next day and check every “page” again to see if anything changed. Consider just blocking the commit history pages at your reverse proxy.


I see this take online a lot, but in person, everywhere I go people play Netflix and whatever directly on their TV. I think there might just be a huge divide in perspective between those with and without a game console of some sort always connected to their TV.