• 0 Posts
  • 16 Comments
Joined 5 months ago
Cake day: June 29th, 2025

  • Ordinary people made great use of the internet - but failed to make it really decentralized. Thus the enshittification of Reddit, Youtube, social media, and so forth

    I don’t think one is related to the other.

    Decentralisation doesn’t affect enshittification that much. Look at Lemmy and the Fediverse in general - it’s federated… so what? The .world instance is by far the largest in the Fediverse. If the mods there go insane, like they did on Reddit, or if the admins decide to add monetisation… it just happens. There being other servers changes nothing for the users stuck on the .world server. Sure, they can create new accounts elsewhere, but that’s - in principle - no different from switching from Reddit to Lemmy.

    On the other hand, look at Steam. Valve, the creator of Steam, has no “decentralisation” of its product; Valve is the god emperor of everything in terms of how Steam operates. At face value, it’s exactly the same product as, I don’t know, the Epic Store, and yet Steam is loved by gamers, while Epic is hated.

    No, you can have centralised and not enshittified services just fine - as long as the goal is to provide the service, instead of “creating value for the shareholders”. As soon as that element comes in, there’s no stopping enshittification.

    We can choose to embrace local LLM that is fully under our control, or cede ownership to the 1% forevermore.

    Agreed.


  • You’re not training it from scratch, though. There are people, enthusiasts, doing it for you. I can fire up LM Studio and browse through thousands of models to then have a conversation with, or have them write stories, etc., etc.
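
    To give a sense of how low the barrier is: LM Studio also ships a local server that speaks the OpenAI API (on port 1234 by default, if memory serves), so having a conversation with your local model is a few lines of code. A minimal sketch, assuming the server is running and a model is loaded:

    ```python
    # Talk to a model served locally by LM Studio (OpenAI-compatible endpoint).
    # Assumes LM Studio's local server is enabled and a model is loaded.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:1234/v1",  # LM Studio's default local address
        api_key="not-needed",                 # a local server ignores the key
    )

    response = client.chat.completions.create(
        model="local-model",  # placeholder; the server answers with whatever is loaded
        messages=[{"role": "user", "content": "Give me a two-line story prompt."}],
    )
    print(response.choices[0].message.content)
    ```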

    As for “nothing of economic value” - that’s, again, just plain misunderstanding of what AI can be used for. Corridor Crew - a VFX team publishing on YouTube - used self-trained AI to expand their filmmaking options. For example, to recreate the “bullet time” effect from The Matrix, they were able to use around a dozen cameras instead of hundreds, then used AI to generate the “in-between” frames.

    How does that have “no economic value”, mate?
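
    (Their exact pipeline isn’t public, but the underlying “in-betweening” idea is easy to sketch: estimate per-pixel motion between two frames, then warp halfway along it. A toy version with classical optical flow in OpenCV - the real tools use trained models and proper occlusion handling:)

    ```python
    # Toy "in-between" frame via dense optical flow (OpenCV).
    # Not Corridor Crew's actual method - just the basic principle.
    import cv2
    import numpy as np

    def midpoint_frame(frame_a, frame_b):
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

        # Dense Farneback flow: where does each pixel of A move to in B?
        # (positional args: pyr_scale, levels, winsize, iterations,
        #  poly_n, poly_sigma, flags)
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        h, w = gray_a.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        # Backward-warp A half a step along the flow. Sampling the flow at
        # the destination pixel is a crude approximation; real interpolators
        # handle this (and occlusions) much better.
        map_x = (xs - 0.5 * flow[..., 0]).astype(np.float32)
        map_y = (ys - 0.5 * flow[..., 1]).astype(np.float32)
        return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
    ```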


  • But that’s completely not true! Like, not a single thing you said is even slightly correct!

    LLMs are relatively cheap to run - at small scales. You can run an LLM on your own computer right now. It won’t be super fast, it won’t have super skills, but you can run it, and you can train it yourself.
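
    For example - a minimal sketch with Hugging Face transformers, using a small open model (the model name here is just one example of many; on a CPU it’s slow, but it runs):

    ```python
    # Run a small open LLM locally with Hugging Face transformers.
    # TinyLlama is just an example of a small, CPU-runnable chat model.
    from transformers import pipeline

    generator = pipeline("text-generation",
                         model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    result = generator("Federation, explained simply:", max_new_tokens=80)
    print(result[0]["generated_text"])
    ```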

    Massive LLMs like ChatGPT require tremendous resources precisely because they aren’t tools available only to big players - everybody on the planet has access to them, for free. The only actual difference between running an LLM locally and going through a provider is that the provider gives you better speed and (sometimes, depending on context) better-trained models.

    As for “there’s a lot to milk from its growing adoption” - maybe? Probably? Who knows? That’s the “magic” of the AI bubble we’re experiencing right now - the big players keep saying that it will “make work and money obsolete”, that “anyone will be able to do anything”, that “a time of post-scarcity approaches”, and a billion other bullshit marketing slogans like that. But the reality is that nobody has yet figured out how to make money on that thing.

    Right now, the only reason it’s “growing” is the weird and probably illegal circular financing going on at the very top - Nvidia invests in OpenAI, which invests in Oracle, which invests in Nvidia - and so on. No money is actually being made or (often) even changing hands, but everyone can now show they’ve received a lot of investment, which pumps up their stock prices. The only reason this hasn’t popped yet is probably that the main investing parties are burning through tonnes of cash they had stockpiled.

    Growing adoption means nothing. It’s a marketing tool for them to keep shareholders happy while they keep a literal investing circlejerk going, every now and again inviting another player into the fold.


  • Because other than appearance and keyboard shortcuts, I haven’t configured anything to affect these behaviors

    Which is another aspect of the “Windows is more stable” point I meant earlier.

    I switched my laptop last year and installed Arch with Plasma 6 so it was working out of the box

    The save window position thing was also working out of the box on mine. Only after it stopped did I start looking into this and find that, apparently, it’s NOT a thing KDE/Wayland can do. I don’t know how it worked before, but the settings also show that the feature doesn’t exist - under System Settings → Window Management → Window Behaviour → Advanced → Window placement, the only options available are “Minimal Overlapping”, “Maximised”, “Random”, “Centred”, “In Top-Left Corner” or “Under Mouse”. There’s no “Remember” or “Restore previous” or anything like that.
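
    The closest workaround I know of is a per-app window rule - KWin can pin a specific app’s geometry even without a global “remember” option. A sketch of what the exported rule looks like; the numeric policy codes below are from memory and vary between Plasma versions, so building the rule in System Settings → Window Management → Window Rules and letting KDE write the file is the safe route:

    ```ini
    # ~/.config/kwinrulesrc - pin one app's geometry (sketch; values illustrative)
    [General]
    count=1
    rules=1

    [1]
    Description=Pin Firefox geometry
    wmclass=firefox
    # match policy code for "exact match" (assumption - verify via a GUI export)
    wmclassmatch=1
    position=100,80
    # set-rule policy code for "apply"/"force" (assumption, version-dependent)
    positionrule=2
    size=1280,900
    sizerule=2
    ```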



  • It is out of the box. Meta + Arrow Keys and/OR Meta + PgUp.

    Ah, OK, nice! I didn’t see it as it’s not available via mouse, but found all those threads saying it doesn’t exist. Good to know!

    Confirmed works by FarrellPerks@feddit.uk in above comments

    Doesn’t work on Garuda (Arch-based) with KDE.

    Or rather: it used to work, but then just stopped.

    I don’t know about desktop towers, for laptop it is always only one instance — my laptop display, monitor is dark before I hit enter

    Interesting! On my laptop I also had two instances of SDDM.

    same happens on KDE Plasma.

    Not where I’m sitting. Tested when my cat accidentally turned a monitor off. The browser window just stayed on that screen - the process was there, but the application was not available.


  • Stability? Update management? Window tiling? What? Linux does have all of these things.

    No.

    In fact Linux is way more stable than Windows

    I install Windows and forget about it. I install Linux and have to do all this, and then it still might do this or this.

    Mind you, it does depend on the distro

    Agreed.

    and the amount of stability you want

    I want all the stability.

    but I have been running Debian servers for years and I hardly run into problems.

    Not talking about servers.

    But even then - at my last job we finally killed off a Windows Server that had an uptime of over 1000 days, just chugging along like a little trooper. At my previous-previous job I was responsible for the WinServer updates: every single one of them got monthly updates and reboots, and I didn’t have a single issue in 7 years. That was just shy of 100 servers.

    The only thing windows offers over Linux is gaming and a better UI. Even both of those are dwindling away. I hate the new windows 11 UI and most games work on Linux unless you require a rootkit for some anti cheat software.

    Agreed. I have Garuda Linux installed on my gaming PC and only had minor issues with three titles. It’s surprisingly frictionless.


  • Especially when you’re on Arch with KDE, you don’t have:

    1. good update management
    2. window tiling
    3. saving window positions

    I know because I’m on Arch with KDE.

    By “good update management” I mean what MS does - all updates are pushed once a month, on Patch Tuesday (second Tuesday of the month). You can put it in your calendar and plan for a necessary reboot.

    I know Arch is a rolling release so it doesn’t have that on purpose, but it’s not much better with Ubuntu - I was getting updates every couple of days, once a week at best.
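
    (You can fake a Patch Tuesday on Arch yourself, though: systemd timers can target “the second Tuesday” directly, because it’s always the Tuesday that falls on the 8th-14th. A sketch - it only pre-downloads packages, since a fully unattended -Syu on Arch is asking for trouble:)

    ```ini
    # /etc/systemd/system/monthly-update.timer
    [Unit]
    Description=Monthly update window (second Tuesday)

    [Timer]
    # Tuesdays falling on the 8th-14th = exactly the second Tuesday of the month
    OnCalendar=Tue *-*-08..14 18:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    # /etc/systemd/system/monthly-update.service
    [Unit]
    Description=Pre-download the monthly system upgrade

    [Service]
    Type=oneshot
    # -w = download only; read the Arch news and run the real -Syu yourself
    ExecStart=/usr/bin/pacman -Syuw --noconfirm
    ```

    Enable it with systemctl enable --now monthly-update.timer and you’ve got a predictable, calendar-able update day.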

    Window tiling doesn’t exist “out of the box”; you need third-party software (which apparently still doesn’t give you what Windows has out of the box) or a switch from KDE to COSMIC, which still doesn’t give you the freedom of choice that Windows has (it’s either “everything is tiled” or “nothing is tiled”).

    Saving window positions (on Wayland) is the most confusing one, because it seems like the one that’d be the easiest to implement, but KDE devs just flat out refuse to do it. I hear that it works on X11.

    Multi-monitor support is piss-poor. If I spread my windows across multiple monitors and then turn one monitor off, those windows are no longer accessible.

    SDDM displays the same interface on each monitor, and each one is a separate instance of SDDM - meaning you can type in your password on monitor 2, and if you press “OK” on monitor 1, it will fail, because that password field is empty. It’s just silly design.

    On Windows, if you disconnect an extra screen, all the content gets dropped onto the main screen. And since Windows 11, if you then re-connect the screen, all windows pop back into the places they had before the disconnect.