• 0 Posts
  • 9 Comments
Joined 5 months ago
cake
Cake day: September 17th, 2025

help-circle

  • Lmao, I promise streaming services and CDNs employ world-class experts in encoding, both in tuning and development. They have already poured through maximized quality vs cost. Tuning your encoder to allow for more bits in some scenes by definition ups the average bitrate of the file, unless you’re also taking bits away from other scenes. Streaming services have already found a balance of video quality vs storage/bandwith costs that they are willing to accept, which tends to be around 15mbps for 4k. That will unarguably provide a drastically worse experience on a high-enough quality tv than a 40mbps+ bluray. Like, day and night in most scenes and even more in others.

    Calibrating your tv, while a great idea, can only do so much vs low-bitrate encodings and the fake HDR services build in solely to trigger the HDR popup on your tv and trick it into upping the brightness rather than to actuality improve the color accuracy/vibrancy.

    They don’t really care about the quality, they care that subscribers will keep their subscriptions. They go as low quality as possible to cut costs while retaining subs.

    Blu-rays don’t have this same issue because there are no storage or bandwith costs to the provider, and people buying blu-rays are typically more informed, have higher quality equipment, and care more about image quality than your typical streaming subscriber.


  • Oh, for sure, the marketing is terrible and makes this into a bigger issue by making people over confident. I wouldn’t say the lack of sandboxing is a major problem on its own, though. If you want an automated agent that does everything, it’s going to need permissions to do everything. Though they should absolutely have configurable guardrails that are restrictive by default. I doubt they bothered with that.

    The idea is sound, but the tech isn’t there yet. The real problem is that the marketing pretends that LLMs are ready for this. Maybe Anthropic shouldn’t have released it at all, but at this point AI companies subsist on releasing half-baked products with thrice-baked promises so at this point I wouldn’t be surprised if OpenAI, in an attempt to remain relevant, tomorrow releases an automated identity theft bot to help you file your taxes incorrectly.


  • Bad, sure, but also a well-known fundamental issue with basically all agentic LLMs, to the point where sounding the alarm as much as this seems silly. If you give an LLM any access to your machine, it might be tricked into taking malicious actions. The same can be said to real humans. Though they are better than LLMs right now, humans are the #1 security vulnerability in any system.

    LLMs can get around this by requiring explicit permission before being able to take certain actions, but the more it asks for permission, the more the user has to just sit there and approve stuff, defeating the purpose of an autonomous agent in the first place.

    To be clear, it’s not good, and anyone running an agent with a high level of access with no oversight in a non-sandboxed environment today is a fool. But this article is written like they found an 11/10 severity CVE in the linux kernal when really this is a well-known fundamental thing that can happen if you’re an idiot and misusing LLMs to a wild degree. LLMs have plenty of legitimate issues to trash on. This isn’t one of them. Well, maybe a little.