

Department of Wino?


As an added benefit, that protects you if your phone gets lost or stolen during the journey.


“What if the crime you’re being charged with is destruction of evidence? Checkmate!”


According to the Constitution, almost all rights apply equally to citizens and non-citizens. The term used in the Constitution is “persons,” not “citizens.” The Supreme Court has eroded some of those rights over time, though.


They have to charge him with something.


It’s price discrimination. Lots of companies do it, generally based on marketing analytics they apply to their users.


And there’s no reason for widespread adoption of AI besides a massive hype cycle driving a speculative bubble.


The Earth doesn’t need a fucking thing it doesn’t already have, except for a cleanup of human-generated pollution.
Most of the new demand for energy is to run LLMs that nobody actually needs.


And then people wonder why the Chinese have very high confidence in their government.
Yeah, nothing to do with total government control of the media…
The CCP is rotten to the core. What we’re seeing here is that someone was caught not playing the corruption game according to the CCP’s rules.


That shakedowns and show trials can serve as a substitute for rule of law?


Someone didn’t get their cut.


And, if I recall correctly, the other option besides rolling their own interpreter was to just use Scheme as the browser scripting language. Which would have been immeasurably better.


“End-to-end encrypted using the time-tested ROT-13 cypher.”
Or something even more porous.
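For anyone who hasn’t run into it: ROT-13 just shifts each letter 13 places, so applying it a second time recovers the plaintext. A minimal Python sketch (the message string is made up) shows why it offers no secrecy at all:

    import codecs

    # ROT-13 is its own inverse: encoding twice returns the original text.
    scrambled = codecs.encode("attack at dawn", "rot13")   # 'nggnpx ng qnja'
    print(codecs.decode(scrambled, "rot13"))               # anyone can undo it: 'attack at dawn'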


A family member is a graphic designer for an agency. They use AI to reformat files, and claim that having AI do it, then fixing it, is quicker and easier than doing it some other way. But their actual creative process doesn’t involve AI at all.


Most valuable assets can’t really move
The very rich don’t hold a very large percentage of their wealth in immobile form; most of it is in investments of various sorts.


How disappointing.
In related news, my dog didn’t die of loneliness after worming treatment.


YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that’s nothing but badly-written lies. Same with reports of Trump legal defeats that haven’t actually happened. They are flooding the zone with shit, and poisoning search results with slop.


Nothing to do with the 100,000 dead civilians.


it’s basically just pattern recognition
Only of a very specific kind.
Something computers are really good at.
They’re good at recognizing the patterns they’re programmed to recognize. That tells you nothing about the significance of a pattern, its impact if detected, or the statistical error rates of the detection algorithm and its input data. All of those are critical to making real-life decisions. So is explainability, which existing AI systems don’t do very well. At least Anthropic recognizes that as an important research topic; OpenAI seems more concerned with monetizing what they already have.
For something safety-critical, you can monitor critical parameters in the system’s state space and alert if they go (or are likely to go) out of safe bounds. You can also model the likely effects of corrective actions. Neither of those requires any kind of AI, though you might feed ML output into your effects model(s) when constructing them. Generally speaking, if lives or health are on the line, you’re going to want something more deterministic than AI to be driving your decisions. There’s probably already enough fuzz due to the use of ensemble modeling.
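As a concrete illustration of that kind of bounds monitoring, here’s a minimal Python sketch. The parameter names, limits, and the crude linear extrapolation are all invented for illustration; a real safety-critical system would use qualified tooling and far more rigorous analysis:

    # Minimal sketch: alert when a monitored parameter is outside its safe
    # envelope, or when a simple linear extrapolation says it soon will be.
    # All names and limits here are illustrative, not from any real system.
    from dataclasses import dataclass

    @dataclass
    class SafeRange:
        low: float
        high: float

    # Hypothetical parameters and their safe envelopes.
    LIMITS = {
        "coolant_temp_c": SafeRange(5.0, 95.0),
        "vessel_pressure_kpa": SafeRange(90.0, 450.0),
    }

    def check(name: str, value: float, rate_per_s: float, horizon_s: float = 30.0) -> list[str]:
        """Return alert messages for one parameter sample.
        value      -- current reading
        rate_per_s -- estimated rate of change (e.g. from a filtered derivative)
        horizon_s  -- how far ahead to extrapolate when predicting a violation
        """
        r = LIMITS[name]
        alerts = []
        if not (r.low <= value <= r.high):
            alerts.append(f"{name}={value} outside safe range [{r.low}, {r.high}]")
        else:
            projected = value + rate_per_s * horizon_s   # crude linear prediction
            if not (r.low <= projected <= r.high):
                alerts.append(f"{name} trending out of bounds: projected {projected:.1f} within {horizon_s:.0f}s")
        return alerts

    # Example: in bounds now, but trending high, so the monitor raises an alert.
    print(check("coolant_temp_c", value=90.0, rate_per_s=0.5))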
What computers are really good at is aggregating large volumes of data from multiple sensors, running statistical calculations on that data, transforming it into something a person can visualise, and providing decision aids to help the operators understand the consequences of potential corrective actions. But modeling the consequences depends on how well you’ve modeled the system, and AIs are not good at constructing those models. That still relies on humans, working according to some brutally strict methodologies.
Source: I’ve written large amounts of safety-critical code and have architected several safety-critical systems that have run well. There are some interesting opportunities for more use of ML in my field. But in this space, I wouldn’t touch LLMs with a barge pole. LLM-land is Marlboro country. Anyone telling you differently is running a con.
Gomer should get that treated.