  • Oh awesome, Palantir, the same company profiting off the genocide in Gaza.

    CEO Alex Karp has been a vocal supporter of Israel. In November 2023, he stated, “I am proud that we are supporting Israel in every way we can.”

    Since October 2023, Palantir has provided Israel with multiple AI-powered data analytics tools for military and intelligence purposes.

    In January 2024, Palantir held its board meeting in Israel and entered into a “strategic partnership” with Israel’s Ministry of Defense to help Israel’s “war effort.”

    In October 2024, Norway’s largest asset manager, Storebrand, divested its Palantir shares, worth $24 million, due to concerns that Palantir’s work for Israel might implicate Storebrand in violations of international humanitarian law and human rights.




  • But that’s the problem: AI people are pushing it as a universal tool. The huge push we saw to put AI in everything is kind of proof of that.

    People taking the responses from LLMs at face value is a problem.

    So we can’t trust it. But in addition to that, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore, and that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded equal air time and consideration for literal false information peddled by hucksters. These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative and nonsensical BS we consume on a daily basis. But at this point we’ve introduced, and now rely on, a flawed tool that bases its knowledge on flawed information, which creates a positive feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.


  • I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the size of objects around your house and gave you a tape measure that wasn’t correctly calibrated, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. It, in fact, makes you dumber.

    People who rely on AI are dumber, because using the tool makes them dumber. QED?




  • I think AI being used by teachers and administrators to off-load menial tasks is great. Teachers are often working something like 90 hours a week just to meet all the requirements put upon them, and a lot of those tasks don’t require much thought, just a lot of time.

    In that respect, yeah sure, go for it. But at this point it seems like they’re encouraging students to use these programs as a way to off-load critical thinking and learning, and that… well, that’s horrifyingly stupid.


  • When I was in medical school, the one thing that surprised me the most was how often a doctor will see a patient, get their history/work-up, and then step outside into the hallway to google symptoms. It was alarming.

    Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know which information in the patient history matters, because patients will include and leave out all sorts of things), but still. It was unnerving.

    I also saw a study way back when which found that hanging a decision-tree flowchart in emergency rooms and having nurses work through all the steps drastically improved patient care. Additionally, new programs can spot a cancerous mass on a radiograph or CT scan long before the human eye can discern it. That’s great, but… we still need educated and experienced doctors, because a lot of stuff looks like other stuff, and sometimes the best way to tell things apart is through weird tricks like “smell the wound: does it smell fruity? Then it’s this. Does it smell earthy? Then it’s this.”


  • I gotta be honest: whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some Luddite, but I don’t care. A tool is only as useful as the improvement it brings to your life, and off-loading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… this would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.