• StopTech@lemmy.today · +5 · 1 hour ago

    The “Cancel ChatGPT movement” doesn’t appear to be mentioned in the article, but other outlets say hashtags like #CancelChatGPT are trending on X.

  • lumettaria@sopuli.xyz · +6 · 1 hour ago

    Now imagine my shock when I had done the swap from ChatGPT to Claude the day before the news about Anthropic’s (now backpedalled) deal broke. Anyway, I deleted my ChatGPT and Gemini accounts and degoogled my life while I was at it.

  • JigglypuffSeenFromAbove@lemmy.world · +8 · 2 hours ago

    From OpenAI’s statement:

    We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

    • No use of OpenAI technology for mass domestic surveillance.

    • No use of OpenAI technology to direct autonomous weapons systems.

    • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

    It specifically states their AI can’t/won’t be used for surveillance and autonomous weapons. Of course I’m not saying I trust them, but isn’t this the same thing Anthropic says they’re against? What’s the difference here, or what did I miss?

    • WhatsHerBucket@lemmy.world · +4 / −1 · 2 hours ago

      Same! Was planning on doing this today.

      What do you plan to switch to? I’m currently thinking a combination of Claude and something else for images if it turns out I really need to pay for it.

      • bobbbu@lemmy.dbzer0.com · +3 / −2 · 2 hours ago

        I use ppq.ai, which lets you choose which LLM/service you need, and then you are charged per query. It has all the latest models for imaging, text, video, etc., so you get to use the one that fits the task best, with no need to pay for a membership. Which is way cheaper if you don’t have professional use for heavy models, imo. It also allows private payments (Monero and such. I don’t use that, just mentioning it is possible :)).

  • /home/pineapplelover@lemmy.dbzer0.com · +14 · 4 hours ago

    Dude the only guardrails are

    1. No fully automated killings

    2. No mass surveillance

    You could literally do anything else. You could automate killing people with a person approving.

    Trump booted Anthropic because they couldn’t lift these two guardrails. Fuck me

  • pnelego@lemmy.world · +8 · 4 hours ago

    I’m wondering if this is a play for a future bailout. OpenAI knows they’re fucked, and instead of just going away like most companies do when they fail, they’re embedding themselves in the government to secure a bailout under the guise of being a critical defence vendor.

    Furthermore, I’m not convinced the researchers and critical personnel will work for a company that does this. I think we’re about to see the biggest jumping of ship the industry has seen so far.

  • SuspciousCarrot78@lemmy.world · +2 · 3 hours ago

    The lesser of two evils is still…evil. Anthropic’s hands aren’t clean either…they’re just minimally less caked in blood.

    BUT

    One can hope that this is the ‘turn towards the light side’. If ‘don’t be evil’ can finally be made profitable, well, self interest might actually be a lever for good. Ha.

    I wish there were a clearly, unambiguously good guy in the cloud AI space. I don’t know how to make that work with economies of scale being what they are. Yes, that includes Lumo - though one has only faint hope on that end too.

    • qualia@lemmy.world · +2 / −1 · edited · 3 hours ago

      Mass surveillance for advertising seems marginally more benign than mass surveillance by one’s own government, personally. Though admittedly both are bad.

      Edit: I can find alternatives for most of Google’s ecosystem but mapping out accurate bus routes is terrible via OSM/OsmAnd or Organic Maps. Anyone have any tips there?

      • muusemuuse@sh.itjust.works · +1 · 2 hours ago

        The mission statement is irrelevant when the outcome is the same. Google has data a hostile power wants and gives it to them whenever they want.

    • ArmchairAce1944@discuss.online · +2 · 4 hours ago

      I just got GrapheneOS on my new phone (it is a Google Pixel 10, but it is one that can handle that…). I needed a client to use my Gmail, which will probably be the last thing I get rid of.

      • wabasso@lemmy.ca · +1 · 3 hours ago

        If it’s the sending and receiving part of email, I’ve switched to Purelymail (you could pick another provider) and put it behind my custom domain name. Behind a custom domain, that’s the last time you’ll have to update your contacts, since your address is no longer dependent on which email provider you choose.

        For searching through decades of old emails I do still use the Gmail account, but I just have to get off my butt and self-host a local IMAP server for that.

      • yabbadabaddon@lemmy.zip · +15 · 10 hours ago

        No, I don’t think this is correct. There was a time during which Google did great things. Their search engine allowed millions if not billions to gain access to knowledge. They had a positive impact on a lot of FOSS projects. What they were is not what they are.

        • digital_digger@lemmy.world · +1 · 2 hours ago

          Agreed. They even refused to extend their services to China because of censorship. But that was before; after the change of CEO, the enshittification started.

        • gnutrino@programming.dev · +8 · 9 hours ago

          The tell was getting rid of “don’t be evil” as their motto. Even for a corporation that was a little on the nose.

  • raskal@sh.itjust.works · +101 · 22 hours ago

    Canada recently had its 2nd-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that they wanted to inform Canada’s police of these conversations but were denied by ChatGPT’s management.

    They had a chance to stop the deaths of 8 people, most of them young children, but failed to do anything.

    FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

    • jagungal@aussie.zone · +11 · 18 hours ago

      Why would you not contact the police? I understand that this is a systemic failure and the blame does not lie with that one employee, but if it were me, I’d rather be out of a job than have those deaths on my conscience for the rest of my life.

      • Kissaki@feddit.org · +3 · 8 hours ago

        In my eyes some blame does lie with them. A systemic failure is a failure of many parts; an employee taking notice and then following bad instructions is one of them.

        I don’t know what information they had, but if they were at the point of intending to share, it seems like whistleblowing would have been the just and moral thing to do even if it means ignoring immediate authoritative structure.

      • Takios@discuss.tchncs.de · +14 · 12 hours ago

        It’s probabilities. If you report it, you’re 100% out of a job but only maybe prevented something bad from happening. If you don’t report, you keep your job but maybe something bad happens. Reliance on a job for survival shifts the decision even further toward the course of action that’ll keep you your job.

  • trackball_fetish@lemmy.wtf · +16 · 17 hours ago

    Anyone stockpiling AI prompt vulnerabilities for when we’ll eventually need them to fight off some deathbots?

    • Credibly_Human@lemmy.world · +12 / −3 · 17 hours ago

      This is a nonsensical and unrealistic fear/threat to be putting at the top of your list.

      The biggest problems are happening right now, not in some 90s sci-fi film.

      One of those threats is automated weaponry and mass surveillance, but not in the comic relief way you speak about it.

      • trackball_fetish@lemmy.wtf · +1 / −1 · 10 hours ago

        Pray tell the purpose of your comment, Brutus.

        You take issue with referring to these machines as deathbots? I’m allowed to poke fun at things that will eventually be used to attempt to murder me, you absolute anthropomorphic dunce cap.

        I wasn’t referring to some far off scenario, more for when this situation happens

        I can assure you that not only do I live somewhere where these very things are above me daily, but that I’m out here working my ass off in unspeakable ways to prevent exactly the aforementioned scenario for people like yourself.

        Direct your anger elsewhere, the energy could be spent doing something useful

      • vacuumflower@lemmy.sdf.org · +1 / −1 · 10 hours ago

        It’s a trope that every problem posed by the plot has a solution of difficulty level properly fit to the audience.

        A culture of arcade games, unfortunately, has such long-standing effects.

        While we are playing a roguelike. With no respawns.

    • ILikeBoobies@lemmy.ca · +4 / −4 · 13 hours ago

      A machine is more expensive and less expendable than a human. You don’t need to worry about killbots.

      • bearboiblake@pawb.social · +8 · 9 hours ago

        Sorry, but this is a stupid take. Humans can refuse to fire on a crowd of innocent people. Killbots cannot. The unquestioning loyalty is worth more than money can buy.

          • ArmchairAce1944@discuss.online · +1 · 5 hours ago

            The reason shooting people was too difficult is that many of the Einsatzgruppen members broke down psychologically, and some became so murderous that they might not have been fit to reenter civilian society. They used gas chambers because it was sufficiently distanced from the actual act of killing (it just involved rounding people up into a room and having some guy with a canister dump the stuff into a vent; none of the actual killers even had to see the results of their actions, as the cleanup was done by another group) that they could do it without creating that same problem.

        • dejova281@lemmy.world · +1 / −1 · 9 hours ago

          Brainwashing is a thing, just look at the modern despots and their foot soldiers.

    • Chaotic Entropy@feddit.uk · +40 · 23 hours ago

      Sam Altman is just some fail-upward money guy; he’s eventually been removed from basically every prior position he has held.

      • PolarKraken@lemmy.dbzer0.com · +22 · 22 hours ago

        Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷‍♂️

  • perishthethought@piefed.social · +175 / −3 · 1 day ago

    mainstream

    I’ll believe that when my sisters start saying this. Till then, it’s just us privacy fans screaming in a dark cave, enjoying the echo.

    • criscodisco@lemmy.world · +4 · 5 hours ago

      I had a coworker tell me how cool Copilot was because he asked it a question and it found the answer in an email in his outlook mailbox. I thought, “you needed AI to search your email?”

      We are probably cooked.

    • Xorg_Broke_Again@sh.itjust.works · +95 / −1 · 1 day ago

      It’s always like this. We get a ton of articles on how everyone is suddenly boycotting/deleting [insert thing] but when you ask someone in real life, they usually have no idea what you’re talking about.

      • EldritchFemininity@lemmy.blahaj.zone · +8 · 16 hours ago

        The one thing I will say is that there does seem to be a generalized dislike for AI that has all the investors and upper-management types nervous. Even their own studies show that people generally either don’t care about AI in their products or actively dislike it/find it intrusive. There was a study by a phone company this past summer or fall that concluded 80% of their users had no interest in AI or found that it actively made their experience worse, and there have been plenty of pretty damning reports about how useful it’s been in various industries (just look at Microslop). That is not conducive to convincing investors to fund your product and does not show a viable path to making a profit in the future.

        We’ve seen similar things happening recently with car manufacturers walking back on their big touchscreens (with some help from regulation in civilized places that care about things like “pedestrian fatalities” - like Europe) due to consumer sentiment. They tried for nearly a decade to push bigger and bigger screens into cars and remove physical buttons, and now they’re moving in the other direction. Completely anecdotal evidence, but the last time I went to buy a car I told the salesman at the dealership that I wasn’t interested in cars newer than a certain year because that was when they increased the size of the screen and put them in a more obnoxious spot on the dashboard, and he said that he heard similar sentiments from practically everybody who came in looking to buy a car - everybody hated the bigger screens.

      • The Quuuuuill@slrpnk.net · +29 / −1 · 1 day ago

        So explain it to them gently. You won’t reach everyone, but you’ll reach more people than by accepting this status quo.