• Avid Amoeba@lemmy.caOP · 19 days ago

    Also he thinks LLMs are a dead end for getting smarter AI while Zuck is doubling down on them.

    • tomiant@piefed.social · 19 days ago

      Getting Smarter AI < Making More Money

      Is there more money in smarter AI or in manipulating people’s voting patterns with the tools you’ve got?

      I saw Suck at Trump’s inauguration, I didn’t see this Chinese feller there.

      • nymnympseudonym@piefed.social · 19 days ago

        this Chinese feller

        He’s French, actually.

        This is one of the three people who basically invented deep learning. One of the others is Geoffrey Hinton, who won the Nobel Prize in Physics in 2024; the third is Yoshua Bengio.

        No matter what you think of LeCun or his opinions… he’s damn well worth listening to with attention and respect.

          • krooklochurm@lemmy.ca · 19 days ago

            Good for you for owning up to it like a grown-up. Might I suggest rewarding yourself by shumming?

            • tomiant@piefed.social · 18 days ago

              Chumming is a Quest in Escape from Tarkov. Must be level 24 to start this quest. Stash 3 Golden neck chains under the mattress next to BTR-82A in Generic Store on Interchange Stash 3 Golden neck chains in the microwave on the 3rd floor of the dorm on Customs Stash 3 Golden neck chains in the middle wooden cabin at the sawmill on Woods Eliminate 5 PMC operatives in the time period of 22:00-10: …

              Mmmno.

              • krooklochurm@lemmy.ca · 18 days ago

                Shumming.

                It’s a new word I learned the other day on lemmy.

                It’s when you shit and cum at the same time.

                Here it is in a sentence: “I’m going to shum all over your face”

                Or “I can’t right now, I’m shumming!”

  • tal@lemmy.today · 19 days ago

    Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.

    World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.

    Sounds reasonable.
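    As a concrete (toy) illustration of the world-model idea: a model that learns the dynamics of a two-number world (position and velocity under constant-velocity motion) from observed transitions rather than from text. Everything in the sketch, including the state, the physics, and the training numbers, is invented for illustration:

```python
import random

# Illustrative only: a 2x2 linear "world model" that learns the dynamics
# pos' = pos + vel, vel' = vel from observed (state, next_state) pairs.
# Real world models learn from video/spatial data; every number here is
# invented for the sketch.

def true_step(state):
    """Ground-truth physics: constant-velocity motion, one time step."""
    pos, vel = state
    return (pos + vel, vel)

def fit_dynamics(n_samples=2000, lr=0.1, seed=0):
    """Fit a 2x2 matrix A so that A @ state ~ next_state, by per-sample SGD."""
    rng = random.Random(seed)
    A = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n_samples):
        s = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        t = true_step(s)
        for i in range(2):
            pred = A[i][0] * s[0] + A[i][1] * s[1]
            err = pred - t[i]
            A[i][0] -= lr * err * s[0]
            A[i][1] -= lr * err * s[1]
    return A

A = fit_dynamics()
# A converges toward [[1, 1], [0, 1]]: the transition rule itself,
# not any memorized trajectory.
```

    The point of the toy: what gets learned is the transition rule, which is the “internal understanding” the article is gesturing at, just at trivially small scale.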

    That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect an AGI to be an LLM.

    EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs here) and at advanced AI. It’s not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.

    I do think that if you’re a company building a lot of parallel compute capacity now, that to make a return on that, you need to take advantage of existing or quite near-future stuff, even if it’s not AGI. Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.

    https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres

    Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028

    So Meta probably can’t be doing only AGI work.

    • just_another_person@lemmy.world · 19 days ago

      LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension.
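      For what it’s worth, the “probability” part at the mechanical level is just this: each step turns scores into a distribution over the vocabulary and samples one token. A minimal sketch with invented scores (a real model computes the scores with billions of parameters, but the final step looks like this):

```python
import math
import random

# Toy version of one decoding step: scores ("logits") -> softmax
# probabilities -> sample a token. The scores below are made up; in a
# real LLM they come from the network, but this last step is the
# "probability" part being described.

def softmax(logits):
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

def sample_next_token(logits, rng):
    """Draw one token according to the softmax distribution."""
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # float-rounding edge case: fall back to the last token

logits = {"cat": 2.0, "dog": 1.0, "the": 0.1}  # invented scores
next_tok = sample_next_token(logits, random.Random(0))
```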

      The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. Also more likely to be the thing that kills all humans.

        • just_another_person@lemmy.world · 19 days ago

          Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.

          I’ll never speak to this topic again since I’ve clearly been bested with your knowledge from a Google Blog.

          • Communist@lemmy.frozeninferno.xyz · 19 days ago

            yes, google reported about their ai discovering a novel cancer treatment, of course they did?

            now tell me about how it isn’t true. Do you have anything of substance to discredit this?

            this reeks of confirmation bias, did you even try to invalidate your preconceived notions?

            • just_another_person@lemmy.world · 19 days ago

              I sure do. Knowledge, and being in the space for a decade.

              Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣

              LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between others.

              I can already tell from your tone you’re mostly driven by bullshit PR hype from people like Sam Altman , and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become some supreme being of ideas and knowledge as you’ve been pitched. It’s super fast sorting from static data. That’s it.

              You’re drunk on Kool-Aid, kiddo.

              • Communist@lemmy.frozeninferno.xyz · 19 days ago

                You sound drunk on Kool-Aid. This is a validated scientific report from Yale; tell me a problem with the methodology or anything of substance.

                so what if that’s how it works? It clearly is capable of novel things.

                • just_another_person@lemmy.world · 19 days ago

                  🤦🤦🤦 No…it really isn’t:

                  Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.

                  Not only is there no validation, they have only begun even looking at it.

                  Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.

                  Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through thousands of millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It’s not that it is intelligent or making “discoveries”; it’s just moving really fast.

                  You feed it 102 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The thing you’re missing there is:

                  1. all the logic programmed by humans
                  2. The data collected and sanitized by humans
                  3. The task groups set by humans
                  4. The output validated by humans
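                  The loop being described, where humans supply the pool, the scoring, and the validation and the machine just iterates fast, looks roughly like this sketch (the candidate pool and the score function are invented placeholders, not real biochemistry):

```python
from itertools import combinations

# Sketch of the search pattern described above: humans choose the
# candidate pool and write the scoring logic; the machine's only
# contribution is trying every combination quickly.

AMINO_ACIDS = ["A", "R", "N", "D", "C", "E", "Q", "G"]

def score(combo):
    """Stand-in for a human-designed evaluation (e.g. predicted binding)."""
    return sum((ord(a) * 31) % 7 for a in combo)

def best_combination(pool, k):
    """Exhaustively score every k-subset and keep the best one."""
    return max(combinations(pool, k), key=score)

best = best_combination(AMINO_ACIDS, 3)
```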

                  It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM

                  Nothing at any stage of development is novel output, or validated by any models, because…they can’t do that.