
OpenAI Cofounder Mocked for Tweeting That Neural Networks Might Be Slightly Conscious (futurism.com) 175

"It may be that today's large neural networks are slightly conscious," OpenAI cofounder Ilya Sutskever tweeted Wednesday.

Futurism says that after republishing that remark, "the responses came rolling in, with some representing the expected handwringing about sentient artificial intelligence, but many others calling bull." "Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI," UNSW Sydney AI researcher Toby Walsh chimed in....

Independent sociotechnologist Jürgen Geuter, who goes by the pseudonym "tante" online, quipped in response to Sutskever's tweet that "it may also be that this take has no basis in reality and is just a sales pitch to claim magical tech capabilities for a startup that runs very simple statistics, just a lot of them...."

Leon Derczynski, an associate professor at the IT University of Copenhagen, ran with the same idea. "It may be that there's a teapot orbiting the Sun somewhere between Earth and Mars," he bantered. "This seems more reasonable than Ilya's musing, in fact, because the apparatus for orbit exists, and we have good definitions of teapots...."

These critics, it should be noted, are not wrong to point out the outlandishness of Sutskever's claim — it was not only a departure for OpenAI and its chief scientist, but also a pretty unusual comment to make, given that up to this point, most who work in and study AI believe that we're many years away from creating conscious AI, if indeed we ever do.

Sutskever, for his part, seems unbothered by the controversy.

"Ego is (mostly) the enemy," he said Friday morning.

This discussion has been archived. No new comments can be posted.

OpenAI Cofounder Mocked for Tweeting That Neural Networks Might Be Slightly Conscious

Comments Filter:
  • Mockery? (Score:5, Funny)

    by cascadingstylesheet ( 140919 ) on Sunday February 13, 2022 @08:48AM (#62263539) Journal
    Mockery? On the internet????
    • by Åke Malmgren ( 3402337 ) on Sunday February 13, 2022 @08:50AM (#62263541)
      I'm slightly conscious of the fact that there might be such a thing.
    • Re:Mockery? (Score:5, Informative)

      by Joce640k ( 829181 ) on Sunday February 13, 2022 @09:10AM (#62263563) Homepage

      Somebody really needs to put a teapot into orbit between Earth and Mars. Elon...?

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      (also drop a Mars Bar where the Mars rover will find it...)

  • Self-aware (Score:5, Funny)

    by RickDeckard57 ( 4512475 ) on Sunday February 13, 2022 @08:54AM (#62263547)
    Wow, that's really old news! Everyone knows Skynet became self-aware on August 29, 1997.
  • by OneHundredAndTen ( 1523865 ) on Sunday February 13, 2022 @09:50AM (#62263609)

    I own a little book from 1981 that goes under the title 'Experiments in Artificial Intelligence for Small Computers'. While interesting - especially at the time - it is ridiculously triumphalist. From the preface:

    Furthermore, we may be closer to artificial intelligence than some people think. At least one AI researcher - Philip C. Jackson from Xerox Corp. - is recommending that computer scientists, when working on certain types of AI programs, take certain precautions lest the program suddenly become intelligent and get out of control!

    Some just won't learn.

    • by gweihir ( 88907 )

      Some just won't learn.

      Indeed. Always the same grand, baseless visions. They are basically trying to claim they are the greatest researchers and engineers ever by hyping their product. At the same time, that product is in no way, shape or form what they are claiming it is, and it certainly is only "intelligent" if you water the meaning of "intelligent" down far below its usual sense. By the definition currently used, a book can be "intelligent". That makes no sense at all, unless you are lying for a marketing campaign.

  • So what other definitions can we change to make the attached word fit some narrative?

  • Truth was taken for mockery; the target of the entire affair doesn't seem all that bothered, but the media, as always, are putting more wood on the fire while trying to milk the story for needless drama and clicks.

  • by suss ( 158993 )

    "Independent sociotechnologist Jurgen Geuter, who goes by the pseudonym "tante" online"

    Tante means "Aunt" in several languages, Dutch and German being two of them.

    We all have a crazy aunty who claims to have some made-up job, but I hadn't heard "sociotechnologist" before...

  • by Slicker ( 102588 ) on Sunday February 13, 2022 @11:06AM (#62263687)

    Yes. I read the book, shaking my head all the way through. The Complex "Theory" of consciousness is (and nevermind the lack of a definition for it) likely to be something that arises from a certain critical mass of complexity. The idea is almost as absurd as referring to it as a theory. The reasoning is similar to that of Roger Penrose's Chinese Room analogy, in that it's based on a blurry view of what each was talking about. They had no clear idea of what they were talking about and that fact, alone, makes it nonsense.

    Neural nets were based on the 1950s Hodgkin and Huxley model of neurons. That model was extremely simplistic and only minimally biologically valid. Neurons are much more complex and we know far more about them today--knowledge never incorporated into neural nets. For more biologically accurate models, look at the GENESIS or NEURON projects. These are programs you can use for rich, biologically valid models. Instead, neural nets moved toward optimized mathematical models and, in so doing, grew much closer to the field of statistics and probability. This new way of modeling leaves little room for corrections or for much in the way of fundamental advances. TensorFlow, for example, provides a handful of activation functions (the nonlinearities applied to each layer's outputs), and while you can add more, you have a highly constrained environment in which these functions can operate. The father of Deep Learning himself has openly stressed this limitation.

    In AI, researchers almost exclusively seek new methods of intelligence, especially algorithms that can solve the widest possible set of problems. Many seek to claim this as Artificial General Intelligence (AGI) but what if it is not intelligence that is at the core of what we are? What if intelligence (of which there can be many kinds.. perhaps unlimited kinds) is merely a tool for an Agent of Free Will? That agent uses Free Will to decide which tools to apply in understanding or solving each problem.

    Free Will is the ability to derive options, weigh them against each other, and execute the most preferred. We (humans) compare our current observations to patterns from the past to judge which sequences are more or less likely to occur in the future (or to have happened in the past, if we are pondering the past). Various things in such sequences carry efficacies (positive for things like food when we are hungry, negative for things that will cause pain). These are possibilities, and those that will only come about if we act to make them likely are optional possibilities. Each sequence is weighed by the sum of its likelihood and its efficacy; we choose the one with the highest sum and execute it (see the toy sketch at the end of this comment).

    So what is consciousness? Well, I think most of us agree that consciousness is the continuous stream of things we are aware of. Those possibilities I spoke of, above, comprise exactly the things we are aware of. So from this, we can easily say that consciousness is the stream of possibilities (some of which are options) that pass through awareness. One could also say that the soul is the continued and unique existence of that consciousness (if you don't mind leaving the question of immortality out of it). The soul is, in other words, the story of one's life.

    Free Will provides for so much that neural nets lack, such as intent, purpose, and the ability to contemplate. By default, the human mind is in contemplation mode (modeling possibilities) until we decide to take action. It seems that deciding to contemplate can also be an action of a kind. In all of this, it's hard to argue that we are not looking at the very inner workings of a consciousness.
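
    A toy sketch of that weighing step, purely for illustration (the option names and numbers below are made up, not part of any real model):

```python
# Minimal illustration of "derive options, weigh by likelihood + efficacy,
# execute the most preferred". All values are invented for the example.

def choose_action(options):
    """Return the option with the highest likelihood + efficacy score."""
    return max(options, key=lambda o: o["likelihood"] + o["efficacy"])

options = [
    {"name": "eat now",         "likelihood": 0.9,  "efficacy": +0.6},
    {"name": "keep working",    "likelihood": 0.8,  "efficacy": +0.2},
    {"name": "touch hot stove", "likelihood": 0.99, "efficacy": -1.0},
]

print(choose_action(options)["name"])  # -> eat now
```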

    • by jd ( 1658 )

      Neural nets used to be based on that. Genesis 3 and other biological simulations of neural nets are not based on simple weighting algorithms. Do keep up.

      • by gweihir ( 88907 )

        They are still fully deterministic and in the end, their "learning" part is just estimating parameters for a statistical classifier. Making that classifier more complex does not suddenly make it "intelligent". You have completely missed the point of the OP.
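
        A minimal NumPy sketch of that point, with made-up toy data: the "training" loop below is nothing more than estimating the two parameters of a logistic classifier.

```python
import numpy as np

# Toy data: two 1-D clusters labelled 0 and 1 (invented for the example).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 100), rng.normal(1, 0.5, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = 0.0, 0.0   # the whole "network" is these two parameters
lr = 0.1

for _ in range(500):                         # "learning" = gradient steps
    p = 1 / (1 + np.exp(-(w * x + b)))       # sigmoid output
    w -= lr * np.mean((p - y) * x)           # gradient of the cross-entropy
    b -= lr * np.mean(p - y)

print(w, b)   # estimated parameters of a plain statistical classifier
```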

        • There is precisely zero evidence that the human brain is not deterministic.

          People go nuts and take a shit on their own intelligence as soon as they realize that all available evidence suggests that "Free Will" is nothing but a fucking illusion swallowed up by a very complex piece of organic machinery.

          You think you have a choice. But of course you would. The fact that you're aware of the deliberative process your neurons are engaging in doesn't mean you're in control of it.

          You speak of baseless mysticism.
    • The Chinese Room argument was from John Searle, FWIW.

    • by gweihir ( 88907 )

      Yes. I read the book, shaking my head all the way through. The Complex "Theory" of consciousness is (and nevermind the lack of a definition for it) likely to be something that arises from a certain critical mass of complexity. The idea is almost as absurd as referring to it as a theory.

      Indeed. That idea is pure, baseless mysticism. Even very complex deterministic systems are still fully deterministic and have no special properties. This idea is basically "creation from nothing" and utterly dumb.

      It is also pretty clear that intelligence is basically only a tool and that it requires skill and insight to apply it competently. That skill and insight can then be boosted using intelligence, but it needs to be present before and without that intelligence. Just look at how many utterly dumb high-

    • by narcc ( 412956 )

      Total failure.

      First, there is no book called The Complex Theory of Consciousness or any variation of that.

      The reasoning is similar to that of Roger Penrose's Chinese Room analogy,

      Penrose had nothing to do with the Chinese room, and the Chinese room is not related in any way to emergence. You're talking nonsense.

      In AI, researchers almost exclusively seek new methods of intelligence, especially algorithms that can solve the widest possible set of problems.

      This is what people imagine AI researchers do, not what they actually do.

      So what is consciousness? Well, I think

      You are the wrong person to ask.

  • it was found that most of the mockery was perpetrated by the neural networks themselves.
  • I just found out my girlfriend is slightly pregnant, which is too bad since my parents are slightly dead, so they will never meet their slightly human grandchild.
    • This was basically what I was going to say.

      The very first problem is defining "conscious", then, when that is done, we have to figure out if there are degrees of consciousness.

      Look, we even have this with pregnancy. Is a person pregnant the moment the first sperm breaks through into the egg? Does it start at the first cell division (12 hours?)? Does the placenta have to be forming before there's a pregnancy (4-6 weeks later)? Or does just looking at a girl with a glint in your eye cause her to

    • But then the next day you found out you had cancer, and very fucking quickly learned to understand that some things exist on a gradient.
  • by jd ( 1658 )

    We don't have a definition of consciousness, so it's hard to falsify the claim.

    We do know that consciousness is emergent, a product of interactions rather than an algorithm, and that it is a continuum rather than a yes/no thing.

    What we don't know is if there's a minimum level below which the interactions are discrete (in the same way that below a certain threshold of photons, the two slit experiment produces random dots and not a wave). That's an unprovable assumption until we know what we're even measuring.

    • by gweihir ( 88907 )

      We do know that consciousness is emergent, a product of interactions rather than an algorithm, and that it is a continuum rather than a yes/no thing.

      Actually, we do not know that. That idea is pure, baseless speculation. The only things we know are that some instances of consciousness can recognize themselves and that they can affect this physical reality with that recognition. These follow from the consciousness being discussed via physical channels. That is about all we know.

      Yes, that is almost nothing and it does not really form a sound basis for more research. Hence people fantasize about additional properties, just like you do. A very common re

    • We don't have a definition of consciousness, so it's hard to falsify the claim.

      We don't have a precise definition of consciousness, but we do know some boundaries of the definition. For example, all evidence says that rocks are not conscious.

    • by narcc ( 412956 )

      We do know that consciousness is emergent

      We do not know that.

  • No one can define consciousness in an objectively verifiable or falsifiable way. This is the stuff of freshman bull sessions among philosophy majors.

    • by gweihir ( 88907 )

      No one can define consciousness in an objectively verifiable or falsifiable way. This is the stuff of freshman bull sessions among philosophy majors.

      Indeed. But fantasizing about a definition and then deriving additional imagined "properties" does allow a specific type of person in "AI" research to pretend their work is far greater and more significant than it actually is. That clearly happened here. Makes for a bad researcher.

      • by narcc ( 412956 )

        AI has been about marketing from the very beginning. Pamela McCorduck, who was there at the time, writes about the origin of the term in her book Machines Who Think.

        • by gweihir ( 88907 )

          AI has been about marketing from the very beginning. Pamela McCorduck, who was there at the time, writes about the origin of the term in her book Machines Who Think.

          That may explain why nothing really worthwhile was ever found in the "A(G)I" space: marketing is satisfied once it has found enough suckers who believe. No need to have an actual product.

    • No one can define consciousness in an objectively verifiable or falsifiable way

      Which is kind of weird, if you think about it.

  • These terms, consciousness and intelligence, need much more debate, which is what is happening right now, and that is a good thing, even if we laugh at it at first.
    We need more of it.

    Take a few definitions of consciousness, for example: (a) the quality or state of being aware, especially of something within oneself; (b) the state or fact of being conscious of an external object, state, or fact.

    What does that even mean? Systems may be "aware" of states and facts, mimicking awareness given (lots of) input.

    The
  • 1) Star explodes, blowing heavier elements throughout its local neighborhood.

    2) Eddies form in the dust, creating gravity wells. Accretion occurs. Solar system forms.

    3) Earth accretes.

    4) A molecule or tangle of molecules comes alive [khanacademy.org]. Consumes resources, reproduces, seeks to persist (i.e. maintain homeostasis [google.com])

    5) As part of being alive, the organism must process information. It must record and communicate to progeny: how to consume resources, how to convert those resources to energy, how to reproduce, how to e

  • That guy is delusional or is trying to push something via a lie.

    Rationale: As we can reason about consciousness, it clearly has some effect on physical reality. So it cannot be a passive observer. Otherwise it would observe, but the idea would never have made it into the physical world. Artificial Neural Networks are fully deterministic digital structures. They are about as conscious as a rock or a piece of bread, and they cannot generate or have an original idea like the existence of consciousness.

    • Being fully deterministic doesn't preclude one from being conscious.

      • by gweihir ( 88907 )

        Being fully deterministic doesn't preclude one from being conscious.

        It does, unless you are only a fully passive observer. Think it through.

        • It's not entirely clear that you are not fully deterministic. That is, humans.

          • by gweihir ( 88907 )

            It's not entirely clear that you are not fully deterministic. That is, humans.

            Actually, that idea does not work as it implies consciousness being a passive observer and that does not work either.

            • No, it implies that the choices you make are a result of things in the past.

              • by gweihir ( 88907 )

                No, it implies that the choices you make are a result of things in the past.

                Nope. If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly. It is not.

                You are thinking too small and in too limited a space here.

                • If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly.

                  I see no implication here.

                  • by gweihir ( 88907 )

                    If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly.

                    I see no implication here.

                    Somehow that does not surprise me...

        • That is an absurd assertion.
          In fact, the rest of your thread with phantomfive is pure fucking absurdity.

          You make assertions that are not backed up by fact in the fucking slightest.
          They're reasonable things to muse, but you absolutely can-fucking-not state them as facts.
    • They're not fully deterministic if you throw some (genuine) random number skew into the mix. It won't make them conscious, but they could come up with something original.

      • by gweihir ( 88907 )

        They're not fully deterministic if you throw some (genuine) random number skew into the mix. It won't make them conscious, but they could come up with something original.

        Not really. Or no more than generating texts at random would produce texts with original insights. On a practical level, finding the needle of originality in the haystack of nonsense would be impossible. No, these machines cannot do it either, because that would require actual insight.

        Also, nobody knows whether genuine random numbers can be generated. The "true" random quantum effects from physics are just a case of "we have no better model".

  • by Walt Dismal ( 534799 ) on Sunday February 13, 2022 @08:53PM (#62264917)
    I do cutting edge research in conscious self-aware systems, and though I will not mock him, I can say he's very off the mark. A big, big problem these days is that statistical machine learning people have inflated ideas about what recognition is good for. They overreach. They think that because the brain has neurons, and they emulate neurons, ML owns the keys to the universe. Nothing could be further from the truth. Stat ML is just a tool and a building block, but it alone is NOT how you build conscious systems. I'm writing a couple of university-level textbooks on how one might build self-aware conscious systems, and I know very well that one has to have more than stat learning alone to make a mind. However, I will throw out a bone: if you interleave symbolic systems between stat ML layers, replacing the 'hidden layers' with dynamic self-restructurable symbolic computing elements, you can eventually build up systems complex enough to do the functions needed. I call these 'hybrid' systems and you will see more about it in years to come after I get some patents going.
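
    Purely as an illustration of what "interleaving" could mean in the simplest possible reading (this is not the poster's actual design; the rule, shapes, and weights below are invented):

```python
import numpy as np

# Toy "hybrid" forward pass: a statistical layer, then a hand-written symbolic
# rule, then another statistical layer. Everything here is invented for the
# example and only sketches the general idea of interleaving the two.

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # first "stat ML" layer
W2 = rng.normal(size=(2, 4))   # second "stat ML" layer

def symbolic_pass(h):
    """A toy symbolic element: if the first feature dominates, suppress the rest."""
    if h[0] > h[1:].sum():
        h = np.concatenate([h[:1], np.zeros_like(h[1:])])
    return h

def hybrid_forward(x):
    h = np.tanh(W1 @ x)     # statistical layer
    h = symbolic_pass(h)    # symbolic element interleaved between them
    return W2 @ h           # statistical layer

print(hybrid_forward(np.array([0.5, -0.2, 1.0])))
```
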
  • "Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI,"

    Completely failing to realize that he himself is an Artificial General Intelligence.
