OpenAI Cofounder Mocked for Tweeting That Neural Networks Might Be Slightly Conscious (futurism.com) 175
"It may be that today's large neural networks are slightly conscious," OpenAI cofounder Ilya Sutskever tweeted Wednesday.
Futurism says that after republishing that remark, "the responses came rolling in, with some representing the expected handwringing about sentient artificial intelligence, but many others calling bull." "Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI," UNSW Sydney AI researcher Toby Walsh chimed in....
Independent sociotechnologist Jürgen Geuter, who goes by the pseudonym "tante" online, quipped in response to Sutskever's tweet that "it may also be that this take has no basis in reality and is just a sales pitch to claim magical tech capabilities for a startup that runs very simple statistics, just a lot of them...."
Leon Derczynski, an associate professor at the IT University of Copenhagen, ran with the same idea. "It may be that there's a teapot orbiting the Sun somewhere between Earth and Mars," he bantered. "This seems more reasonable than Ilya's musing, in fact, because the apparatus for orbit exists, and we have good definitions of teapots...."
These critics, it should be noted, are not wrong to point out the outlandishness of Sutskever's claim — it was not only a departure for OpenAI and its chief scientist, but also a pretty unusual comment to make, given that up to this point, most who work in and study AI believe that we're many years away from creating conscious AI, if indeed we ever do.
Sutskever, for his part, seems unbothered by the controversy.
"Ego is (mostly) the enemy," he said Friday morning.
Mockery? (Score:5, Funny)
Re: Mockery? (Score:5, Funny)
Re: Mockery? (Score:5, Insightful)
wasn't it Alan Turing who said that in order to create Artificial Intelligence ... you first have to understand what Intelligence is?
No, Alan Turing never said that because it isn't true. Intelligence evolved from the primordial soup. The soup didn't understand what intelligence is.
Throughout history, humans have invented many things through trial and error without understanding how they work.
i asked a friend of mine who has been studying Consciousness and publishing Academic papers
So your philosopher friend knows the secret to creating conscious AI? I don't think so.
Re: (Score:2)
So your philosopher friend knows the secret to creating conscious AI? I don't think so.
no: he knows of a mathematical definition of Consciousness and has presented at multiple conferences on the topic of Consciousness for many years.
As they say in wikiland: 'Citation Needed'.
I study philosophy, especially philosophy of mind and more specifically the problem of consciousness, and have never heard of anything remotely like "a mathematical definition of Consciousness".
Can you please provide your friend's name and a reference to one of these presentations he has made regarding this, lest we be left imagining they are both just figments of your imagination?
One answer (Score:3)
i asked a friend of mine who has been studying Consciousness and publishing Academic papers about it for decades if he could help here, and what he said was, "if i help you to create Machine-based Consciousness, can you guarantee that the resultant beings would be left in peace to live as they chose, or would they be tortured to do humanity's bidding?"
i couldn't answer him. can anyone else?
I've been doing AI research for the past few years. I wrestled with this and other moral issues for a while, and even asked a bunch of my friends about it.
My final take is that I might as well ignore the moral implications and advance the science, because there are a ton of other researchers trying to do just that, and even if I don't discover anything new those other researchers will.
In sorting this out, I was reminded of Leo Szilard [wikipedia.org], who first figured out that nuclear chain reactions are possible. Up until
Re: (Score:3)
It's not at all clear that *not* using 2 bombs on Japan would have reduced the overall death count.
It is clear that the conventional bombing killed more people in Japan than the nuclear weapons did.
Re: (Score:2)
I've been doing AI research for the past few years.
That seems unlikely, given all your talk about slavery. AI has nothing to do with consciousness. Researchers aren't trying to make HAL 9000. That's science fiction nonsense.
Research, not engineering (Score:2)
I've been doing AI research for the past few years.
That seems unlikely, given all your talk about slavery. AI has nothing to do with consciousness. Researchers aren't trying to make HAL 9000. That's science fiction nonsense.
You're referring to engineering, not research.
Learning how to interface to tensor flow, how the APIs work, or interfacing current models to new situations doesn't really count as research. It's a fine hobby for programmers, but it doesn't really push the envelope very much.
Unlikely as it may seem, I'm looking into artificial general intelligence, from which questions about consciousness and morality arise.
My point about (AI) slavery is this: can a choice be immoral if doing it and *not* doing it leads to th
Re: (Score:2)
You're referring to engineering, not research.
No, I'm not.
Unlikely as it may seem, I'm looking into artificial general intelligence
That is unlikely.
Re: (Score:2)
FTFY
"Alexa: Annoy the neighbours"
Re: (Score:2)
A lot of this is pure philosophy. There will be people who can never accept that anything we built can have a "soul" and therefore be afforded the rights of a conscious being. Others feel bad for neglecting their tamagotchi.
The legal arguments have been going on for years over primates too. Is there some threshold above which something is considered worthy of the same rights that human beings have?
We have a bad history of not even treating people who look slightly different to us as fully human, so it's not
Re: (Score:2)
The problem is people mistaking science fiction for reality.
Re: (Score:2)
Correction: in order to know whether you have created intelligence, you first have to define what intelligence is.
That's the problem here. No one really agrees what "consciousness" means. It's just a word that gets used to refer to a lot of vague concepts that aren't understood by the people using it. Who is to say neural networks aren't slightly conscious? Until you define the word, the statement doesn't have a clear meaning.
There's a branch of math called Integrated Information Theory. It's a rigorou
Unlikely that Turing said that (Score:3)
wasn't it Alan Turing who said that in order to create Artificial Intelligence (as if any type of intelligence can be described, by humanity in its general arrogance, as "artificial") you first have to understand what Intelligence is?
I doubt very much that Turing said such a thing. If so, you need to point to a source for the quote. Turing's main publication on the subject is "Computing Machinery and Intelligence" [wikipedia.org] (the paper itself in PDF [oup.com]). The paper is primarily about the lack of a sensible definition of "intelligence" to apply to either people or machines. He proposed the "imitation game" (Turing's words) or "Turing test" (not Turing's words) as the only objective approach that was feasible at present. His point appears to be more that there is no sensible objective definition rather than that his "game" provided one. He argued that the meaning of the word "intelligence" would develop as we thought about it and looked at examples of possibly intelligent behavior in machines rather than people. He wrote, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." I am pretty sure that this comment had to do with the development of the meaning of the word "intelligence" as well as progress in computing. He was wrong about the time.
Defining something precisely is not a prerequisite for producing it. Rather, it's a requirement for demonstrating that what you have produced meets the definition. Two quite different things. Turing appears to argue that, just as our current understanding (without an objective definition) of "intelligence" is based on our feeling that it's a good thing and that we humans have it, future understanding of the word is likely to accommodate nonhuman behavior that we find compatible with our vague feelings on the concept.
Back to the subject of the posting, which is "consciousness" rather than "intelligence": "consciousness" is even more problematic to define objectively, since it's generally understood as an internal experience rather than a type of behavior ("intelligence" seems to be treated each of those ways in different discussions). I can only experience my own consciousness, and there is no sensible basis yet for connecting that internal personal experience to well defined aspects of behavior. The meaning of "consciousness" in our discourse is even more likely than that of "intelligence" to develop over the years, and to reflect our attitudes toward others (human and nonhuman) as much as our objective knowledge. A history of the attribution of "consciousness" to animals other than humans would probably illuminate the subject. The Wikipedia article [wikipedia.org] has some possibly helpful discussion.
Re: (Score:2)
Preview button as a way to edit (Score:2)
Re: (Score:3)
(a million monkeys might write Shakespeare but if they can't recognise it then like Maxwell's Demon they'd eat it, sit on it, or wipe their ass with it).
We've run that experiment. It's called '8chan'. We didn't get Shakespeare.
the people *developing* AI are not themselves in any way what humanity in general might describe as being "Conscious beings" themselves, let alone understand the concept of Consciousness!
Yeah, no one understand consciousness. We don't even know where to begin. Not that it matters, as AI is not in anyway related to consciousness. That's silly science fiction, not reality.
i asked a friend of mine who has been studying Consciousness and publishing Academic papers about it for decades if he could help here, and what he said was, "if i help you to create Machine-based Consciousness, can you guarantee that the resultant beings would be left in peace to live as they chose, or would they be tortured to do humanity's bidding?"
Your 'friend' is making fun of you. He's obviously not capable of helping you "create Machine-based Consciousness". No one is.
Re: (Score:2)
to create Artificial Intelligence (as if any type of intelligence can be described, by humanity in its general arrogance, as "artificial") you first have to understand what Intelligence is?
You can definitely rule things out as not having intelligence. For example, rocks don't have intelligence (unless you believe they have some spiritual intelligence for which there is no evidence).
Re: (Score:2)
I don't see why the question is hard to answer, just look at the shameful way we raise most farm animals. AI would be experimented upon if it was conscious. Humans are collectively stupid, selfish and cruel.
Re: (Score:2)
Well if you can't, who possibly could?
Ask OpenAI, then we might have an idea if it is conscious or not.
Re: (Score:2)
Re:Mockery? (Score:5, Informative)
Somebody really needs to put a teapot into orbit between Earth and Mars. Elon...?
https://en.wikipedia.org/wiki/... [wikipedia.org]
(also drop a Mars Bar where the Mars rover will find it...)
Self-aware (Score:5, Funny)
These people won't learn (Score:5, Interesting)
I own a little book from 1981 that goes under the title 'Experiments in Artificial Intelligence for Small Computers'. While interesting - especially at the time - it is ridiculously triumphalist. From the preface:
Furthermore, we may be closer to artificial intelligence than some people think. At least one AI researcher - Philip C. Jackson from Xerox Corp. - is recommending that computer scientists, when working on certain types of AI programs, take certain precautions lest the program suddenly become intelligent and get out of control!
Some just won't learn.
Re: (Score:2)
Some just won't learn.
Indeed. Always the same grand, baseless visions. They are basically trying to claim they are the greatest researchers and engineers ever by hyping their product. At the same time, that product is in no way, shape or form what they are claiming it is, and it certainly is only "intelligent" if you degrade the meaning of "intelligent" way below the usual meaning. By the definition currently used, a book can be "intelligent". That makes no sense at all, unless you are lying for a marketing campaign.
It's becoming a trend. (Score:2)
So what other definitions can we change to make the attached word fit some narrative?
So if I read it correctly... (Score:2)
Truth was taken for mockery, the target of the entire affair doesn't seem to be all that bothered, but the media, as always, are adding fuel to the fire while trying to milk the story for needless drama and clicks.
Tante? (Score:2)
"Independent sociotechnologist Jürgen Geuter, who goes by the pseudonym "tante" online"
Tante means "Aunt" in several languages, Dutch and German being two of them.
We all have a crazy aunty who claims to have some made-up job, but i hadn't heard "sociotechnologist" before...
Complex "Theory" of Consciousness -- Absurd (Score:5, Insightful)
Yes. I read the book, shaking my head all the way through. The Complex "Theory" of consciousness holds (never mind the lack of a definition for it) that consciousness likely arises from a certain critical mass of complexity. The idea is almost as absurd as referring to it as a theory. The reasoning is similar to that of Roger Penrose's Chinese Room analogy, in that it's based on a blurry view of what each was talking about. They had no clear idea of what they were talking about, and that fact alone makes it nonsense.
Neural nets were based on the 1950s Hodgkin and Huxley model of neurons. That model was overly simplistic and only minimally biologically valid. Neurons are much more complex, and we know far more about them today--knowledge never incorporated into neural nets. For more biologically accurate models, look at the Genesis or Neuron projects; these are programs you can use for rich, biologically valid models. Instead, neural nets moved toward optimized mathematical models and, in so doing, grew much closer to the field of statistics and probability. This way of modeling doesn't allow for corrections or much in the way of fundamental advances. TensorFlow, for example, provides a handful of activation functions (the nonlinearities applied to unit outputs), and while you can add more, you have a highly constrained environment in which these functions can operate. The father of Deep Learning himself has openly stressed this limitation.
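To make the contrast concrete, here's a toy sketch (pure Python, made-up constants) of the difference between a typical ANN unit, which is a stateless function, and a biological-style spiking neuron, which has internal dynamics over time. The leaky integrate-and-fire model below is itself far simpler than Hodgkin-Huxley, which adds voltage-gated ion-channel equations:

```python
def relu(x):
    # A typical ANN "neuron": a stateless, memoryless function of its input.
    return max(0.0, x)

def lif_spikes(inputs, tau=10.0, threshold=1.0, dt=1.0):
    # Toy leaky integrate-and-fire neuron (illustrative constants only):
    # membrane voltage leaks over time, integrates input, and fires/resets
    # when it crosses a threshold.
    v, spikes = 0.0, 0
    for i in inputs:
        v += dt * (-v / tau + i)   # leak plus input integration
        if v >= threshold:
            spikes += 1
            v = 0.0                # reset after a spike
    return spikes

print(relu(0.5))              # same answer for the same input, every time
print(lif_spikes([0.5] * 20)) # depends on the whole input history
```

The ReLU gives the same answer for the same input every time; the spiking neuron's output depends on its entire input history, which is exactly the kind of dynamics the optimized mathematical models dropped.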
In AI, researchers almost exclusively seek new methods of intelligence, especially algorithms that can solve the widest possible set of problems. Many seek to claim this as Artificial General Intelligence (AGI) but what if it is not intelligence that is at the core of what we are? What if intelligence (of which there can be many kinds.. perhaps unlimited kinds) is merely a tool for an Agent of Free Will? That agent uses Free Will to decide which tools to apply in understanding or solving each problem.
Free Will is the ability to derive options, weigh them against each other, and execute the most preferred. We (humans) compare our current observations to patterns of the past to know which sequences are more or less likely to occur in the future (or had happened in the past, if pondering the past). Various things in such sequences will have efficacies (positive for things like food when we are hungry, negative for things that will cause pain). These are possibilities, but those that require our action to become most likely are optional possibilities. The sum of likelihood and efficacy is what you weigh a sequence with. We choose the one with the highest sum and execute it.
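As a toy sketch of that weighing scheme (hypothetical numbers, pure Python): each candidate sequence gets a likelihood and an efficacy, and the agent executes the one with the highest sum.

```python
# Hypothetical options with made-up likelihood/efficacy values, following
# the scheme described above: positive efficacy for food when hungry,
# negative efficacy for pain.
options = {
    "eat":         {"likelihood": 0.90, "efficacy": 0.7},
    "wait":        {"likelihood": 0.99, "efficacy": 0.0},
    "touch_stove": {"likelihood": 0.80, "efficacy": -0.9},
}

def choose(options):
    # Score each option by likelihood + efficacy and execute the maximum.
    return max(options, key=lambda name: options[name]["likelihood"]
                                         + options[name]["efficacy"])

print(choose(options))  # "eat": 1.6 beats 0.99 and -0.1
```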
So what is consciousness? Well, I think most of us agree that consciousness is the continuous stream of things we are aware of. Those possibilities I spoke of, above, comprise exactly the things we are aware of. So from this, we can easily say that consciousness is the stream of possibilities (some of which are options) that pass through awareness. One could also say that the soul is the continued and unique existence of that consciousness (if you don't mind leaving the question of immortality out of it). The soul is, in other words, the story of one's life.
Free Will provides for so much that neural nets lack, such as intent, purpose, and the ability to contemplate. By default, the human mind is in contemplation mode (modeling possibilities) until we decide to take action. It seems that deciding to contemplate can also be an action of a kind. In all of this, it's hard to argue that we are not looking at the very inner workings of a consciousness.
Re: (Score:2)
Neural nets used to be based on that. Genesis 3 and other biological simulations of neural nets are not based on simple weighting algorithms. Do keep up.
Re: (Score:2)
They are still fully deterministic and in the end, their "learning" part is just estimating parameters for a statistical classifier. Making that classifier more complex does not suddenly make it "intelligent". You have completely missed the point of the OP.
Re: (Score:2)
People go nuts and take a shit on their own intelligence as soon as they realize that all available evidence suggests that "Free Will" is nothing but a fucking illusion swallowed up by a very complex piece of organic machinery.
You think you have a choice. But of course you would. The fact that you're aware of the deliberative process your neurons are engaging in doesn't mean you're in control of it.
You speak of baseless mystici
Re: (Score:2)
The Chinese Room argument was from John Searle, FWIW.
Re: (Score:2)
Yes. I read the book, shaking my head all the way through. The Complex "Theory" of consciousness is (and nevermind the lack of a definition for it) likely to be something that arises from a certain critical mass of complexity. The idea is almost as absurd as referring to it as a theory.
Indeed. That idea is pure, baseless mysticism. Even very complex deterministic systems are still fully deterministic and have no special properties. This idea is basically "creation from nothing" and utterly dumb.
It is also pretty clear that intelligence is basically only a tool and that it requires skill and insight to apply it competently. That skill and insight can then be boosted using intelligence, but it needs to be present before and without that intelligence. Just look at how many utterly dumb high-
Re: (Score:2)
Total failure.
First, there is no book called The Complex Theory of Consciousness or any variation of that.
The reasoning is similar to that of Roger Penrose's Chinese Room analogy,
Penrose had nothing to do with the Chinese room, and the Chinese room is not related in any way to emergence. You're talking nonsense.
In AI, researchers almost exclusively seek new methods of intelligence, especially algorithms that can solve the widest possible set of problems.
This is what people imagine AI researchers do, not what they actually do.
So what is consciousness? Well, I think
You are the wrong person to ask.
Chillingly... (Score:2)
Slightly conscious? (Score:2)
Re: (Score:2)
This was basically what I was going to say.
The very first problem is defining "conscious", then, when that is done, we have to figure out if there are degrees of consciousness.
Look, we even have this with pregnancy. Is a person pregnant the moment the first sperm breaks through into the egg? Does it start at the first cell division (12 hours?)? Does the placenta have to be forming before there's a pregnancy (4-6 weeks later)? Or, does just looking at a girl with a glint in your eye cause her to
Re: (Score:2)
Well... (Score:2)
We don't have a definition of consciousness, so it's hard to falsify the claim.
We do know that consciousness is emergent, a product of interactions rather than an algorithm, and that it is a continuum rather than a yes/no thing.
What we don't know is if there's a minimum level below which the interactions are discrete (in the same way that below a certain threshold of photons, the two slit experiment produces random dots and not a wave). That's an unprovable assumption until we know what we're even measurin
Re: (Score:2)
We do know that consciousness is emergent, a product of interactions rather than an algorithm, and that it is a continuum rather than a yes/no thing,
Actually, we do not know that. That idea is pure, baseless speculation. The only things we know is that some instances of consciousness can recognize themselves and that they can affect this physical reality regarding that recognition. These follow from the consciousness being discussed via physical channels. That is about all we know.
Yes, that is almost nothing and it does not really form a sound basis for more research. Hence people fantasize about additional properties, just like you do. A very common re
Re: (Score:2)
We don't have a definition of consciousness, so it's hard to falsify the claim.
We don't have a precise definition of consciousness, but we do know some boundaries of the definition. For example, all evidence says that rocks are not conscious.
Re: (Score:2)
We do know that consciousness is emergent
We do not know that.
Who cares? (Score:2)
No one can define consciousness in an objectively verifiable or falsifiable way. This is the stuff of freshman bull sessions among philosophy majors.
Re: (Score:2)
No one can define consciousness in an objectively verifiable or falsifiable way. This is the stuff of freshman bull sessions among philosophy majors.
Indeed. But fantasizing about a definition and then deriving additional imagined "properties" does allow a specific type of person in "AI" research to pretend their work is far greater and more significant than it actually is. That clearly happened here. Makes for a bad researcher.
Re: (Score:2)
AI has been about marketing from the very beginning. Pamela McCorduck, who was there at the time, writes about the origin of the term AI in her book Machines Who Think.
Re: (Score:2)
AI has been about marketing from the very beginning. Pamela McCorduck, who was there at the time, writes about the origin of the term AI in her book Machines Who Think.
That may explain why nothing really worthwhile was ever found in the "A(G)I" space: Marketing is satisfied when they have found enough suckers that believe. No need to have an actual product.
Re: (Score:2)
No one can define consciousness in an objectively verifiable or falsifiable way
Which is kind of weird, if you think about it.
The expressions used, these are awesome. (Score:2)
We need more of it.
Take a few definitions of consciousness, for example: (a) the quality or state of being aware, especially of something within oneself; (b) the state or fact of being conscious of an external object, state, or fact.
What does that even mean? Systems may be aware of states and facts, mimicking given (lots of) input.
The
Start from the beginning (Score:2)
1) Star explodes, blowing heavier elements throughout its local neighborhood.
2) Eddys form in the dust, creating gravity wells. Accretion occurs. Solar system forms.
3) Earth accretes.
4) A molecule or tangle of molecules comes alive [khanacademy.org]. Consumes resources, reproduces, seeks to persist (i.e. maintain homeostasis [google.com])
5) As part of being alive, the organism must process information. It must record and communicate to progeny: how to consume resources, how to convert those resources to energy, how to reproduce, how to e
No. They are not. (Score:2)
That guy is either delusional or trying to push something via a lie.
Rationale: As we can reason about consciousness, it clearly has some effect on physical reality, so it cannot be a passive observer; otherwise it would observe, but the idea would never have made it into the physical world. Artificial neural networks are fully deterministic digital structures. They are about as conscious as a rock or a piece of bread, and they cannot generate or have an original idea like the existence of consciousness.
Re: (Score:2)
Being fully deterministic doesn't preclude one from being conscious.
Re: (Score:2)
Being fully deterministic doesn't preclude one from being conscious.
It does, unless you are a fully passive observer. Think it through.
Re: (Score:2)
It's not entirely clear that you are not fully deterministic. That is, humans.
Re: (Score:2)
It's not entirely clear that you are not fully deterministic. That is, humans.
Actually, that idea does not work as it implies consciousness being a passive observer and that does not work either.
Re: (Score:2)
No, it implies that the choices you make are a result of things in the past.
Re: (Score:2)
No, it implies that the choices you make are a result of things in the past.
Nope. If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly. It is not.
You are thinking too small and in too limited a space here.
Re: (Score:2)
If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly.
I see no implication here.
Re: (Score:2)
If humans were fully deterministic, it would imply the idea of consciousness would not exist in physical reality or be some obscure fringe-thing generated randomly.
I see no implication here.
Somehow that does not surprise me...
Re: (Score:2)
In fact, the rest of your thread with phantomfive is pure fucking absurdity.
You make assertions that are not backed up by fact in the fucking slightest.
They're reasonable things to muse, but you absolutely can-fucking-not state them as facts.
Re: No. They are not. (Score:2)
They're not fully deterministic if you throw in some (genuine) random number skew into the mix. It won't make them conscious, but they could come up with something original.
Re: (Score:2)
They're not fully deterministic if you throw in some (genuine) random number skew into the mix. It won't make them conscious, but they could come up with something original.
Not really. Or no more than generating texts at random would generate texts with original insights. On a practical level, finding the needle of originality in the haystack of nonsense would be impossible. No, these machines cannot do it either, because that would require actual insight.
Also, nobody knows whether genuine random numbers can be generated. The "true" random quantum effects from physics are just a case of "we have no better model".
he's a bit optimistic (Score:3)
You can't fix stupid (Score:2)
"Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI,"
Completely failing to realize that he himself is an Artificial General Intelligence.
Re:We will never have real AI until... (Score:5, Insightful)
I don't go around saying "we'll never have real Artificial Legs when they're just lumps of wood or plastic". The name says all that's needed. It's an Artificial Leg. It's not a Leg.
Re: (Score:2)
Then there has been AI since humans started counting with mechanical artefacts.
People can walk with artificial legs, if AI can be as generally smart as a dog I'll recognize it as AI. I can't define it much more exactly than that, but I'll know it when I see it.
Re: (Score:2)
https://www.deepmind.com/blog/... [deepmind.com]
Re:We will never have real AI until... (Score:4, Informative)
What do you call a neural net that beats the average competitive coder at writing programs from text specification?
A specification-to-code converter.
Re: (Score:2)
a smarter than average employee
Re: (Score:2)
What do you call a neural net that beats the average competitive coder at writing programs from text specification?
https://www.deepmind.com/blog/... [deepmind.com]
I call such an ANN exceptionally dumb and incapable. The average "coder" is even worse and usually codes by copy&paste from the web. This thing does the same, a bit better.
Let it beat a competent coder at a problem it cannot look up on the web and that actually requires some minimal understanding to solve, and then we can begin to talk.
Re: (Score:2)
Then there has been AI since humans started counting with mechanical artefacts.
No, because that was still the humans doing the counting.
Re: (Score:2)
if AI can be as generally smart as a dog I'll recognize it as AI.
What about ones that can drive cars?
Re: (Score:2)
People can walk with artificial legs, if AI can be as generally smart as a dog I'll recognize it as AI.
Oh, we're well past that.
I can't define it much more exactly than that, but I'll know it when I see it.
That's the problem with consciousness in general.
Ultimately, you can only be sure of your own consciousness. You think, therefore you are.
Re: (Score:3, Insightful)
Many fast, dumb circuits are as intelligent as one dumb circuit - i.e. not - *unless* they are combined in a way that makes them intelligent.
We are not combining them in a way that makes them intelligent.
We are simulating intelligence, not making it.
Re: (Score:3)
Re: (Score:2)
Many fast, dumb circuits are as intelligent as one dumb circuit - i.e. not - *unless* they are combined in a way that makes them intelligent.
You contradict yourself right there. It's painful to read.
So what you're saying is that many fast, dumb circuits are actually not as intelligent as one dumb circuit. That it's in fact dependent upon the layout.
Re: (Score:3)
It can't be real intelligence because it isn't carbon-based, right? How carbonist.
That reminds me of how doctors once believed that blacks don't feel pain. [usatoday.com]
Re: (Score:2)
It can't be real intelligence because it isn't carbon-based, right?
The only person making that argument is you.
This whole thread is people who don't understand the first thing about the field, trying to invent definitions to redefine it in terms of what they think they understand from reading science fiction.
It's preposterous.
Re: (Score:2)
They seem... overly defensive of the uniqueness of human intelligence.
I get similar vibes when discussing whether or not dogs have a soul with Christians.
They don't want to tell the truth- that their opinion is based upon text that says only humans have souls, and if a machine were as intelligent as a human... would it have a soul? Cognitive dissonance crushes people who need to read their beliefs in a book.
Are you Christian?
Re: (Score:3)
They are also alone, like brains in a jar - no body, no agency, no society. Humans alone are just slightly smarter than animals; we need the rest of humanity to become this smart.
Re: (Score:2)
The neural nets need dicks, they can't fuck right now.
You couldn't be more wrong. You can train NNs using genetic algorithms. This includes recombination / crossover. (Fucking, as you put it.)
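For what it's worth, the recombination the parent mentions is a real technique (neuroevolution). A purely illustrative sketch, treating each network as a flat list of weights - the function names and parameters here are made up for the example, not from any particular library:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: splice two flat weight vectors together."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(weights, rate=0.1, scale=0.5):
    """Perturb each weight with probability `rate` (random mutation)."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

# Two "parent" networks, each represented as a flat list of weights.
a = [0.1, 0.2, 0.3, 0.4]
b = [0.9, 0.8, 0.7, 0.6]
child = mutate(crossover(a, b))
print(len(child))  # 4 -- the child has the same genome length as the parents
```

In a real neuroevolution setup the fitness of each child network would be evaluated on a task, and the best performers would breed the next generation.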
They are also alone, like brains in a jar - no body, no agency, no society.
A NN is just a function wearing a costume. No one ever asks if their math homework has feelings, but draw a graph and suddenly people think they've given birth.
Re: (Score:2)
A NN is just a function wearing a costume.
Indeed. A feed forward network is purely a function of its inputs. You can get fancier and make it a function of its inputs and previous states, but it's still a function. But then again, isn't that all we are underneath? There's nothing in physics or computation that indicates our particular assemblage of stuff is super-Turing in any way, so we're merely squishy, imprecise computers.
At what point does a mere function, or mere computer, or mere collection of organ
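The "purely a function of its inputs" point is easy to make concrete. A minimal sketch (the weights and layer sizes are arbitrary, chosen only for illustration): with the weights frozen, a feed-forward net is just a deterministic function - same input, same output, every time.

```python
import math

def feedforward(x, w1, w2):
    """A tiny fixed-weight network: one tanh hidden layer, linear output.
    With the weights held constant, this is nothing but a function."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wo * h for wo, h in zip(w2, hidden))

w1 = [[0.5, -0.2], [0.1, 0.8]]   # hidden-layer weights (arbitrary)
w2 = [1.0, -1.0]                  # output weights (arbitrary)

# Same input in, same output out -- no state, no memory, no surprises.
assert feedforward([1.0, 2.0], w1, w2) == feedforward([1.0, 2.0], w1, w2)
```

A recurrent network adds previous state as another input, but as the parent says, that just makes it a function of a larger argument.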
Re: (Score:2)
We will never have real AI until we accept that many fast, dumb circuits simulating intelligence are NOT intelligent, and instead work on discovering how the human brain learns
We have people working on that too. And that connects well with the remark quoted in TFS:
the apparatus for orbit exists, and we have good definitions of teapots...
What we mean by "consciousness" (the actual claim under discussion) is not clear after millennia of philosophical pondering and a couple of centuries of scientific investigation. We each have a subjective sense of being conscious, but measuring it is sufficiently hard that extreme solipsism (asserting that no one else is conscious) is impossible to rigorously disprove. The only disproofs available are tricks
Re: (Score:2)
Consciousness is real, or we would not be discussing it, but what it means will require understanding the neural structures and processes that make it happen.
I suspect that consciousness can arise from multiple different starting points, and that anything which can do all the right things (whatever those turn out to be) will exhibit it. If true, understanding what makes it happen in the life forms with which we are familiar would only tell us about one form of consciousness.
Our consciousness arises in part because of the equipment, and in part because of experiences. Even if a neural network's internal equipment was as good as ours, its ability to experience wou
Re: (Score:2)
Re: (Score:2)
I believe you need to think more carefully about what "intelligence" is.
That said, I could craft a definition of intelligence for which current AIs are not intelligent. I could craft a definition for which no computer based system could be intelligent. And I could craft a definition for which a thermostat was intelligent. (And that last is actually my favored definition...because the thermostat [as part of a system] reacts to changes in the world to seek a target result. If you object to using "seek" in
Re:We wil never have real AI until... (Score:5, Insightful)
I could craft a definition for which a thermostat was intelligent. (And that last is actually my favored definition...because the thermostat [as part of a system] reacts to changes in the world to seek a target result.
A typical dumb thermostat does two things: it reacts to temperature changes, and it has hysteresis to avoid reacting to them too quickly. But more importantly, it doesn't know anything. It only reacts, it never reflects, and that's why it's not intelligent.
A digital thermostat usually falls into the same category, so it's usually dumb too. It might react to some other stuff programmed into it on a timer, but it's still not making decisions. You made the decisions for it ahead of time, and it's just reacting to your instructions. It's been literally programmed. If your thermostat has a learning component, though, where it internally and without external guidance save for sensor input learns to manage temperature, THEN it's "intelligent".
But until it learns to think about what it's doing, it's still not conscious, even if it can be called intelligent.
Re: (Score:2)
How much do you think a neuron knows? A synapse?
For this analogy I prefer the old analog thermostat connected to a heater and possibly some sort of cooler.
Note that this is a MINIMAL intelligence. More complex systems react in a more complex way. Minimal rather implies that it's only making a choice along one dimension.
OTOH, a really crucial part of this is that the basic goals are not selected by the intelligence of the system. Similar to how in math neither the axioms nor the rules of inference are m
Re: (Score:2)
How much do you think a neuron knows? A synapse?
Great question. I think one knows almost nothing, but two knows more than twice as much as one.
Re: (Score:2)
Consciousness is something that reflects on itself, so it's got to have at least a minimal self-awareness. But I find it quite plausible that artificial neural networks might be minimally conscious.
Neural networks are not self aware. Perhaps you could build one that is.
Re: (Score:2)
and instead work on discovering how the human brain learns
We do know that.
Via many fast, dumb circuits simulating intelligence.
Re: (Score:3)
What about "conscious" do you think requires being alive? The definition that I use doesn't require that. (I.e. "Able to notice and react to changes in one's own state." I'll agree that's a rather minimal level of consciousness, but to me it seems to capture the basic idea.)
Re: (Score:3)
Also, to really make the GP statement hilarious, nothing about being alive requires consciousness either.
Re: (Score:2)
What about "conscious" do you think requires being alive? The definition that I use doesn't require that. (I.e. "Able to notice and react to changes in one's own state." I'll agree that's a rather minimal level of consciousness, but to me it seems to capture the basic idea.)
The only working examples we have. The definition you use is broken because you have no clue what you are talking about. You just described, for example, a relay or a transistor. If you are trying to argue "God of the gaps", then know that that one is obvious bullshit.
Re: (Score:2)
No, because a relay or transistor is reacting to changes in state, not changes in its own state. I'll agree it's a bit fuzzy, and I need to come up with a better way to say it.
OTOH, I *do* consider a transistor (in a circuit) to have a minimal intelligence.
When you say "The only working examples we have." I'm guessing that you think you are supplying a definition, but you aren't. That isn't even a complete sentence, so I'm guessing as to what you intend I should understand before that part. It's my gues
Re: (Score:2)
And you have no idea how a transistor or relay works either. Nice!
You also seem to have some problems parsing language. Here is a hint:
Q: What about "conscious" do you think requires being alive?
A: The only working examples we have.
That is not a "definition". That is an "observation". We do not know even remotely enough about consciousness to come up with a meaningful definition. Of course, there are many non-meaningful definitions that are basically useless. Your
Re: (Score:2)
I've built relays from scratch, winding the electromagnet coils by hand. Perhaps you need another explanation.
Re: (Score:2)
I *do* consider a transistor (in a circuit) to have a minimal intelligence.
Why?
Re: (Score:2)
Because it makes a decision. A decision is the minimal unit of intelligence. (But it has to happen in an appropriate environment. I'm not saying it has to be the correct decision, however.)
Re: (Score:2)
Stop anthropomorphising machines. They're not and never will be alive. Next.
Yeah they don't like it when we do that.