
Artificial General Intelligence: Humanity’s Last Invention | Ben Goertzel

For all the talk of AI, it always seems that gossip moves faster than progress. But it could be that within this century we will fully realize the visions science fiction has promised us, says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence: a mind able to create a new AGI with super-human intelligence, which can in turn create smarter and smarter versions of itself. It will provide all basic human needs – food, shelter, water – and those of us who wish to experience a higher echelon of consciousness and intelligence will be able to upgrade to become super-human. Or perhaps there will be war – there’s a bit of uncertainty there, admits Goertzel. “There’s a lot of work to get to the point where intelligence explodes… But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting,” he says. Ben Goertzel’s most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence (goo.gl/ZjkSHq).

Read more at BigThink.com: http://bigthink.com/videos/ben-goertzel-artificial-general-intelligence-will-be-our-last-invention

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript: The mathematician I.J. Good, back in the mid-1960s, introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was that the first intelligent machine will be the last invention that humanity needs to make. Now, in the 1960s the difference between narrow AI and AGI wasn’t that clear, and I.J. Good wasn’t thinking about a system like AlphaGo that could beat humans at Go but couldn’t walk down the street or add five plus five. In the modern vernacular, what we can say is that the first human-level AGI, the first human-level artificial general intelligence, will be the last invention that humanity needs to make.

And the reason for that is that once you get a human-level AGI, you can teach this human-level AGI math and programming and AI theory and cognitive science and neuroscience. This human-level AGI can then reprogram itself; it can modify its own mind and make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original. And once the first human-level AGI has created a second one which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth, and will be able to create a third AGI which by now will be well beyond human level.

So it seems that it’s going to be a laborious path to get to the first human-level AGI. I don’t think it will take centuries, but it may be decades rather than years. On the other hand, once you get to a human-level AGI, I think you may see what some futurists have called a hard takeoff, where the intelligence increases literally day by day as the AI system rewrites its own mind. And this is a bit frightening, but it’s also incredibly exciting. Does that mean humans will never make any more inventions? Of course it doesn’t. But what it means is that, if we do things right, we won’t need to. If things come out the way that I hope they will, we’ll have these superhuman minds, and largely they’ll be doing their own things. They will also offer us the possibility to upload or upgrade ourselves and join them in realms of experience that we cannot now conceive in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence.
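To make the hard-takeoff intuition concrete, here is a minimal toy sketch in Python. It assumes, purely for illustration (this growth law is an assumption, not anything Goertzel specifies), that each AGI generation improves its successor by a factor proportional to its own current intelligence:

    # Toy model of I.J. Good's "intelligence explosion" as described above.
    # The growth law is illustrative only: each generation's improvement is
    # assumed proportional to its current intelligence, so growth compounds
    # on itself (super-exponentially) rather than at a fixed rate.

    def intelligence_explosion(human_level=1.0, generations=10, coupling=0.5):
        """Return intelligence per generation under the assumed feedback law.

        coupling: assumed strength of the feedback between a system's
        intelligence and the improvement it can engineer in its successor.
        """
        level = human_level
        history = [level]
        for _ in range(generations):
            # Smarter systems make proportionally larger improvements.
            level = level * (1.0 + coupling * level)
            history.append(level)
        return history

    if __name__ == "__main__":
        for gen, level in enumerate(intelligence_explosion()):
            print(f"generation {gen}: {level:.3g}x human level")

With these toy parameters the simulated intelligence crawls for a few generations (1x, 1.5x, 2.6x) and then explodes past any fixed bound – the qualitative shape of a hard takeoff, whatever the real dynamics would be.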

I mean, if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot – this would be trivial for these superhuman minds. So if they’re well disposed toward us, people who choose to remain in human form could have simply a much better quality of life than we have now. You don’t have to work for a living. You can devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection. So I think there are tremendous positive possibilities here, and there’s also a lot of uncertainty, and there’s a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting.


25 thoughts on “Artificial General Intelligence: Humanity’s Last Invention | Ben Goertzel”

  1. Physically, in principle, there's nothing to prevent this from happening assuming a long enough period of effort towards that end.

    The real issue isn't the how. That will take care of itself given sufficient time scales. The real issue, at least for us as mere meat bags, is philosophical and ethical.

    The two best-case scenarios we can currently conceive of are: 1) full integration and modification of our biological consciousness with superintelligent AI, allowing us to transcend all or most human limitations going forward; and 2) remaining independent of ASI, but having our quality of life exponentially improved.

    So in one scenario, we are enhancing ourselves, but fundamentally altering what it means to be a human being. Granted, we've been fundamentally altering that since the moment we started inventing tools and becoming increasingly cognizant of our own existence and the implications thereof. Nothing is intrinsically negative about that. The problem is the time scale. This could happen extraordinarily rapidly, far too quickly for human minds to truly entertain the ramifications or outcome. We might well have to rely on ASI's predictions about what the outcome would look like for us, but that doesn't address what it "feels" like epistemically to us as individual beings.

    So, unless we just collectively decide not to care about such trivial matters anymore, that's something we'll have to contend with before proceeding… and we may not be given the time or even the option to do so.

    In the other scenario, the one where we persist as we always have and merely reap the benefits to our quality of life, I can imagine us ending up a bit like pets. Think about it: an intelligence that far outstrips our own, one that is in fact almost certainly beyond true comprehension at a certain point, managing, caring for, ensuring the quality of, entertaining, stimulating, and providing for our lives. Sure, we might not know any better, since from our point of view we will still be ourselves, just much, much happier and healthier.

    But what happens when a dog tries to get off its leash, or an indoor cat tries to get outside? Their owner restricts them for their own good. And therein lies the crucial word: "owner." We're talking about an intelligence beyond our wildest conceptions, managing and regulating our lives. Sure, we can say, "But it will be so intelligent as to perfectly (in the sense of a perfect information game, or perhaps some less complete analogue thereof at least) understand our needs, wants, psychodynamics, behavior, and therefore would never do anything to us that we wouldn't want." But that's a hell of an assumption, is it not?

    So I think the problem isn't how or if this will happen; it's how we go about preparing for it, how we protect individual freedom in such a paradigm, and how we ensure the ASI understands and respects those issues as well… if that's even possible.

  2. Dr. Goertzel is the only one who really gets it, and I recommend that each of you follow him closely. In the future, when AGI is hot just like ad-hoc deep reinforcement learning (DRL) is hot now, Ben will be at the forefront.

  3. What does he mean by reconfiguring elementary particles into new forms of matter? Any examples? I'm confused – could it be like taking invisible energy and transforming it into solid matter, or even into a virus (if so, I don't see how that's possible)? Does anyone know?

  4. Hey, I used to like your videos, but the way you have recently started to silence critics by filing false DMCA complaints on videos that comment on or criticize the talks you release is just shameful. For that reason I won't be watching your content any more – or if I do, I'll make carefully sure that by watching and consuming your content you are not gaining either ad revenue or other monetary compensation from me. Good day.

  5. Nonsense. That attitude is a failure of imagination, and a failure to realize that our future is merging with our technology: we will merge with AI, become super intelligent, and continue to advance after we evolve.

  6. I know it's petty, but between the speaker's odd attire, his frenetic, convulsion-like gesticulations AND that God-awful gigantic fever blister on his lip… I honestly can't focus on a fucking word he's saying. Sorry man, but this is a thumbs down for me.

  7. It depends on what you think human invention is.
    In some ways, humans haven't done the bulk of the thinking for inventions for ages –
    computers do.
    How much of a person goes into artificial intelligence?
    In another way, we're responsible for everything invented by machines.

    Is this making sense?

  8. Although I agree with his statements for the most part, could they seriously not find anyone else to give this talk besides a half-baked, cheetah-hat-sporting, long-haired hippie with funky spectacles and a current herpes outbreak? Makes me wonder if I'm crossing the line believing in this stuff as well lol

  9. Professor Edward Frenkel has some interesting comments about Gödel's Incompleteness Theorem and how it suggests an AGI would need to be something other than a Turing machine to be able to handle human minds. We don't seem to have a clue where to begin reaching that point currently, which indicates all this "uploading of our minds" stuff is nothing but bad dreams…
