Putting the Human Back Into the Post-Human -- The Motion Picture
The talk I gave at the New York Future Salon is now available!
The entire video runs about 98 minutes; my talk starts after a couple of minutes of intro, and I finish up right at the one-hour mark. The remainder of the video is the Q&A period, which has some good stuff, too. When I get a chance, I hope to pull out some short clips as stand-alone videos.
The sound quality is surprisingly good, considering that I wasn't mic'd. The lighting is such that some of the slide images are a bit hard to see; if you're curious, the entire deck (sans nifty Keynote transition effects) is available at SlideShare.
You can get a high-quality MPEG-4 (.m4v) version at the Internet Archive page for the video, if you're eager to download just under a gigabyte...
My thanks to Kevin Keck and Ella Grapp for inviting me to give the talk, and to Robert Wald for dealing with the video stuff.
As always, please let me know what you think of the talk.
Comments
Good talk. In my blog posts I focus on responding to things I disagreed with, but I found many of the points interesting and meaningful.
I don't believe that the public or government will believe that AGI is possible up until it actually happens. It's just too radical. Even when we have powerful infrahuman AIs, people will still view them as fancy machines, not agents.
I am surprised that in your talk you seem to object to the very notion that there will ever be strong superintelligences that make decisions more effectively than all humans or take more grandiose actions than all humans. You seem to be uncomfortable with the idea of humans not always being #1. Am I wrong in this? Isn't one of the fundamental ideas of transhumanism that we'll eventually be surpassed by our creations and future selves that are radically different from our present selves, not just superficially but in terms of fundamental cognitive design?
I'm not really terribly freaked out by a world where humans are surpassed, as long as the greater agents explicitly value our existence and don't run us over, so to speak. If that happens, doesn't it make sense that they might want to help us out with some of our deeper problems, and they'll be able to solve those problems more effectively than we can? Is that so bad, or so unlikely in the long run?
In his recent paper and talk, Aubrey predicts that friendly superintelligence would probably fade into the background, because it would know that we wouldn't want it to pester us and intervene in our lives in obtrusive ways. This also seems like a reasonable idea, and not quite covered by your four scenarios.
Posted by: Michael Anissimov | November 15, 2009 10:00 PM
Also, I wanted to mention that I really, really wanted to make this talk, but I had so little sleep over the preceding few days that I practically fell on my face after getting out of the first day of the Summit. I didn't secure any modafinil supplies beforehand either, and honestly I haven't tried it yet.
Posted by: Michael Anissimov | November 15, 2009 10:08 PM