I’ve been interested in computers for a long time. My first memory of a computer is four- or five-year-old me trying, and failing, to play a game on my Dad’s IBM desktop running DOS. Since then, I’ve grown up with video games on both PCs and home consoles and played countless hours on both, and I wanted to learn more about the machines and software that provided so much entertainment for me. I also grew up with quite a few 80’s and 90’s action and sci-fi movies and anime (Japanese animation). Naturally, movies like Blade Runner, Ghost in the Shell, The Fifth Element, and The Matrix gave me a great interest in the possibilities of science, computers, and the blending of the two, the ultimate expressions of which I would consider to be human-level (or greater) Artificial Intelligence and cybernetic implants that enhance human capability.
On top of movies and anime, I’ve always loved reading, and that brings me to my greatest interest in computer science: Artificial Intelligence. Movies like Terminator, The Matrix, and other media and pop culture have drilled into the general consciousness the idea that a bleak, apocalyptic future is the only possible result of developing AI. Initially, I thought the same way, but as I watched and read Japanese sci-fi stories, I found that their viewpoints were often not nearly as bleak about AI, and my mind was opened to other possible outcomes.
The key moment for me was reading a novel, a sci-fi novel I love and consider my favorite: The Stories of Ibis (アイの物語, Ai no Monogatari in Japanese) by Hiroshi Yamamoto (山本 弘). Initially, the setting feels very similar to that of many other novels about AI and its apocalyptic consequences for humanity, but reading further, you learn why the world in the book seems so grim for humanity, along with an idea for how “strong AI” could be developed and how an AI apocalypse might be avoided.
Of course, it’s not as if the novel lays out the answers to all the problems on the path to a strong AI, an AI capable of improving itself, but it inspired me to do more research on the ideas presented, and although I’m a total novice, they seemed to make sense. In some cases, they even seemed like they might be the only way a strong AI could be developed without causing the apocalypse.
Researching those topics has convinced me that if strong AI is possible, it is not only inevitable, but the first one to be developed will also be the last, for better or worse. With that in mind, I can’t help but feel that it’s a field in urgent need of focus and funding. I don’t have a clue how long it may take to happen, but that’s exactly why starting sooner is better than later; I hope that I can develop the wide range of skills and experience to be on that front line.
Maybe strong AI is a pipe dream, but right now, it’s my dream.