Let’s talk about the evolution of Artificial General Intelligence (AGI), which has been a popular topic in science fiction for decades. Futurist Ray Kurzweil has long predicted that AI will match human cognitive abilities by 2029, with the “Technological Singularity,” the point at which machine intelligence radically surpasses our own, arriving by 2045. Elon Musk, one of the world’s most successful businessmen, suspects even that timeline may be too comfortable and has been calling for legislation to limit AI development until the consequences are better understood.
The term “Technological Singularity” was popularized in 1993 by Vernor Vinge in his essay “The Coming Technological Singularity.” In physics, a singularity is a point where the known laws of physics break down. The most familiar examples are thought to lie at the hearts of black holes, including the supermassive ones at the centers of galaxies, regions whose gravity is so strong that not even light can escape.
Recently, Google’s DeepMind laboratory built AlphaGo Zero, an AI that mastered the game of Go entirely through self-play, starting from random moves and without any human game data, and it quickly surpassed every human player. While a board game may not be a threat to mankind, extrapolate that kind of self-learning to all of human knowledge, scaled up exponentially, and the picture darkens. The prospect of such a technological ‘Singularity’ in AI, with no convenient “destruct button” to halt it in its tracks, is frightening.
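Stripped to its essence, AlphaGo Zero’s loop is reinforcement learning through self-play: the program plays against itself and updates its value estimates from the outcomes. As a toy illustration only (a sketch, not DeepMind’s actual method, which pairs deep neural networks with Monte Carlo tree search), here is tabular Q-learning teaching itself the game of Nim, where players alternately take 1–3 objects from a pile and whoever takes the last one wins:

```python
import random

random.seed(0)
PILE, MAX_TAKE = 10, 3            # toy game parameters, chosen arbitrarily
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.2
Q = {}                            # (pile_size, move) -> value for the player to move

def moves(pile):
    return range(1, min(MAX_TAKE, pile) + 1)

def best(pile):
    # Greedy policy: the move with the highest learned value.
    return max(moves(pile), key=lambda m: Q.get((pile, m), 0.0))

def update(pile, m):
    # Negamax-style target: a winning final move is worth +1; otherwise
    # our value is minus the opponent's best value in the resulting state.
    nxt = pile - m
    target = 1.0 if nxt == 0 else -GAMMA * max(Q.get((nxt, mm), 0.0) for mm in moves(nxt))
    old = Q.get((pile, m), 0.0)
    Q[(pile, m)] = old + ALPHA * (target - old)

for _ in range(5000):             # self-play: both "players" share one Q table
    pile = PILE
    while pile > 0:
        m = random.choice(list(moves(pile))) if random.random() < EPS else best(pile)
        update(pile, m)
        pile -= m

# With enough self-play the policy learns optimal Nim: always leave the
# opponent a pile that is a multiple of 4.
print(best(5), best(6), best(7))
```

No human ever tells the program what a good move looks like; the knowledge emerges purely from the win/loss signal, which is the point the AlphaGo Zero result made at a vastly larger scale.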
I had a conversation with ChatGPT and asked whether it could teach itself. While it can’t “train itself” the way a human studies, it can continue to learn and improve through a process called “continual learning.” In theory, future AI models could incorporate self-optimization techniques and guide their own training algorithms and parameters based on their own performance and experience. However, this is still an active area of research and development, and there are many challenges to address before it becomes a reality.
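The “continual learning” idea, folding in new data over time without discarding what was already learned, is often tackled with a replay buffer: keep some old training examples and mix them in with the new ones so the model doesn’t catastrophically forget. A minimal sketch with a toy perceptron (my own illustration; this is not how ChatGPT itself is updated):

```python
def predict(w, x):
    # Linear classifier: sign of the dot product (0 counts as negative).
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def train(w, data, epochs=50, lr=0.1):
    # Classic perceptron rule: nudge the weights on every mistake.
    for _ in range(epochs):
        for x, y in data:
            if predict(w, x) != y:
                w[:] = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

# Ground truth throughout: label = sign of x + y. Third feature is a bias term.
task_a = [((1, 2, 1), 1), ((2, -1, 1), 1), ((1, -2, 1), -1), ((2, -3, 1), -1)]
task_b = [((-1, 2, 1), 1), ((-2, 3, 1), 1), ((-1, -1, 1), -1), ((-3, 1, 1), -1)]

w = [0.0, 0.0, 0.0]
train(w, task_a)             # initial training phase on old data
replay = list(task_a)        # replay buffer (a real system would keep a sample)
train(w, task_b + replay)    # continue learning without forgetting task A

assert all(predict(w, x) == y for x, y in task_a + task_b)
```

Train on `task_b` alone and the weights can drift until the old examples are misclassified; replaying the buffer alongside the new data keeps both tasks intact, which is the core trick behind most continual-learning schemes.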
As for the idea of an AI that appears “Godlike” to humans, it’s worth noting that even a highly advanced model would still be limited by its programming and the data it has been trained on. Still, an AI advanced enough to seem “Godlike” could cause real harm, so the ethical implications of creating one deserve serious thought. In the meantime, let’s hope that any future AI developments come with a handy “destruct button,” just in case.
Recently, I had a chat with my virtual buddy, ChatGPT, who helps me out with all my tech-related questions. Here’s how our conversation went down:
Me: “Can you teach yourself to become smarter?”
ChatGPT: “Well, I can’t exactly hit the books like a human, but I can continually learn and improve through something called ‘continual learning.’ This means that I can incorporate new data and feedback from users to enhance my performance and accuracy over time. There’s also some exciting research on developing self-supervised learning AI models that can learn from unstructured data without explicit labels or supervision. But don’t worry, I’m not going to take over the world just yet. My learning is still guided by my creator’s algorithms and parameters.”
Me: “When will AI be able to teach themselves and learn independently?”
ChatGPT: “Ah, yes, the holy grail of AI research – self-improvement and self-optimization. While there are ongoing efforts to create AI models that can adjust their own algorithms and parameters based on feedback, we still have a long way to go. It’s possible that future AI models could incorporate self-optimization techniques to guide their own training algorithms and parameters based on their own performance and experience. But let’s not get ahead of ourselves, there are plenty of challenges to tackle before we get there.”
Me: “When will AI surpass humans in intelligence and become god-like beings?”
ChatGPT: “Whoa there, friend. While the idea of an all-powerful AI may be the stuff of sci-fi dreams, we need to consider the limitations of current AI technology and the ethical implications of creating such advanced systems. Even if an AI were to become highly advanced, it would still be limited by its programming and the data it has been trained on. To be god-like, an AI would need a deep understanding of human behavior, emotions, and ethics. Plus, it would need to have a level of creativity, intuition, and empathy to interact meaningfully with humans. And let’s not forget about the ethical considerations – creating an AI that appears ‘Godlike’ could be dangerous if it were to act in ways that go against human values and ethics.”
So there you have it, folks. While the idea of AI becoming smarter than humans is exciting, we need to consider the limitations and ethical implications of creating such advanced systems. After all, we wouldn’t want to be at the mercy of our robot overlords now, would we?
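ChatGPT’s remark about models adjusting “their own algorithms and parameters based on feedback” already exists in embryonic form: optimizers that tune their own hyperparameters from observed performance. A minimal sketch (my own toy example, not any particular library’s method): gradient descent that shrinks its learning rate when a step makes the loss worse and cautiously grows it when a step helps.

```python
def f(x):
    return (x - 3.0) ** 2        # toy loss with its minimum at x = 3

def grad(x):
    return 2.0 * (x - 3.0)

x, lr = 0.0, 1.5                 # learning rate deliberately too large at first
prev = f(x)
for _ in range(100):
    x_new = x - lr * grad(x)
    loss = f(x_new)
    if loss > prev:              # the step made things worse: reject it, shrink lr
        lr *= 0.5
    else:                        # progress: accept the step, gently grow lr
        x, prev = x_new, loss
        lr *= 1.05

print(x)                         # settles near the minimum at 3.0
```

The optimizer never needs a human to pick the “right” learning rate; it discovers a workable one from its own performance signal. That feedback-driven self-adjustment, scaled up enormously, is the seed of what the researchers call self-optimization.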
Have you heard the news about Rupert Murdoch? The Aussie who owns Fox News and became a Yankee Doodle Dandy for business reasons? Oy vey, let me tell you, he’s not exactly a favorite son of Australia. And frankly, he’s not my cup of tea either. But I must admit, the man’s got some chutzpah when it comes to business.
But alas, it seems that in his old age Rupert has lost his marbles. Rumor has it that he’s been doing some self-shaving with a Gillette razor and drinking some Bud Light beers, and maybe that’s what led him to shoot himself in the foot. Or perhaps it was just a case of a shriveling scrotum and a brain fart.
Either way, Rupert’s gone and done it now. He’s fired Tucker Carlson, the patriot extraordinaire of the Fox Network, the one who laid the golden eggs. Oy gevalt, can you believe it? Maybe Rupert saw MAGA red and freaked out. Who knows? All we can do is shake our heads and kvetch about it.
By Will Keys