CE Conference 2022 - Artificial Intelligence - Big statistics machines, or rationalising agents

AI: big statistics machines, or rationalising agents?

Jakub Lála, 2nd Mar 2022
Disclaimer: the opinions of our writers do not reflect the opinions of the conference as a whole.
Jakub Lála, our Head of Operations, ponders upon the possibilities of AI.

Although the conception of Artificial Intelligence (AI) dates back to World War II, only with the advent of modern computing has it come to seem that virtually every company requires an AI engineer. As we have overused the term ‘AI’ in job posts, news articles and start-up pitches, the underlying essence of the word has become quite vague. AI might bring to mind the animated robots from the film adaptation of Isaac Asimov’s I, Robot, or the virtual companion from the acclaimed film Her. Nevertheless, all AI used today is essentially a massive statistics machine, deciding what you see on social media, or whether you are a worthy candidate for health insurance. None of these match the images science fiction stories depict, yet the public sometimes seems to believe, naively, that such independent entities are beginning to emerge within these vastly complicated algorithms.

In her MIT course on AI, Kimberly Koile puts it well by classifying AI into two main disciplines: scientific and engineering AI. The first focuses on replicating human intelligence, including all the intricacies of emotion, neural synapses and so on. Most applications today, by contrast, arise from the engineering approach, where computer scientists use human intelligence merely as inspiration. By mimicking characteristics of cognition, they devise complex algorithms that match or even outperform human labour at a particular function. The word ‘particular’ is important here, as the bulk of AI algorithms used today are narrow AI (ANI) rather than general AI (AGI). ANI denotes the confined application domain of such algorithms, where heavy specialisation for a given task results in little to no transferability of knowledge elsewhere. For instance, a computer vision algorithm will be useless at providing emotional support or composing music. Although there is ongoing research into improving generalisability across wider application domains, the over-specificity of today’s algorithms goes largely unscrutinised in public media. I believe that multi-domain intelligence in the form of AGI is farther away than some experts argue, and that today’s AI solutions are a weak attempt at recreating human intelligence inside a computer.

One of the main reasons for the recent AI surge is the advancement of computational power, captured by Moore’s Law: thanks to modern nanotechnology, the number of transistors on a single chip has doubled roughly every two years for several decades now. Some thus argue that this sheer ability to scale up computation can brute-force our way towards AGI. Firstly, the premise that we can count and compare the computational power of the human brain rests on the shaky assumption that we truly understand all the mechanisms involved in neural activity. Secondly, even if everything about the human brain were understood, how accurate is our conversion from computer bits (ones and zeros) to the analogue nature of the electrical signals in our neurons? Thirdly, is the ultimate goal of imitating the generalisable aspect of human intelligence even attainable with raw computation alone? Might there not be something else that mediates between our specialised intellects across various domains? What if the complex hormone pathways, or self-actualising consciousness itself, are integral to the success of an all-round intelligent creature?
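To get a feel for the scale of that doubling, here is a minimal sketch of the Moore’s Law arithmetic. The starting figure (roughly 2,300 transistors, the Intel 4004 of 1971) is an illustrative assumption on my part, not a claim from this article:

```python
def transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count, assuming a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

# Five decades of doubling takes a few thousand transistors
# to roughly the order of magnitude of today's largest chips.
print(f"{transistors(2022):,.0f}")
```

Of course, as the paragraph above argues, even a count in the tens of billions says nothing about whether raw transistor budgets translate into general intelligence.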

Similar to our own childhoods, the most successful AI is developed and maintained through a process called learning. That is why most AI today is also referred to as machine learning, a methodology in which data is the main ingredient for training an AI model. Many naïve AI solutions fail for lack of good-quality data, which is why much AI research focuses on data collection and preparation rather than on the algorithms themselves. It is questionable how closely this resembles human learning: it is hard to argue that babies learn to recognise a dog by viewing thousands upon thousands of variously manipulated dog images, which is what most computer vision algorithms currently require. Nevertheless, promising endeavours such as one-shot learning, where an AI replicates a task after observing a single example, might be pointing us in the right direction. Another interesting and successful approach is reinforcement learning, where an agent is placed into an environment and receives rewards based on the actions it takes. Here we come close to an AI acquiring a sense of self-awareness through a self-actualising interaction between itself and the rest of the world. Nonetheless, we are still restrained by the limited scope of the narrow, specialised training environment: a new environment once again means a relatively useless agent.
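The agent–environment reward loop described above can be sketched in a few dozen lines. This is a toy of my own construction, not anything from the article: a tabular Q-learning agent in a five-cell corridor, which learns to walk right towards a reward. All states, actions and hyperparameters are illustrative assumptions:

```python
import random

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected return for each (state, action) pair, initially zero
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move within bounds, reward on reaching cell 4."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge towards reward plus discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy prefers stepping right in every non-terminal cell
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

The limitation the paragraph points out is visible here too: the learned Q-table is meaningless in any environment other than this exact corridor.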

In the end, all this effort should lead to replacing human labour with computer code. If AGI arrives and surpasses human productivity, will that alleviate all human endeavour? Will it even be sufficient to remove our creative pursuits? Or will we once again find something else to focus on? Colonising other worlds? Interpreting super-intelligent algorithms? Seeking spiritual journeys? Either way, we should not let ourselves be mesmerised by exaggerated news headlines. AI is powerful, but the human experience has yet to be simulated. Maybe quantum computing will bring us tremendously close to AGI sooner than we think, or maybe the intricacy of the universe’s biology is far from being understood. Regardless, we should remain both sceptical and excited: sceptical of the artificial unification of what intelligence is, and excited by the as-yet-unsolved uniqueness of the human condition.
