Artificial Intelligence: Freedom of Thought
Let’s face it: artificial intelligence (AI) is a confused subject, with the industry currently exaggerating its role, purpose and capabilities. We are all still scratching our heads wondering whether the world will have to endure a society of robots with their associated “robot rights” and, of course, there’s the fear that they may render humanity pointless.
I want to continue my own personal journey through what I understand AI to be and how we might indeed one day create an artificially intelligent entity – something I hope we don’t live to regret. Nevertheless, my rhetoric has been consistent regarding what we understand AI to be today, and that is: it’s nothing more than clever programming and smart technology. But, in my endeavour to possibly create such an artificially intelligent entity, I suggested we use Cartesian dualism (after René Descartes) as a template of sorts, in which we separate the mind and body.
You may recall from my earlier posts that I also perceive the brain and mind as two separate things, where the brain is a part of the body’s constitution, providing functional and, if you like, operational support while maintaining the body’s ecosystem. I followed the rationale provided by Cartesian dualism which, for me, offers a great template from which to begin considering the creation of our intelligent entity.
Artificial intelligence: embryonic
In last month’s column, Artificial intelligence: embryonic, I talked about how AI research has been at our fingertips for decades – dating as far back as the 1940s, in fact! And, of course, there’s an immeasurable amount of research that continues today. Moreover, last month’s post touched upon how we could potentially create a humanoid based on developing separate computational components that mimic the human brain. I introduced Jean Piaget and Jerry Fodor, who both likened the human brain to a computer, where “each module of the brain is like a special-purpose computer with a proprietary database.”
So, I conceived artificial intelligence: embryonic (AIe), using dualism as a template, which would allow us to potentially create an equivalent functional brain, of sorts, within our robot or humanoid. But one thing I omitted from my hypothesis was the “human mind” – that part of the human condition that continues to baffle neurologists and psychologists.
The mind is, so we are led to believe, a part of the brain that remains unknown, yet it makes us all unique in what we do, such as arts, music, writing, spontaneity and creativity, to name just a few facets. Many neurologists can only speculate as to where it might be located within our brain mass, while others claim it is in fact our soul.
Establishing rudimentary capabilities
This brings me neatly to another speculative proposition. However, let’s first establish that our AIe must have numerous rudimentary capabilities that would maintain our humanoid – things like sight, touch, sense, walking, running and so on. Just as the human brain provides us with primitive functionality, we must also build the basic functionality that would enable our humanoid, if you like, to function primitively – as a minimum.
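As a loose illustration of the “special-purpose computer with a proprietary database” analogy from last month, one might sketch each rudimentary capability as a self-contained module with its own private data store. This is purely a sketch – every name and structure here is hypothetical, not a claim about how AIe would actually be built:

```python
# Hypothetical sketch: each rudimentary capability is an independent
# module holding a private ("proprietary") database, echoing the
# Piaget/Fodor analogy. All names here are illustrative only.

class Module:
    """A special-purpose component with its own private database."""

    def __init__(self, name):
        self.name = name
        self._database = {}  # proprietary to this module alone

    def perceive(self, stimulus):
        # Record the stimulus privately; no other module can see it.
        self._database[stimulus] = self._database.get(stimulus, 0) + 1
        return f"{self.name} processed {stimulus!r}"

# Rudimentary capabilities named in the column: sight, touch and so on.
humanoid = {n: Module(n) for n in ("sight", "touch", "locomotion")}
print(humanoid["sight"].perceive("light"))
```

The point of the sketch is the isolation: each module accumulates its own experience without sharing state, just as the quoted analogy suggests.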
A base set of principles
So, what if we were to compile every imaginable decision, good or bad, ever made, and define these behaviours along with their associated outcomes, events and actions? We could use this comprehensive compilation to form a basic cognitive mindset of good and bad decision-making for our AIe, and perhaps to establish a psychological representation of “lessons learned.”
I mean, let’s conceive an exhaustive algorithm that we could perhaps release in an “open source” manner, in which we attempt to collate both the conceivable and the inconceivable thoughts, decision processes and their outcomes as a basic set of principles. In other words, our humanoid would have a base set of principles that would allow it to make decisions autonomously, based on previous experiences that we have defined.
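A minimal sketch of such a base set of principles might look like the following – vastly simplified from the “every imaginable decision” ideal, and with every situation, behaviour and score being a hypothetical stand-in. Each entry records a pre-defined behaviour and its outcome, and the humanoid simply chooses the behaviour with the best recorded outcome:

```python
# Hypothetical sketch of a "base set of principles": a pre-compiled
# table mapping (situation, behaviour) to an outcome score, from which
# decisions are made by consulting pre-defined experience.

principles = {
    # (situation, behaviour): outcome score (+ good, - bad)
    ("obstacle ahead", "walk around"):  1.0,
    ("obstacle ahead", "keep walking"): -1.0,
    ("ledge ahead",    "stop"):         1.0,
    ("ledge ahead",    "keep walking"): -1.0,
}

def decide(situation):
    """Choose the behaviour with the best pre-defined outcome."""
    options = {b: s for (sit, b), s in principles.items() if sit == situation}
    if not options:
        return None  # no prior experience to draw on
    return max(options, key=options.get)

print(decide("obstacle ahead"))  # -> walk around
```

Note that `decide` can only ever reproduce what its creators defined; it is the later addition of learning, discussed below, that would let the table grow beyond its instilled contents.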
Are you my father?
With a potentially large set of principles, our humanoid could then make unique decisions based upon previous experiences, behaviours and outcomes and, in turn, create its own decision flows and eventually become a self-governing being. Establishing an ability to “learn,” along with the capacity to create new decision-flow processes from its own behavioural outcomes, would perhaps begin to form a sort of mindset akin to the human mind.
I believe that this is the way forward for AIe, a way that mimics the nature/nurture balance within human development. Our embryo would be internally instilled with our base algorithm (our comprehensive behavioural compilation), from which it would learn and develop through an evolutionary process of trial and error, all supported by the kind of nurturing from its creators that one would associate with parental guidance.
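The nature/nurture balance described above might be sketched as follows – again, purely illustrative, with every name, score and learning rate a hypothetical choice. The “nature” part is the instilled base table; the “nurture” part nudges each stored score toward what was actually observed, so the entity’s own trial-and-error experience gradually reshapes its decisions:

```python
# Hypothetical sketch: trial-and-error learning on top of an instilled
# base table. Each observed outcome nudges the stored score toward it,
# so behaviour learned by "nurture" can outgrow the "nature" defaults.

LEARNING_RATE = 0.5  # how strongly one trial reshapes stored experience

# "Nature": scores pre-defined by the creators.
experience = {("greeting", "wave"): 0.0, ("greeting", "ignore"): 0.2}

def learn(situation, behaviour, observed_outcome):
    """Nudge the stored score toward the outcome just observed."""
    key = (situation, behaviour)
    old = experience.get(key, 0.0)
    experience[key] = old + LEARNING_RATE * (observed_outcome - old)

# "Nurture": repeated trials teach that waving is well received.
for _ in range(4):
    learn("greeting", "wave", 1.0)

best = max((b for (s, b) in experience if s == "greeting"),
           key=lambda b: experience[("greeting", b)])
print(best)  # after training, "wave" outscores the instilled default
```

The design choice worth noting is that the instilled table and the learned updates share one structure, so “parental guidance” (pre-defined scores) and the entity’s own experience blend rather than compete.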
Until next time …
This kind of freedom in terms of self-governance could be considered a dangerous proposition, but I believe that if we are to create true artificial intelligence through AIe, we must allow the nurturing of freedom of thought.
So, this is where your “AI psychologist” Dr. G signs off.
Originally published in Technically Speaking.