SO NEAR, YET SO FAR

Sentience = sensitivity + X. And what is X? This is what Dennett had to say about sentience. There is still no binding definition of conscious behaviour, and the same is true for free will and self-awareness.


Discussions on defining consciousness and designing artificial consciousness have often led to bats, robots and zombies. (1) The bat: Nagel treats consciousness as a ‘what it is like’ phenomenon, one that is subjective and cannot be reduced. (2) The robot: by the Cartesian maxim, consciousness is non-physical and therefore cannot be produced as an artificial effect. (3) The philosophical zombie: it has attained iconic status in this debate; it lacks emotions and feels no happiness, remorse, pain, or any other psychological affect, and so has no experiential content or qualia, yet it still interacts with its environment.


Robots, as yet, do not possess phenomenal consciousness; experience does not relate directly to consciousness; and the possibility of qualia in synthetic intelligence is disputed. It therefore remains speculative whether artificial consciousness, if ever realized in robots, will bear any similarity to human intelligence.

It is interesting that while the focus has been on designing experiments to test for self-awareness in robots, there has also been growing enthusiasm for devising more sophisticated versions of the Turing test, ones that tap the abilities and tendencies unique to being human. The quest for conscious robots has sharpened the search for uniquely human values that machines lack. Instead of asking ‘what is a robot?’, the question has become ‘what is a human being?’. Will a conscious robot be better than us? That is a difficult question; the least that can be said is that conscious robots are sure to be smarter than we are. We stand at a unique point in the history of our civilization, and of the planet, as the first species that has dared to create intelligent beings designed to model ourselves.