INTRODUCTION

Opinions remain divided on whether synthetic intelligence can attain human levels of consciousness. At one end of the spectrum is Searle's Chinese Room, a thought experiment set in the near future. In it, an AI has developed artificial agency that enables it to understand Chinese and pass the Turing Test, producing output indistinguishable from that of a human being in at least text and speech. Such an AI would be able to convince any Chinese speaker that it, too, is a Chinese-speaking human being. This raises the question: does the AI 'understand' Chinese (strong AI), or is it merely simulating the ability to understand Chinese (weak AI)? The agent produces its output by relating the input to rules of syntax given in the program script, without any grasp of the semantics. This cannot lead to 'consciousness' but can only mimic it, and therefore, Searle argues, strong AI cannot exist.


The other end of the spectrum suggests that the rudiments of consciousness lie in self-awareness, and researchers have attempted to design self-aware robots.


The characteristics of self-awareness are as follows:

  1. Cognise one's own behaviours
  2. Perform nearly consistent behaviours
  3. Have a sense of cause and effect
  4. Cognise the behaviours of others
  5. Interact with other self-aware AIs and robots


This approach tries to develop inner states that would constitute subjective experience, such as colour, smell, taste, sound, melody and emotion, and therefore consciousness. Such states would be augmented by an incremental feedback process of cognition, which derives meaning from material objects.


Braitenberg vehicles are excellent examples of relating human emotions and virtues to robot behaviours. For the more sophisticated versions, it can be argued that vehicle-12 is able to devise inner states based on memory and is able to reflect on them. Similarly, robots controlled through connectionist architectures, such as artificial neural networks (ANNs), can be ascribed internal states. For example, if the task at hand is navigation, the first few navigational routes will be externally guided and based on purely reactive principles, such as observing the intensity of a light source or tracking known markers. Later, similar routes will be traversed without using the sensors, through internally generated states based on the learning module of the ANN.
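The reactive principle behind the simpler vehicles can be sketched in a few lines of code. The following is a minimal, illustrative simulation (not from the book) of Braitenberg's vehicle 3a, the 'love' vehicle, whose light sensors inhibit the motor on their own side so that it steers towards a light source and slows to a halt near it; all function names and parameter values are arbitrary choices made for this sketch.

```python
import math

def light_intensity(px, py, light):
    """Intensity at a point, falling off with squared distance to the light."""
    d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.05):
    """One update of a vehicle-3a controller: each light sensor
    inhibits the motor on its own side, so the vehicle turns towards
    the light and decelerates as it approaches."""
    BASE, GAIN = 1.0, 2.0     # resting motor speed and inhibition strength
    SENSOR_OFFSET = 0.3       # sensors mounted left/right of the midline
    WHEELBASE = 0.1           # distance between the two drive wheels
    # Leftward unit vector, perpendicular to the heading, for placing sensors.
    nx, ny = -math.sin(heading), math.cos(heading)
    s_left = light_intensity(x + nx * SENSOR_OFFSET, y + ny * SENSOR_OFFSET, light)
    s_right = light_intensity(x - nx * SENSOR_OFFSET, y - ny * SENSOR_OFFSET, light)
    # Uncrossed inhibitory connections; motors cannot run backwards.
    v_left = max(0.0, BASE - GAIN * s_left)
    v_right = max(0.0, BASE - GAIN * s_right)
    # Differential-drive kinematics.
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / WHEELBASE
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)

x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing east
light = (5.0, 3.0)              # light source ahead and to the left
for _ in range(4000):
    x, y, heading = step(x, y, heading, light)
dist = math.hypot(x - light[0], y - light[1])
print(f"final distance to light: {dist:.2f}")
```

Rewiring the same two sensors and two motors gives the other temperaments: crossed excitatory connections produce vehicle 2b ('aggression'), which charges at the source, while uncrossed excitatory connections produce vehicle 2a ('fear'), which turns away from it.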

Cog was one of the earliest attempts to develop conscious behavior in robots using only reactive principles, and it was designed to circumvent the symbol grounding problem. It was equipped with the ability to make choices over points of merit, typifying a conscious decision process. Its video-camera eyes saccaded to focus on a newly arrived human being in the room; such eye contact and gaze monitoring enabled attention focusing and encouraged social interaction. It was also fitted with a number of self-monitoring devices and therefore attempted to develop self-awareness. However, since it was based purely on sensory-motor coupling, it did not really adhere to the concept of internal states and self-reflection. Cog continues to inspire and motivate research in this domain.


Robots designed for social functions do bring into play some degree of conscious behavior, such as the peek-a-boo and drumming-mate experiments on KASPAR, and the mutual care paradigm designed for Hobbit.

Sentient Robots

By Arkapravo Bhaumik

Excerpt from: From AI to Robotics: Mobile, Social, and Sentient Robots, Taylor and Francis, 2018