An experiment conducted by Peters and Topp aimed to investigate how humans spontaneously interact with robots. Each participant guided a robot around a few rooms, pointing out the locations of certain objects. The focus was on determining which methods humans use to indicate objects: pointing at them, standing next to them, and so on. But the experiment also revealed other aspects of how humans interact with robots.
At first, the participants treated the robot as highly intelligent, more or less as they would a person. Many started with greetings and general pleasantries. As they began guiding the robot around, they used reasonably complex grammar, or at least complete sentences. But as the experiment continued, this changed rapidly. The participants dropped all but the most rudimentary grammar, and any expression that seemed to have no effect disappeared, leaving what were essentially bare commands. Words like "please" were relatively common at first but soon vanished. Some stopped using expressions like "this is a table" in favour of simply "table".
Analysing the transcripts of the whole duration for some of the participants, I find that the vast majority of their utterances consist of a few short expressions:
23% "Stop"
23% "Turn right" or "Turn left"
22% "Follow me"
8% "Move forward"
6% "Move back" or "Move backward"
In total, these few expressions make up 83% of everything said. Excluding the initial phase, or counting variants like "please follow me" and "go forward", would push the figure even higher.
What can we conclude from this? We know that humans are far more adaptable than robots, and it seems we are so adaptable that we cannot speak normally to a robot even with conscious effort. Unless the robot looks and talks exactly like a human, we will inevitably respond by speaking in a simpler, clearer manner. For robot communication, then, it is not meaningful to try to interpret fully natural language, because that is not the language we actually speak when talking to robots.
What makes much more sense is to try to control the simplifications. It seems clear that we quickly learn, without conscious effort, which expressions work and which do not. A relatively human-friendly grammar could most likely be learned to near-perfection in an hour's conversation, without any further explanation. Unfortunately, this is difficult to test on real machines, since a conversation requires semantic as well as syntactic understanding.
Ostensibly naturalistic formal languages like COBOL or AppleScript fail because they are naturalistic only on a very superficial level. Specific expressions look like natural language, but when the user subconsciously tries to write analogous expressions, they are not accepted, since the underlying structure is nothing like that of a natural language.
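The failure mode can be shown with a toy "naturalistic" matcher (a made-up example, not actual COBOL or AppleScript syntax): one English-looking template is accepted, but the natural paraphrases a user will inevitably produce fall outside it.

```python
import re

# A toy "naturalistic" interface: exactly one English-looking template.
TEMPLATE = re.compile(r"^put the (\w+) on the (\w+)$")

def interpret(sentence: str):
    """Return (object, location) if the sentence fits the template, else None."""
    match = TEMPLATE.match(sentence.lower())
    return match.groups() if match else None

print(interpret("Put the ball on the table"))      # ('ball', 'table')
# Natural paraphrases of the same request are rejected:
print(interpret("Place the ball on the table"))    # None
print(interpret("Put the ball onto the table"))    # None
```

Because the superficial English form invites paraphrase while the rigid underlying grammar forbids it, the user's natural adaptation works against the language instead of with it.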
Languages based on predicate logic, such as Lojban, fail because they are built on a way of thinking that differs from how humans normally think. In many cases there is also a link between semantic and syntactic understanding, so the two cannot be interpreted separately.