Making machines understand human language has proven frustratingly difficult. All the methods in use today are prone to misinterpretation: not only do they fail to understand many sentences, but they also fail to identify which sentences are problematic. Making humans understand machine language, on the other hand, has been far more successful. This is perhaps not surprising, since humans are generally more adaptable than machines. Indeed, when humans interact with robots, they quickly adjust their speech patterns, and their communication in general, to make the machine understand as well as possible. So perhaps it would be possible to slightly adjust the human language, rather than trying in vain to adjust the models?
We have had models of human language since long before computers; human language could be said to be far better studied than any kind of machine interaction. The problem is that humans tend not to follow the rules set out by grammar very strictly, and that human grammars have certain "flaws" which occasionally make it impossible to interpret a sentence unambiguously. For humans this isn't a problem, since we have a great deal of outside information and experience to help us interpret what is being said, but for a computer even small holes in the interpretation can lead to a complete breakdown of understanding. The idea is therefore this: set up a set of simple grammatical rules that resemble human language as closely as possible while remaining unambiguous, and require that the user follow these rules exactly. The aim of this project is to investigate what such rules might look like, to get some idea of how much such a language would differ from ordinary human language, and to find out what kinds of difficulties humans might experience in using it.
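To make the ambiguity problem concrete, consider the classic sentence "I saw the man with the telescope": did I use the telescope, or was the man holding it? A toy sketch (my own illustration, not part of the project) shows how a tiny chart parser can count the parse trees a grammar assigns to a sentence, and how removing one ambiguous rule (here the hypothetical NP → NP PP rule, standing in for the kind of restriction such a language might impose) brings the count down to one:

```python
from collections import defaultdict

def count_parses(tokens, rules, lexicon, start="S"):
    """CKY-style chart that counts distinct parse trees per non-terminal and span.

    rules: binary rules as (lhs, left_child, right_child) triples.
    lexicon: maps each word to the list of categories it can have.
    """
    n = len(tokens)
    # chart[(i, j)][A] = number of parse trees with root A covering tokens[i:j]
    chart = defaultdict(lambda: defaultdict(int))
    for i, tok in enumerate(tokens):
        for cat in lexicon[tok]:
            chart[(i, i + 1)][cat] += 1
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):          # split point
                for lhs, left, right in rules:
                    l = chart[(i, k)].get(left, 0)
                    r = chart[(k, j)].get(right, 0)
                    if l and r:
                        chart[(i, j)][lhs] += l * r
    return chart[(0, n)].get(start, 0)

lexicon = {"I": ["NP"], "saw": ["V"], "the": ["Det"],
           "man": ["N"], "telescope": ["N"], "with": ["P"]}
ambiguous = [("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
             ("NP", "Det", "N"), ("NP", "NP", "PP"), ("PP", "P", "NP")]
# Dropping NP -> NP PP forces the prepositional phrase to attach to the verb.
restricted = [r for r in ambiguous if r != ("NP", "NP", "PP")]

sentence = "I saw the man with the telescope".split()
print(count_parses(sentence, ambiguous, lexicon))   # 2 readings
print(count_parses(sentence, restricted, lexicon))  # 1 reading
```

The restricted grammar is of course too blunt for real use (it bans all noun-modifying prepositional phrases), but it illustrates the trade-off at the heart of the project: every rule removed to eliminate ambiguity also removes a way of speaking that humans find natural.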