Researchers at Oxford University are attempting to recreate human thinking patterns in machines, using a language guided imagination (LGI) network.
Their work could inform the development of artificial intelligence (AI) that is capable of human-like thinking.
AI machines can now recognise images and process language, but this “continual”, imaginative mode of thinking remains, for now, restricted to humans: machines cannot yet understand and interpret language with the same depth that humans can.
Human thinking rests on a cumulative learning capacity that develops alongside the brain. This capacity is associated with the prefrontal cortex, the part of the brain responsible for the memory processes at work while a person is performing a task.
Human thinking requires the brain to understand a particular language expression and use it to organise ideas in the mind. The human brain is able to generate mental images guided by language.
For example, a person who notices it is raining might internally say, “I need an umbrella” before reaching for one. Without deliberate effort, they understand what the visual input means and how an umbrella will keep them dry.
While AI machines would be able to recognise the raindrops, there would be no similar thought process to link the rain with the need for an umbrella.
Feng Qi and Wenchuan Wu have built an artificial neural network modelled on the prefrontal cortex, in an attempt to reproduce human-like thinking patterns in machines.
Qi told Cherwell: “I think this work may open a new page of AI.” In their paper, ‘Human-like machine thinking: Language guided imagination’, they wrote: “We proposed a Language guided imagination (LGI) network to incrementally learn the meaning and usage of numerous words and syntaxes, aiming to form a human-like machine thinking process.”
The LGI network developed by Qi and Wu has three key subsystems: a vision system, a language system, and a synthetic prefrontal cortex.
The vision system contains an encoder that converts visual input, or imagined scenarios, into abstract population representations, as well as an imagination decoder that reconstructs imagined scenarios from those higher-level representations.
The language system imitates the part of the brain that extracts quantity information and converts binary vectors into text symbols.
The final component, the synthetic prefrontal cortex (PFC), combines the language and vision representations and predicts the next text symbols and manipulated images.
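The three subsystems described above can be pictured as a simple loop: vision encodes what is seen, language encodes what is said, and the synthetic PFC combines both to predict what to say and imagine next. The toy NumPy sketch below illustrates that data flow only; it is not the authors’ implementation, and every dimension, layer choice, and class name is an illustrative assumption.

```python
# Schematic sketch of the LGI data flow (not the authors' code).
# All dimensions, weights, and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with a tanh activation."""
    return np.tanh(x @ w + b)

class VisionSystem:
    """Encodes an image into an abstract representation and decodes
    a representation back into an 'imagined' image."""
    def __init__(self, img_dim=64, rep_dim=16):
        self.enc_w = rng.normal(0, 0.1, (img_dim, rep_dim))
        self.enc_b = np.zeros(rep_dim)
        self.dec_w = rng.normal(0, 0.1, (rep_dim, img_dim))
        self.dec_b = np.zeros(img_dim)

    def encode(self, image):
        return dense(image, self.enc_w, self.enc_b)

    def decode(self, rep):
        return dense(rep, self.dec_w, self.dec_b)

class LanguageSystem:
    """Maps a binary text vector into the shared representation space."""
    def __init__(self, vocab_dim=32, rep_dim=16):
        self.w = rng.normal(0, 0.1, (vocab_dim, rep_dim))
        self.b = np.zeros(rep_dim)

    def encode(self, text_vec):
        return dense(text_vec, self.w, self.b)

class PFC:
    """Combines vision and language representations; predicts the next
    text symbols and a manipulated image representation."""
    def __init__(self, rep_dim=16, vocab_dim=32):
        self.text_w = rng.normal(0, 0.1, (2 * rep_dim, vocab_dim))
        self.img_w = rng.normal(0, 0.1, (2 * rep_dim, rep_dim))

    def step(self, vision_rep, language_rep):
        combined = np.concatenate([vision_rep, language_rep])
        next_text = np.tanh(combined @ self.text_w)
        next_img_rep = np.tanh(combined @ self.img_w)
        return next_text, next_img_rep

# One pass of the "thinking loop": see rain, say "I need an umbrella",
# then imagine the resulting scene.
vision, language, pfc = VisionSystem(), LanguageSystem(), PFC()
image = rng.random(64)          # stand-in for a visual input
text = rng.integers(0, 2, 32)   # stand-in binary text vector
v_rep = vision.encode(image)
l_rep = language.encode(text)
next_text, next_rep = pfc.step(v_rep, l_rep)
imagined_image = vision.decode(next_rep)
```

The decoded `imagined_image` can be fed back in as the next visual input, which is the sense in which language “guides” imagination in a closed loop.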
Further research of the LGI network could lead to the development of more advanced AI, which is capable of more complex human-like thinking strategies.
Qi added: “LGI has incrementally learned eight different syntaxes (or tasks), with which a machine thinking loop has been formed and validated by the proper interaction between language and vision system.
“The paper provides a new architecture to let the machine learn, understand and use language in a human-like way that could ultimately enable a machine to construct fictitious ‘mental’ scenarios and possess intelligence.”