From the perspective of "synthetic neural modelling", the robot offers a natural way to model the embodied construal of nouns and verbs during sentence acquisition. Developing a system that demonstrates some level of cognitive ability can therefore lead to a better understanding of the neural machinery that underlies cognitive function.
During their second year, infants start to associate names with the visual information that appears in their receptive field. With dynamic scenes in particular (e.g., a man lifting a ball), and guided by visual attention, infants can construe the scenes flexibly, noticing the consistent action (e.g., lifting) and the consistent object (e.g., the ball). Gradually, their construals of the scenes are influenced by the words they hear (e.g., from their parents), so that they learn the grammatical form of a novel word used to describe a scene (verb or noun) and successfully map novel verbs to event categories (e.g., lifting actions) and novel nouns to object categories (e.g., balls). Moreover, infants' representations are sufficiently abstract to let them extend novel verbs and nouns appropriately beyond the precise scenes on which they were taught.
In the context of the POETICON++ project, we conducted a robotic experiment directly inspired by these child-psychology studies of verb and noun learning, exploring how embodied interaction supports the acquisition of novel nouns and verbs. The iCub robot learns a novel object and a particular motor action under the guidance of an instructor. The robot was allowed to learn only part of the possible verb-noun combinations (as shown in the table). The experimental results showed that neural learning based on a Multiple Timescale Recurrent Neural Network (MTRNN) gave the robot a generalisation ability: it can respond to novel combinations that it has not encountered before. Further analysis also implied that nouns and verbs emerge as two independent activations in the internal neurodynamics, spreading across both the spatial and temporal domains. The resulting model qualitatively captures the infant data and makes interesting predictions that are currently being explored in new child experiments.
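The core of an MTRNN is a leaky-integrator update in which each unit's time constant tau sets how quickly it reacts: fast units track moment-to-moment sensory input, while slow context units integrate over whole sequences, which is where the separate verb-like and noun-like activations can emerge. A minimal sketch of this update in Python/NumPy is given below; the layer sizes, time constants, weight scale, and toy input are illustrative assumptions, not the parameters used in the actual experiment.

```python
import numpy as np

def mtrnn_step(u, y_in, W, tau):
    """One leaky-integrator update of an MTRNN layer.

    u    : internal states of the layer's units
    y_in : activations feeding the layer (recurrent outputs + input)
    W    : recurrent/input weight matrix
    tau  : per-unit time constants (small tau -> fast units,
           large tau -> slow context units)
    """
    u_next = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y_in)
    return u_next, np.tanh(u_next)

# Toy demo (hypothetical sizes): 8 fast units (tau=2) and
# 4 slow context units (tau=70) driven by a 3-dimensional input.
rng = np.random.default_rng(0)
n_fast, n_slow, n_in = 8, 4, 3
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 70.0)])
W = rng.normal(scale=0.3, size=(n, n + n_in))

u = np.zeros(n)
y = np.zeros(n)
for t in range(50):
    x = np.sin(0.2 * t) * np.ones(n_in)   # toy oscillating "sensory" input
    u, y = mtrnn_step(u, np.concatenate([y, x]), W, tau)
```

Because of the 1/tau factor, the slow units change only slightly on each step and so carry sequence-level context, while the fast units follow the oscillating input closely.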
On the technical side, I also participated in the debugging and testing of the Aquila software (a cognitive robotics architecture) in this project.