In this paper we introduce MCA-NMF, a computational model of the acquisition of multimodal concepts by an agent interacting with its environment. We explain how such computational models answer not only the question of how concepts can be learnt but also the question of what concepts are. We detail why multimodality is essential both to lower the ambiguity of learnt concepts and to communicate about them. We then present a set of experiments that demonstrate the learning of such concepts from real non-symbolic data consisting of sounds, images, and motion recordings. Finally, we consider structure in perceptual signals and demonstrate that a detailed knowledge of this structure, termed compositional understanding, can emerge from, rather than being a prerequisite of, global understanding.
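To make the idea concrete, here is a minimal sketch, not the authors' MCA-NMF implementation, of how non-negative matrix factorization can extract multimodal "concepts" from concatenated modality features. The data, feature dimensions, and the use of scikit-learn's NMF are all illustrative assumptions; the paper works on real sound, image, and motion data with its own model.

```python
# Minimal illustrative sketch (assumed setup, not the paper's pipeline):
# each observation is a non-negative feature vector per modality, and the
# modalities are concatenated before factorization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

n_samples = 200
d_sound, d_image, d_motion = 50, 80, 30   # hypothetical feature dimensions
n_concepts = 10                           # number of latent multimodal concepts

# Hypothetical non-negative features (e.g. histograms of acoustic, visual,
# and motion primitives); random here purely for illustration.
X_sound = rng.random((n_samples, d_sound))
X_image = rng.random((n_samples, d_image))
X_motion = rng.random((n_samples, d_motion))

# Concatenate modalities: each row is one multimodal observation.
X = np.hstack([X_sound, X_image, X_motion])

# Factorize X ~ W H: each row of H is a multimodal "concept" spanning all
# modalities; each row of W gives an observation's activation of the concepts.
model = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Cross-modal use: from the sound part of a new observation alone, estimate
# concept activations, then predict the expected image/motion features.
H_sound = H[:, :d_sound]
x_sound_new = rng.random((1, d_sound))
w_new = np.linalg.lstsq(H_sound.T, x_sound_new.T, rcond=None)[0].T
w_new = np.clip(w_new, 0.0, None)         # crude non-negative projection
x_other_pred = w_new @ H[:, d_sound:]     # predicted image + motion features
print(W.shape, H.shape, x_other_pred.shape)
```

Because the concept dictionary H spans all modalities jointly, an activation vector inferred from one modality can be used to reconstruct or disambiguate the others, which is the sense in which multimodality reduces the ambiguity of learnt concepts.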
from HAL : Dernières publications http://ift.tt/1admes6