VANDAL - Visual and Multimodal Applied Learning Laboratory
Dealing with multi-modal information is crucial for any intelligent agent and essential for robot life-long learning. From self-driving cars to service robots detecting and handling objects in homes, from kitting in industrial workshops to robots filling shelves and shopping baskets in supermarkets, all these applications, and many more, imply interacting with a wide variety of objects, which in turn requires a deep understanding of what those objects look like, their properties, functionalities, and spatial context. Moreover, regardless of how much knowledge has been manually encoded into a robot, it will inevitably face novel situations, information conflicts or ambiguities, and gaps in its own capabilities, and it should therefore be able to learn continuously, updating its knowledge over time.
The mission of the VANDAL group is to develop the algorithms that robots, and intelligent systems in general, need in order to learn autonomously in an open-ended manner. This implies using and developing new tools from machine learning, computer vision, multimodal signal processing and analysis, and data visualization and mining. Although intelligent embodied systems are the main application driving research in VANDAL, other applications include non-invasive control of prosthetic hands, scene understanding, and automatic geolocalization.