Projects

BCBT is project-oriented

A significant amount of time will be dedicated to applying the knowledge acquired during the school - and hopefully before - to realize pilot studies using different tools and technologies. All registered students will work in small teams - including other participants - to complete a project by the end of the school. Projects can be chosen from a list of predefined projects (see pdf below), or proposals can be made on the first day of the school. Everybody is invited to bring their own equipment and tools. These projects are meant to realize new ideas, not to continue work already performed.

BCBT Awards: BCBT will award a prize for the best projects.

Projects 2014

# 1   iCub - Extraction of Sensory-Motor contingencies

The iCub is a humanoid robot that possesses a large set of sensors and effectors, designed to be used as a research platform for cognitive development. In this context, the interaction between these systems can generate rich sensory-motor data, allowing for the extraction of contingencies. We developed a library for multimodal integration inspired by the Convergence-Divergence framework of Damasio and implemented as a multimodal self-organizing neural network (Multi Modal Convergence Map, MMCM). We will demonstrate and test the MMCM algorithm by extracting the regularities in the sensory-motor stream in different setups (visuo-motor, visuo-auditory, motor-haptic, etc.).
Our initial plan is to reproduce the rubber hand illusion experiment in the robot and then move on to other modalities in other scenarios, such as self-tickling cancellation or vocabulary grounding.
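As a rough illustration of the convergence-divergence idea behind the MMCM (a toy sketch only; the class name, API and parameters below are our assumptions, not the actual MMCM library), a single self-organizing lattice can store co-occurring prototypes for two modalities, so that a cue in one modality recalls the other:

```python
import numpy as np

class MultimodalSOM:
    """Toy convergence-divergence map: one self-organizing lattice whose
    units store a prototype per modality, so either modality can cue the
    other. Illustrative sketch only; not the actual MMCM library."""

    def __init__(self, grid=(8, 8), dims=(3, 2), seed=0):
        rng = np.random.default_rng(seed)
        n = grid[0] * grid[1]
        # One prototype matrix per modality, sharing the same lattice.
        self.W = [rng.random((n, d)) for d in dims]
        ii, jj = np.unravel_index(np.arange(n), grid)
        self.pos = np.column_stack([ii, jj]).astype(float)

    def train(self, data, epochs=20, lr0=0.5, sigma0=3.0):
        """data: list of (x0, x1) pairs presented simultaneously."""
        for t in range(epochs):
            lr = lr0 * (1.0 - t / epochs)
            sigma = 0.5 + sigma0 * (1.0 - t / epochs)
            for x0, x1 in data:
                # Convergence: best-matching unit over both modalities jointly.
                b = np.argmin(np.linalg.norm(self.W[0] - x0, axis=1)
                              + np.linalg.norm(self.W[1] - x1, axis=1))
                h = np.exp(-np.sum((self.pos - self.pos[b]) ** 2, axis=1)
                           / (2 * sigma ** 2))[:, None]
                self.W[0] += lr * h * (x0 - self.W[0])
                self.W[1] += lr * h * (x1 - self.W[1])

    def recall(self, x, from_mod=0):
        """Divergence: cue with one modality, read out the other's prototype."""
        b = np.argmin(np.linalg.norm(self.W[from_mod] - x, axis=1))
        return self.W[1 - from_mod][b]
```

Trained on paired sensory-motor samples, cueing `recall` with one modality returns the lattice's best guess for the other - the same convergence (joint best-matching unit) and divergence (cross-modal readout) structure the project will exploit on real robot data.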

References:
Meyer, K., & Damasio, A. (2009). Convergence and divergence in a neural architecture for recognition and memory. Trends in Neurosciences, 32(7), 376–382. doi:10.1016/j.tins.2009.04.002
Lallee, S., & Dominey, P. F. (2013). Multi-modal convergence maps: from body schema and self-representation to mental imagery. Adaptive Behavior, 21(4), 274–285. doi:10.1177/1059712313488423

TASKS: Record data on the robot. Design and train the neural models. Use the model to generate behaviors and predictions. Analyse the quality of the learning and predictions.

KNOWLEDGE: Basic C++ understanding. Statistical analysis. Basic knowledge of YARP is a plus. Basic knowledge of the iCub is a plus.

SUPERVISOR:  Stéphane Lallée

CONTACT: stephane.lallee@gmail.com, greg.pointeau@gmail.com

STUDENTS: Grégoire Pointeau and ...

# 2  Understanding human spatial cognition using virtual reality environments.

Spatial memory is the process that enables us to encode, store and retrieve information about spatial locations and configurations. An important question is how the mode of navigation affects spatial memory and the understanding of a space. Previous studies on this topic report inconsistent results, mainly because active exploration of a natural environment involves different cognitive processes that are difficult to control systematically in standard experimental designs. The eXperience Induction Machine (XIM) is a human-accessible mixed reality space constructed to conduct experiments in ecologically valid conditions. The XIM incorporates the advantages of a laboratory within a life-like setting and constitutes a unique environment to study human spatial cognition. The goal of this project is to develop an immersive 3D world (a "virtual house") in the XIM and to conduct an experiment investigating the impact of active versus passive navigation on recall of locations and local/global visual features of an unfamiliar environment.

TASKS: 1. Implementing the experimental setup in XIM through the integration of different sensors (eye tracker, floor pressure sensors and tracking) and effectors (projectors, floor lights); 2. Running the experiment with volunteer participants; 3. Statistical analysis of the data collected.

KNOWLEDGE: Knowledge of Unity 3D and R/Matlab is a plus (but not compulsory).

SUPERVISORs: Alberto Betella, Laura Serra Oliva, Pedro Omedas

CONTACT: alberto.betella@upf.edu

STUDENTS:

# 3  Learning to counterbalance perturbations in a self-balancing robot setup through cerebellar learning

The role of the cerebellum in anticipatory postural adjustment (APA) tasks is well established [1]. APAs are acquired compensatory and anticipatory motor responses that maintain balance and equilibrium against self-induced or external perturbations.

TASKS: We are going to design a real-robot experiment in which a perturbation is induced to a self-balancing robot and, after several trials, it learns to counterbalance the perturbation through cerebellar learning. The robot is built but not yet adapted to the task. The robot will need to anticipate the perturbation through an infrared proximity sensor. All the state data of the robot are transmitted via Bluetooth, and the cerebellar learning is implemented on a separate computer.
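As a toy sketch of the kind of cerebellar learning involved (illustrative only: the delay-line representation, timings and gains below are our assumptions, not the supervisors' actual model), a tapped delay line over the proximity cue, trained with the residual error as teaching signal, learns to fire an anticipatory command just when the perturbation arrives:

```python
import numpy as np

def run_trials(n_trials=40, trial_len=100, delay=20, lr=0.05, n_taps=40):
    """Toy cerebellar adaptive filter: the infrared proximity cue is fed
    into a tapped delay line; weights are trained with the residual
    (reflex) error as teaching signal, so after a few trials the output
    anticipates the perturbation."""
    w = np.zeros(n_taps)
    errors = []
    for _ in range(n_trials):
        buf = np.zeros(n_taps)               # recent samples of the cue
        total_err = 0.0
        for t in range(trial_len):
            cs = 1.0 if t == 10 else 0.0     # proximity sensor fires at t=10
            buf = np.roll(buf, 1)
            buf[0] = cs                      # newest sample in front
            perturb = 1.0 if t == 10 + delay else 0.0  # push arrives later
            cb_out = w @ buf                 # anticipatory cerebellar command
            err = perturb - cb_out           # what the reflex still has to do
            w += lr * err * buf              # decorrelation-style update
            total_err += abs(err)
        errors.append(total_err)
    return errors
```

Plotting `errors` per trial shows the residual error shrinking as the weights capture the cue-to-perturbation interval, which is essentially what the robot experiment will test with real sensor data.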

KNOWLEDGE:
Either Python/Matlab (to deal with the cerebellar model and analysis of the data) or Arduino programming (for working with the robot)

SUPERVISORS: Giovanni Maffei, Marti Sanchez, Ivan Herreros

CONTACT: giovanni.maffei@gmail.com

STUDENTS:

References:
[1] Maffei, G., Sanchez-Fibla, M., Herreros, I., & Verschure, P. F. (2014). The role of a cerebellum-driven perceptual prediction within a
robotic postural task. In From Animals to Animats 13 (pp. 76-87). Springer International Publishing.

[2] Herreros, I., Maffei, G., Brandi, S., Sanchez-Fibla, M., & Verschure, P. F. (2013, November). Speed generalization capabilities of a
cerebellar model on a rapid navigation task. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on (pp.
363-368). IEEE.

# 4 Cerebellar control of predictive grasping in a robot with tactile feedback

Skilled manipulation relies on tactile feedback, to the extent that even with perfect vision and no time constraints, the grasping of small or delicate objects becomes very difficult and error-prone (https://www.youtube.com/watch?v=0LfJ3M3Kn80). However, tactile feedback control, with its inherent delay, may not suffice for fast and dexterous grasping. In that case, we have to resort to predictive control strategies. We want to show that adaptive anticipation of the tactile perception by the cerebellum can provide the basis for this predictive control of grasping.

TASKS: We will provide the student with an implementation of the cerebellar controller that has been used in several publications, and his/her task will consist of building a two-layered control architecture including the feedback (reflex) and a feed-forward (cerebellar) component. The system (virtual or physical, depending upon the participant's skills) will have a single degree of freedom (the grasping force), tactile feedback, and visual feedback to guide the anticipatory component. We will investigate the optimal mixture of anticipatory and reactive motor control for this task.
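A minimal sketch of such a two-layered architecture, under strongly simplified dynamics (one abstract time step per trial; every gain and signal name here is an assumption, not the provided controller), might look like:

```python
def simulate(n_trials=30, lr=0.1):
    """Two-layered grip control sketch: a reactive (reflex) layer corrects
    tactile slip one step late, while a feed-forward (cerebellar) layer
    learns from a visual cue to apply grip force before contact."""
    w = 0.0                          # feed-forward gain on the visual cue
    load = 1.0                       # force needed to hold the object
    slips = []
    for _ in range(n_trials):
        cue = 1.0                    # visual cue that contact is imminent
        ff = w * cue                 # anticipatory grip force at contact
        slip = max(load - ff, 0.0)   # slip left for the delayed reflex
        reflex = 0.8 * slip          # reactive correction, issued too late
        w += lr * slip * cue         # cerebellum trains on the reflex error
        slips.append(slip)
    return slips
```

Across trials the slip seen by the reflex shrinks as the feed-forward term takes over - the shift from reactive to anticipatory control that the project aims to demonstrate with tactile hardware.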

KNOWLEDGE: (Python) programming. Modelling, Simulation environments, robotics.

SUPERVISORS: Ivan Herreros, Santiago Brandi, (Marti Sanchez, Giovanni Maffei)

CONTACT: ivanherreros@gmail.com

STUDENTS:

REFERENCES:
Nowak, D. A., Topka, H., Timmann, D., Boecker, H., & Hermsdörfer, J. (2007). The role of the cerebellum for predictive control of grasping. The Cerebellum, 6(1), 7-17.

Herreros, I., & Verschure, P. F. (2013). Nucleo-olivary inhibition balances the interaction between the reactive and adaptive layers in
motor control. Neural Networks, 47, 64-71.

Herreros, I., Maffei, G., Brandi, S., Sanchez-Fibla, M., & Verschure, P. F. (2013, November). Speed generalization capabilities of a
cerebellar model on a rapid navigation task. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on (pp.
363-368). IEEE.

Maffei, G., Sanchez-Fibla, M., Herreros, I., & Verschure, P. F. (2014). The role of a cerebellum-driven perceptual prediction within a robotic postural task. In From Animals to Animats 13 (pp. 76-87). Springer International Publishing.


# 5 Assessing the influence of information spatialization and spatial navigation in human memory using head-mounted Virtual Reality

Several studies in the neuroscientific literature have shown a tight relationship between space and memory functions in the human brain. Early studies already showed the confluence of spatial signals and memory signals in the hippocampus of rodents (Squire, 1992). More recent studies have shown that, in humans, the same brain systems involved in spatial navigation also support declarative memory (Burgess, 2002; Nadel, 1991). At both the neuronal and behavioral level, it is now known that encoding and recollection memory processes are strongly regulated by space (Miller, 2013).
On the other hand, studies on human spatial behavior have explored the effects of navigation modes on spatial learning and memory, leading to contradictory results (Chrastil, 2012). While some have reported that active exploration of spatial information yields more spatial learning and memory than passive exposure to it, others have shown no significant differences between the two modes.
This might be due to the different definitions and methodologies used. While some studies require that participants move in a real-world environment, leading to a lack of control over experimental conditions, others use desktop virtual reality systems where embodiment and physical activity are highly restricted. What really is the influence of information spatialization and physical, embodied spatial navigation on memory? To answer this question, we have built an ecologically valid experimental setup using immersive virtual reality.

References
J. F. Miller, M. Neufang, A. Solway, A. Brandt, M. Trippel, I. Mader, S. Hefft, M. Merkow, S. M. Polyn, J. Jacobs, et al. Neural activity in human hippocampal formation reveals the spatial context of retrieved memories. Science, 342(6162):1111–1114, 2013.
L. Nadel. The hippocampus and space revisited. Hippocampus, 1(3):221–229, 1991.
L. R. Squire. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological review, 99(2):195, 1992.
N. Burgess, E. A. Maguire, and J. O’Keefe. The human hippocampus and spatial and episodic memory. Neuron, 35(4):625–641, 2002.
E. R. Chrastil and W. H. Warren. Active and passive contributions to spatial learning. Psychonomic Bulletin & Review, 19(1):1-23, 2012.

TASKS: Interface an immersive VR system with a head-mounted display and tracking system (100 square meters, Polyvalent room, UPF) for embodied human-data interaction. Record navigation information (position, orientation, timings), as well as users' responses to memory tests.
Statistical analysis of the data and presentation of the results.

KNOWLEDGE: Basic programming skills (C#, JavaScript). 3D graphics. Memory. Spatial navigation.

SUPERVISOR: Daniel Pacheco

CONTACT (email): daniel.pacheco@upf.edu

# 6 The Rehabilitation Gaming System: The influence of reinforcement history on action selection

Stroke patients often underutilize their paretic limb despite sufficient residual motor function. This so-called "learned non-use" can lead to reversible loss of neural function (Taub et al., 1994). Motor rehabilitation protocols in which the use of the paretic limb is strengthened through systematic reinforcement may be effective in mitigating the effects of this pathology. The aim of this study is to explore the effects of reward intensity and performance manipulations on reinforcement history and hand-selection profiles. For this purpose, we will design a virtual reality based task for the evaluation of effector selection patterns, using as a basis the Rehabilitation Gaming System (RGS), a VR-based rehabilitation tool that integrates a paradigm of action execution and action observation. The RGS allows the user to control a virtual body (avatar) seen from a first-person perspective on a screen. In addition, we will use a gravity-supporting exoskeleton (Hocoma Armeo) for tracking the upper-extremity movements of the user. We expect to find that amplifying the reward when the non-dominant limb is selected to perform a reaching movement will correlate with higher estimates of the influence of previous reward history on current hand choice.
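As a hedged sketch of the effect this predicts (a standard Rescorla-Wagner value update with softmax selection; all parameters and names are illustrative assumptions, not the RGS implementation), amplifying the reward obtained with the non-dominant hand gradually shifts selection toward it:

```python
import math
import random

def simulate_selection(n_trials=500, reward_boost=2.0, alpha=0.2, beta=2.0, seed=0):
    """Toy model of reinforcement history and hand selection: each hand
    keeps a value estimate updated by a Rescorla-Wagner rule, and choices
    are drawn from a softmax over the two values. reward_boost amplifies
    the reward for the non-dominant hand, mimicking the manipulation."""
    rng = random.Random(seed)
    q = {"dominant": 0.0, "non_dominant": 0.0}
    picks = {"dominant": 0, "non_dominant": 0}
    for _ in range(n_trials):
        # Softmax (logistic, for two options) over the value difference.
        p_nd = 1.0 / (1.0 + math.exp(-beta * (q["non_dominant"] - q["dominant"])))
        hand = "non_dominant" if rng.random() < p_nd else "dominant"
        picks[hand] += 1
        reward = reward_boost if hand == "non_dominant" else 1.0
        q[hand] += alpha * (reward - q[hand])   # value tracks reward history
    return picks
```

With `reward_boost=1.0` the two hands stay near equilibrium, while boosting the non-dominant reward drives its value estimate up and, through the softmax, its selection frequency - the qualitative effect the experiment is designed to measure in patients' effector choices.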

References:
Taub, E., Crago, J. E., Burgio, L. D., Groomes, T. E., Cook, E. W., DeLuca, S. C., & Miller, N. E. (1994). An operant approach to rehabilitation medicine: overcoming learned nonuse by shaping. Journal of the experimental analysis of behavior, 61(2), 281-293.

TASKS: Design experimental protocol. Design a simple virtual reality scenario. Run experiments with subjects. Data analysis

KNOWLEDGE: Statistical analysis. Basic C# is a plus. Basic knowledge about Unity 3D is a plus.

SUPERVISORS: Belén Rubio, Martina Maier

CONTACT: belen.rubio@upf.com, martina.maier@upf.com

# 7 Dissociation of visual anticipation and conscious experience.

Recently, theories of predictive coding (Clark, 2013) have been proposed to explain how an internal representation of the external world could be developed and dynamically updated. The role of the top-down predictive mechanisms is to 'explain away' the bottom-up stimuli by successfully inferring their current state.
Experiments in the framework of predictive coding have so far focused on low-level perceptual processes (Friston et al., 2012; Rao & Ballard, 1999), never extending the methodology and scope of interest to higher cognitive tasks, such as the construction of a conscious experience.
We investigate what constitutes our conscious experience: are the top-down predictions also the content of our perception, or is the conscious experience dominated by errors in the predictions, i.e. the unexplained stimuli? This group will focus on developing an innovative experimental design that will allow for the empirical study of consciousness in the predictive coding framework.
To investigate this question, we are conducting a psychophysical experiment using a Tobii eye tracker and a visual task in Unity 3D. We are also introducing a second setup using Oculus and Kinect, focusing on the ecological validity of psychophysical experimentation. We have already conducted several experiments using this setup and we are looking to develop it further. Because the infrastructure is already tested, we aim to run the experiment and analyze the results during BCBT14.

TASKS: Experiment design and setup, eye tracking and experimental data analysis.

SKILLS (all a plus, not a requisite): Data analysis (Python, Matlab), programming (C#), eye tracking, Oculus, Kinect

References:
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences, 36(3), 181–204.
Friston, K., Adams, R. a, Perrinet, L., & Breakspear, M. (2012). Perceptions as hypotheses: saccades as experiments. Frontiers in Psychology, 3(May), 151.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.

SUPERVISOR: Richard Cetnarski
CONTACT: cetnarski.ryszard@gmail.com

STUDENTS:

# 8 Co-sociality in Human-Robot Interaction: How does personality affect a task’s performance?

As technology advances, robots will be introduced into our societies and we will interact with them on a daily basis. In some cases, humans may even need to perform interactive tasks with robots. For this reason, it is important to investigate which characteristics are important in Human-Robot Interaction (the study of how humans interact with robots). Following the Human-Computer Interaction paradigm, in which Reeves and Nass [1] explored several human-factors concepts, we are interested in exploring the role of personality in human-robot interaction. The goal of this study is to investigate and evaluate the reactions and responses of humans to different robotic behaviors in a game-like task.
- Human-robot interaction will be explored via game playing (game to be defined) with a humanoid robot, the iCub. To investigate the role of personality, we will use the Big Five Personality Inventory [2]. Further data will be collected to measure empathy (to control for the emotional responsiveness which an individual shows to the feelings experienced by another person [3]) as a way of making sense of an agent’s behavior, and how people perceive the robot during the interaction [4].
- This experiment aims to demonstrate that personality is a key factor in human-robot interaction. We expect to find that both a person’s and the robot’s personality are important within the context of co-sociality.

TASKS:
- Customize the interactive game to match the two personalities of the robot (agreeable/anti-social) in terms of turn taking/interrupting and responses
- Customize the web application to fit the demands of the survey (change the total number of questions, add a finish/take-me-to-next button, etc.)
- Find subjects and run the experiment
- Run the statistical analysis and present the results

KNOWLEDGE: C++, Statistics (SPSS/Matlab), PHP

SUPERVISORS: Vicky Vouloutsi, Emily Collins, Armin Duff

CONTACT: vicky.vouloutsi@upf.edu

STUDENTS:

References:
[1] Byron Reeves and Clifford Nass. The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press, New York, NY, USA, 1996.
[2] Rammstedt, B., & John, O. P. (2007). Measuring personality in one minute or less: A 10-item short version of the Big Five Inventory in English and German.Journal of research in Personality, 41(1), 203-212.
[3] Farrington, D. P. (2006). Development and validation of the Basic Empathy Scale. Journal of Adolescence, 29(4), 589-611.
[4] Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. (2008). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1), 71–81

Nb of Participants needed:
We require 15 participants per state for ideal base-level statistical power, aiming for n=30, but we will start by alternating states and gathering participants until we run out of time.

Useful links:
Nice paper layout for ref = http://link.springer.com/chapter/10.1007/978-3-642-39470-6_13#page-1
Online 60 item empathy quotient (EQ) = http://psychology-tools.com/empathy-quotient/