BCBT is project-oriented
A significant amount of time will be dedicated to applying the knowledge acquired during the school - and hopefully before - to carrying out pilot studies using different tools and technologies. All registered students will work in small teams, together with other participants, to complete a project by the end of the school. Projects can be chosen from a list of predefined projects (see PDF below) or proposed on the first day of the school. Everybody is invited to bring their own equipment and tools. These projects are intended to realize new ideas, not to continue work already performed.
BCBT Awards: BCBT will award a prize for the best project.
PROJECTS 2013
1. Multimodal outdoor navigation using a humanoid robot (click on the title for the PDF)
The objective of this project is to provide a bio-inspired navigation system to the humanoid iCub robot mounted on an iKart mobile base. While the existing navigation system (the ROS navigation stack) works well with the laser range finder information, it teaches us nothing about how biological systems handle this task, nor does it make use of the other available sensors (cameras, inertial sensor). There is abundant evidence from mammalian brains demonstrating that the hippocampus plays a key role in spatial representation and spatial memory (for a review see “Place Cells, Grid Cells, and the Brain’s Spatial Representation System.” E.I. Moser, E. Kropff, and M.B. Moser, Annu. Rev. Neurosci. 2008. 31:69–89). Given this, the project will take as a starting point the model described in “The mechanism of rate remapping in the dentate gyrus”, C. Rennó-Costa, J.E. Lisman, P.F.M.J. Verschure, Neuron, 2010, and we will concentrate on the mutual interaction loop between grid cells and place cells in the hippocampus in order to integrate information from the robot's different sensors, e.g. vision and the vestibular system.
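To give a concrete feel for the spatial code involved, here is a minimal, idealised grid-cell tuning function (the standard three-plane-wave approximation; the spacing, orientation and phase parameters are illustrative, not values taken from the Rennó-Costa et al. model):

```python
import math

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealised grid-cell firing rate at position (x, y): the sum of
    three planar waves whose wave vectors are 60 degrees apart, which
    yields the characteristic hexagonal firing pattern."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for the given grid spacing
    total = 0.0
    for i in range(3):
        theta = orientation + i * math.pi / 3
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # the sum ranges over [-1.5, 3]; normalise to [0, 1]
    return (total + 1.5) / 4.5

# Firing peaks at the cell's phase offset:
print(grid_cell_rate(0.0, 0.0))  # -> 1.0
```

In a model of the kind cited above, a population of such cells with different spacings and phases would feed the place-cell layer, which is where the multimodal sensory integration would happen.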
Tasks:
The final goal is to design and perform an experiment in which the humanoid robot navigates autonomously outdoors.
Knowledge: C++ programming; experience with YARP is useful but not mandatory. Experience in image processing could also be useful.
Supervisors: Stéphane Lallée and Diogo Pata
Contact: stephane.lallee@gmail.com
Students:
Benjamin Wild - Freie Universität Berlin, Germany
Leonardo Gualano - Dublin City University, Ireland
Jakub Mozaryn - Warsaw University of Technology, Poland
2. Performing safe limb control using a cerebellar model
We are going to use a cerebellar model based on an adaptive filter neural network (Herreros and Verschure 2013) that will learn to safely position a limb on a surface, approaching it at different velocities and making contact at a desired contact/collision force.
The limb moves down at a constant velocity, and the cerebellum will learn a braking response so as to collide with the surface at the desired force without exceeding it.
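As a minimal illustration of the adaptive-filter principle (a plain least-mean-squares delay-line learner, not the actual Herreros and Verschure model), the sketch below learns to emit a "brake" output at a fixed delay after an onset cue, with the prediction error playing the role the climbing-fibre signal plays in the cerebellar model:

```python
def lms_adaptive_filter(inputs, targets, n_taps=5, lr=0.01, epochs=200):
    """LMS adaptive filter: learns weights over a tapped delay line of
    the input so that the filter output tracks the target signal."""
    w = [0.0] * n_taps
    for _ in range(epochs):
        for t in range(len(inputs)):
            taps = [inputs[t - d] if t - d >= 0 else 0.0 for d in range(n_taps)]
            y = sum(wi * xi for wi, xi in zip(w, taps))
            err = targets[t] - y          # "climbing fibre" teaching signal
            for d in range(n_taps):
                w[d] += lr * err * taps[d]
    return w

# Toy example: the 'brake' target is the onset cue delayed by 2 steps,
# analogous to issuing a braking command a fixed time before impact.
x = [1.0 if t % 10 == 0 else 0.0 for t in range(100)]
target = [x[t - 2] if t >= 2 else 0.0 for t in range(100)]
w = lms_adaptive_filter(x, target)
print(round(w[2], 2))  # the weight at delay 2 dominates
```

The first project question (generalisation to different speeds) would then amount to asking how well weights learned at one approach velocity transfer when the timing between cue and impact changes.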
Tasks:
We will try to answer two questions:
- What are the generalisation capabilities of the acquired response to different speeds?
- What could be the role of nucleo-olivary inhibition in balancing learning against the risk we accept in the collision?
The cerebellar model is implemented in Python and networked via Open Sound Control (OSC, over UDP). The physics simulation is implemented using a physics library in Processing.
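For reference, here is a minimal sketch of sending an OSC message over UDP using only the Python standard library; the address `/cerebellum/brake` and port 9000 are hypothetical examples (in practice a dedicated OSC library would handle this encoding):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-pad a byte string to a multiple of 4 bytes, as OSC requires
    (strings always carry at least one null terminator)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments:
    padded address, padded type-tag string, then big-endian floats."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)  # OSC floats are big-endian
    return msg

# Send a (hypothetical) brake command to the simulation over UDP:
packet = osc_message("/cerebellum/brake", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

Because the transport is plain UDP, the Python model and the Processing simulation stay loosely coupled and can run on different machines.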
Knowledge: Programming skills are required.
Supervisors: Giovanni Maffei, Santiago Brandi, Marti Sanchez-Fibla
Contact: giovanni.maffei@upf.edu
References:
1. I. Herreros and P. Verschure. Nucleo-olivary inhibition balances the interaction between the reactive and adaptive layers in motor control. Neural Networks, 2013.
2. I. Herreros, G. Maffei, S. Brandi, M. Sanchez-Fibla and P. Verschure. Speed generalization capabilities of a cerebellar model on a rapid navigation task. IROS 2013.
Students:
3. Classical conditioning in an immersive 3D interactive environment (click on the title for the PDF)
Advances in virtual reality (VR) techniques have led to their application in a wide range of scientific fields. VR has made an important contribution to the understanding of human behavior, since it allows us to set up ecologically valid environments where the user can act and behave in life-like conditions. The eXperience Induction Machine (XIM) is an immersive space we constructed to conduct experiments on human behavior in ecologically valid conditions ( http://specs.upf.edu/research_in_mixed_and_virtual_reality ). We have enhanced the XIM with unobtrusive wearable sensors, a sensing shirt and glove, capable of measuring the user's psychophysiological signals (electrodermal response and heart rate). Using the XIM infrastructure, we aim to reproduce an experiment on emotional conditioning in a VR setting. We will create a complex but experimentally controlled virtual world where participants can interact with a number of 3D objects through the wearable sensors, which will also measure their psychophysiological state.
Tasks:
1. Define in detail the experimental protocol
2. Develop the virtual world and setup the experiment in the XIM
3. Conduct the experiment
4. (Preliminary) statistical analysis of the collected data
Knowledge: Unity 3D and R/Matlab are recommended (but not compulsory). Other skills: expertise in the analysis of EDR and HR signals (arousal peaks, HRV, etc.) is a plus.
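As one illustrative starting point for the EDR analysis (a deliberately simple sketch, not a validated electrodermal-activity pipeline; the 0.05 µS threshold is an assumed convention), candidate skin-conductance responses can be found as local maxima that rise sufficiently above the preceding trough:

```python
def find_scr_peaks(edr, min_amplitude=0.05):
    """Detect candidate skin-conductance responses (SCRs) in an
    electrodermal trace: local maxima whose rise from the preceding
    trough exceeds min_amplitude (in microsiemens)."""
    peaks = []
    trough = edr[0]
    for i in range(1, len(edr) - 1):
        if edr[i] < trough:
            trough = edr[i]          # track the running minimum
        if edr[i - 1] < edr[i] >= edr[i + 1] and edr[i] - trough >= min_amplitude:
            peaks.append(i)
            trough = edr[i]          # reset the baseline after a detected response
    return peaks

# Synthetic trace with two clear arousal responses:
signal = [1.0, 1.0, 1.2, 1.5, 1.3, 1.1, 1.1, 1.4, 1.2, 1.1]
print(find_scr_peaks(signal))  # -> [3, 7]
```

In the actual experiment, the timing of such peaks relative to stimulus onsets would be the raw material for the conditioning analysis.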
Supervisors: Alberto Betella, Riccardo Zucca, Xerxes Arsiwalla.
Contact: alberto.betella@upf.edu
Students:
Roberta Bardini - University of Torino, Italy
Ryszard Cetnarski - University of Warsaw, Poland
Riccardo Zucca - Universitat Pompeu Fabra, Barcelona, Spain
4. Experimental VR scenarios for the rehabilitation gaming system RGS using the Oculus system
Stroke represents one of the main causes of adult disability. Hemiparesis (partial paralysis and weakness on one side of the body) as well as various cognitive deficits are common consequences of stroke. To provide new means of rehabilitation, SPECS has developed the Rehabilitation Gaming System (RGS). RGS is a novel and innovative Virtual Reality (VR) tool for the rehabilitation of motor and cognitive deficits after a brain lesion due to stroke. RGS is based on the principle of combining execution and observation of movement to solve cognitive and motor tasks involving movement, memory, attention, problem solving, etc. According to the sensorimotor contingency theory, our ability to perceive and interact with the world is coded by sensorimotor representations in the brain. Sensorimotor representations are patterns of motor action and sensory feedback; they allow the planning and execution of movement, e.g. reaching and grasping an object. A recent study suggests that manipulating the visual feedback of one's own hand movement may be used to facilitate activity in selected brain networks, such as premotor and motor cortex. The potential to bolster motor cortex excitability through visuomotor manipulation may have important clinical relevance, because these areas are typically underactive in a number of neurological disorders (such as stroke and Parkinson's disease). We would like to study whether the effects of hypo- and hypermetric visual feedback (i.e. mismatched visual feedback) can be exploited to enhance the processes of motor learning and consolidation.
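The core visual-feedback manipulation can be sketched in a few lines: the virtual hand's displacement from a reference origin is the real displacement scaled by a gain (this is an illustrative sketch only; the actual RGS/Oculus implementation would work with the tracker's own coordinate types):

```python
def displayed_hand_position(real_pos, origin, gain):
    """Scale the hand's displacement from a reference origin by a
    visuomotor gain: gain < 1 gives hypometric feedback (the virtual
    hand moves less than the real one), gain > 1 hypermetric, and
    gain == 1 veridical feedback."""
    return tuple(o + gain * (p - o) for p, o in zip(real_pos, origin))

origin = (0.0, 0.0, 0.0)
real = (0.10, 0.20, 0.00)  # real hand 10 cm right, 20 cm forward
# Hypometric feedback: the virtual hand covers 80% of the real displacement
print(displayed_hand_position(real, origin, 0.8))
```

An experimental protocol would then vary the gain across trials or blocks and measure its effect on subsequent unperturbed movements.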
Tasks:
1. Experimental design
2. Implementation of an experimental scenario for the Oculus VR and RGS.
3. Execution of an experiment with healthy subjects.
URL: http://rgs-project.upf.edu//
Knowledge: Basic programming skills, interest in rehabilitation techniques
Keywords: Stroke, neurorehabilitation, motor learning, sensorimotor contingencies, Virtual Reality, Oculus VR
Supervisors: Armin Duff (armin.duff@upf.edu), Belen Rubio
Contact: belen.rubio@upf.edu
Students: