BCBT is project-oriented

A significant amount of time will be dedicated to applying the knowledge acquired during the school - and hopefully before - to carry out pilot studies using different tools and technologies.

All registered students will work in small teams - together with other participants - to complete a project by the end of the school.

Projects can be chosen from a list of predefined projects (see PDF below), or new proposals can be made on the first day of the school.

Everybody is invited to bring their own equipment and tools. These projects are meant to realize new ideas, not to continue work already performed.


BCBT Awards

This year BCBT awarded prizes to the three best projects; see the project photo gallery below.



1. Real Robot Foraging Experimentation
The Synthetic Forager (SF) project aims to understand the basis of foraging behavior by constructing an autonomous agent that can act and survive in the real world. Here we focus on spatial cognition, in particular the role of the hippocampus: the robot has to perform in ways that are consistent with the behavior of live foraging animals. The objective of this project is to design real-world experiments (e.g. in the street) that could validate robot foraging behavior.
1. Outdoor Robot Experiments.
2. Benchmarking and advancing the DAC cognitive architecture.
Knowledge: C++ programming, Linux and Matlab skills are welcome
Keywords: Cognitive Systems, Robotics, Foraging, Outdoor Navigation, Benchmarking
Supervisors: Encarni Marcos and Cesar Costa; Contact: encarni.marcos@gmail.com
Students: Raffaele Limosani, Guillermo Ludueña   

Find the project presentation in PDF below.

4. Influence of navigation mode on spatial memory in mixed-reality.
The eXperience Induction Machine (XIM) is an immersive room equipped with a number of sensors and effectors that has been constructed to conduct experiments in mixed reality (http://specs.upf.edu/research_in_mixed_and_virtual_reality). XIM is a unique environment that allows us to study human spatial cognition.
We use the XIM to represent a virtual house composed of 4 rooms. Each room has different local features (objects, e.g. paintings or shelves) and global features (e.g. wallpaper).
The goal of the experiment is to understand how local and global cues are used in the construction of space, and how this is influenced by behavioral strategies. The user enters the XIM and navigates through the virtual house using two different modalities: active (free navigation from one room to another) and passive (automatic or “guided” navigation).
Our hypothesis is that the navigation mode has consequences for: a) what is remembered; b) the level of immersion and presence experienced by the user.
1. Developing and running the experiment in the XIM
2. Statistical analysis of the data collected.
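The statistical analysis in task 2 could, for instance, compare object recall between the active and passive conditions with a within-subject test. A minimal sketch with hypothetical recall scores (the actual analysis would use R or Matlab, as noted below; the numbers here are invented for illustration):

```python
import numpy as np

# Hypothetical recall scores (fraction of objects remembered) for the
# same six participants under active and passive navigation.
active = np.array([0.80, 0.65, 0.90, 0.70, 0.85, 0.75])
passive = np.array([0.60, 0.55, 0.70, 0.65, 0.60, 0.70])

# Paired t-test computed by hand: t = mean(d) / (sd(d) / sqrt(n)).
d = active - passive
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(f"mean difference = {d.mean():.3f}, t({n - 1}) = {t:.2f}")
```

A large positive t here would support hypothesis (a), that active navigation improves what is remembered; the same comparison could be run on presence questionnaire scores for hypothesis (b).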
Knowledge:  Unity 3D and R/Matlab
Supervisor: Alberto Betella; alberto.betella@upf.edu
Students: Virginia Castelli, Costanza Diversi

Find the project presentation in PDF below.

6. Eye movements during target-oriented reaching in a Virtual Reality system for stroke rehabilitation.
Stroke is one of the main causes of adult disability. Hemiparesis (partial paralysis and weakness on one side of the body) as well as various cognitive deficits are common consequences of stroke. To provide new means of rehabilitation, SPECS has developed the Rehabilitation Gaming System (RGS). RGS is a novel Virtual Reality (VR) tool for the rehabilitation of motor and cognitive deficits after a brain lesion due to stroke. RGS is based on the principle of combining execution and observation of movement to solve cognitive and motor tasks involving movement, memory, attention, problem solving, etc.
It has been found that during target-oriented reaching movements, the movement of the hand is preceded by a corresponding eye saccade. In addition, there is evidence that reaching also involves online correction of the arm movement dependent on sensory feedback. By studying eye movements during a reaching task created for the RGS, we want to answer the following questions:
• Can we observe the same relationship between eye- and hand- movements in VR as when reaching for targets in the real-world?
• When disrupting (to different degrees) the movements in VR, does the visual attention move towards the arm itself rather than the target?
The purpose of the project is to investigate whether we can use eye-tracking to predict where users of the RGS are trying to reach, and to detect when they perceive unexpected visual feedback. Either of these findings could be useful for studying sensorimotor adaptation.
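The eye-leads-hand relationship above could be quantified by comparing movement-onset times in the two signals. A toy sketch of that analysis, assuming velocity-threshold onset detection on synthetic gaze and hand traces (sampling rate, threshold, and signals are all invented stand-ins for real tracker data):

```python
import numpy as np

# Hypothetical 100 Hz recordings of gaze and hand position along the
# reach axis; onset = first sample whose velocity exceeds a threshold.
fs = 100.0                             # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)            # one 2 s trial
gaze = np.clip((t - 0.30) * 5, 0, 1)   # gaze starts moving at 0.30 s
hand = np.clip((t - 0.45) * 2, 0, 1)   # hand starts moving at 0.45 s

def onset_time(signal, fs, thresh):
    """Time of the first sample whose velocity exceeds thresh."""
    vel = np.abs(np.diff(signal)) * fs
    return np.argmax(vel > thresh) / fs

lead = onset_time(hand, fs, 0.5) - onset_time(gaze, fs, 0.5)
print(f"eye leads hand by {lead * 1000:.0f} ms")
```

On real data the same onset comparison, run per trial, would show whether the saccade-then-reach ordering seen in the real world survives in VR, and whether it breaks down when the visual feedback is disrupted.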
1. Integration of an eye-tracker with the RGS.
2. Implementation of an experimental scenario for the RGS.
3. Execution of an experiment with healthy subjects.
Url: http://rgs-project.upf.edu//
Knowledge: Basic programming skills, interest in rehabilitation techniques
Keywords: Stroke, neurorehabilitation, motor learning, sensorimotor contingencies
Supervisor: Armin Duff armin.duff@upf.edu
Students: Tanis Mar, Belén Rubio, Jens Nirme

Find the project presentation in PDF below.

7. Neural Network Models for Learning eSMCs
eSMCs capture regularities between the actions of an agent and the resulting changes in sensory input. In this project we will investigate neural network models that can support the learning of eSMCs. The two main components will be a physical robot that generates eSMCs and a neural network simulator that allows the investigation of different network models. eSMCs could be generated, for example, by an iCub moving an object in front of its cameras, or a Khepera exploring an arena. An important aspect to consider when selecting appropriate network models is that eSMCs capture instantaneous regularities between actions and sensory inputs, so-called modality-related eSMCs, as well as regularities over time when dealing with an object, called object-related eSMCs. Promising networks in this respect would be echo state networks or other recurrent networks from the reservoir computing framework.
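To make the echo state network idea concrete, here is a minimal numpy sketch of the basic recipe (fixed random reservoir, trained linear readout) learning a toy modality-related eSMC: predicting the next sensory sample from the current action and sensory input. The oscillatory signals are invented stand-ins for real robot data, not the iCub/Khepera recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy eSMC: action a(t) and sensor s(t) are phase-shifted oscillations;
# the network must predict the next sensory sample s(t+1).
T = 500
a = np.sin(0.1 * np.arange(T))   # motor command
s = np.cos(0.1 * np.arange(T))   # resulting sensory signal

# Echo state network: fixed random reservoir, trained linear readout.
N = 100
W_in = rng.uniform(-0.5, 0.5, (N, 2))            # input weights (action, sensor)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

x = np.zeros(N)
states = np.zeros((T - 1, N))
for t in range(T - 1):
    x = np.tanh(W_in @ np.array([a[t], s[t]]) + W @ x)
    states[t] = x

# Discard an initial washout, then fit the readout with ridge regression.
washout = 50
X, y = states[washout:], s[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
print(f"one-step sensory prediction MSE: {mse:.2e}")
```

Only the readout `W_out` is trained, which is what makes reservoir approaches attractive here; capturing object-related eSMCs would additionally exploit the reservoir's fading memory of past action-sensor pairs rather than a single step.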
Keywords: artificial neural networks, recurrent networks, associative memory, sensorimotor prediction
Knowledge: Required skills: basic knowledge of artificial neural networks, NN simulator software, and programming simple robots like the Khepera or e-puck.
Supervisors: Alex Maye / Marti Sanchez; marti.sanchez@upf.edu
Students: Andrei Melnik, Sirous Mhammad, Stefan Skoruppa, Luis Bobo

8. eSMCs of Grasping
Using the iCub's skin (tactile sensors on the hand), we will record and analyze the robot's eSMCs of grasping different objects. According to SMCT, the relation between sensory patterns and palpating actions constitutes the perception of the object under inspection, and this project aims at visualizing this relation. In a first step, sensory data are recorded from the tactile sensors while the iCub's hand palpates different objects, e.g. a sponge, a box, and a sphere. Then the joint action-sensor data are visualized using appropriate techniques. The next step is to reveal potential structures in these data sets. Candidate approaches could analyze the correlation structure or information flow. If time permits, methods from project #1 could be employed to learn grasping eSMCs and generate object-related behavior, thus linking this study to affordances.
Recommended literature: Hosoda K, Robust haptic recognition by anthropomorphic robot hand, in: Krichmar JL and Wagatsuma H (eds.), Neuromorphic and Brain-Based Robots, Cambridge University Press, 2011
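The correlation-structure step could start from something like the following numpy sketch, which builds a correlation matrix over the joint action-sensor data. The "recording" is synthetic (an invented palpation signal driving eight stand-in taxels), not real iCub skin data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in recording: 200 time steps of one palpation command plus 8
# tactile channels ("taxels") that follow it with different gains.
T, n_taxels = 200, 8
grasp = np.sin(np.linspace(0, 4 * np.pi, T))          # palpation action
taxels = np.outer(grasp, rng.uniform(0.5, 1.0, n_taxels))
taxels += 0.2 * rng.standard_normal((T, n_taxels))    # sensor noise

# Joint action-sensor matrix: column 0 is the action, columns 1..8 taxels.
data = np.column_stack([grasp, taxels])
corr = np.corrcoef(data, rowvar=False)                # (9, 9) correlation matrix

# Row 0 tells us how strongly each taxel follows the palpation action.
action_coupling = corr[0, 1:]
print("action-taxel correlations:", np.round(action_coupling, 2))
```

Comparing such matrices across objects (sponge vs. box vs. sphere) is one simple way to visualize whether each object leaves a distinctive action-sensor signature; information-flow measures would be the natural next refinement.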
Keywords: robot tactile sensing, tactile exploration, object-related eSMCs
Knowledge: Required skills: basic knowledge and programming skills for correlation analysis and visualization, basic knowledge of robot grasping techniques and problems
Supervisors: Alex Maye / Marti Sanchez; Contact: marti.sanchez@upf.edu
Students: Alessandro Roncone, Virgile Högman

9. Learning new eSMCs in humans
Using the eXperience Induction Machine (XIM) at SPECS, we will investigate properties of the learning process of new eSMCs in humans. We will start by determining the average deviation when blindfolded subjects try to cross the arena in a straight line. Using XIM's person localization function, we will then repeat the experiment with the subjects receiving instantaneous auditory feedback about their deviation from a straight line. A decrease in the final deviation can certainly be expected. We will also study the dynamics of the learning process by repeating walks across the arena with feedback and measuring changes in final deviations or walking speed. When subjects have sufficient mastery of these walking-auditory eSMCs, we will test whether modifications of the feedback can be used to make subjects follow more complex trajectories, e.g. curves or a circle.
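The core measurement and feedback loop is simple enough to sketch. Assuming XIM's person localization yields (x, y) positions and the start-goal line runs along the y-axis, the deviation is just the perpendicular distance, which some assumed linear mapping turns into a feedback pitch (the path, mapping, and constants below are all hypothetical):

```python
import numpy as np

# Hypothetical tracking samples: subject positions (x, y) in metres,
# walking from start (0, 0) toward a goal at (0, 6) along the y-axis.
path = np.array([[0.0, 0.0], [0.1, 1.0], [0.3, 2.1], [0.6, 3.0],
                 [0.8, 4.2], [1.1, 5.0], [1.3, 6.0]])

# Deviation = perpendicular distance from the straight start-goal line;
# with the line on the y-axis this is simply |x|.
deviation = np.abs(path[:, 0])

# Assumed auditory mapping: 440 Hz on the line, +100 Hz per metre off it.
pitch = 440.0 + 100.0 * deviation
print(f"final deviation: {deviation[-1]:.2f} m, feedback pitch: {pitch[-1]:.0f} Hz")
```

Tracking the final deviation across repeated walks gives the learning curve described above; steering subjects along curves would amount to replacing the y-axis reference line with the desired trajectory in the distance computation.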
Keywords: sensory adaptation, sensory substitution, sensorimotor learning
Knowledge: basic neurophysiological knowledge, psychophysiological methods, programming skills required for XIM
Supervisors: Alex Maye / Marti Sanchez; marti.sanchez@upf.edu
Students: Guido Schillacci, Sasa Bodiroza, Giovanni Maffei, Andrew Martin, Anna Lisa Gert, Benedikt Ehinger