One of the promising applications of autonomous systems is in human rehabilitation and assistance, especially for severely disabled patients. Such systems require a reliable yet intuitive human interface that does not depend on neuromuscular control.
Brain-Computer Interfaces (BCIs) have been extensively developed in recent years as a potential interface for a range of human rehabilitation and assistance applications. Non-invasive BCIs capture brain signals through electroencephalogram (EEG) recordings, using an array of electrodes mounted in an EEG cap. While initial BCI applications focused mainly on spelling, more recent applications extend to the control of semi-autonomous rehabilitation robots, the control of neuroprosthetic devices, the navigation of autonomous wheelchairs, and gaming and entertainment. BCIs are especially promising for severely handicapped patients, since relevant EEG responses can be elicited even by motor imagery alone. This is the basis of the Berlin Brain-Computer Interface (BBCI).
The success of BCIs in interpreting human intention depends on proper preprocessing and machine learning techniques for both feature detection and classification. Typical success rates are around 70%; the remaining errors must be detected and corrected by the user, hampering the efficiency and transparency of the interface. To facilitate automatic error correction, the BCI itself must detect the EEG activity associated with the user's detection of an error and augment the control accordingly. While the importance and potential benefits of error detection for improving BCI systems have been noted, investigations have focused mainly on detecting cognitive errors. Here we propose to investigate the existence and detection of movement-related errors, which are expected to arise from estimation errors. The proposal is motivated by our recent results from invasive BCIs (known as Brain-Machine Interfaces, BMIs), which indicate that modulation of the neural activity increases when monkeys first start to use the interface, and especially after they stop moving the hand. We hypothesize that this enhancement is due to the increasing variance of estimation errors and control signals, and, most importantly, that this enhanced activity results in detectable changes in EEG activity. Our proposal is targeted at detecting and characterizing EEG potentials related to motor errors. Once detected, these potentials can be used for on-line autonomous motor correction, enhancing the application of BCIs for handicapped patients.
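To make the intended detection step concrete, the following is a minimal single-trial sketch: synthetic epochs with and without a slow error-related deflection are band-pass filtered and classified with linear discriminant analysis. The signal model, filter band, feature choice, and classifier are illustrative assumptions, not the finalized pipeline of this proposal.

```python
# Illustrative sketch: single-trial detection of error-related EEG potentials.
# The filter band, window length, and LDA classifier are assumptions for
# illustration only; they are not the finalized pipeline of this proposal.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                  # sampling rate (Hz), assumed
rng = np.random.default_rng(0)

def make_epoch(error: bool) -> np.ndarray:
    """Synthetic 1 s EEG epoch; error trials get an added slow deflection."""
    t = np.arange(fs) / fs
    x = rng.normal(0, 1.0, fs)            # background activity
    if error:
        x += 2.0 * np.exp(-((t - 0.35) ** 2) / 0.01)  # ~350 ms potential
    return x

# Band-pass 1-10 Hz, where slow error potentials are expected (assumed band).
b, a = butter(4, [1 / (fs / 2), 10 / (fs / 2)], btype="band")

X, y = [], []
for label in (0, 1):
    for _ in range(100):
        ep = filtfilt(b, a, make_epoch(bool(label)))
        X.append(ep[::16])                # decimate to coarse amplitude features
        y.append(label)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print(f"cross-validated error-detection accuracy: {scores.mean():.2f}")
```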
Motion capture is a generic term for recording movement and translating it onto a digital model. Applied to the human body, the purpose of motion capture is to measure the motions of the body's rigid segments during various activities. Such systems have numerous applications, including the study of human motion in biomechanics, spanning clinical gait analysis, sport performance and injury prevention, military applications, and the motion picture industry. In filmmaking, motion capture (also known as mocap) is used to record the actions of human actors and use that information to animate digital character models in 2D or 3D computer animation.
Most existing motion capture systems track special markers placed on rigid parts and joints of the body. Such approaches have several disadvantages. First, objects must be attached to the body or clothing, which can make the subject uncomfortable and the motion unnatural. Second, since typically only a small number of markers is used, only a sparse set of points is available for tracking; subtler motions, such as non-rigid deformations of the soft tissues, are lost. Finally, the use of markers limits the application to controlled scenarios, in which the captured object is prepared and known in advance.
The scope of the proposed project is to develop a markerless motion capture approach for the analysis of generic dynamic objects in natural conditions. Besides removing the standard drawbacks of marker-based motion-capture systems, markerless motion capture would constitute a qualitative improvement: it would no longer be limited to a calibrated lab setting, but could be applied in a wide range of natural scenarios. Of particular interest are applications relevant to autonomous systems, such as robot navigation and interaction with surrounding objects (e.g., humans). The proposed solution would naturally allow motion capture of multiple objects and would not be limited to objects known in advance. Motion capture could become an important component of autonomous systems, adding novel functionality and extending existing capabilities.
The proposed system will be based on a 3D sensor that acquires the body geometry in motion, producing a 3D video of a moving subject. At the second stage, a correspondence between the 3D video frames is established. Since the body is non-rigid, we will use an intrinsic correspondence that is deformation-invariant. Given the correspondence between every two consecutive 3D frames, the motion is estimated using a 3D version of over-parameterized optical flow. This process automatically determines the rigid parts of the body. Finally, the recovered motion and articulated parts will be fitted to a kinematic model.
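As a minimal illustration of the rigid-part identification step, the sketch below fits a single rigid transform (the Kabsch algorithm) to corresponded points from two consecutive frames and flags points whose residuals cannot be explained by that transform as candidates for another, or non-rigid, part. The data and threshold are illustrative assumptions; the proposed system uses over-parameterized 3D optical flow rather than this simplification.

```python
# Illustrative sketch: given corresponded points in two consecutive 3D frames,
# fit one rigid transform (Kabsch) and flag points whose residual motion is
# too large to be explained by it, i.e. candidates for a different part.
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points P onto Q (N x 3)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 3))                      # frame k points
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
Q = P @ R_true.T + np.array([0.1, 0.0, 0.0])       # frame k+1: rigid motion
Q[150:] += rng.normal(0, 0.3, (50, 3))             # last 50 points move non-rigidly

R, t = kabsch(P, Q)
residual = np.linalg.norm(Q - (P @ R.T + t), axis=1)
other_part = residual > 0.1                        # assumed threshold
print(f"{other_part.sum()} points assigned to a different (non-rigid) part")
```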
Tripped roll-over is a major cause of truck accidents: when the wheel of a truck hits the pavement's stone lining, or conversely, when a truck tries to get back onto the road and hits the elevated asphalt rim, roll-over may occur. Untripped roll-over accidents involving SUVs driven off-road by inexperienced drivers are also a serious problem. Warning of and preventing untripped roll-over accidents can save lives. The proposed research will investigate and develop more reliable roll-over indicators. Extended active-safety Medlinger roll-over avoidance algorithms will be tested for the first time.
The extensions will include new kinds and combinations of sensors and actuators to improve roll-over avoidance control performance and stability. A novel approach, combining quantitative feedback theory (QFT) and model predictive control (MPC), will be investigated to cope with a wider spectrum of roll-over scenarios. The algorithms will be tested first in simulation and then on an actual autonomous unmanned ground vehicle (UGV) driven in the cooperative autonomous systems (CASY) lab.
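As a simple worked example of the kind of roll-over indicator in question, the lateral load transfer ratio (LTR) can be estimated on-line from lateral acceleration and roll angle. The sketch below uses a textbook rigid-vehicle approximation with assumed vehicle parameters and warning threshold; it is illustrative only and is not the Medlinger algorithm.

```python
# Worked example: a simple untripped roll-over indicator based on the lateral
# load transfer ratio (LTR). Textbook rigid-vehicle approximation with assumed
# parameters; not the Medlinger algorithm.
import numpy as np

G = 9.81       # gravity (m/s^2)
TRACK = 1.9    # track width T (m), assumed
H_CG = 1.1     # center-of-gravity height h (m), assumed

def ltr(a_y: float, roll: float = 0.0) -> float:
    """Approximate LTR = (F_z,right - F_z,left) / (F_z,right + F_z,left).

    Rigid-vehicle, small-angle approximation:
        LTR ~= (2*h/T) * (a_y/g + roll)
    |LTR| approaching 1 means one wheel pair is about to lift off.
    """
    return (2.0 * H_CG / TRACK) * (a_y / G + roll)

for a_y in (2.0, 4.0, 6.0, 8.0):             # lateral accelerations (m/s^2)
    r = ltr(a_y)
    flag = "WARN" if abs(r) > 0.8 else "ok"   # assumed warning threshold
    print(f"a_y = {a_y:.1f} m/s^2 -> LTR = {r:.2f} [{flag}]")
```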
The mobility of autonomous mobile robots over complex urban terrains such as stairways, sidewalks, and outdoor paths is a major challenge in the field of robotics. This proposal seeks to develop a novel flexible track robot, Robotrek, capable of safely carrying 10-30 kg loads over virtually any three-dimensional terrain. The robot is a closed chain of hinged rigid links that can freely adapt its shape to the underlying terrain. While the chain's bottom part adapts to the terrain's geometry, its top part flows through an innovative rigid frame "riding" the chain's rear part. A single powerful motor mounted on the rigid frame drives the entire chain, based on a principle from the kinematics of mechanisms. The resulting robot provides high traction on terrain of any shape while carrying payloads for myriad civilian and military applications.
Robotrek represents a fundamental breakthrough and offers significant advantages over existing robots: a) It is the only robot that can fully adapt its shape to the underlying terrain, thus providing perfect traction and anti-slippage safety. b) It is operated by a single powerful motor, thus providing a simple mechanical structure that can operate reliably during autonomous missions. c) It is operated in an open-loop fashion: the chain's links locally adapt to the terrain's shape during locomotion, without any need for complex centralized control or global sensory feedback. d) Energy "bonus": heavier loads mounted on the chain's top frame provide better ground traction and thus improve locomotion safety.
Dynamically stable legged robots promise significant advantages in navigating unstructured terrain. This capability makes them suitable for a range of important applications in surveillance, terrain exploration, rescue missions, and service.
Dynamically stable legged robots are commonly controlled by networks of coupled oscillators, which model the biological central pattern generators. These systems are capable of autonomously generating and, most importantly, preserving the desired gait despite disturbances. However, disturbances do affect the direction and speed of motion, so while movement continues, the goal-directed behavior is sacrificed. Thus, additional control strategies are needed to modulate the gait and correct the speed and direction of movement.
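To make this open-loop behavior concrete, the sketch below implements a CPG as a Kuramoto-style network of four coupled phase oscillators with trot phase offsets; after a phase disturbance to one leg, the coupling restores the relative phases (the gait) but not the absolute phase, illustrating why goal-directed behavior is sacrificed. The oscillator model and gains are illustrative assumptions, not the controllers proposed here.

```python
# Minimal sketch: a CPG as a network of coupled phase oscillators
# (Kuramoto-style; model and gains are illustrative assumptions).
import numpy as np

OMEGA = 2 * np.pi * 1.5                 # intrinsic stepping frequency (rad/s)
K = 8.0                                 # coupling gain
PHI = np.array([0.0, np.pi, np.pi, 0.0])  # desired trot offsets: LF, RF, LH, RH

def step(theta: np.ndarray, dt: float = 1e-3) -> np.ndarray:
    """One Euler step of the all-to-all coupled oscillator network."""
    dtheta = np.full(4, OMEGA)
    for i in range(4):
        for j in range(4):
            dtheta[i] += K * np.sin(theta[j] - theta[i] - (PHI[j] - PHI[i]))
    return theta + dt * dtheta

theta = PHI.copy()
for k in range(4000):                   # 4 s of simulation
    if k == 1000:
        theta[0] += 1.0                 # disturbance: kick the LF leg phase
    theta = step(theta)

# Relative phase errors, wrapped to (-pi, pi]; near zero => gait preserved.
rel = (theta - theta[0] - (PHI - PHI[0]) + np.pi) % (2 * np.pi) - np.pi
print("residual relative-phase errors after disturbance:", np.round(rel, 3))
```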
Here we propose to integrate event-based sensory feedback to track and modulate the gait. Gait modulation is expected to reduce the effect of disturbances on the planned trajectory, and gait tracking is expected to support self-tracking. These capabilities are critical for autonomous goal-directed behavior.
Specifically, the timing of leg touch-downs will be detected using simple force sensors incorporated in the legs. These events will be used as direct or indirect inputs to the network of coupled oscillators for gait tracking and adaptation. This proposal is motivated by, and extends, our successful application of a similar approach to the control of a robotic yo-yo.
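As a minimal sketch of how such an event could be used for gait tracking, suppose a detected touch-down corresponds to a known nominal oscillator phase; the measured event then corrects the internal phase estimate through a wrapped phase error. The nominal phase and correction gain below are illustrative assumptions.

```python
# Sketch of event-based gait tracking: a force-sensed touch-down is assumed
# to occur at a known nominal oscillator phase; each event pulls the internal
# phase estimate toward it. Nominal phase and gain are assumptions.
import numpy as np

PHASE_TD = np.pi                 # assumed oscillator phase at touch-down
GAIN = 0.5                       # assumed correction gain per event

def on_touch_down(theta_est: float) -> float:
    """Correct the phase estimate using the wrapped error to the event phase."""
    err = np.angle(np.exp(1j * (PHASE_TD - theta_est)))   # wrap to (-pi, pi]
    return theta_est + GAIN * err

theta = 0.3                      # drifted internal phase estimate
for k in range(5):               # successive touch-down events
    theta = on_touch_down(theta)
    err = np.angle(np.exp(1j * (PHASE_TD - theta)))
    print(f"event {k}: remaining phase error = {err:+.3f} rad")
```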
More generally, legged locomotion is an example of a hybrid dynamical system, in which the continuous-time dynamics change at discrete events (here, leg touch-downs and lift-offs). In this framework, the long-term goal of this proposal is to investigate the use of event timing for controlling hybrid systems.
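For a concrete, if simplified, instance of such a hybrid system, the sketch below simulates continuous dynamics punctuated by discrete reset events using numerical event detection; a bouncing ball stands in for the flight/stance transitions of a leg, an analogy introduced here purely for illustration.

```python
# Sketch of a hybrid dynamical system: continuous flight phases punctuated by
# discrete impact events, simulated with event detection. The bouncing ball
# is an illustrative stand-in for leg touch-downs; parameters are assumed.
import numpy as np
from scipy.integrate import solve_ivp

G, RESTITUTION = 9.81, 0.8

def flight(t, s):                 # continuous dynamics: ballistic flight
    return [s[1], -G]

def touch_down(t, s):             # event function: height crosses zero
    return s[0]
touch_down.terminal = True        # stop integration at the event
touch_down.direction = -1         # only downward crossings count

state, t0, events = [1.0, 0.0], 0.0, []
for _ in range(5):                # five flight phases
    sol = solve_ivp(flight, (t0, t0 + 10), state,
                    events=touch_down, max_step=0.01)
    t0 = sol.t_events[0][0]
    events.append(t0)
    state = [0.0, -RESTITUTION * sol.y_events[0][0][1]]   # discrete reset map
print("event times (s):", np.round(events, 3))
```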
This study is concerned with the mixed, bidirectional composition of artificial intelligence and motion planning, as applied to the cooperative decision and control of autonomous unmanned aerial vehicles (UAVs).
A scenario of interest is that of a team of UAVs cooperatively performing multiple tasks on multiple targets.
In this type of scenario, the UAV group must be provided in real time with an assignment plan and with the specific trajectories that each vehicle must follow under its dynamic constraints.
Thus, the high-level artificial intelligence (AI) mission planning problem is naturally coupled with the low-level motion planning problem of optimizing trajectories, making it one of the most challenging problems in cooperative autonomous multi-vehicle operation.
This multidisciplinary research focuses on developing novel methodologies for solving such coupled AI and motion planning problems.
The developed methods will be incorporated into software, and the system's complexity will be analyzed both formally and empirically.
We expect that our synergetic approach will render the real-time solution of large-scale UAV cooperative decision and control problems feasible.
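A minimal sketch of the coupling follows: trajectory costs computed at the low level feed the high-level task assignment. Here straight-line distances stand in for dynamically feasible trajectory costs, and the assignment is solved with the Hungarian algorithm; both are illustrative simplifications of the proposed coupled AI/motion-planning problem.

```python
# Illustrative sketch of the AI/motion-planning coupling: low-level trajectory
# costs feed a high-level task assignment. Straight-line distance is an
# assumed placeholder for a dynamics-respecting trajectory optimizer.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
uavs = rng.uniform(0, 10, (4, 2))        # UAV positions (assumed data)
targets = rng.uniform(0, 10, (4, 2))     # target positions (assumed data)

def trajectory_cost(p: np.ndarray, q: np.ndarray) -> float:
    """Placeholder for a trajectory optimizer honoring vehicle dynamics;
    here simply the straight-line distance."""
    return float(np.linalg.norm(q - p))

C = np.array([[trajectory_cost(u, t) for t in targets] for u in uavs])
rows, cols = linear_sum_assignment(C)    # minimum-cost task assignment
for i, j in zip(rows, cols):
    print(f"UAV {i} -> target {j} (cost {C[i, j]:.2f})")
print(f"total mission cost: {C[rows, cols].sum():.2f}")
```

In the proposed research, the placeholder cost would be replaced by trajectories optimized under the vehicles' dynamic constraints, and the assignment and trajectory layers would iterate rather than run once.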
In this study we look for optimal algorithms for patrolling, searching and surveillance using multiple cooperative autonomous unmanned aerial vehicles (UAVs).
The proposed algorithms will employ realistic dynamical modeling as well as flocking rules via inter-vehicle ad-hoc communication.
Combining recent progress in solving for the optimal search patterns of UAV flocks (MARS Lab) with recent results in inter-vehicle communication, simulation, and control (DSSL) will lead to beyond-state-of-the-art algorithms for cooperative self-organization and search using UAV flocks.
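For illustration, the sketch below implements classical flocking rules of the kind referred to above: separation, alignment, and cohesion computed over neighbors within an assumed communication range. The gains, ranges, and point-mass dynamics are illustrative assumptions, not the MARS Lab or DSSL algorithms.

```python
# Illustrative sketch of flocking via local rules over an ad-hoc communication
# range: separation, alignment, cohesion. Gains and dynamics are assumptions.
import numpy as np

N, RANGE, DT = 10, 3.0, 0.1
K_SEP, K_ALI, K_COH = 1.5, 0.5, 0.3

rng = np.random.default_rng(3)
pos = rng.uniform(0, 5, (N, 2))          # UAV positions
vel = rng.normal(0, 0.5, (N, 2))         # UAV velocities

for _ in range(200):
    acc = np.zeros_like(vel)
    for i in range(N):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nbr = (dist > 0) & (dist < RANGE)            # neighbors in comm range
        if not nbr.any():
            continue
        sep = -np.sum(d[nbr] / dist[nbr, None] ** 2, axis=0)  # avoid collisions
        ali = vel[nbr].mean(0) - vel[i]                       # match velocity
        coh = pos[nbr].mean(0) - pos[i]                       # stay together
        acc[i] = K_SEP * sep + K_ALI * ali + K_COH * coh
    vel += DT * acc
    pos += DT * vel

print(f"mean velocity disagreement: "
      f"{np.linalg.norm(vel - vel.mean(0), axis=1).mean():.3f}")
print(f"mean distance from flock center: "
      f"{np.linalg.norm(pos - pos.mean(0), axis=1).mean():.3f}")
```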