
Brain Machine Interfaces

Fig. 1. Illustration of BMIC system for prosthetic limbs.
The remarkable ability of monkeys and humans to achieve basic movement control of robot arms using signals extracted from motor cortex has now been amply demonstrated [1]-[5]. However, the controlled movements are in general slow, involve only a few degrees of freedom, and are not dynamically demanding (they are executed in environments free of disturbances and loads). Future neuroprosthetic limbs must move as quickly as natural limbs, in three dimensions, and in natural environments.

Brain-machine interactive control (BMIC) of prosthetic limbs for fast, natural movements is a major challenge. The current BMIC paradigm employs a feedforward interface between the brain and the (artificial) limb, often referred to as the “decoder”, whose success relies heavily on the ability of the brain to adapt appropriately using visual feedback in a “certain” (disturbance-free) environment [1]-[8]. Such decoders are typically trained on data from healthy subjects but are eventually implemented as interfaces for amputees or for patients with spinal cord injuries. The motor cortical output of a healthy subject differs substantially from that of an injured patient, and decoders do not account for spurious signals generated in the cerebellum due to the loss of proprioceptive feedback (see Fig. 2A). It has been shown that neural signals can drive a feedforward decoder to predict repeatable, low-speed movements; in this regime the decoder performs well because the motor cortical outputs of healthy subjects and injured patients are very similar. However, the loss of proprioceptive feedback is detrimental when executing fast movements in uncertain environments, and such discrepancies may not be fully compensated for by brain adaptation. The key challenge facing the field is to account for cerebellar inputs so as to achieve high-speed and loaded movements. Thus, we need to design robust decoders for the BMICs of the future that take into account both cerebellar and cortical contributions, and to address the realistic control problems faced by injured or diseased human subjects.
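To make the current feedforward paradigm concrete, the sketch below fits a linear (Wiener-filter-style) decoder that maps binned firing rates to limb velocity by least squares. All signals, tuning weights, and dimensions here are synthetic placeholders, not recorded data or any specific published decoder.

```python
import numpy as np

# Sketch of a feedforward "decoder": a linear map from binned cortical
# firing rates to limb velocity, fit by ordinary least squares.
# Everything below is synthetic stand-in data (hypothetical 32-neuron
# population, Poisson spike counts), purely for illustration.
rng = np.random.default_rng(0)

n_bins, n_neurons = 500, 32
true_W = rng.normal(size=(n_neurons, 2))            # hypothetical tuning weights
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_W + 0.1 * rng.normal(size=(n_bins, 2))

# Fit the decoder on "training" data (in practice, healthy-subject recordings).
W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode activity (in practice, held-out trials) into actuator commands.
decoded = rates @ W_hat
err = np.linalg.norm(decoded - velocity) / np.linalg.norm(velocity)
```

Note that such a decoder is purely feedforward: it has no mechanism to reject disturbances or to correct for signals the training data never contained, which is precisely the limitation discussed above.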

To address this challenge, we will work toward a novel model-based Robust Decoder-Compensator (RDC) architecture for interactive control of fast movements in the presence of uncertainty. The RDC is a feedback interconnection that 1) decodes cortical signals to produce actuator commands that reflect motor intent, 2) corrects for spurious signals generated by the cerebellum in the absence of proprioceptive feedback, and 3) robustifies the interconnection to account for mismatches between models and reality (Fig. 2B). Formally, the goal of the RDC is to minimize the error between the healthy limb trajectory, x_a (not shown here), and the prosthetic trajectory, x̃_a, shown in Fig. 2B. The healthy output is x_a = H x_ref, where H comprises a model of the limb, L, all delays in the feedforward and feedback loops, and the healthy cerebrocerebellar circuit. We therefore want to design D and K (Fig. 2B) to minimize || H x_ref − x̃_a ||. If sufficiently accurate reduced-order models of the limb, the prosthetic, and the cerebrocerebellar processing are known, and if the RDC architecture is fixed, then this optimization problem can be solved either exactly or approximately [12].
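The model-matching flavor of this design objective can be illustrated with a toy discrete-time example: given hypothetical FIR models of the healthy map H and the prosthetic dynamics P, choose decoder coefficients D by least squares so that the prosthetic output P·D·x_ref tracks H·x_ref. This is a sketch under strong simplifying assumptions (scalar signals, known FIR models, no compensator K), not the proposed RDC synthesis.

```python
import numpy as np

# Toy model-matching sketch of the RDC design goal: pick FIR decoder
# coefficients D so that the prosthetic response approximates the healthy
# reference map, i.e. minimize ||H x_ref - x_tilde_a||. H and P below are
# hypothetical impulse responses, not fitted physiological models.
rng = np.random.default_rng(1)

H = np.array([0.0, 0.5, 0.3, 0.15, 0.05])    # "healthy" limb + cerebrocerebellar map
P = np.array([0.0, 0.8, 0.2])                # prosthetic dynamics
n_d = 4                                      # decoder FIR length

x_ref = rng.normal(size=200)                 # intended (reference) trajectory
x_a = np.convolve(x_ref, H)[: len(x_ref)]    # healthy output H x_ref

# x_tilde_a = conv(x_ref, conv(P, D)) is linear in D, so build the
# regression matrix column by column and solve by least squares.
cols = []
for k in range(n_d):
    d_k = np.zeros(n_d)
    d_k[k] = 1.0
    cols.append(np.convolve(x_ref, np.convolve(P, d_k))[: len(x_ref)])
A = np.stack(cols, axis=1)
D, *_ = np.linalg.lstsq(A, x_a, rcond=None)

x_tilde_a = A @ D
err = np.linalg.norm(x_a - x_tilde_a) / np.linalg.norm(x_a)
```

In the actual RDC, the models are uncertain and the interconnection is closed-loop, so the design must also guarantee robustness to the model mismatch, rather than only minimizing nominal tracking error as this sketch does.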

To carry out this ambitious project, we will have the unique opportunity to work with clinicians at the Cleveland Clinic (CC) with expertise in electrophysiology and neurosurgery.
Fig. 2. (A) Schematic of current BMI systems. (B) Schematic of the proposed interactive system with RDC.
Together, we will collect neural spiking activity and local field potential data from patients with implanted electrodes admitted for epilepsy surgery. During recordings of cerebral motor and premotor areas, the patients will perform a behavioral task involving a manipulandum (robotic arm). Patients will attempt to move the manipulandum to targets as quickly as possible while the robotic arm may perturb or resist the patient’s motion. These data will then be used to estimate neuroanatomically based models of the cerebellum (extending work in [9]-[11]) and linear parameter-varying (LPV) phenomenological models of motor and sensory areas. These models will be incorporated in the RDC, and their predictions will be used to compensate for the effects of spurious signals generated by these regions, which no longer receive proprioceptive feedback.
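As a minimal illustration of the LPV identification step, the sketch below fits a gain from neural drive to output that is scheduled on a measured variable (e.g. movement speed). Because the model is linear in its coefficients, ordinary least squares suffices. The signals and the scalar model structure are synthetic stand-ins for the recordings and models described above.

```python
import numpy as np

# Minimal LPV fitting sketch: the gain from input u_t to output y_t is
# scheduled on a measured variable p_t, via y_t = (a0 + a1 * p_t) * u_t.
# The model is linear in (a0, a1), so it is fit by least squares.
# All signals are synthetic placeholders for recorded data.
rng = np.random.default_rng(2)

T = 1000
u = rng.normal(size=T)                 # neural drive (placeholder)
p = rng.uniform(0.0, 1.0, size=T)      # scheduling variable (e.g. speed)
y = (1.0 + 0.5 * p) * u + 0.05 * rng.normal(size=T)

# Regressors [u_t, p_t * u_t]; the coefficients recover (a0, a1).
Phi = np.stack([u, p * u], axis=1)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
a0_hat, a1_hat = coef
```

The same idea extends to dynamic LPV models by adding lagged inputs and outputs to the regressor matrix, at the cost of more parameters to estimate.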

  1. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, and Donoghue JP (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia, Nature, vol. 442, pp. 164-71.
  2. Chapin JK, Moxon KA, Markowitz RS, and Nicolelis MA (1999). Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex, Nat Neurosci, vol. 2, pp. 664-70.
  3. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, and Donoghue JP (2002). Instant neural control of a movement signal, Nature, vol. 416, pp. 141-2.
  4. Wolpaw JR and McFarland DJ (2004). Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans, Proc Natl Acad Sci U S A, vol. 101, pp. 17849-54.
  5. Taylor DM, Tillery SI, and Schwartz AB (2002). Direct cortical control of 3D neuroprosthetic devices, Science, vol. 296, pp. 1829-32.
  6. Acharya S, Fifer MS, Benz HL, Crone NE, and Thakor NV (2010). Electrocorticographic amplitude predicts finger positions during slow grasping motions of the hand, J Neural Eng, vol. 7, no. 4, 046002.
  7. Acharya S, Thakor NV, Schieber MH (2010). Single motor cortex neurons represent the kinematics of multiple digits simultaneously. 40th Annual meeting of Society for Neuroscience.
  8. Aggarwal V, Acharya S, Tenore F, Shin HC, Etienne-Cummings R, Schieber MH, and Thakor NV (2008). Asynchronous decoding of dexterous finger movements using M1 neurons, IEEE Trans Neural Syst Rehabil Eng, vol. 16, pp. 3-14.
  9. Massaquoi SG and Slotine J-JE (1996). The intermediate cerebellum may function as a wave-variable processor, Neuroscience Letters, vol. 215, pp. 60-64.
  10. Jo S and Massaquoi SG (2004). A model of cerebellum-stabilized scheduled hybrid long-loop control of balance, Biol Cybern, vol. 91, no. 3, pp. 188-202.
  11. Jo S and Massaquoi SG (2006). A model of cerebrocerebellar-spinomuscular interaction in the sagittal control of locomotion, Biol Cybern, vol. 96, no. 3, pp. 279-307.
  12. Dahleh MA, and Diaz-Bobillo I (1995). Control of Uncertain Systems: A Linear Programming Approach. Prentice-Hall.