The successes and limitations of brain-computer interface technology

…and why we don’t all have badass robotic exoskeletons

by Claire Warriner

Philadelphia, 1999. A thirsty lab rat vigorously presses the lever in its cage. This act causes a swinging robotic arm to deliver a droplet of water to within reach of the rat’s parched tongue. At the same time, an array of electrodes implanted in the animal’s motor cortex records the activity of about 30 neurons, the neural signals that drive the lever-pressing behavior. As this sequence is repeated, researchers in the Nicolelis Lab amass enough information to form a computational model of the signal that drives the movement. They then switch control of the water-delivery arm from depression of the lever to the rat’s neural signals, and the arm still works. As the rat’s brain continues to issue that specific signal pattern, the robotic arm continues to deliver water droplets. After a few trials, the rat doesn’t even bother to press the lever anymore; it instead rests its white, murine arm casually on the lever. It perhaps realizes that the actual physical manifestation of its intent is unnecessary: it is now controlling the robotic arm with its brain1.

And thus began the era of brain-computer interfaces, or BCIs. A year later, in an experiment in which owl monkeys controlled a joystick, the same lab demonstrated the feasibility of BCI in the primate brain2. An explosion of research in this technology followed, in the hope that it would one day lead to the creation of neuroprostheses: devices that replace a missing sense or motor capability by linking external hardware to the brain.

Figure 1: A water-restricted rat (a) operates a lever (b) to swing an external arm (c) from its resting state at (d) to a water dropper (e). Upon release of the lever, the arm returns to deliver the water droplet. An electrode array (f) in the rat’s motor cortex records the activity of neurons (g). The firing rates of these neurons (h) are analyzed. When control of the swinging arm is switched from lever to neural control (j), the decoded neural activity (i) is used to drive the arm. From Chapin et al., 1999

The collective work of the Nicolelis Lab seemed to come to fruition at the 2014 World Cup: Juliano Pinto, a Brazilian paraplegic man, wore a neuroprosthetic exoskeleton of their design to perform the tournament’s symbolic kick-off. It was anticipated that he would rise from his wheelchair, walk to the soccer ball, and kick it without human assistance3-4. Though it was a historic and highly public moment for scientific research, Pinto had to be transported to the kick-off area on a golf cart and supported by technicians on either side while a ball was gently pitched towards his right foot. His foot did in fact move, but one cannot help noticing that the performance did not live up to the hype.

Neuroprosthetics have developed at a roaring rate in the past decade and a half, but the above event illustrates that brain-computer interface technology still faces significant technical limitations. Widespread clinical adoption is hampered by concerns over invasive surgery, imperfect recording and decoding algorithms, and our limited understanding of neural system function. Neurons produce brief electrical impulses, often referred to as “spikes,” that carry information through their timing and rate. This activity, on the order of microvolts, can be detected by non-invasive means using surface electrodes on the scalp, as was the case with the Nicolelis exoskeleton. However, the signal these surface electrodes capture has lower spatial resolution and provides less detailed information than that of intracranial electrodes. The latter are usually implanted in the cerebral cortex, the highly convoluted brain structure that, though classically thought of as mediating “higher-order” mental tasks, seems to be active during practically every human activity.
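For the curious, here is a minimal sketch (in Python, with made-up spike times) of a first processing step in spike-based BCIs: turning raw spike timestamps into binned firing rates that a decoder can work with.

```python
import numpy as np

# Hypothetical spike timestamps (in seconds) for three recorded neurons;
# in a real system these would come from the electrode array.
spike_times = [
    np.array([0.01, 0.12, 0.15, 0.43, 0.80]),  # neuron 0
    np.array([0.05, 0.07, 0.55, 0.61, 0.62]),  # neuron 1
    np.array([0.30, 0.33, 0.35, 0.90]),        # neuron 2
]

bin_width = 0.1  # seconds; ~100 ms bins are a common choice for decoding
edges = np.arange(0.0, 1.0 + bin_width, bin_width)

# Count spikes in each bin, then convert counts to rates (spikes per second).
rates = np.stack([np.histogram(t, bins=edges)[0] for t in spike_times]) / bin_width
print(rates.shape)  # (3 neurons, 10 time bins)
```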

Most intracranial BCI-based neuroprostheses receive their input from a grid of up to 192 metal electrodes5. A connector affixed to the skull transmits this information to a computer that analyzes the signal and translates it into the movement of a robotic limb. The accuracy of a BCI system increases with the number of neurons recorded and thus with the amount of neural information extracted6. But the number of neurons that can be recorded is constrained by technical and physiological parameters: the number of electrodes that the brain can tolerate, the wiring issues that come with a high number of output channels, the associated computer’s capacity to analyze neural data online, and the maintenance of signal quality. Intracortical BCIs have a shorter life span than their non-invasive counterparts and pose a higher risk of infection and surgical complications. Over time, neurons at the implantation site die, scar tissue forms around the electrodes, and the conductive properties of the electrodes degrade, reducing their ability to capture neural signals7. Despite these drawbacks, the development of invasive BCIs is hotly pursued because of their relatively high signal quality, at least in the short term.
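To see why neuron count matters, consider a toy simulation (all numbers invented) in which a simple linear decoder estimates a two-dimensional hand velocity from noisy, velocity-tuned firing rates. The held-out decoding error shrinks as neurons are added.

```python
import numpy as np

rng = np.random.default_rng(1)
velocity = rng.normal(size=(400, 2))           # simulated x-y hand velocity
train, test = slice(0, 300), slice(300, 400)

for n_neurons in (8, 32, 128):
    # Each neuron's firing rate is a noisy linear function of velocity.
    tuning = rng.normal(size=(2, n_neurons))
    rates = velocity @ tuning + rng.normal(scale=2.0, size=(400, n_neurons))
    # Fit a linear decoder on the first 300 bins; evaluate on the last 100.
    W, *_ = np.linalg.lstsq(rates[train], velocity[train], rcond=None)
    err = np.linalg.norm(rates[test] @ W - velocity[test])
    print(n_neurons, "neurons -> relative error:",
          round(err / np.linalg.norm(velocity[test]), 2))
```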

The most successful intracortical BCI prosthetic to date was implanted in Jan Scheuermann, a 52-year-old tetraplegic woman in Pittsburgh. After about 4 months of training, she was able to feed herself a chocolate bar by controlling a robotic arm with 7 degrees of freedom, one of which was the opening and closing of a robotic hand (fig 2). This particular BCI system was adaptive. The earliest stages of training focused on simple, ballistic gestures and involved help from a computer. Over time, the gestures grew in complexity and computer assistance decreased5 (fig 2c). Like its predecessors, the system that interpreted the neural data needed to be recalibrated almost every day. Though the device was an unequivocal technical success, its long training period, unstable neural read-out, dependence on highly trained technicians, and conspicuous head mount (fig 2a) make it ill suited for plug-and-play clinical use.

In addition to ease of use and portability, an effective BCI prosthetic must remain functional over time. Currently, the materials used for intracortical recordings are somewhat incompatible with neural tissue. Our brain is neither mushy nor hard; it’s about the softness of a young Camembert cheese. It is cushioned by a layer of cerebrospinal fluid that allows it to move somewhat independently of the skull. It is this anatomical feature that makes the long-term recording of neural signals difficult. When the brain shifts relative to the skull, it also moves relative to the rigid electrodes of the BCI system implanted in it. This shearing action can cause tissue damage, which in turn induces inflammation and the formation of scar tissue around the electrode, obstructing transmission of the neural signal. Additionally, the body mounts an immune response to the electrode itself, causing further inflammation, an aggregation of immune cells, and the death of neurons around the electrode site7. Electrode arrays made of flexible wire that would minimize this type of damage have been proposed. The structural components necessary for placing these arrays in brain tissue would be removed post-implantation, leaving the electrode wires in place yet able to move more freely with the brain in relation to the skull6. For a BCI prosthetic to be a viable tool for long-term use, more flexible and biocompatible materials must be used to detect neural signals.

Figure 2: A tetraplegic individual was implanted with a 192-electrode intracortical array (a) and was trained over 4 months to use her motor cortical activity to control a robotic arm (b). The training process was considered adaptive in that the computer assisted the robotic arm’s movement initially but yielded over time to the subject’s total neural control of the prosthetic (c). From University of Pittsburgh Medical Center and Collinger et al., 2013

Once the neural signals are acquired by the electrode arrays, they need to be decoded to extract the relevant information driving the intended movement. The algorithms that perform this extraction are termed “decoders.” In a healthy subject, neural signals would be recorded from the motor cortex during a reach and then used to show the decoder what the activity associated with that reach looks like. In a tetraplegic or locked-in patient, however, physically performing a template reach is impossible. Instead, the patient is instructed to imagine or observe another individual performing the gesture, and that pattern of activity is used to inform the decoder. Perhaps surprisingly, motor cortex activity during a performed reach and an observed reach is quite similar8, making BCI prostheses accessible to locked-in patients who are unable to make any voluntary movement.

Once the template neural signal has been used to calibrate the decoder, the decoder can then “read” subsequent intended movements from the subject’s neural activity. Nearly all decoders use neural signals to extract information about the kinematics of an intended movement, that is, the path a limb takes through space and the speed at which it moves. This method has been shown to be fairly effective. However, the brain additionally encodes information about the amount of force and limb stiffness necessary to carry out a movement, known as kinetics7. For example, the movement one would make to pick up an empty or a full growler of beer would be identical in terms of its kinematics but very different in terms of its kinetics. If you pick up the empty growler with the same force as you would the full one, you might send the thing flying into the air. The more parameters of the movement one wishes to draw from the neural signal, the more neural signal one must have, bringing us back to the problem of array size and recording site number.
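As a rough illustration, here is what that pipeline might look like in Python, with all data simulated: “calibration” pairs binned firing rates with reach velocities to fit a simple linear decoder, and decoding then reads velocity from new activity and integrates it into a movement path. Real decoders (Kalman filters and the like) are more sophisticated, but the idea is the same.

```python
import numpy as np

rng = np.random.default_rng(7)
n_bins, n_neurons, dt = 600, 96, 0.1  # dt: bin width in seconds

# Simulated calibration data: velocity-tuned, noisy firing rates recorded
# while the subject imagines or observes a set of template reaches.
tuning = rng.normal(size=(2, n_neurons))
velocity = rng.normal(size=(n_bins, 2))
rates = velocity @ tuning + rng.normal(scale=1.5, size=(n_bins, n_neurons))

# Calibration: fit a linear map from firing rates to x-y velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decoding: read velocity from a new stretch of activity, then integrate
# it over time to recover the intended movement path.
new_velocity = rng.normal(size=(50, 2))
new_rates = new_velocity @ tuning + rng.normal(scale=1.5, size=(50, n_neurons))
path = np.cumsum((new_rates @ W) * dt, axis=0)
print(path[-1])  # decoded endpoint of the intended movement
```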

The motor system does not operate in a vacuum; it acts in close collaboration with the sensory systems. Somatosensory feedback (the tactile sensation of skin contact, along with proprioception, our sense of the body’s position in space) is essential to everyday motor behavior. This is apparent to anyone who has ever put booties on a dog. The constant pressure on the dog’s feet makes it think it is still in contact with the ground when it steps, resulting in an exaggerated gait. Most current BCIs are guided by visual feedback alone. These unidirectional, or “open-loop,” systems take information from the brain but do not return information about limb position or skin sensation to the brain’s sensory centers. This makes online correction of movement errors very difficult. Bidirectional, or “closed-loop,” BCIs would allow for better error correction and faster learning. Steps have been made in this direction by creating artificial input channels that deliver electrical signals to the sensory cortices based on the movement of the robotic limb. The animal subjects in these tests were able to use the artificial sensory feedback to better guide their BCI-executed movements6, but since it is currently impossible to deliver electrical signals detailed enough to recreate the true sensation of touch or limb position, these artificial sensory cues were largely arbitrary. As our understanding of sensory signaling grows and our technical ability to recreate these signals increases, artificial feedback will become an integral part of BCI prostheses. Such systems, in theory, would have far better error correction and a reduced training period.
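A toy simulation makes the open- versus closed-loop distinction concrete. Suppose (all values invented) a decoder with a small systematic bias steers a cursor toward a target: the open-loop plan is computed once and replayed blindly, while the closed-loop controller re-aims at every step using the sensed cursor position.

```python
import numpy as np

target = np.array([10.0, 0.0])
bias = np.array([0.0, 0.5])  # a systematic decoding error, invented for the demo
steps, gain = 20, 0.3

# Open loop: plan the per-step movement once, then apply it without feedback.
cursor = np.zeros(2)
plan = (target - cursor) / steps
for _ in range(steps):
    cursor += plan + bias    # the bias accumulates, uncorrected
print("open-loop miss:", np.linalg.norm(cursor - target))    # ~10.0

# Closed loop: re-aim at every step from the currently sensed position.
cursor = np.zeros(2)
for _ in range(steps):
    cursor += gain * (target - cursor) + bias
print("closed-loop miss:", np.linalg.norm(cursor - target))  # ~1.7
```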

What is perhaps most fascinating about BCI technology is that, though our understanding of motor coding is incomplete, these systems do work to some degree. The drift in what a neuron encodes (which requires frequent recalibration of BCI systems) and the long training process a subject must undergo to gain control of a robotic limb reflect the cortex’s plastic nature and its ability to adapt over time. Though at first unable to control the limb, the brain learns to modify its activity. A 2009 study used a randomized decoder, rather than an algorithm based on neural activity during natural movement, to translate monkeys’ motor cortical activity into the movement of a cursor. At first, the monkeys were unable to control the cursor’s movements, but after fewer than 8 days they had learned to change their neural activity in a way that accommodated the “naïve” decoder9.
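A back-of-the-envelope version of that finding, with everything invented: fix a random decoder D, and let simple gradient steps on the cursor error stand in for the learning the monkeys did over days of practice. The “brain” never touches D; it only reshapes its own activity, yet the output converges on the intended velocity.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons = 30
D = rng.normal(size=(2, n_neurons)) / np.sqrt(n_neurons)  # fixed "naive" decoder

target_velocity = np.array([1.0, -0.5])
activity = rng.normal(size=n_neurons)  # initial, uncontrolled neural activity

for _ in range(200):
    error = D @ activity - target_velocity
    activity -= 0.1 * (D.T @ error)    # adapt the activity, never the decoder
print(D @ activity)                    # ends up close to [1.0, -0.5]
```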

Maybe our technology does not need to be perfect because our brain meets us halfway. It is able to shape its output into a usable signal, while adaptive BCI decoding algorithms learn to better isolate the meaningful neural patterns. Though perfect decoding would probably reduce training time and increase reliability, a successful BCI prosthetic may not need to flawlessly reproduce the natural signals the motor cortex would otherwise send to the limbs to drive movement. If even a modest channel of communication can be established, brain and machine learn to work with each other through an adaptive process towards the common goal of totally awesome mind-controlled machines.

 

Works Cited

  1. JK Chapin, KA Moxon, RS Markowitz, MAL Nicolelis. Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nature Neuroscience, 1999; 2(7):664-670
  2. J Wessberg, CR Stambaugh, JD Kralik, PD Beck, M Laubach, JK Chapin, J Kim, SJ Biggs, MA Srinivasan, MAL Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 2000; 408:361-365
  3. A Martins & P Rincon. Paraplegic in robotic suit kicks off World Cup. BBC, 12 June 2014. Web, 5 Nov 2014.
  4. I Sample. Mind-controlled robotic suit to debut at World Cup 2014. The Guardian, 1 Apr, 2014. Web, 5 Nov, 2014.
  5. JL Collinger, B Wodlinger, JE Downey, W Wang, EC Tyler-Kabara, DJ Weber, AJ McMorland, M Velliste, ML Boninger, AB Schwartz. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet, 2013; 381:557-564
  6. MA Lebedev, AJ Tate, TL Hanson, Z Li, JE O’Doherty, JA Winans, PJ Ifft, KZ Zhuang, NA Fitzsimmons, DA Schwarz, AM Fuller, JH An, and MAL Nicolelis. Future developments in brain-machine interface research. CLINICS 2011; 66(S1):25-32
  7. SL Bensmaia, LE Miller. Restoring sensorimotor function through intracortical interfaces: progress and looming challenges. Nature Reviews Neuroscience, 2014; 15:313-325
  8. D Tkach, J Reimer, NG Hatsopoulos. Congruent activity during action and action observation in motor cortex. J Neurosci, 2007; 27:13241-13250
  9. K Ganguly & JM Carmena. Emergence of a stable cortical map for neuroprosthetic control. PLoS Biology, 2009; 7(7):1-13

