Advanced Neural Interfaces for Bionic Hands

Neuroprosthetics cover image

Our brains are intimately connected to our bodies by a sophisticated network of nerves and touch receptors. Even the simple act of grasping an object involves a fully integrated system of remarkable complexity and nuance. It is time to incorporate these capabilities into our bionic hands.

The Limitations of Myoelectric Control Systems

Myoelectric arms/hands use sensors to detect muscle movements in the residual limb. These sensors can be placed against the skin or surgically embedded and can offer users either direct control of the myoelectric device or control based on advanced pattern recognition systems.
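To make the pattern-recognition approach concrete, here is a minimal, hypothetical Python sketch: windows of EMG samples are reduced to simple mean-absolute-value features and matched against trained gesture templates. Real systems use far richer features and classifiers; every name and value below is illustrative, not taken from any commercial device.

```python
# Minimal sketch of pattern-recognition myoelectric control (hypothetical).
# Each EMG window is reduced to mean-absolute-value (MAV) features per
# channel, then matched to the nearest trained "gesture" centroid.

def mav_features(window):
    """Mean absolute value per EMG channel (window: list of channel sample lists)."""
    return [sum(abs(s) for s in ch) / len(ch) for ch in window]

def nearest_gesture(features, centroids):
    """Return the gesture whose trained centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda g: dist(features, centroids[g]))

# Toy two-channel training centroids (hypothetical values).
centroids = {
    "open":  [0.2, 0.8],   # extensor muscles active
    "close": [0.9, 0.1],   # flexor muscles active
}

# A window showing strong flexor activity on channel 1.
window = [[0.85, -0.95, 0.9], [0.1, -0.15, 0.05]]
print(nearest_gesture(mav_features(window), centroids))  # prints "close"
```

Direct control, by contrast, maps one muscle signal straight to one prosthesis action, which is simpler but supports fewer distinct movements.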

These types of control systems are unquestionably useful. For example, consider this young lady, who lost both her hands to meningitis when she was only 15 months old. Here she is pouring a glass of water using her direct-control myoelectric Hero Arms from Open Bionics:

This and the many tasks she demonstrates in other videos would be extremely difficult, if not impossible, without this technology.

But here are a couple of gifs demonstrating some myoelectric shortcomings at the Cybathlon event in 2016. In this first gif, the user has difficulty grasping a cone-shaped object:

i-Limb cone grasp attempt at Cybathlon

In this next example, the user is unable to pick up a clothespin in a position that he can use to pin an article of clothing to the clothesline:

You may think these are merely examples of poor electromechanical dexterity, but there is more to it than that. The bionic hand being used here is an i-Limb. It has a feature called “automatic finger stalling”, which causes a finger to stall when it encounters a certain amount of resistance while allowing the other fingers to continue closing until they also meet resistance. This is very similar to how our natural hands behave when grasping an irregularly shaped object.
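As a rough illustration of how finger stalling behaves, here is a hypothetical Python sketch. The thresholds, step sizes, and data structures are assumptions for the sake of the example, not the i-Limb's actual firmware logic:

```python
# Hypothetical sketch of "automatic finger stalling": each finger closes
# step by step, but stops moving once its sensed resistance reaches a
# threshold, while the remaining fingers continue to close.

STALL_THRESHOLD = 0.8  # hypothetical normalized resistance limit

def close_hand(resistance_per_step):
    """Close all fingers; return each finger's final position (0=open, 1=closed).

    resistance_per_step: one {finger: resistance} reading per time step.
    """
    fingers = {name: 0.0 for name in ("thumb", "index", "middle", "ring", "little")}
    stalled = set()
    for step in resistance_per_step:
        for name in fingers:
            if name in stalled:
                continue  # this finger already met resistance and stays put
            if step.get(name, 0.0) >= STALL_THRESHOLD:
                stalled.add(name)  # finger contacts the object and stalls
            else:
                fingers[name] = min(1.0, fingers[name] + 0.25)
    return fingers

# Grasping a cone: the index finger meets the surface immediately and
# stalls, while the other fingers keep closing around the shape.
steps = [{"index": 0.9}, {}, {}, {}]
print(close_hand(steps))
```

The point of the mechanism is that the hand conforms to the object automatically, so the user does not have to command each finger individually.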

Now, picking up a cone is not easy. But the user's inability to make a better attempt at this task is in large part a myoelectric control problem. Either she lacks the training to attempt a grip where finger stalling will help her, or she knows what to do but can’t make her bionic hand do it.

The failed attempt to grasp the clothespin is even more telling. When we grab something like a clothespin with our natural hand, we don’t attempt to pick it up so precisely. We don’t need to. As long as we grab it in the right general area, we can use our sense of touch combined with multiple small finger adjustments to achieve the perfect grip. We don’t even need to look at it as we do this — our sense of touch alone allows us to complete the task.

The goal of an advanced neural interface is to connect a bionic hand to the brain in a way that restores this natural intuitive control and sensory feedback.

Upper Limb Neural Interface

Scientists are also trying to make bionic control systems more reliable. The fact is, using skin-surface sensors to detect electrical muscle signals is incredibly challenging. Inadvertent muscle movements can trigger the wrong commands. Sensors can shift away from the muscles they are trying to monitor or lose contact with the skin. Changes in temperature, humidity, and the state of the residual limb can all degrade the quality of myoelectric signals.

Repeated failures like these shake the user’s confidence in a prosthetic, making it less useful than it should be.

Surgically embedding the myoelectric sensors does help with control issues but does nothing for sensory feedback. And if a patient is going to incur the cost and inherent risks of surgery, why not attempt to improve both control and sensory capabilities?

The History of Neural Interfaces

Neural interfaces date back to 1924, when Hans Berger recorded the first human electroencephalogram (EEG). Their first involvement in medicine began with the introduction of the cochlear implant and the heart pacemaker in the 1950s, though it is doubtful that anyone viewed the interfaces used by these devices as separate from the devices themselves.

The modern concept of a neural interface began to emerge in the 1970s with research on brain-computer interfaces (BCI).

The expansion of this concept as a means to control bionic limbs is a much more recent development, only achieving critical mass with funding from programs like Hand Proprioception and Touch Interfaces (HAPTIX), launched in 2015 by the U.S. Government’s Defense Advanced Research Projects Agency (DARPA).

Within a few short years, universities working with DARPA were already creating impressive prototypes, as shown in this lighthearted video by the University of Utah:

How Neural Interfaces are Used in Bionic Hands

We would explain this to you in writing but there happens to be a PBS video that does a great job of this already. It involves an early prototype built by Case Western Reserve University — another of DARPA’s university partners:

As the video shows, sensors on the bionic hand transmit signals to electrodes that have been surgically embedded in the user’s arm. The electrodes stimulate the nerves that they are attached to, which then transmit the requisite information to the brain.

That’s how the sensory feedback portion of the system works. The methods used to exert control over the bionic hand vary. In some cases, a myoelectric control system is still used:

DARPA's Hybrid Neural Interface for Bionic Hands

The advantage of this hybrid model over a traditional myoelectric system is that the user’s attempted actions are better informed by the sensory feedback.

Other approaches use embedded electrodes for two-way communication:

DARPA Two-Way Neural Interface

In this model, sensors interact directly with motor nerves, while the sensory nerves are still stimulated as described above. In theory, this direct interaction with motor nerves should eliminate the remaining problems with embedded myoelectric sensors.

Another, more recent approach, by a company called Integrum, is to offer a two-way neural interface as part of an osseointegrated implant (called the “e-Opra Implant System”):

Integrum e-Opra Implant System

This approach uses embedded myoelectric sensors for control and nerve stimulation for sensory feedback as part of an integrated system.

All of these approaches are promising but challenges remain.

Remaining Challenges

Challenge #1 — Eliminate Invasive Surgery

Surgery is surgery, with all its attendant risks of scarring and infection. It is also expensive.

Fortunately, researchers at the University of Pittsburgh recently discovered that existing spinal cord stimulators can be used to produce a sense of touch in missing limbs.

Some 50,000 people already receive implants of these stimulators every year in the U.S. alone, which means there are doctors all over the world who are already trained in the procedure.

Even more important, it is a simple outpatient procedure, thereby avoiding the potential complications of more invasive surgery.

There is still a fair bit of work to be done before this can serve as a neural interface for a bionic hand, but here is what one patient said about her experience with the new technique:

Challenge #2 — Improve Sensory Capabilities

Touch sensors are useful but to truly take advantage of neural interfaces, we need to expand the sensory capabilities of our bionic hands. One solution might be to use electronic skin.

Challenge #3 — Improve Signal Processing and Interpretation

Signals sent from bionic sensors to electrodes must be processed and interpreted to determine the correct stimulation of the attached nerves. Similarly, commands sent from the brain to the nerves must be processed and interpreted to determine the signals that need to be sent to the bionic hand’s control system.
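For a concrete (and heavily simplified) picture of the sensory side, here is a hypothetical Python sketch that maps a fingertip pressure reading to nerve-stimulation parameters. The linear mapping and parameter ranges are illustrative assumptions, not values from any real device:

```python
# Hypothetical sketch: translate a fingertip pressure reading into
# nerve-stimulation parameters. Stronger touch -> wider pulses and
# higher frequency, clamped to an assumed safe range.

def pressure_to_stimulation(pressure, max_pressure=10.0):
    """Map a pressure reading (in newtons) to (pulse_width_us, frequency_hz).

    All ranges here are illustrative assumptions for the sketch.
    """
    level = max(0.0, min(1.0, pressure / max_pressure))  # normalize to [0, 1]
    pulse_width_us = 50 + level * 150   # 50-200 microseconds
    frequency_hz = 10 + level * 90      # 10-100 Hz
    return pulse_width_us, frequency_hz

print(pressure_to_stimulation(5.0))   # mid-range touch -> (125.0, 55.0)
```

A real system must do this for many sensors at once, continuously and with low latency, which is part of why the processing burden is so high.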

All of this is not only processing-intensive; it also requires sophisticated artificial intelligence.

Currently, the AI routines are still too specific. That is, they require too much task-specific training.

For example, a baseball and an orange are quite similar. The baseball has seams, whereas the orange is heavier with pitted skin. But both objects are of a similar size and shape.

When neural interface AI routines have been trained to handle a baseball, they should know roughly how to handle an orange without having to be retrained all over again.

Right now, that is not the case with many similar tasks, which imposes too high a training cost on all involved.

Challenge #4 — Reduce Costs

In what is a near-universal complaint about all bionic technologies, we need to get the costs down to make the technologies more accessible. There is no easy solution to this problem. We just have to keep pushing.

If we need a little hope on this goal, look no further than the cost of traditional myoelectric systems. A few years ago, even the most basic myoelectric arm cost many tens of thousands of dollars. Now there are 3D-printed myoelectric devices available for a fraction of that price.

Related Information

For more information on sensory feedback for bionic hands, see our latest article on the subject.

For more information on bionic hand control systems, see our article on that topic.

For a comprehensive description of all current upper-limb technologies, devices, and research, see our complete guide.