Advanced Neural Interfaces for Bionic Hands

Neuroprosthetics Cover Image

Bionic hands will only reach their full potential if we restore true thought control and sensory feedback, which requires a neural interface.

The Limitations of Traditional Myoelectric Control Systems

Myoelectric arms and hands use sensors to detect the electrical activity of muscles in the residual limb. These sensors are placed against the surface of the skin and require the user to flex specific muscles to control a bionic hand.
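
To make the idea concrete, here is a deliberately simplified Python sketch of dual-site direct control: one muscle site closes the hand, the other opens it, based on the smoothed strength of each signal. The channel names, thresholds, and sample values are our own illustrations, not the firmware of any actual prosthesis:

```python
# Illustrative sketch of dual-site "direct control": two surface EMG
# channels (e.g., wrist flexors and extensors) are rectified, smoothed,
# and compared against thresholds to open or close the hand.
# Channel names and threshold values are hypothetical.

def smoothed_amplitude(samples):
    """Mean absolute value of a short window of EMG samples."""
    return sum(abs(s) for s in samples) / len(samples)

def direct_control(flexor_window, extensor_window,
                   close_threshold=0.2, open_threshold=0.2):
    """Map two muscle signals to a simple open/close/hold command."""
    flex = smoothed_amplitude(flexor_window)
    ext = smoothed_amplitude(extensor_window)
    if flex > close_threshold and flex > ext:
        return "CLOSE"
    if ext > open_threshold and ext > flex:
        return "OPEN"
    return "HOLD"

# Example: a strong flexor contraction closes the hand.
print(direct_control([0.4, -0.5, 0.3, -0.4], [0.02, -0.03, 0.01, -0.02]))
```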

These types of control systems have been useful. For example, consider this young lady, who lost both her hands to meningitis when she was only 15 months old. Here she is pouring a glass of water using her direct-control myoelectric Hero Arms from Open Bionics:

This and the many tasks she demonstrates in other videos would be extremely difficult, if not impossible, without this technology.

But here are a couple of short videos demonstrating some myoelectric shortcomings at the Cybathlon event in 2016. In this first video, the user has difficulty grasping a cone-shaped object:

It might appear at first glance that the hand is not capable of picking up a cone but this is not the case. The hand used here is an i-Limb. It has a feature called “automatic finger stalling”, which causes fingers to stall when they encounter a certain amount of resistance but allows other fingers to continue to close until they also meet resistance. This should allow the user to grasp an irregular-shaped object like a cone.

Her inability to do so is in large part a myoelectric control problem. Either she lacks the training to attempt a grip where finger stalling will help her or she knows what to do but can’t make her bionic hand do it. To read more about these types of problems, please see our article on Finding the Right Myoelectric Control System.

In this next example, the user has difficulty picking up a clothespin:

Part of the problem here is his attempt to grasp the clothespin so precisely. We don’t do this with our natural hands. As long as we can grab an object in the right general area, we can use our sense of touch combined with multiple small finger adjustments to achieve the perfect grip.

The goal of an advanced neural interface is to connect a bionic hand to the brain in a way that restores this intuitive combination of user control and sensory feedback.

Recent Advances in Control Systems

Scientists are trying to make bionic control systems more reliable. Flexing specific muscles is awkward and tiring, so many companies are turning to pattern recognition systems. In these systems, the user simply thinks about moving his hand in a certain way. This triggers a pattern of muscle movements, which a group of sensors detects and translates into a command or sequence of commands for the bionic hand. It doesn’t matter which pattern is triggered as long as it is repeatable. You can read about these types of systems in our article on advanced pattern recognition systems.
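
For readers who want a feel for what "pattern recognition" means in practice, here is a minimal Python sketch: multi-channel EMG windows are reduced to one simple feature per channel and matched against the nearest trained pattern. The gestures, channel count, and simulated data are invented for illustration; commercial systems use richer features and far more sophisticated classifiers:

```python
# A minimal sketch of myoelectric pattern recognition: multi-channel EMG
# windows are reduced to simple features (mean absolute value per channel)
# and classified with a nearest-centroid rule. The gestures and data here
# are hypothetical.
import numpy as np

def features(window):
    """window: (n_samples, n_channels) EMG -> one feature per channel."""
    return np.mean(np.abs(window), axis=0)

def train(labeled_windows):
    """labeled_windows: dict of gesture name -> list of EMG windows."""
    return {gesture: np.mean([features(w) for w in windows], axis=0)
            for gesture, windows in labeled_windows.items()}

def classify(window, centroids):
    """Return the trained gesture whose centroid is closest to this window."""
    feat = features(window)
    return min(centroids, key=lambda g: np.linalg.norm(feat - centroids[g]))

# Hypothetical 4-channel training data for two repeatable "thought" patterns.
rng = np.random.default_rng(0)
training = {
    "hand_open":  [rng.normal(0, [1.0, 0.2, 0.2, 0.2], (200, 4)) for _ in range(10)],
    "hand_close": [rng.normal(0, [0.2, 0.2, 1.0, 0.2], (200, 4)) for _ in range(10)],
}
centroids = train(training)

new_window = rng.normal(0, [0.9, 0.2, 0.2, 0.2], (200, 4))
print(classify(new_window, centroids))  # -> "hand_open"
```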

Unfortunately, even good pattern recognition systems can be defeated by poor sensor data, a recurring weakness of skin-surface sensors. Inadvertent muscle movements can trigger the wrong commands. Sensors can shift away from the muscles they are trying to monitor or lose contact with the skin. Changes in temperature, humidity, and the state of the residual limb can all degrade the quality of myoelectric signals.

Repeated failures like this shake the user’s confidence in a prosthesis, making it less useful than it should be.

One solution to this problem is to surgically embed the sensors in the patient’s residual limb muscles. This helps eliminate the control sensor problem but does nothing for sensory feedback. And if a patient is going to incur the cost and inherent risks of surgery, why not improve both control and sensory capabilities?

Restoring a Sense of Self

Almost all medical problems are more complex than they appear. When we talk about user control over bionic hands, most of us interpret this as getting the hand to obey the user’s intent. But is that intent well-informed? For example, to improve your grip, you need to know both the current location of your hand and the desired location.

We track the location of our natural hands and all our joints using proprioception. More specifically, our brain knows the position of our limbs relative to our body based on sensory feedback from muscle pairs. For example, if you curl your arm toward your chest, you contract your biceps, which stretches your triceps. Your brain gets sensory feedback from these muscle movements and knows the precise location of your limbs at all times.

A recent procedure called the Agonist-antagonist Myoneural Interface (AMI) restores this capability for amputees (or, more typically, ensures it is never lost at the time of amputation). In this procedure, surgeons recreate muscle pairs that would otherwise be severed:

AMI Recreating Agonist-Antagonist Muscle Pairings

Devices like bionic hands are then calibrated to match the movements of the muscle pairs via myoelectric sensors. This means that the brain tracks the movements of its missing limb via the muscle pairs, and since the bionic hand mirrors these movements, the brain is effectively tracking the bionic hand!
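
Here is a rough Python sketch of the kind of calibration step this implies: the balance between the agonist and antagonist signals is mapped to a joint angle so that the bionic joint mirrors the muscle pair. The calibration constants and sensor readings are hypothetical:

```python
# A simplified sketch of how an AMI-style muscle pair might be calibrated
# to drive a bionic joint: the difference between agonist and antagonist
# activity (measured by implanted myoelectric sensors) is mapped to a
# joint angle, so the hand mirrors what the muscle pair is doing.
# The calibration constants and sensor values are invented for illustration.

def calibrate(rest_agonist, rest_antagonist, max_agonist, max_antagonist):
    """Record resting and fully contracted readings for the muscle pair."""
    rest_balance = rest_agonist - rest_antagonist
    max_balance = max_agonist - max_antagonist
    return {"rest": rest_balance, "span": max_balance - rest_balance}

def joint_angle(agonist, antagonist, cal, max_angle_deg=90.0):
    """Map the agonist/antagonist balance to a 0-90 degree joint angle."""
    balance = (agonist - antagonist - cal["rest"]) / cal["span"]
    return max(0.0, min(1.0, balance)) * max_angle_deg

cal = calibrate(rest_agonist=0.1, rest_antagonist=0.1,
                max_agonist=1.0, max_antagonist=0.2)
print(joint_angle(agonist=0.55, antagonist=0.15, cal=cal))  # ~45 degrees of flexion
```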

This is an implicit neural interface. However, this type of system still lacks sensory feedback. For that, we need an explicit neural interface.

The History of Neural Interfaces

Neural interfaces have roots reaching back to 1924, when the first human EEG was recorded. Practical devices arrived with the heart pacemaker and the cochlear implant in the 1950s, though it is doubtful that anyone viewed the interfaces used by these devices as separate from the devices themselves.

The modern concept of a neural interface began to emerge in the 1970s with research on brain-computer interfaces (BCI).

The expansion of this concept as a means to control bionic limbs is a much more recent development, only achieving critical mass with funding from programs like Hand Proprioception and Touch Interfaces (HAPTIX), launched in 2015 by the U.S. Government’s Defense Advanced Research Projects Agency (DARPA).

Within a few short years, universities working with DARPA were already creating impressive prototypes, as shown in this lighthearted video by the University of Utah:

How Neural Interfaces are Used in Bionic Hands

We would explain this to you in writing but there happens to be a PBS video that does a great job of this already. It involves an early prototype built by Case Western Reserve University — another of DARPA’s university partners:

As the video shows, sensors on the bionic hand transmit signals to electrodes that have been surgically embedded in the user’s arm. The electrodes stimulate the nerves that they are attached to, which then transmit the requisite information to the brain.
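
As a simplified illustration of that sensory-feedback path, the sketch below maps a fingertip pressure reading to stimulation parameters, with stronger contact producing a higher pulse frequency. The pressure scale and frequency range are ours, not values from any actual implant:

```python
# A hedged sketch of the sensory-feedback direction: a fingertip pressure
# reading is translated into nerve-stimulation parameters. Research systems
# often vary pulse frequency and/or intensity with contact force; the
# ranges used here are illustrative only.

def encode_touch(pressure, max_pressure=10.0,
                 min_freq_hz=10.0, max_freq_hz=100.0,
                 pulse_amplitude_ma=1.0):
    """Map a pressure reading (0..max_pressure, arbitrary units) to a
    stimulation pulse train: stronger contact -> higher pulse frequency."""
    if pressure <= 0:
        return None  # no contact, no stimulation
    level = min(pressure / max_pressure, 1.0)
    return {
        "frequency_hz": min_freq_hz + level * (max_freq_hz - min_freq_hz),
        "amplitude_ma": pulse_amplitude_ma,
    }

print(encode_touch(2.5))   # light touch -> low-frequency stimulation
print(encode_touch(9.0))   # firm grip  -> high-frequency stimulation
```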

That’s how the sensory feedback portion of the system works. The methods used to exert control over the bionic hand vary. In some cases, a myoelectric control system is still used:

DARPA's Hybrid Neural Interface for Bionic Hands

The advantage of this hybrid model over a traditional myoelectric system is that the user’s attempted actions are better informed by the sensory feedback.

Other approaches use embedded electrodes for two-way communication:

DARPA Two-Way Neural Interface

In this model, sensors interact directly with motor nerves, while the sensory nerves are still stimulated as described above. In theory, this direct interaction with motor nerves should eliminate the remaining problems with embedded myoelectric sensors.

Another, more recent approach, from a company called Integrum, is to offer a two-way neural interface as part of an osseointegrated implant (called the e-Opra Implant System):

Integrum e-Opra Implant System

This approach uses embedded myoelectric sensors for control and nerve stimulation for sensory feedback as part of an integrated system.

The Current State of Neural Interfaces

The “Science of Touch” video above is six years old. If you want to truly understand how far we’ve come with neural interfaces, watch the opening segment of this video from 1:27 to 10:49. As you do so, ignore some of the technical terms and focus instead on the general nature of Dr. Tyler’s explanation:

Some of the central points of this segment are:

  • The nervous system remains highly organized all the way from the peripheral nerves up to the brain, or at least high into the spinal cord. Even after an amputation near the shoulder, the system is still organized enough to differentiate individual finger sensations.
  • We are slowly learning how to communicate with this system in increasingly sophisticated ways.

Now watch the segment from 11:31 to 27:07. We know that this is a much longer video segment than we typically show, but we encourage you to watch it nonetheless. Not only will it give you a great understanding of the current status of neural interfaces for bionic hands, it will also fill you with hope and optimism for the future. These guys are going to figure this out!

Remaining Challenges

Challenge # 1 — Eliminate Invasive Surgery

Surgery is surgery, with all its attendant risks of scarring and infection. It is also expensive.

In looking at the preceding videos, it is difficult to imagine implementing a neural interface without surgery. However, there are a few possibilities.

For example, researchers at the University of Pittsburgh recently discovered that existing spinal cord stimulators can be used to produce a sense of touch in missing limbs. With 50,000 people already receiving implants of these stimulators every year in the U.S. alone, that means there are doctors all over the world who are already trained in this procedure.

Even more important, it is a simple outpatient procedure, thereby avoiding the potential complications of more invasive surgery.

There is still a fair bit of work to be done before this can be used as a neural interface for a bionic hand but here is what one female patient said about her experience with the new technique:

Challenge # 2 — Improve Sensory Capabilities

Touch sensors are useful but to truly take advantage of neural interfaces, we need to expand the sensory capabilities of our bionic hands. One solution might be to use electronic skin.

Challenge # 3 — Improve Signal Processing and Interpretation

Signals sent from bionic sensors to electrodes must be processed and interpreted to determine the correct stimulation of the attached nerves. Similarly, commands sent from the brain to the nerves must be processed and interpreted to determine the signals that need to be sent to the bionic hand’s control system.

All of this is not only processing-intensive; it also requires sophisticated artificial intelligence.
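
To show how the two directions fit together, here is a bare-bones sketch of one cycle of that bidirectional loop. The decode and encode functions are placeholders for whatever models a real system would use (for example, the pattern-recognition and touch-encoding sketches earlier in this article):

```python
# A high-level sketch of the bidirectional processing loop described above:
# each cycle decodes the user's intent from recorded signals and encodes
# the hand's sensor readings back into nerve stimulation. All function
# arguments are stand-ins, not an actual device API.

def control_cycle(recorded_signals, hand_sensors, decode, encode,
                  send_to_hand, stimulate_nerves):
    """One iteration of the two-way loop: intent out, sensation back in."""
    command = decode(recorded_signals)   # nerve/EMG signals -> hand command
    send_to_hand(command)                # drive the prosthesis
    stimulus = encode(hand_sensors)      # pressure, position -> pulse params
    if stimulus is not None:
        stimulate_nerves(stimulus)       # close the sensory loop

# Trivial wiring with stand-in functions, just to show the data flow.
control_cycle(
    recorded_signals=[0.4, 0.1, 0.0, 0.1],
    hand_sensors={"index_pressure": 3.2},
    decode=lambda sig: "CLOSE" if max(sig) > 0.3 else "HOLD",
    encode=lambda s: {"frequency_hz": 10 * s["index_pressure"]},
    send_to_hand=lambda cmd: print("hand command:", cmd),
    stimulate_nerves=lambda stim: print("stimulating at", stim),
)
```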

Currently, the AI routines are still too specific. That is, they require too much task-specific training.

For example, a baseball and an orange are quite similar. The baseball has seams, while the orange is a little heavier and has pitted skin, but both objects are roughly the same size and shape.

When neural interface AI routines have been trained to handle a baseball, they should know roughly how to handle an orange without having to be retrained all over again.

Right now, that is not the case with many similar tasks, which imposes too high a training cost on all involved.
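
Here is a toy illustration of the kind of generalization we are describing: rather than retraining from scratch, the system reuses the grip settings of the most similar object it already knows. The object features and grip labels below are invented for the example:

```python
# Illustrative only: reuse learned grasp settings from the most similar
# known object instead of retraining. Features are (diameter_cm, weight_g,
# surface_friction); a real system would normalize these scales and use
# far richer descriptions.
import math

known_objects = {
    "baseball": {"features": (7.3, 145, 0.7), "grip": "spherical, moderate force"},
    "pen":      {"features": (1.0,  10, 0.3), "grip": "precision, light force"},
}

def closest_known(features):
    """Nearest neighbour over the known objects' feature vectors."""
    return min(known_objects,
               key=lambda name: math.dist(features, known_objects[name]["features"]))

# An orange is close to a baseball in size and weight, so its grip is reused.
orange = (7.5, 180, 0.6)
print(known_objects[closest_known(orange)]["grip"])  # -> "spherical, moderate force"
```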

Another problem is that the stimulation of nerves can sometimes be imprecise and the brain seems incapable of adjusting its sensory map to compensate. This has implications for both user control and sensory feedback.

Challenge # 4 — Reduce Costs

In what is a near-universal complaint about all bionic technologies, we need to get the costs down to make the technologies more accessible.

There is no easy solution to this problem. But if we need a little hope on this goal, look no further than the cost of traditional myoelectric systems. A few years ago, even the most basic bionic hand cost many tens of thousands of dollars. Now there are 3D-printed bionic hands available for a fraction of that price.

We can do this. We just need to keep pushing!

Related Information

For information on sensory feedback, please see Sensory Feedback for Bionic Hands, Sensory Feedback for Bionic Feet, and Understanding Bionic Touch.

Click this link for more information on bionic hand control systems.

For a comprehensive description of all current upper-limb technologies, devices, and research, see our complete guide.