Grasping the Future: Automating the Assistive Robot for Wheelchair Enabled Patients

You’re thirsty. You see a water bottle in front of you and you reach out to grab it. You bring it to your mouth to take a sip, and now you are refreshed! Yes, I have just described a task as banal as drinking from a bottle, but this might be a process that is banal to just you and me. You see, with all our faculties intact, cognitive and gross motor skills included, grasping a water bottle and drinking from it can take as little as one second from the instant we first think of drinking the water. It’s easy, it’s quick, and you might never give it a second thought. For others, however, this process is anything but. For a person with reduced motor capabilities who requires a wheelchair, these seemingly simple tasks are often virtually impossible. A person with cerebral palsy, rheumatoid arthritis, a severe spinal cord injury, or age-related loss of mobility will have difficulty doing something as essential as drinking water from a glass or a bottle.

In light of this problem, a major focus of biomedical research has been the development of rehabilitation systems that give back to people in wheelchairs the sense of autonomy that makes our lives seamlessly unburdened by everyday minutiae. Professor Sofiane Achiche, founder of the Mechatronics Lab at L’École Polytechnique de Montréal, and the members of his lab have identified this societal need and are applying their most innovative ideas and extensive expertise to develop the best solutions. Now you might be asking yourself: he founded a mecha-what lab? As Prof. Achiche explains, mechatronics is simply a field of science that encapsulates the design and development of systems that integrate mechanical, electrical, and algorithmic components. There might be no one better poised to provide such a definition. After obtaining his mechanical engineering degree from L’École Polytechnique, he pursued graduate studies specializing in mechatronic design, from the microscopic to the macroscopic scale, as well as artificial intelligence. He then completed a post-doctorate at the Technical University of Denmark, where his focus shifted to mechanical design theory. Six years ago, he returned to L’École Polytechnique with this varied expertise and research experience in hand and founded a mechatronics lab, which has grown from 5 graduate students to just over 25 in the last three years! And what he and his team are doing is at the forefront of one domain of mechatronic systems: robotics, particularly assistive robotics.

Prof. Sofiane Achiche, along with his main collaborator Prof. Maxime Raison and students like Dominique Beaini, a Ph.D. candidate in mechanical engineering, has explored many ways to automate assistive robots designed for people in wheelchairs with reduced motor capacity. This project began when Kinova, a fast-growing robotic arm producer based in Montreal, approached Prof. Achiche with a proposal to collaborate on finding new ways to operate their assistive robots. Their current robotic arms, like the JACO model, are powered by motors controlled by a joystick. While the joystick controls and the robot’s movements are precise, the joystick itself can be difficult to manipulate for someone with a physical disability, hence the need for more automated operation of their systems. In their April 2017 article, “Fast scene analysis using vision and artificial intelligence for object prehension by an assistive robot”, Beaini and Prof. Achiche developed a first-of-its-kind, end-to-end solution to the problem they faced. Using an Xbox Kinect camera, they acquire images from a targeted visual field and process them to locate objects in the scene, classify and identify them using artificial intelligence, and then, knowing what the object is, send the right command to the arm to grasp it correctly. The major advantage of their system is that the scene analysis takes about 0.6 seconds to complete, making it the fastest method to date, and the object grasping takes about 15 seconds, all of which amounts to significant time savings for a person in a wheelchair who may not have been able to grab the object in the first place.
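To make that perceive-classify-grasp loop a little more concrete, here is a minimal sketch in Python of how such a pipeline could be organized. It is not the authors’ published code; the function names, the toy depth data, and the grasp lookup table are assumptions made purely for illustration.

# A minimal, illustrative sketch of the pipeline described above -- not the
# authors' published code. Function names and the toy data are assumptions.
import numpy as np

def acquire_depth_frame():
    """Stand-in for grabbing a depth frame from the Kinect sensor."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.5, 2.0, size=(480, 640))  # depth in metres

def locate_objects(depth):
    """Toy segmentation: treat anything closer than 1 m as an 'object' blob."""
    mask = depth < 1.0
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return []
    # One bounding box around the near region, just for illustration.
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

def classify(depth, box):
    """Stand-in for the AI classifier (e.g. a trained model) labelling the object."""
    return "water_bottle"

GRASP_COMMANDS = {"water_bottle": "cylindrical_grip", "mug": "handle_grip"}

def send_to_arm(command, box):
    """Stand-in for sending the chosen grasp command to the robotic arm."""
    print(f"Arm command: {command} at region {box}")

depth = acquire_depth_frame()
for box in locate_objects(depth):
    label = classify(depth, box)
    send_to_arm(GRASP_COMMANDS.get(label, "generic_grip"), box)

The point of the sketch is the division of labour: the camera supplies the scene, the segmentation and classification steps decide what is there, and only then is a grasp command chosen and sent to the arm.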

“The response time of your system is very important,” Beaini explains. “Scientists in mechatronics can often overlook certain specifications when it comes to designing systems for medical purposes. You want your system to mimic as closely as possible the body’s natural functioning, which is relatively quick when it comes to movement.” Prof. Achiche highlights a poignant example of the importance of this. “Think of when you are using a computer and a page takes longer than 10 seconds to load; it is frustrating for your average user. The same applies to grasping an object. Imagine the impatience you might feel if it took 1 minute to grab something that would normally take about a second. These are elements that need to be considered in design.”

While there are clear advantages to their innovation, there is still work left to do before we see this technology on the market. In fact, each sub-element of the project has become a full research project of its own. For example, building on the grasping-point analysis module of the system, Beaini has gone on to develop a new image segmentation technique that can determine the optimal grasping points on an object of any shape or size, whereas the initially published technique used a fixed set of 12 objects. For the detection of objects in the scene, Prof. Achiche and his team have gone a step further and introduced an eye-tracking system that identifies which object in the visual field the patient wants to grasp by detecting the movements of their eyes. “This is a promising technology,” Beaini says. “It’s the way of determining which object in the scene the patient wants to grasp that best mimics our body’s natural mechanisms.” However, the technology has some limitations for everyday use, particularly outdoors: they would need to opt for a more expensive eye tracker that can filter out interference from sunlight, which currently hampers the cheaper model they initially used. And while performance is the top priority of their research, cost-effectiveness is a factor Prof. Achiche’s lab is careful to consider.
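As a conceptual illustration of the eye-tracking idea (and not the lab’s implementation), the gaze point reported by an eye tracker can be matched to the detected object whose centre lies closest to it. The coordinates and helper function below are invented for the example.

# Illustrative only: pick the detected object closest to the reported gaze point.
import numpy as np

def select_gazed_object(gaze_xy, boxes):
    """Return the bounding box whose centre is nearest to the gaze point."""
    gaze = np.asarray(gaze_xy, dtype=float)
    centres = np.array([[(x0 + x1) / 2, (y0 + y1) / 2] for x0, y0, x1, y1 in boxes])
    distances = np.linalg.norm(centres - gaze, axis=1)
    return boxes[int(np.argmin(distances))]

# Example: two detected objects; the gaze lands near the second one.
boxes = [(100, 200, 160, 300), (400, 180, 460, 290)]
print(select_gazed_object((430, 230), boxes))  # -> (400, 180, 460, 290)

In practice the hard part is not this geometric matching but obtaining a reliable gaze point in the first place, which is exactly where sunlight interferes with cheaper trackers.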

“The Kinova robotic arm costs around $45,000,” Prof. Achiche explains. “These technologies are very expensive and not covered by insurance in North America, making cost-effectiveness an important factor in our design process, as we do not want to exacerbate this already high price.” The robotics field is notorious for the cost of its technologies, as is the case with some of the best-performing limb prosthetics on the market. For this reason, it is reassuring to know that there are researchers who pay specific attention to their end users. One of the lab’s most recent and most impressive advancements is the development of a $600 hand prosthesis, powered by the body’s own bioelectric signals, which was approved by Quebec’s public health insurance board (RAMQ) for universal coverage. A medical device, however effective it might be at diagnosing or treating a disease or disability, needs to be accessible, and affordable enough, to help the people in need. It is advancements such as this prosthetic hand that serve as grounded evidence that mechatronic designs where “quality isn’t compromised for cost,” as Prof. Achiche puts it, can be feasible solutions for those who could use the help an assistive robot provides.

This prospect became more plausible to me when I learned that the researchers in Prof. Achiche’s lab are ones who think outside the box. With a very multidisciplinary team, creative solutions to tough technical problems are conceived in the lab. For example, in this article, the transformation of visual information about the objects into data points that the artificial intelligence algorithms can process was done using a technique inspired by mechanical probing. As with mechanical probing, surface points of the object are acquired to characterize its 3-D features, which reduces the number of data points the artificial intelligence algorithms need to process and the computation time required to do so. Beaini’s work on his image segmentation algorithm is another example of the multidisciplinary work done in Prof. Achiche’s lab. With a background in engineering physics, Beaini applied classical physics principles to conceive an algorithm that is not only the fastest for grasping-point identification but has also been patented!
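Here is a toy sketch of that probing-inspired idea: instead of feeding every pixel to the algorithms, the depth image is “touched” at regular intervals and only those sparse surface points are kept. The grid step and the crude unit-focal-length back-projection are assumptions made for illustration, not the published technique.

# Toy illustration of probing-style sampling to shrink the data the AI must process.
import numpy as np

def probe_surface(depth, step=40):
    """Sample the depth image on a coarse grid, mimicking a mechanical probe
    touching the surface at regular intervals, and return sparse 3-D points."""
    h, w = depth.shape
    points = []
    for v in range(0, h, step):
        for u in range(0, w, step):
            z = depth[v, u]
            points.append((u * z, v * z, z))  # crude back-projection, unit focal length
    return np.array(points)

rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 2.0, size=(480, 640))   # stand-in for a Kinect depth frame
sparse = probe_surface(depth)
print(f"{depth.size} pixels reduced to {len(sparse)} probed surface points")

Going from hundreds of thousands of pixels to a few hundred representative surface points is what keeps the downstream computation fast enough to meet the sub-second response times the team aims for.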

Knowing that there are researchers like Prof. Achiche and his team, with well-founded scientific objectives that will benefit society, assistive robots for wheelchair users with gross motor disabilities are an outlook we can look forward to. It might be another 5 years before we see an automated robotic arm in everyday use, as the required technologies are still in the development or optimization stage, but it is a possibility that could benefit many people.

*All direct quotations from interviewees were translated by the author from the original French.

Reference:

Bousquet-Jette, C., Achiche, S., Beaini, D., Law-Kam Cio, Y. S., Leblond-Ménard, C., & Raison, M. (2017). Fast scene analysis using vision and artificial intelligence for object prehension by an assistive robot. Engineering Applications of Artificial Intelligence, 63, 33–44.

 
