Could your face be a window to your health? U of T startup gathers vital signs from video
Kang Lee has invented an astonishing new technology. By analyzing video of a person’s face, his artificial intelligence-based system detects blood flow in facial tissue and uses it to measure vital signs and emotions.
A U of T professor at the Ontario Institute for Studies in Education, Lee co-founded NuraLogix to commercialize his research breakthrough. Their first product will allow doctors to use regular webcams to monitor the heart rate, blood pressure, stress and pain of their patients remotely, helping to reduce costly and unnecessary visits to the emergency room. We will soon be able to measure our loved ones’ vital signs with just a video app, whether they are in the room or on the other side of the world.
Lee also envisions his technology as “an emotion engine,” empowering smartphones, cars, laptops and even social robots to help calm people in stressful situations and to better attune machines to our needs.
U of T News spoke with Lee ahead of Entrepreneurship@UofT Week.
Can you describe your research-based startup NuraLogix?
We’re using a very new technology to reveal the invisible emotions underneath our faces.
The mission of the company is really to put our technology – as an emotion engine – in all the technological devices we have now or in the future, such as an iPhone, car, laptop or a social robot.
The idea is to make these devices emotionally aware so that when you’re looking at an electronic device such as an iPhone, your device actually knows your emotional state. Then the phone may say, “Hey, it looks like you’re feeling anxious, can I play soothing music for you?”
Or, if you’re driving, your car can detect if you’re angry at the driver in front of you and remind you, “You may want to calm down and take a deep breath.”
Social robots are very popular right now, and many companies are producing them. In the future, they will be widely used in areas such as health care, child care and elder care. If you have a social robot that is able to detect the invisible emotions we are experiencing, then the robot can really help us a lot.
Our technology’s application for health monitoring and telehealth is one area we are focusing on right now. The other one is marketing research – for example, finding out whether people like or dislike a product such as a car or a designer handbag.
How does this technology work? How does it use artificial intelligence/deep learning?
The technology can detect the blood flow in your face instantaneously, and it works in most natural lighting situations with any kind of digital video camera.
This is how the technology works: first, we take the video images from a video camera. Then, using our transdermal optical imaging software, we strip away the facial skin to see the blood flow changes underneath the facial skin.
Blood flow changes are associated with physiological and emotional changes we are experiencing. Using data we have collected with people experiencing various emotions, we can build emotional models based on different patterns of facial blood flow change and use these models to detect whether you’re happy, you’re sad, if you’re experiencing fear, surprise and so on.
In order to extract the blood flow from your face, we need machine learning. We use data – what we call ground truth data – to help us extract blood flow, and to separate the noise from the signal that actually corresponds to happy/not happy. Once you have that, you do deep learning – you build two models: one is the happy model, one is the sad model. These two computational models are basically equations with weights.
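Lee’s description of the models as “equations with weights” scored against facial blood-flow features can be illustrated with a toy sketch. Everything here – the four facial regions, the weight values, the `classify` helper – is invented for illustration and is not NuraLogix’s actual model:

```python
import numpy as np

# Hypothetical features: blood-flow change in four facial regions
# (forehead, cheeks, nose, chin), extracted from a video clip.
# Pretend these weights were learned from ground-truth-labeled data.
w_happy = np.array([0.8, 1.2, -0.3, 0.5])
w_sad = np.array([-0.6, -1.0, 0.4, -0.2])
b_happy, b_sad = 0.1, -0.1

def classify(features):
    """Score a feature vector with each 'emotion model'; higher score wins."""
    scores = {
        "happy": features @ w_happy + b_happy,
        "sad": features @ w_sad + b_sad,
    }
    return max(scores, key=scores.get)

sample = np.array([0.5, 0.9, -0.2, 0.3])  # blood flow rose in the cheeks
print(classify(sample))  # → happy
```

In a real system each model would have far more weights (a deep network rather than one linear equation), but the decision rule – score the blood-flow features with each emotion’s model and take the best fit – is the same idea.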
For example, we discovered that when you tell lies, the facial blood flow on the cheek actually goes down, but the facial blood flow on the nose goes up, which we call the “Pinocchio Effect.”
Why is it important to detect people’s true and invisible emotions?
Currently, people mainly use facial expressions to detect emotions. But it turns out that’s not good enough. A former student of mine did a study and found that 90 per cent of the time, our facial expressions are neutral. However, during that 90 per cent of the time, people are not emotionless – they are experiencing various inner emotions that are invisible.
If your technology relies solely on facial expressions, you’re not going to do well, because you can detect how someone really feels less than 10 per cent of the time. On top of that, we can fake our emotions, and the system can be defeated.
But invisible emotion is controlled by the autonomic nervous system, which is beyond our conscious control. Because of that, our technology can reveal people’s real emotions, not just expressed or faked emotions.
Are there any ethical issues?
You can do this without the target’s consent or knowledge that their emotions are being monitored. That’s why we believe there has to be some kind of legal or ethical guidelines to go along with the use of this technology.
What made you realize that your research could be commercialized?
I wasn’t thinking about commercializing it, but the University of Toronto had a contract with MaRS. The MaRS people came over and said, “You can commercialize this research and make it work even beyond lie detection.”
This motivated me to have our technology commercialized. We are now hiring almost one person every month. We just hired someone today at noon! We have 22 employees now.
Other than emotions, what else can you detect with your transdermal optical imaging?
My parents live in China, and they’re very old so I talk to them every day using Skype or the Chinese Skype. Sometimes they don’t want to tell me the truth about their health conditions, but when talking to them I can measure their health via the laptop camera and my software.
We can measure heart rate and breathing very accurately. We can measure blood pressure very close to the FDA standard. We can measure stress, mood, and whether you’re suffering from pain. In the future, we will be able to do more, such as blood oxygen saturation and risks for heart attack or stroke.
If there’s a camera in the home – for example, in my parents’ home – and I detect problems with their vital signs, I can call my sister to run to their house and check up on them. There are tons of things we can use this technology for in the telemedicine field.
Can you explain more about what your software can do for telemedicine?
Our technology can be used remotely and noninvasively without attaching sensors to the patient’s body because we are using video cameras that are now ubiquitous in our environment.
Furthermore, our system can do more than current vital-sign measurements. For example, our system can measure blood pressure continuously – every second – as long as a video camera is on. During a visit to a doctor’s office, by contrast, the doctor can take only one or two measurements.
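The continuous, camera-based measurement Lee describes is related to remote photoplethysmography: track a per-frame facial intensity signal over a window of video and pick the dominant frequency in the plausible pulse band. The sketch below uses a synthetic signal and a generic FFT approach; it is not NuraLogix’s method, and the frame rate and band limits are illustrative assumptions:

```python
import numpy as np

fps = 30          # assumed webcam frame rate
duration = 10     # seconds of video in the analysis window
t = np.arange(fps * duration) / fps

# Synthetic per-frame "facial blood flow" trace: a 72-bpm pulse plus noise.
true_bpm = 72
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.3 * rng.normal(size=t.size)

# Estimate heart rate as the dominant frequency in the 0.7-4 Hz band
# (roughly 42-240 bpm, the physiologically plausible range).
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)
est_bpm = 60 * freqs[band][np.argmax(power[band])]
print(round(est_bpm))  # → 72
```

Because the camera keeps running, this estimate can be refreshed every second by sliding the analysis window forward, which is what makes continuous monitoring possible where a cuff gives only a single reading.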
The real application in the future is telemedicine. My research partners in China – who are medical doctors – tell me they get too many patients in their hospital. A lot of those patients really do not need to see a doctor, but they are not certain, so they just show up. People often go to the emergency room for no good reason, and it costs the system a lot of money and wastes a lot of resources. My friend, who is president of the hospital, wants patients to be pushed back to local clinics. Ultimately, he wants to push them even farther back – to within their own homes.
[With our technology] before you go to a hospital or clinic, you ring your doctor up, and they can do a remote screening of your conditions [such as heart rate, blood pressure, stress etc...]. Then the doctor can make a decision about whether you should come to the hospital or not.
How close are you to bringing this technology to market in the health-care sector?
We’re almost done with our laptop-based system. We’re going to take it to a hospital setting to measure participants’ heart rate, blood pressure and stress for physical examination purposes.
Because we don’t have FDA or Health Canada approval, this is not for diagnosis. We can provide information to doctors so they can make medical decisions about whether to follow up or to refer patients to a particular specialist.
So you’d be licensing this software to Apple?
Yes. We’re giving them the model.
Why do you think it’s important for researchers to get involved with startups?
I know a lot of my colleagues at U of T have discovered a lot of very useful things. We just send them out for publication and that’s it – we kind of wash our hands after publishing our papers, because the ultimate goal of our research has always been publication.
But now I realize that’s really a waste of our resources.
A lot of our discoveries can not only be commercialized – commercialization can also be done in a way that has an impact on a lot of people, not just in terms of knowledge but in terms of using knowledge for the good of people.
NuraLogix is one of more than 150 research-based startups launched in the last five years at U of T. Learn more at utoronto.ca