Just over three years ago, Zebra Medical Vision launched into the world with two fundamental convictions and one mission. We knew that the potential of every novel technology is defined by its most profound application, which can be discovered only by people with ample doses of creativity, persistence, and good first aid skills.
We felt the same about human potential: it is defined by a person’s decisions rather than her abilities, a view captured by Shimon Peres in a quote on our wall: “You are as great as the cause you serve.”
Our single mission was to realize the full potential of medical imaging by harnessing advances in machine learning and image-analysis technology, and by enabling the best minds anywhere to tackle the challenge with us.
Our publication in this week’s Journal of Digital Imaging, “Malignancy detection on mammography using dual deep convolutional neural networks and genetically discovered false color input enhancement” (explained below) is a strong validation of our vision.
Here’s the story behind the paper. Two years ago, Phil Tear was working as a highly sought-after software architect in Berkshire, UK. He heard about a crazy Israeli company called Zebra, which was bent on applying GPUs (graphics processing units), the technology powering the boundless growth of the gaming industry, to improve medical imaging interpretation. Zebra’s message was: “We have millions of medical imaging studies to train on, and we’ll give you all the GPUs you can use if you choose to take this journey with us.”
The message resonated with Phil. Although still under 40, he had lost his wife to ovarian cancer three years earlier. Within six months, Phil became Zebra’s first full-time scientist based outside of Israel. Within a year he made his first breakthrough in cancer detection, and a year later here we are, publishing the v1.0 results of his mammography algorithm, which automatically, and within seconds, detects breast cancer with an accuracy (sensitivity of 91% and specificity of 80%) similar to that of expert radiologists.
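For readers less familiar with these metrics, sensitivity and specificity are simple ratios computed from a confusion matrix: the fraction of true cancers a test catches, and the fraction of cancer-free studies it correctly clears. A minimal sketch with made-up counts (illustrative only, not data from the paper):

```python
# Hypothetical confusion-matrix counts for a screening test
# (invented numbers chosen to mirror the published rates).
tp, fn = 91, 9    # cancers correctly flagged vs. missed
tn, fp = 80, 20   # normal studies correctly cleared vs. falsely flagged

sensitivity = tp / (tp + fn)  # fraction of true cancers detected
specificity = tn / (tn + fp)  # fraction of normals correctly cleared

print(f"sensitivity = {sensitivity:.0%}")  # 91%
print(f"specificity = {specificity:.0%}")  # 80%
```

The trade-off between the two is what a screening algorithm (or a radiologist) tunes: flagging more aggressively raises sensitivity at the cost of specificity.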
For the small minority of you who will not be reading the entire paper (contact me for a copy), here’s an oversimplified version of how he did it. Phil understood the technology (neural networks) that has made it possible in the last few years to automatically identify almost any real-life object (cars, street signs, individual faces, rare cat breeds…). This technology is embedded everywhere from satellites to smartphones, but it’s nowhere to be found in medical imaging. He set out to apply neural network training (also known as deep learning) to mammograms: high-resolution images of the breast obtained for breast cancer screening.
Phil had thousands of examples to learn from. He experimented to find the processing filters that best brought out the difference between normal breast tissue and cancerous tissue. He then took an off-the-shelf neural network that had been pre-trained on millions of natural-world images and was capable of identifying thousands of different objects. This network seemed to struggle to find meaning in the black-and-white shadows created by x-rays through the breast, in the same way a scholar of English literature might struggle with a Japanese restaurant menu. Phil sought a way to translate, ultimately converting black-and-white density to a spectrum of color on the RGB (red, green, blue) scale.
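To give a flavor of what such a "translation" looks like, here is a deliberately simplified false-color mapping: each grayscale density band is routed to a different RGB channel, so a network pre-trained on color photographs receives three-channel input. The `false_color` function and its hand-picked breakpoints are my own illustrative assumptions; the paper's actual transform was discovered by a genetic search, not chosen by hand.

```python
import numpy as np

def false_color(gray, breakpoints=(0.33, 0.66)):
    """Map a normalized grayscale image (values in [0, 1]) to RGB by
    density band: low densities fill the blue channel, mid densities
    green, high densities red. Illustrative only -- the breakpoints
    here are arbitrary, unlike the genetically optimized mapping in
    the paper."""
    low, high = breakpoints
    rgb = np.zeros(gray.shape + (3,))
    rgb[..., 2] = np.clip(gray / low, 0, 1)                   # blue: low densities
    rgb[..., 1] = np.clip((gray - low) / (high - low), 0, 1)  # green: mid densities
    rgb[..., 0] = np.clip((gray - high) / (1 - high), 0, 1)   # red: high densities
    return rgb

# Toy 1x5 "mammogram" ramp from black to white.
img = np.linspace(0, 1, 5).reshape(1, 5)
print(false_color(img).shape)  # (1, 5, 3): a three-channel image
```

The point of the exercise is not the particular color scheme but that the transform spreads the diagnostically relevant density information across the three input channels the pre-trained network expects.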
Along the way, Phil was supported by an all-star team of radiologists, including Dr. Oshra Benzeqen of Rabin Medical Center (Tel Aviv University) and Dr. Michael Fishman of Beth-Israel Deaconess (Harvard University).
The result is a remarkable coalescence of immense knowledge – clinical, radiologic, and algorithmic – condensed into code that can run on any digital mammogram anywhere. A radiologist can use it as a “second reader,” and the benefit may be even greater in the vast majority of the world where the radiology shortage limits how many mammograms can be interpreted at all. Congratulations to Phil and the entire Zebra team!
-Eldad Elnekave, MD