The software behind the robotic voice of Stephen Hawking was released for public use on August 18 by Intel, the company that developed it. Although principally developed for Hawking, the tool has since been made available to many other people suffering from motor neurone disease, an ailment that gradually but steadily deadens the neurons controlling various muscles of the body, rendering its victims incapable of, say, moving their cheek muscles to produce speech. Intel’s software, called the Assistive Context-Aware Toolkit (ACAT), steps in to translate visual signals like facial twitches into speech. Its source code and installation instructions are available on GitHub.
ACAT is an assembly of components that each perform a distinct function. In order of operation: an input device picks up the visual signals (cheek muscle twitches, in Hawking’s case), a calibrated text-prediction tool generates the corresponding unit of language, and a speech synthesiser vocalises the text. The first two components are connected via the Windows Communication Foundation. In ACAT’s case, text prediction is performed by a tool called Presage, developed by Italian developer Matteo Vescovi. Other supported input devices include proximity sensors, accelerometers and buttons.
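The three-stage pipeline described above can be sketched in a few lines of Python. This is purely illustrative: the class names, methods and logic below are hypothetical stand-ins and do not reflect the real ACAT or Presage APIs.

```python
# Hypothetical sketch of an ACAT-style pipeline (illustrative names only,
# not the real ACAT API): an input device produces trigger events, a
# predictor suggests completions for the text typed so far, and a
# synthesiser stage vocalises the chosen word.

class TwitchSensor:
    """Stands in for any binary input device (cheek sensor, button, proximity)."""
    def __init__(self, events):
        self.events = iter(events)

    def triggered(self):
        # Returns True when the user activates the input (e.g. a twitch).
        return next(self.events, False)


class Predictor:
    """Toy word predictor; in ACAT this role is played by Presage."""
    def __init__(self, vocabulary):
        self.vocabulary = vocabulary

    def suggest(self, prefix):
        return [w for w in self.vocabulary if w.startswith(prefix)]


class Synthesiser:
    """Stand-in for a text-to-speech backend; here it just records utterances."""
    def __init__(self):
        self.spoken = []

    def speak(self, text):
        self.spoken.append(text)


def step(sensor, predictor, synthesiser, prefix):
    # One cycle of the pipeline: on a trigger from the input device,
    # accept the top prediction for the current prefix and vocalise it.
    if sensor.triggered():
        suggestions = predictor.suggest(prefix)
        if suggestions:
            synthesiser.speak(suggestions[0])
    return synthesiser.spoken
```

In the real system the stages run continuously and communicate over Windows Communication Foundation endpoints rather than direct calls, but the data flow (input event, prediction, vocalisation) is the same.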
According to the BBC, the UK’s MND Association has welcomed the release. Of the motives behind it, Intel wrote, “Our hope is that, by open sourcing this configurable platform, developers will continue to expand on this system by adding new user interfaces, new sensing modalities, word prediction and many other features.” Company spokesperson Lama Nachman also noted that, with the current release, Intel is anticipating assistive innovations in particular rather than ‘all kinds’ of applications. A detailed user guide is available here.