Mycroft + Mozilla DeepSpeech = voice.mycroft.ai
Collaboratively building the world's best speech to text engine
Your data should stay under your control, not anyone else's. Join the open dataset to help researchers build technology without giving away your data forever.
As a machine-learning system, DeepSpeech’s effectiveness is directly tied to the type and volume of data available for training its models. Initial training has used various private and publicly available sets of recordings, such as LibriSpeech and VoxForge. Mozilla also started the Common Voice Project to generate a fully public domain set of training data for use by DeepSpeech and other voice researchers. These provided a solid foundation and helped DeepSpeech make a promising start.
Unfortunately, the majority of this training data was recorded in pristine conditions. As a result, recognition accuracy suffers in uncontrolled environments. In other words, we need better performance in the scenarios where a voice assistant will actually be used, like a kitchen with a stove fan humming, dogs barking, and a TV droning in the background.
Recognizing this need, Mycroft proposed working with its users to provide exactly the kind of recordings Mozilla needs to produce a great general model. We looked at joining the Common Voice project, but due to the personal nature of these recordings we didn’t feel that publishing all interactions straight to the public domain was wise, or fair to ask of our users.
So we created the Mycroft Open Dataset. Mycroft users can join the effort today, helping to build this technology by building the data that shapes it. Most importantly, contributors are not forced to give up ownership of their own data. They can freely withdraw from the dataset and retrieve everything they have ever donated. This is a new approach to data sharing, one we believe empowers the individual while still allowing easy sharing for collaborations, and we think it will become a new standard. We began offering users the option to Opt In to this effort several months ago, and 1,300 have elected to do so. We can’t thank you enough!
The next step is to validate this training data so we can provide the first 100 hours of speech for a new DeepSpeech model. Mycroft is building the tools to allow the community to “tag” these recordings in collaboration with us. More to come soon — keep checking here!
Together we grow stronger: a better experience for Mycroft users, and better voice technology for everyone. Truly a win-win.