We are pleased to announce that Mycroft Core 0.6 Alpha is available for download today.  Mycroft Core is a lightweight, portable piece of software written in Python.  You can run it on anything from a Raspberry Pi to a gaming rig.  Mycroft Core includes Adapt, Mimic, OpenSTT, and multiple open APIs to create an experience that allows users to interact with their technology using the most natural form of human communication – speech.

This software is the core from which our community will soon develop a host of new skills.  We’ve released it because we want you to build on top of it.  It contains an awesome skills framework that allows you to do things like control your Philips Hue lights or add new voice commands to your Team Fortress avatar (both of these are real examples).
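To give a feel for the shape of a voice skill, here is a toy sketch of the pattern: a skill registers spoken phrases and dispatches them to handlers.  This is not the real Mycroft skills API; every name below is hypothetical and purely illustrative.

```python
# Toy illustration of the skill pattern: map a spoken phrase to a handler.
# These class and method names are hypothetical, not the Mycroft API.

class ToySkill:
    """Minimal stand-in for a voice skill: registers phrases, dispatches."""
    def __init__(self):
        self.handlers = {}

    def register(self, phrase, handler):
        self.handlers[phrase.lower()] = handler

    def handle(self, utterance):
        handler = self.handlers.get(utterance.lower())
        return handler() if handler else "Sorry, I don't know that one."


class LightSkill(ToySkill):
    """Hypothetical skill that would talk to smart lights."""
    def __init__(self):
        super().__init__()
        self.register("turn on the lights", self.lights_on)
        self.register("turn off the lights", self.lights_off)

    def lights_on(self):
        # A real skill would call the light bridge's API here.
        return "Turning the lights on."

    def lights_off(self):
        return "Turning the lights off."


skill = LightSkill()
print(skill.handle("turn on the lights"))  # -> Turning the lights on.
```

A real skill would replace the handler bodies with calls out to a device or web API; the framework's job is the mapping from speech to those handlers.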

This is such an exciting day for me, and has made me think about how my Mycroft journey started. About a year ago, Mycroft CEO Joshua Montgomery asked me if I would be interested in helping create an artificial intelligence for our makerspace.  He wanted to create an environment like the one in Star Trek or Iron Man – where an on-site AI was able to control the lights, locks and communications of the facility.  The more I thought about the idea of a voice-enabled artificial intelligence that inhabited a physical space, the more excited I became.  I dreamed of an entity that could understand natural language and answer questions, perform tasks, and interact with the environment. I believed that this was potentially the next computing paradigm, and could substantially change the way we interact with technology.

Fast forward a year – I’m now CTO of Mycroft AI, Inc.  The first iteration of Mycroft was exceptionally slow and needed exact phrases to perform tasks and answer questions. Subsequent iterations improved the user interaction, but the language was still rigid and brittle.  As we improved the tech over time, we began to see value in what we were doing beyond creating a smart makerspace.  So we did what any aspiring inventor does these days – we went to Kickstarter.

Through crowdfunding we raised nearly $160,000 to pursue our vision – an open source artificial intelligence platform for everyone.  It was our commitment to open source values that engaged people and prompted many to back us.  This validated our core thesis that natural language processing has the ability to change the way we interact with computers.

In pursuing our vision we’ve successfully achieved a number of milestones over the past year.  We released our open source intent parser, Adapt, and the awesome text-to-speech software Mimic. We’ve also begun working on an open source speech-to-text model through our OpenSTT initiative. These are substantial projects filling voids in the open source community around voice command and speech technologies.
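The core idea behind an intent parser like Adapt is that an intent is defined by required keyword categories, and an utterance matches when each category is satisfied.  The sketch below illustrates only that idea; the real Adapt API differs, and the vocabulary and intent names here are made up for the example.

```python
# Toy sketch of keyword-driven intent parsing in the spirit of Adapt.
# Vocabulary and intent names are invented for illustration.

VOCAB = {
    "OnOffKeyword": {"on", "off"},
    "LightKeyword": {"light", "lights", "lamp"},
}

INTENTS = {
    # An intent is a list of required keyword categories.
    "LightControlIntent": ["OnOffKeyword", "LightKeyword"],
}

def parse(utterance):
    """Return (intent_name, matched_keywords) or (None, {})."""
    words = set(utterance.lower().split())
    for name, required in INTENTS.items():
        matched = {}
        for category in required:
            found = VOCAB[category] & words
            if not found:
                break  # a required category is missing; try next intent
            matched[category] = sorted(found)[0]
        else:
            return name, matched
    return None, {}

print(parse("turn on the lights"))
```

Because matching is driven by registered keywords rather than exact phrases, “turn on the lights”, “lights on please”, and similar variations all resolve to the same intent, which is what makes this approach less brittle than fixed command strings.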

We are releasing Mycroft Core as an alpha to get your feedback and to allow you, the community, to help shape the direction of Mycroft moving forward.  We want your help in creating a platform that serves everyone, not just the interests of technology companies.  We want to create a platform that respects its users, its developers, and ultimately changes the way that people interact with technology.  It’s not a small undertaking – this is a big task – but the open source community should lead the way.

The Mycroft Core 0.6 Alpha code was made available today on GitHub. Now that you have access, what skills are you going to build?

Ryan Sipes
CTO at Mycroft
CTO of the Mycroft project. A long-time open source software developer, Ryan has contributed to the Solus Project, GTK, DuckDuckGo and many others. He is a former systems administrator for the Northeast Kansas Library System. Ryan has extensive experience with datacenter management, including the construction and maintenance of high-availability virtualization systems.