We are proud to announce that REM recently received an SBIR Phase I Grant from the National Science Foundation for “An Automated Design Flow to Build Energy Efficient Vision Processing and Machine Learning Chips for the Internet of Things”. This grant will go towards the continued development of our own internal CAD tooling for our proprietary resilient, asynchronous technology. Our existing tools have enabled us to move extremely quickly, tape out our first chip, and drastically increase both performance and power efficiency. We take this grant as one more vote of confidence in our technology and look forward to bringing it to market.
Over the past few days the entire REM team made the trek down to San Diego for ASYNC 2017. Most of us had attended the conference before, but this was the first time the whole team was able to go. We got a chance to share some of our story and research with the community, as well as catch up on the latest developments from other labs around the world. After such an exciting conference, we wanted to take a moment to share our thoughts on some of our favorite presentations!
Sharp – A Resilient Asynchronous Template
Our very own Hardware Engineer Max Waugaman presented the Sharp resilient asynchronous template that he co-developed at REM in 2016. Sharp builds upon Michigan’s Razor and USC’s Blade to offer an asynchronous, resilient protocol that is free of metastability and offers higher throughput than previous efforts. As a direct descendant of Blade, Sharp’s major improvement is a wider speculation window, which raises throughput when operations complete early. We expect to publish more about our technology and Sharp here, but for us this presentation embodies a few things we deeply believe in:
- The modularity of asynchronous design allows for quick, encapsulated improvements that impact the entire system
- Real, “hard tech” innovation and research can happen at startups on the way to product, and faster than it can happen in big companies or academia
We’re very proud of this work and the response it received, and look forward to sharing further progress at future technical conferences.
Interleaved Architectures for High-Throughput Synthesizable Synchronization FIFOs
Ameer Abdelhadi and Mark Greenstreet from the University of British Columbia received the best paper award for their paper Interleaved Architectures for High-Throughput Synthesizable Synchronization FIFOs [PDF]. They presented a high-performance synchronization FIFO, which can be used to move data between asynchronous clock domains. Not only did the authors thoroughly compare their own design with other relevant designs, they also pointed out a common glitch hazard that many FIFO designs fall victim to, and explained how they avoided it.
Maybe best of all, the authors open-sourced their FIFO design, testbench, and RTL-to-GDSII code and put it on GitHub with a permissive license. This synchronization FIFO can serve as a benchmark, starting place, or complete solution for engineers in need of a high-quality synchronization FIFO. The open source movement in software has enabled tremendous advances and collaboration, and we’re excited to see this trend continue to gather momentum in hardware. We’re especially happy to see great asynchronous designs become available to engineers.
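For readers less familiar with clock domain crossing: the glitch hazard arises when a multi-bit binary pointer is sampled while several of its bits are mid-toggle, so the receiving domain can latch a wildly wrong value. Gray-coded pointers are the classic defense in synchronization FIFOs (we’re illustrating the general technique here, not the specific mechanism of the authors’ interleaved design): consecutive values differ in exactly one bit, so a pointer caught mid-transition is off by at most one slot. A minimal Python sketch of the encoding and its key property:

```python
def bin_to_gray(n: int) -> int:
    # Gray encoding: consecutive values differ in exactly one bit, so a
    # pointer sampled mid-transition is off by at most one slot rather
    # than arbitrarily wrong.
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    # Inverse transform: XOR-fold successive right shifts back down.
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# Every increment of a 4-bit pointer flips exactly one bit in Gray space,
# and the transform round-trips.
for i in range(15):
    changed = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert bin(changed).count("1") == 1
    assert gray_to_bin(bin_to_gray(i)) == i
```

In an RTL FIFO the same idea appears as Gray-coded read/write pointers that are double-flop synchronized into the opposite clock domain before being compared for full/empty.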
Digital Delay Lines
Alberto Moreno presented a paper on Synthesis of All-Digital Delay Lines from Jordi Cortadella’s group at Universitat Politècnica de Catalunya in Barcelona. Delay lines are an essential element in bundled-data design. For designs that operate across many corners, the delay through the line must scale at the same rate as the delay of the datapath it is matched against. Moreno and Cortadella’s work addresses this problem by building the line from a mix of gates drawn from the same libraries used to synthesize the datapath. They developed an algorithm to test and measure the performance of the delay line against the datapath, plus a set of heuristics to select the line’s gates. Crucially, their algorithm includes wire estimation and selection, so it can match wire-limited paths as well as gate-limited paths. The results from their work look compelling, and we’re very excited to experiment with their algorithm and see if we can improve our own delay lines!
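To make the selection problem concrete, here is a hypothetical greedy sketch in Python. This is our own illustration, not the algorithm from the paper: the cell names and delay numbers are invented, and a real flow must also model wire delay, as the authors stress. Each candidate cell has a delay at a slow and a fast corner, and the heuristic grows a chain whose slow/fast delay ratio tracks the datapath’s, stopping once both corner targets are covered with margin.

```python
# Hypothetical delay-line selection sketch. Cell names and corner
# delays (picoseconds) are illustrative only, not from any real library.
CELLS = {
    "BUF_X1": (40.0, 15.0),  # (slow-corner delay, fast-corner delay)
    "BUF_X4": (25.0, 10.0),
    "INV_X2": (18.0, 7.0),
}

def select_chain(target_slow: float, target_fast: float, margin: float = 1.05):
    """Greedy heuristic: repeatedly append the cell that keeps the chain's
    slow/fast delay ratio closest to the datapath's, until both corner
    targets are covered with `margin` headroom."""
    goal_ratio = target_slow / target_fast
    chain, slow, fast = [], 0.0, 0.0
    while slow < target_slow * margin or fast < target_fast * margin:
        best = min(
            CELLS,
            key=lambda c: abs((slow + CELLS[c][0]) / (fast + CELLS[c][1]) - goal_ratio),
        )
        chain.append(best)
        slow += CELLS[best][0]
        fast += CELLS[best][1]
    return chain, slow, fast

# e.g. match a datapath that takes 200 ps slow / 80 ps fast
chain, slow, fast = select_chain(200.0, 80.0)
```

The point of tracking the ratio, rather than a single corner, is exactly the scaling problem above: a chain that matches the datapath at the slow corner but shrinks faster at the fast corner would violate the bundled-data timing assumption.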
Startup Panel
One of the most exciting events of ASYNC 2017 was the Startup Panel hosted by Dr. David M. Harris. Andrew Lines (Fulcrum Microsystems), David Ditzel (Transmeta, Esperanto Technologies), and William Koven (REM) had a chance to share their founder stories and talk about the struggles of a startup, what it takes to build a semiconductor company, and how best to commercialize novel technology. While the conversation was off the record, we can say that founder stories are always interesting (Tuesday dinners were one of the highlights of our YC experience), and we hope to see more semiconductor startups bring interesting new technologies to market.
Overall it was very refreshing to learn about other async research, present our tech and story, and catch some San Diego sun. See you in Vienna for ASYNC 2018!
Hello, world, we’re Reduced Energy Microsystems!
We’re building the most power-efficient computer vision SoC and neural network accelerator (NNA) for embedded devices. We’re doing this by combining our asynchronous resilient design technology and custom NNA architecture. In non-technical terms: we’re working on chips to make your devices last longer and be smarter.
Over the coming weeks and months we will be sharing more information about our chip and technology. We’ve made tremendous progress and cannot wait to share it here! On top of covering our own progress and announcements, this blog will cover a range of topics including asynchronous VLSI, neural networks, and market trends. If that sounds up your alley, click here to subscribe now!
Chris Rowen, PhD, FIEEE, founder of Tensilica and CEO of Cognite Ventures, spoke at the Embedded Vision Alliance yesterday about “The Vision AI Startups That Matter Most”, and featured Reduced Energy Microsystems. Chris shared his perspective on the development of Vision AI startups and the market need for new solutions that bring advanced neural networks to embedded devices. We share Chris’ enthusiasm for this sector and look forward to helping build the embedded AI ecosystem.