Maker Pro

TinyML In Action—Creating a Voice Controlled Robotic Subsystem

August 31, 2022 by regression0707221

We’ll be walking you through creating a robotic subsystem with a voice-activated motor leveraging machine learning (ML) and an Arduino Nano 33 BLE Sense.

You might be a bit overwhelmed since there is a lot going on in this code; however, for the purposes of this project, we don’t need to concern ourselves with most of it.

Step 4: Interpret Inference and Write Our Motor Driver Code


Figure 6. We must change the expected category labels for our new words


After this, the only thing left to change relating to the model is in the micro_features_micro_model_settings.cpp file. As shown in Figure 6, change the category labels “yes” and “no” to “forwards” and “backwards”. Make sure you don’t touch the “silence” or “unknown” labels.

In the micro_features_model.cpp file, copy and paste just the hexadecimal characters from your Colab in place of the characters that are already in the file. At the very bottom of the Colab’s printout, there should be a line that says unsigned int g_model_len followed by a number. The last thing to do is to copy this number from your Colab and insert it in place of the number currently assigned to const int g_model_len at the bottom of the file.
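The edited file should have roughly the shape sketched below. The byte values shown are only the standard TFLite FlatBuffer header plus a placeholder comment; your own Colab printout supplies the real array contents and length.

```cpp
// Shape of micro_features_model.cpp after pasting in your own model. The byte
// values here are placeholders -- replace the whole array with the hex dump
// printed by your Colab, and the length with the Colab's g_model_len value.
const unsigned char g_model[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // "....TFL3" file identifier
    /* ...thousands more bytes copied from the Colab output... */
};
const int g_model_len = sizeof(g_model);  // or paste the printed g_model_len
```

Using sizeof here is a safety net; the number the Colab prints should match it exactly, and a mismatch usually means the paste was truncated.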


Figure 5. A snippet of the TensorFlow Lite Micro model output by our Google Colab

With all of the software out of the way, we can now build our motor driver circuit. The BOM is listed above, and the schematic is shown in Figure 9 below.

As shown in Figure 7, at the top of the file we will add a couple of #define statements to specify which pins on the Arduino go to which pins on our motor driver. For our project, we’ll use D2 for the ENABLE signal, D3 for the Driver1A input, and D4 for the Driver2A input. Make sure to also set these pins as outputs with the pinMode() function inside the RespondToCommand() function.
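A minimal sketch of that setup is below. The pinMode stub and OUTPUT constant stand in for Arduino.h so the fragment compiles on its own; on the board they come from the Arduino core, and the function name InitMotorPins is illustrative rather than from the original code.

```cpp
// Stand-ins for the Arduino core so this fragment is self-contained.
#define OUTPUT 1
static int last_pin_configured = -1;
static void pinMode(int pin, int mode) { (void)mode; last_pin_configured = pin; }

// Which Arduino pins drive which L293D motor driver inputs.
#define ENABLE_PIN   2  // D2 -> EN1,2
#define DRIVER1A_PIN 3  // D3 -> 1A
#define DRIVER2A_PIN 4  // D4 -> 2A

// Call once from RespondToCommand() before driving the motor.
static bool g_pins_ready = false;
void InitMotorPins() {
  if (g_pins_ready) return;
  pinMode(ENABLE_PIN, OUTPUT);
  pinMode(DRIVER1A_PIN, OUTPUT);
  pinMode(DRIVER2A_PIN, OUTPUT);
  g_pins_ready = true;
}
```

Guarding the setup with a flag keeps pinMode() from being re-run on every inference pass, since RespondToCommand() is called repeatedly.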

At this point, the TFLite Micro model should run as intended, and now we need to drive our motor in response to the TinyML inference output. To do this, we will be modifying the arduino_command_responder.cpp file.

Once we have a fully quantized and converted TensorFlow Lite model, we need to deploy it to our Arduino. We will be modifying Harvard’s pre-existing micro_speech example, which you can find in the Arduino IDE under: Files → Examples → Harvard_TinyMLx → micro_speech.

Before going any further, I want you to know that you can find my full code as a reference here.

Step 3: Deploy the Machine Learning Model to Arduino


When fully converted, the Colab has scripts available for comparing accuracy between the quantized and unquantized models. If everything went correctly, the accuracies should be almost identical.

First, we freeze our model, which is the process of combining all relevant training results (graph, weights, etc.) into a single file for inference. Once we have a frozen model, we convert it into a TFLite model. The script that Harvard has set up makes this process relatively easy, and the resulting TFLite model should be fully quantized. The final model should be under 20 kB in size.

When training is done, you will reach a point in the Colab that is labeled as Step 2. Here is where the quantization begins.

Step 2: Quantize and Evaluate the ML Model


Keep in mind that training may take a couple of hours to complete, so make sure your computer is plugged in and your internet connection is stable.


In this project, we were able to create an audio keyword spotting model small enough to be run locally on an MCU powered by standard AA batteries. Hopefully, this project helped demonstrate the value and applications of TinyML.



Once everything is wired up, we can upload our code to the Arduino and watch it work. Shown here is a demonstration of our finished working project [video].

Step 6: Upload the Code and Show it Off!


Using a voltage source from 4.5 to 21 V, we power both the Arduino and the L293D. The wiring has D4 going to the motor driver 2A input, D3 going to motor driver 1A input, and D2 going to EN1,2. We have a 1 kΩ pull-down resistor on each of these signals to make sure our states are always defined, and we have a 0.1 μF capacitor for decoupling just to be safe.


Figure 9. Our motor driver circuitry.

First, we must enter our new TFLite Micro model in place of what is currently used in the micro_speech example. The very last cell of the Colab should have printed a large matrix of hexadecimal characters, as shown in Figure 5. This is our TensorFlow Lite for Microcontrollers model that will be used in our Arduino code.

Generally, an ML workflow would begin with collecting and labeling a dataset, followed by designing a model architecture from scratch. For the sake of time and simplicity, we’ll be "cheating" by leveraging some ready-made datasets and a pre-trained keyword spotting model, both developed by Pete Warden. To utilize these resources and train our model we will be using scripts in a Google Colab developed by the TinyML team at Harvard University. 

Step 5: Build the Motor Circuitry—Motor Driver Circuit


Figure 8. We’ll be controlling the motor to move either forward or backward based on the command found by the ML model.


Now the only thing left to do is to change the command responses that already exist in the code. As shown in Figure 8, we’ll change the command response so that if the first character of the found command is “f” (i.e., the found command is “forwards”), the motor spins forward. We do the same for the “backwards” command.

From there we can define our motor control function. This function takes in a speed (which we won’t alter for the purposes of this project) and a logic value for both Driver1A and Driver2A. If Driver1A is HIGH and Driver2A is LOW, the motor will spin in one direction; if the reverse is true, it will spin in the opposite direction.
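The two pieces together look roughly like the sketch below. The Arduino calls are stubbed out so the logic is self-contained, and the function names (motorCTRL, HandleCommand) are illustrative rather than the exact ones in the original code.

```cpp
// Pin assignments and logic levels (on the board, HIGH/LOW come from Arduino.h).
#define ENABLE_PIN   2
#define DRIVER1A_PIN 3
#define DRIVER2A_PIN 4
#define HIGH 1
#define LOW  0

// Stubs that record the last value "written", standing in for the Arduino core.
static int pin_state[5];
static void digitalWrite(int pin, int value) { pin_state[pin] = value; }
static void analogWrite(int pin, int value)  { pin_state[pin] = value; }

// 1A HIGH / 2A LOW spins the motor one way; the reverse spins it the other.
void motorCTRL(int speed, int driver1A, int driver2A) {
  analogWrite(ENABLE_PIN, speed);        // fixed speed for this project
  digitalWrite(DRIVER1A_PIN, driver1A);
  digitalWrite(DRIVER2A_PIN, driver2A);
}

// Inside RespondToCommand(): branch on the first character of the found command.
void HandleCommand(const char* found_command) {
  if (found_command[0] == 'f') {         // "forwards"
    motorCTRL(255, HIGH, LOW);
  } else if (found_command[0] == 'b') {  // "backwards"
    motorCTRL(255, LOW, HIGH);
  }
}
```

Checking only the first character is enough here because the model's four categories ("silence", "unknown", "forwards", "backwards") all start with different letters.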


Figure 7. We need to define our pins, set them as outputs, and write our simple motorCTRL function.



Below in Table 1, you'll find a bill of materials (BOM) for this project.

Setting Up TinyML Software for Arduino Nano 33 BLE Sense


You'll have a large amount of freedom to choose other, similar parts when replicating this project.


Figure 1. The parts I used in this project


For this specific project, I selected most of the parts, seen in Figure 1, from what I already had on hand.


(* Note that all costs are from September 2021)

Before digging into this project, I want to make clear that it will be using pre-existing datasets, Google Colabs, and Arduino code developed by both Pete Warden and the TinyML team at Harvard University. To deploy on our microcontroller unit (MCU), their resources will provide us with:

The Google Colab needed can be found here.

To run TinyML scripts on our Arduino Nano 33 BLE Sense, we need to install some packages and dependencies. If you don’t already have the Arduino IDE installed on your computer, you can find it here.

BOM for a TinyML Robotic Subsystem With a Voice-activated Motor


Since there is already a lot of good information on controlling motors with a microcontroller, this article will primarily focus on demonstrating how to: 

In this project, we will be building a simple robotic subsystem that uses machine learning to respond to voice commands. A microcontroller will collect input from a microphone, use ML to listen for wake words like "forwards" and "backwards," and then drive a small DC motor in the commanded direction.

TinyML Project—Building a Voice Command Robotic Subsystem


This project assumes a basic understanding of programming and electronics.

As a disclaimer, we did not develop the vast majority of this code, and we do not own the rights to it. 

  • Access to datasets
  • Model architectures
  • Training scripts
  • Quantization scripts
  • Evaluation tools
  • Arduino code 

With a solid foundational understanding of the concepts that underlie the field of TinyML, we’ll be applying our knowledge to a real-life project. 


Figure 4. This is the section in our Colab where we define what words we’re training for, our training parameters, and our model architecture


The model architecture we are using is tiny_conv, and we will be training for 15,000 steps in total. The first 12,000 steps will use a learning rate of 0.001 and the last 3,000 a learning rate of 0.0001. Additionally, we will be training the model to understand the words “forwards” and “backwards,” which Warden’s keyword spotting (KWS) dataset already includes. This can be seen in Figure 4.


Figure 3. You must ensure that you’re using a GPU runtime in your Colab


First, make sure you are using a graphics processing unit (GPU) runtime in your Colab (as shown in Figure 3), as this will significantly speed up training time. Once you do this, all of the code is ready to be used as-is. Simply run each cell in order by clicking on the black “run” button in the upper left-hand side of each individual cell. 


Table 1. BOM for the example TinyML voice-activated motor project. The whole project will cost under $30.

Step 1: Training a Machine Learning Model With TensorFlow Lite 


With this done, we can start the project!

After this, we’ll need to install the necessary libraries for this project. To do this, go to Tools → Manage Libraries. From there, search for and download the following libraries:


Figure 2. We need to install the board files for the Nano 33 BLE Sense


This is shown in Figure 2 below.

Once that is installed, we’ll need to install the board files for the Arduino Nano 33 BLE Sense. To do this, from the IDE, go to Tools → Board → Boards Manager. Here, search for “mbed nano” and install “Arduino Mbed OS Nano Boards”.
