Working With Azure and the MXCHIP to Create My Own Translator

Kabeer Makkar
4 min read · Jan 21, 2021

If you’ve watched the video, you already have a basic idea of what I did and how I did it. This article goes further, expanding your knowledge of Azure and explaining my main motives for the project.

Properties

The files and folders below are the main groups in my code. I’ll give a brief explanation of each so you can understand what’s going on behind the scenes.

.azurecomponent:

  • This folder of files (in this case, it’s just a single file) holds the Azure configuration JSON that basically allows the whole project to run.
  • It’s a .json file that customizes the MXCHIP project (id, name, folder, function language).
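To give you an idea of what that looks like, here’s a minimal sketch of such a config file. The exact field names are my own guess based on the values listed above, not copied from my real file:

{
    "id": "<project-id>",
    "name": "DevKitTranslator",
    "folder": "Functions",
    "functionLanguage": "CSharp"
}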

Folders:

/Device

  • DevKitTranslator.ino: the Arduino code for the project. The .ino file is what you use to customize and control your board, which in this case is the MXCHIP.
  • Inside, you can find lines such as:
static int currentLanguage = 1; // Default is Chinese
static const char *allLanguages[LANGUAGES_COUNT] = {"Arabic", "Chinese", "French", "German", "Italian", "Japanese", "Portuguese", "Russian", "Spanish"};

These lines tell the device which languages it works with, and that the default language when you boot the device up is Chinese (index 1 of the array).
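For example, here’s a rough sketch of how the board could cycle through that list with button A and show the selection on screen. This is my own illustration of the idea rather than the exact sample code; it assumes the definitions above plus the AZ3166 board package’s OLED Screen helper and USER_BUTTON_A pin:

#include "OledDisplay.h"

void cycleLanguage()
{
    // Move to the next language, wrapping back to the start of the list
    currentLanguage = (currentLanguage + 1) % LANGUAGES_COUNT;
    Screen.clean();
    Screen.print(0, "Translate to:");
    Screen.print(1, allLanguages[currentLanguage]);
}

void loop()
{
    // Button A on the MXCHIP reads LOW while pressed
    if (digitalRead(USER_BUTTON_A) == LOW)
    {
        cycleLanguage();
        delay(250); // crude debounce so one press counts once
    }
}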

→ Inside of /Device: .vscode

The .vscode folder inside the Device folder contains three files: arduino.json, settings.json, and tasks.json.

Example: arduino.json

{
    "sketch": "DevKitTranslator.ino",
    "board": "AZ3166:stm32f4:MXCHIP_AZ3166",
    "configuration": "upload_method=OpenOCDMethod",
    "output": "./.build"
}
  • The arduino.json file is basically telling the system what’s running, which in this case is Arduino code. For the first line, “sketch” is the name Arduino uses for its program; it’s the unit of code that gets uploaded to and run on an Arduino board or an MXCHIP.
  • The second line is pretty straightforward, as it’s named “board”. It’s telling the system what kind of board you’re working with (MXCHIP = AZ3166).
  • The third line sets the board configuration, which is needed later when you flash the board: it tells the tools to use the OpenOCD upload method to get the compiled code onto the MXCHIP.
  • The last line is pretty straightforward as well; it’s just an indicator of where the output of all the compiled code will go (the ./.build folder).

/Functions:

// Subscription Key of Speech Service        
const string speechSubscriptionKey = "<personal>";

// Region of the speech service, see https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/regions for more details.
const string speechServiceRegion = "<personal>";

// Device ID
const string deviceName = "<personal>";
  • This folder holds the code I worked on the most, as all my data, subscription information, and all the little tweaks I made to the code were placed here (especially the DevKitTranslatorFunction.cs file).
  • The code shown above is also the code I mentioned in the video when I talked about subscription keys. My personal values are still in VS Code because some of the information is private, but the outline that I worked with and tweaked is on GitHub.

How the Project Really Works

So even though it might look really simple from one perspective, there’s actually a lot more to it than you’d think! As I explained in my video, I first connected my MXCHIP to the IoT hub and practiced sending telemetry data, to make sure my information was actually being taken care of and not just slipping away.
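If you’re curious what that first telemetry test looks like on the device side, here’s a minimal sketch. It assumes the DevKit SDK’s DevKitMQTTClient helpers from the AZ3166 board package, with Wi-Fi and the IoT Hub connection string already set up on the device, and the message payload is just an example:

#include "AZ3166WiFi.h"
#include "DevKitMQTTClient.h"

void setup()
{
    // Connect with the Wi-Fi settings already stored on the board
    if (WiFi.begin() == WL_CONNECTED)
    {
        DevKitMQTTClient_Init(); // connect to Azure IoT Hub
        DevKitMQTTClient_SendEvent("{\"message\": \"hello from my MXCHIP\"}");
    }
}

void loop()
{
    DevKitMQTTClient_Check(); // keep the connection to IoT Hub alive
    delay(1000);
}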

Now, going to the diagram, it shows that the data taken from the MXCHIP goes to the Azure Function (in this case, it’s the one I created, labeled speech). What’s really going on is that the MXCHIP records your voice, then posts an HTTP request to trigger the Azure Function, which in this case would be the speech software.
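On the device side, that step could look roughly like the sketch below. The function URL, the payload handling, and the helper name are placeholders of my own; it assumes the DevKit SDK’s http_client and OLED Screen helpers:

#include "http_client.h"
#include "OledDisplay.h"

static void postAudioToFunction(const char *functionUrl, const char *audio, int audioSize)
{
    // POST the recorded audio to the Azure Function's HTTP trigger
    HTTPClient client(HTTP_POST, functionUrl);
    const Http_Response *response = client.send(audio, audioSize);

    if (response != NULL && response->status_code == 200)
    {
        Screen.print(1, "Sent, waiting...");
    }
    else
    {
        Screen.print(1, "Upload failed");
    }
}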

Once it grabs the voice, the Azure Function calls the Cognitive Services speech translation API to do the translation using Microsoft Translator, since Azure is a part of Microsoft. Then, once Azure Functions receives the translation back, it sends a C2D (cloud-to-device) message to the device, and the translation is displayed on the OLED screen.
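On the device end, receiving that C2D message could look something like this sketch. The callback name is my own; it assumes the DevKit SDK’s DevKitMQTTClient message hook and the OLED Screen helper:

#include "DevKitMQTTClient.h"
#include "OledDisplay.h"

// Called whenever a cloud-to-device message arrives from IoT Hub
static void onC2DMessage(const char *text, int length)
{
    Screen.clean();
    Screen.print(0, "Translation:");
    Screen.print(1, text, true); // true wraps long text onto the next lines
}

void setup()
{
    DevKitMQTTClient_Init();
    DevKitMQTTClient_SetMessageCallback(onC2DMessage);
}

void loop()
{
    DevKitMQTTClient_Check(); // pump the client so the callback fires
    delay(100);
}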

What I Could’ve Done Better

Even though I had a great time learning about Azure and its features, I think that if I had connected this speech software to another service, or at least tried to make it more complicated, the result would’ve been much cooler! But now that I’ve had the experience of working with the Azure Portal, Azure IoT Hub, and even Azure Functions, I think my future projects could incorporate these features. They’re quick and simple, and you learn a lot through the process. Overall, I think working with the MXCHIP was a pretty good experience; I just wish it had more recognition!

If you enjoyed reading this article or learned something new, I’d love to connect with you on LinkedIn. Also, if you’d like to stay updated on what I’m up to, you can subscribe to my monthly newsletter here!
