Audio in embedded systems
Our world is full of sounds. The wind, birds, traffic, the buzz of voices, discussions, radio, music, sirens. The soundscape is an essential part of our environment, but not all sounds are natural anymore. Our technology causes noise, and we are surrounded by devices designed to create artificial sound. One of the keys to our wellbeing is mastering the soundscape.
Digital audio processing requires versatile competence
We could very well call Anders Axelsson an audio guru, even though his role at Etteplan is to design embedded systems. His passion is digital sound, and his mission is to give us a better soundscape. “I have worked a lot with embedded systems, mainly the software, but you always have to deal with hardware as well in some way.” He’s talking about the systems that contribute to our soundscape. It quickly becomes clear that this is a complex matter that requires many kinds of competence.
He continues: “The signal processing is essential: how you deal with the signal when receiving it from a microphone or feeding it to a speaker.” It’s routine to move audio data from one place to another, but the restrictions of embedded systems make even that a challenge. An audio experience is always time-critical; even a short outage can ruin it. And this needs to be achieved on devices with limited resources. The devices are small and must run on battery power. They must react promptly and efficiently, yet still save power. Not to mention error conditions, such as missing or delayed data.
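The combination of time-critical data and missing or delayed samples that Anders describes is classically handled with a ring buffer between the audio driver and the processing loop. The sketch below is a hypothetical minimal illustration, not any particular product’s code: a real driver would add volatile qualifiers or atomics for the indices, memory barriers, and likely DMA.

```c
#include <stdint.h>

/* Toy single-producer/single-consumer ring buffer for 16-bit samples.
 * Hypothetical sketch: a real embedded driver would make the indices
 * volatile/atomic and size the buffer to the codec's frame rate. */
#define RING_SIZE 256u  /* power of two, so we can mask instead of modulo */

typedef struct {
    int16_t  data[RING_SIZE];
    uint32_t head;  /* advanced by the producer (e.g. the mic ISR) */
    uint32_t tail;  /* advanced by the consumer (processing loop)   */
} ring_t;

static uint32_t ring_count(const ring_t *r) { return r->head - r->tail; }

/* Producer side: drop the sample if the buffer is full (overrun). */
static int ring_push(ring_t *r, int16_t s) {
    if (ring_count(r) == RING_SIZE) return 0;
    r->data[r->head++ & (RING_SIZE - 1u)] = s;
    return 1;
}

/* Consumer side: on underrun (data didn't arrive in time), emit
 * silence instead of stalling, so a short outage becomes a brief
 * dropout rather than ruining the whole stream. */
static int16_t ring_pop(ring_t *r) {
    if (ring_count(r) == 0) return 0;  /* silence on missing data */
    return r->data[r->tail++ & (RING_SIZE - 1u)];
}
```

The underrun policy (silence rather than blocking) is one way to express the article’s point that the show must go on even when data is late.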
Now Anders takes the discussion to a higher level. Technical skills alone aren’t enough; embedded system development also requires adequate working methods. And in addition, it’s not enough to just produce a certain pressure wave that hits the eardrum. Sound is an experience, and our brain does a lot of signal processing of its own. Anders reminds us, “You need to understand the ear to create an algorithm that produces the desired result.” He goes on to explain how a louder sound easily masks a quieter sound at a different frequency, for example. All this, and much more, must be considered when designing audio systems.
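As a toy illustration of the masking effect Anders mentions, consider a check for whether a quieter tone is drowned out by a louder one nearby in frequency. This is not a real psychoacoustic model (those work in critical bands and depend on absolute level); the 1 dB per 100 Hz slope below is an arbitrary stand-in, chosen only to make the idea concrete.

```c
/* Toy frequency-masking check, NOT a real psychoacoustic model.
 * The masker's influence is assumed to fall off by an arbitrary
 * 1 dB per 100 Hz of frequency distance. */
static int is_masked(double masker_hz, double masker_db,
                     double probe_hz,  double probe_db) {
    double dist = probe_hz > masker_hz ? probe_hz - masker_hz
                                       : masker_hz - probe_hz;
    double effective_db = masker_db - dist / 100.0;  /* assumed slope */
    return probe_db < effective_db;
}
```

Under this crude model, a 50 dB tone at 1100 Hz disappears next to an 80 dB tone at 1000 Hz, while a tone far away in frequency survives at a much lower level.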
Digitalizing a product is a challenge
“Companies start with a mechanical product, and then they want to make it smart as well,” he continues.
“I have worked with 3M, who make the Peltor hearing protection devices.” That’s an excellent example of the challenges companies face when mechanical products meet the digital world. You need a much broader competence when it’s time to integrate electronics and the first microprocessor. You may not think about it when using a headset, but it’s quite a complex device.
Anders starts from the fundamentals once again: “On the lowest level we have an electronics engineer drawing the signal paths on a circuit board, ensuring signal integrity at all stages.” He goes on to explain how analog filtering must be applied before the microphone input is fed to the A/D converter and the microprocessor. Now we are getting close to Anders’ core competence. “C is the preferred language for embedded software. But a DSP, for example, may have instructions that you can’t address from C. Then you have to program at a lower level to call those functions.”
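To make the point about instructions you can’t reach from plain C concrete, here is a sketch of a fixed-point dot product, a staple of audio filtering. Assuming an ARM Cortex-M target with DSP extensions, the CMSIS intrinsic `__SMLAD` performs two 16-bit multiply-accumulates in a single instruction; the portable fallback below does the same arithmetic in plain C so the sketch compiles anywhere. This is an illustrative assumption, not the specific code from any project Anders describes.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Dual 16-bit multiply-accumulate. On a Cortex-M4/M7 with CMSIS this
 * maps to the single SMLAD instruction; elsewhere a plain-C fallback
 * keeps the sketch compilable. */
#if defined(__ARM_FEATURE_DSP)
#include "cmsis_gcc.h"            /* provides __SMLAD (CMSIS-Core) */
#define smlad(x, y, acc) __SMLAD((x), (y), (acc))
#else
static int32_t smlad(uint32_t x, uint32_t y, int32_t acc) {
    int16_t xl = (int16_t)(x & 0xFFFFu), xh = (int16_t)(x >> 16);
    int16_t yl = (int16_t)(y & 0xFFFFu), yh = (int16_t)(y >> 16);
    return acc + xl * yl + xh * yh;
}
#endif

/* Q15 dot product over an even number of samples, two at a time. */
int32_t dot_q15(const int16_t *a, const int16_t *b, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        uint32_t pa, pb;
        memcpy(&pa, &a[i], sizeof pa);  /* pack two samples per word */
        memcpy(&pb, &b[i], sizeof pb);
        acc = smlad(pa, pb, acc);
    }
    return acc;
}
```

Whether the compiler picks the intrinsic or the fallback, the result is the same; what changes is the cycle count, which is exactly why such instructions matter on battery-powered devices.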
You can’t meet the requirements for compactness and energy efficiency with just careful programming and careful planning of the circuit boards. “We prefer to work with chips from STMicroelectronics; they have good platforms for this kind of system.” An important competence is knowing the component market and choosing the right chip. That enables you to implement the device with a minimum of components, saving space, weight, and energy.
The final step is often software for the computer, which nowadays is a common companion for devices. Yes, that’s an important component too, but Anders doesn’t mind if someone else handles that. It’s apparent that he thrives closer to the hardware, where the real processing is done.
There’s plenty left to invent
It’s easy to become enthusiastic when discussing the potential of digital technology with Anders. Listening to streamed music, always and everywhere, is perhaps what most of us think of when audio technology is mentioned. But the possibilities are far wider than that. Audio devices are becoming ever smaller and will soon be integrated into our clothes. Earbuds support performing artists, and technology can improve wellbeing significantly for hearing-impaired people. The internet enables us to transfer sound anywhere in the world and perform more advanced processing in data centers.
We are used to accessing the net from gadgets with an operating system, like computers or phones. But Anders reminds us that there are lighter solutions as well. You only need hardware for 4G/5G or WiFi, and the rest is relatively simple programming. That makes it easy to exchange data with a server, whether sound or other kinds of information. This opens breathtaking possibilities for new applications. A server could, for example, analyze audio from elderly people’s apartments and use smart algorithms to detect accidents. Or even understand spoken words and commands, and alert the relatives.
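The accident-detection idea can be made concrete with the simplest possible building block: measuring the short-term energy of each audio frame and flagging unusually loud ones. This is a hypothetical toy, not the smart algorithm the article alludes to (a real detector would use trained classifiers), but the input/output shape is the same: frames in, events out.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy loudness detector. Computes the mean power of one frame of
 * 16-bit samples and compares it against a threshold expressed as
 * an RMS level (compared in the squared domain to avoid sqrt). */
static double frame_power(const int16_t *frame, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += (double)frame[i] * (double)frame[i];
    return sum / (double)n;
}

static int is_loud_event(const int16_t *frame, size_t n, double rms_threshold) {
    return frame_power(frame, n) > rms_threshold * rms_threshold;
}
```

On the device, such a cheap check could decide which frames are worth sending to the server at all, saving both bandwidth and battery, with the heavier analysis done in the data center.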
Maybe you remember those communicator badges that the Star Trek crew used? A light touch and they had voice contact with each other. When talking to Anders you get a strong feeling that things like this are just waiting to be implemented.
The last question is what Anders would do if he had a free hand, and an unlimited budget, to create something cool and audio-related. He thinks for a while and starts: “I’m very interested in noise cancellation. Is it possible to create silent zones without solid walls or other acoustic constructions?” That’s a brilliant idea, as it hits one of our big problems spot on. Our soundscape is often too complex and distracting. We are used to controlling what music we listen to, but it would be equally important to be able to eliminate what we don’t want to hear.