Why This Work Mattered
Most IoT gateway hardware forces a bad tradeoff.
Small embedded devices are power-efficient and cheap to deploy, but they are usually limited in protocol support and edge compute. Larger gateways can run Linux, containers, and richer workloads, but they are often too expensive, too closed, or too large for real field deployments.
Smart metering makes that tradeoff worse.
A field gateway may need to read Wireless M-Bus meters at 868 MHz, maintain NB-IoT or LTE-M backhaul, expose WiFi or BLE locally, buffer data offline, and integrate cleanly with a cloud IoT platform. Most commodity boards do not cover that full stack.
The goal of S0 and S1 was to build a modular gateway family that could scale from low-power embedded deployments to Linux-based industrial edge computing without changing the core connectivity model.
S0 handles the constrained side.
S1 extends the same wireless core into a Linux gateway.
I worked on making that system reliable enough for real deployments, from board-level power paths to Zephyr firmware behavior and platform integration.
Systems I Changed
A Modular Gateway Instead of Another Closed Box
I worked on shaping the S0 as a deployable gateway rather than just another radio board.
The S0 is built around the ESP32-C6, a RISC-V microcontroller running Zephyr RTOS, and combines Wireless M-Bus, NB-IoT, LTE-M, WiFi, and BLE in a compact open-source module.
I worked across the hardware and firmware decisions that made that architecture practical for smart metering deployments, where a single gateway needs to handle local meter collection, cellular backhaul, commissioning access, and offline recovery without becoming multiple separate devices.
The baseboard extends that module with Ethernet, wired M-Bus, SD card logging, 24V industrial power input, battery backup, and DIN-rail mounting.
That turns the S0 from a development board into field infrastructure.
I Designed the Power Architecture Around Real Deployment Conditions
I worked directly on the board schematic design, especially around the power architecture of the S0 baseboard.
The board was designed for 24V DC industrial input with a wide operating range of roughly 18–36V, because that reflects the reality of industrial cabinets, smart metering installations, and building systems—not clean lab power supplies.
I worked on the regulator path, voltage conversion flow, battery backup behavior, charging path, and power sequencing needed to keep the gateway stable across noisy field power conditions.
The board needed reliable conversion down to the rails required by the ESP32-C6, the SIM7080G cellular modem, the Ethernet controller, storage components, and peripheral interfaces.
Battery backup was especially important because telemetry systems cannot simply disappear during short outages. I worked on ensuring clean switchover behavior between external power and backup power without destabilizing the radio modules or SD logging path.
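One way to reason about clean switchover is as a hysteresis decision: the source only changes when the input voltage crosses a threshold with margin, so a sagging 24V line cannot make the supply oscillate between sources. The sketch below is illustrative plain C with made-up thresholds, not the actual S0 comparator values or firmware.

```c
#include <stdbool.h>

/* Hypothetical thresholds in millivolts -- illustrative only, not the
 * actual S0 values. The gap between them is the hysteresis band that
 * prevents source flapping when the 24V input sags near the limit. */
#define EXT_OK_MV    18000  /* external input considered healthy above this */
#define EXT_FAIL_MV  16000  /* switch to battery below this */

typedef enum { SRC_EXTERNAL, SRC_BATTERY } power_source_t;

/* Decide the active source from the measured input voltage and the
 * currently selected source. Inside the hysteresis band, the current
 * source is kept, so radios and SD logging see no glitching. */
power_source_t select_source(power_source_t current, int ext_mv)
{
    if (current == SRC_EXTERNAL && ext_mv < EXT_FAIL_MV)
        return SRC_BATTERY;
    if (current == SRC_BATTERY && ext_mv > EXT_OK_MV)
        return SRC_EXTERNAL;
    return current;
}
```

On real hardware this decision is usually made in analog circuitry or a power-path controller; the point of the sketch is the policy, not the mechanism.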
I also focused on modem startup behavior. Cellular modules like the SIM7080G are extremely sensitive to unstable enable lines and poor power sequencing. A bad startup sequence often looks like a networking failure, which makes debugging painful.
Stable power paths and predictable startup behavior were critical.
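The sequencing concern can be made concrete with a small simulation. The sketch below models the SIM70xx-family convention (let VBAT settle first, then hold PWRKEY asserted for over a second) as plain C over a simulated modem struct, so the ordering logic is testable off-target; the timings are paraphrased and should be verified against the SIM7080G datasheet, and on the real board these steps would be GPIO writes and delays.

```c
#include <stdbool.h>

/* Simulated modem power-on sequencing. The "hardware" is a struct the
 * sequence mutates; timings are illustrative, not datasheet-exact. */
struct sim_modem {
    int  vbat_settle_ms;   /* how long the modem rail takes to settle */
    int  elapsed_ms;       /* simulated time */
    int  pwrkey_low_ms;    /* how long PWRKEY was held asserted */
    bool powered_on;
};

static void delay_ms(struct sim_modem *m, int ms) { m->elapsed_ms += ms; }

/* Returns false if the rail never settles: a sequencing failure that on
 * a live board would masquerade as a networking problem. */
bool modem_power_on(struct sim_modem *m)
{
    /* 1. Wait (bounded) for VBAT to settle before touching PWRKEY. */
    while (m->elapsed_ms < m->vbat_settle_ms) {
        if (m->elapsed_ms > 500)
            return false;           /* rail never came up */
        delay_ms(m, 10);
    }
    delay_ms(m, 50);                /* extra settling margin */

    /* 2. Hold PWRKEY asserted for > 1 s, then release. */
    m->pwrkey_low_ms = 1100;
    delay_ms(m, m->pwrkey_low_ms);

    /* 3. The modem boots only if the pulse was long enough. */
    m->powered_on = (m->pwrkey_low_ms >= 1000);
    return m->powered_on;
}
```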
Good power design is invisible when it works, but it is usually the first reason field hardware fails.
I Designed Networking Interfaces for Failure, Not Just Connectivity
I worked on the networking sections of the board so deployments could survive real field conditions instead of ideal assumptions.
The S0 handled wireless connectivity through Wireless M-Bus at 868 MHz for smart metering, NB-IoT and LTE-M for cellular backhaul through the SIM7080G, and WiFi and BLE through the ESP32-C6 for commissioning and local access.
I worked on how those interfaces were exposed in the schematic, including modem control lines, SIM routing, Ethernet reset handling, interrupt paths, and how those signals were carried through the module-to-baseboard edge connector.
The baseboard added wired Ethernet through the W5500 controller, giving deployments a stable primary uplink where cellular was unnecessary or too expensive, and letting cellular serve as fallback redundancy where both were needed.
That split mattered because different installations need different failure models. Some rely entirely on cellular, some depend on Ethernet, and others require both.
The goal was not just adding connectivity.
It was making connectivity predictable.
A gateway should not require a hardware redesign every time deployment conditions change.
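Predictable connectivity largely comes down to keeping the failover policy small and explicit. A minimal sketch, assuming an Ethernet-primary model with cellular fallback (the struct fields and policy here are illustrative, not the production firmware):

```c
#include <stdbool.h>

/* Illustrative uplink selection policy. Different installations need
 * different failure models, so the policy is kept as one small pure
 * function instead of being scattered through the networking code. */
typedef enum { UPLINK_NONE, UPLINK_ETHERNET, UPLINK_CELLULAR } uplink_t;

struct link_state {
    bool eth_fitted;       /* baseboard W5500 present on this build */
    bool eth_link_up;      /* PHY reports carrier */
    bool cell_registered;  /* modem attached to an NB-IoT/LTE-M network */
};

/* Ethernet-primary with cellular fallback. Returning UPLINK_NONE lets
 * the caller switch to offline buffering instead of dropping data. */
uplink_t select_uplink(const struct link_state *s)
{
    if (s->eth_fitted && s->eth_link_up)
        return UPLINK_ETHERNET;
    if (s->cell_registered)
        return UPLINK_CELLULAR;
    return UPLINK_NONE;
}
```

A cellular-only or dual-uplink installation would swap in a different policy function without touching the rest of the stack, which is the point: deployment conditions change the policy, not the hardware.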
S1: Keeping the Same Wireless Core, Adding Linux Compute
I helped shape the system boundary between S0 and S1 so the architecture stayed clean.
The S1 pairs the S0 module with the BeagleV-Fire, giving the gateway a full Linux environment on RISC-V while keeping the S0 as the wireless connectivity core.
The S0 handles RF, meter communication, low-power connectivity, and protocol-facing work. The BeagleV-Fire handles Linux-side processing, containers, edge AI, protocol translation, local storage, and cloud integration.
That separation matters because it avoids forcing a microcontroller to behave like a server while still giving industrial deployments the compute they need.
S0 is the field radio layer.
S1 is the edge compute layer.
Together, they form a gateway platform instead of a single-purpose device.
I Built Firmware Around the Board, Not Around Demos
I worked on the embedded Zephyr firmware where the hardware becomes usable.
Zephyr's device tree model made it possible to map the S0 hardware properly. I worked on how radio interfaces, GPIOs, UARTs, SPI devices, modem control lines, and protocol-specific configuration were represented, so the board definition stayed clean instead of pushing hardware assumptions deep into application code.
That mattered because the S0 is not a simple development board.
It has multiple radios, modem control paths, external interfaces, and expansion through the baseboard. Firmware needed to reflect the actual board architecture and make protocol examples usable without developers having to reverse-engineer hardware behavior first.
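The shape of that approach can be sketched as a devicetree overlay. The node labels, pin numbers, and bus assignments below are hypothetical, not the actual S0 board files; the point is that the modem and Ethernet controller are described once in the board layer (Zephyr does ship `simcom,sim7080` and `wiznet,w5500` bindings), so application code never hard-codes pins.

```dts
/* Hypothetical overlay sketch -- labels and pins are illustrative. */
/ {
    aliases {
        modem-uart = &uart1;
    };
};

&uart1 {
    status = "okay";
    current-speed = <115200>;
    modem: simcom_sim7080 {
        compatible = "simcom,sim7080";
        mdm-power-gpios = <&gpio0 4 GPIO_ACTIVE_LOW>;
    };
};

&spi2 {
    status = "okay";
    w5500: eth@0 {
        compatible = "wiznet,w5500";
        reg = <0>;
        int-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
        reset-gpios = <&gpio0 7 GPIO_ACTIVE_LOW>;
    };
};
```

With the hardware described this way, firmware retrieves devices through the devicetree API and the same application builds against module-only and baseboard targets.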
Good embedded firmware starts with making the board understandable.
Multi-Protocol Connectivity That People Can Actually Use
The S0 was designed as a Swiss Army knife for wireless IoT.
It supports Wireless M-Bus for smart metering, NB-IoT and LTE-M for cellular backhaul, WiFi and BLE for local connectivity, and expansion through the baseboard for Ethernet, wired M-Bus, and SD card logging.
The hard part is not listing protocols on a feature sheet.
The hard part is making the gateway architecture clean enough that those protocols can coexist without turning the firmware into a pile of one-off demos.
I worked on configuration paths, build targets, interface ownership, documentation, and the way someone would actually bring up each protocol path during deployment.
A gateway is only useful if people can make it talk to the systems around it.
Magistrala Integration
I worked on making the gateways fit naturally into Magistrala, Abstract Machines' open-source IoT platform.
That mattered because hardware is only one half of the problem.
A meter reading is not useful until it can move through authentication, device identity, channels, messaging, storage, and cloud-side processing.
S0 collects and forwards data from constrained field devices.
S1 adds local compute and richer edge-to-cloud behavior.
Magistrala provides the platform layer for ingestion, device management, and messaging.
The result is not just hardware.
It is a complete path from field device to cloud infrastructure.
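Concretely, the data that moves along that path is small structured telemetry. Magistrala's messaging layer accepts SenML payloads (conventionally published over MQTT to a `channels/<channel-id>/messages` topic); the formatter below is a hedged sketch of that payload shape, with placeholder meter IDs and field names rather than a real integration.

```c
#include <stdio.h>

/* Sketch: format one meter reading as a single-record SenML array, the
 * payload shape Magistrala's messaging adapters consume. The base name,
 * unit, and field choices here are illustrative placeholders. */
int format_reading(char *buf, size_t len,
                   const char *meter_id, double cubic_m, long ts)
{
    /* One SenML record: base name = meter, value + unit + timestamp. */
    return snprintf(buf, len,
        "[{\"bn\":\"%s:\",\"n\":\"volume\",\"u\":\"m3\",\"v\":%.3f,\"t\":%ld}]",
        meter_id, cubic_m, ts);
}
```

On the S0 this string would be handed to the cellular or Ethernet uplink; on the S1, a Linux-side agent can enrich or aggregate readings before forwarding them.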
Engineering Impact
Most of my work sat between hardware bring-up and platform usability.
The boards needed to be open, modular, and technically capable, but they also needed to be understandable by someone trying to build on them.
That meant working across schematics, firmware examples, hardware documentation, protocol setup, pinouts, board architecture, and integration paths.
My goal was reducing the distance between receiving the hardware and collecting useful data from the field.
That is where gateway projects usually succeed or fail.
Not in the schematic alone.
In the first week someone tries to make the board useful.
What Operating This Taught Me
Hardware Fails at Interfaces
Theory and datasheets tell you what should happen. Field hardware tells you what actually happens.
Modem startup sequences specified as deterministic turn out to depend on exact power rail settling times. Edge connectors develop intermittent signal integrity problems in environments with vibration and dust. UART receive buffers fill silently when firmware does not read fast enough, and nobody notices until a deployment is live and a device is unresponsive.
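The silent-overflow failure mode is worth making concrete. The sketch below is a deliberately tiny plain-C ring buffer (sizes and names illustrative; in Zephyr this role belongs to the UART driver's RX path) whose only defense is an overrun counter: without it, bytes vanish with no trace.

```c
#include <stdint.h>

/* Fixed ring buffer with overrun accounting. RB_SIZE is deliberately
 * tiny so overflow is easy to demonstrate. */
#define RB_SIZE 8u

struct ring {
    uint8_t  buf[RB_SIZE];
    unsigned head, tail;
    unsigned dropped;   /* the counter that makes the failure visible */
};

static unsigned rb_count(const struct ring *r) { return r->head - r->tail; }

/* Producer side (e.g. UART RX interrupt): when full, count the drop
 * instead of silently discarding or overwriting. */
void rb_put(struct ring *r, uint8_t byte)
{
    if (rb_count(r) == RB_SIZE) {
        r->dropped++;
        return;
    }
    r->buf[r->head++ % RB_SIZE] = byte;
}

/* Consumer side: returns 1 and writes a byte, or 0 when empty. */
int rb_get(struct ring *r, uint8_t *out)
{
    if (rb_count(r) == 0)
        return 0;
    *out = r->buf[r->tail++ % RB_SIZE];
    return 1;
}
```

A nonzero `dropped` counter in a status report is the difference between "the meter went quiet" and "we were reading too slowly", which is exactly the kind of field diagnosis the paragraph above describes.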
Every one of these is an interface failure — between the schematic and the board, between the board and the firmware, between the firmware and the field condition. The SIM7080G startup behavior that caused unexpected resets was not a firmware bug. It was a power sequencing gap that only appeared under certain input voltage conditions. Finding it required treating the board as a system, not as a collection of individual components.
The interesting engineering work is never in the datasheet. It is in the gap between what the datasheet says and what actually happens in the field.
Documentation Is Part of the Deliverable
A board that works but that nobody can successfully bring up is not shipped hardware.
The S0 has multiple radios, a cellular modem with credential management, an edge connector with dozens of signals, and a baseboard with its own power architecture and protocol wiring. A developer trying to configure LTE-M backhaul or bring up a first Wireless M-Bus session needs to understand the board well enough to debug it when something behaves unexpectedly.
That meant treating device tree overlays, Zephyr board definitions, protocol setup guides, credential provisioning flows, and firmware example code as first-class engineering deliverables — not afterthoughts written after the schematic was finished.
The first week a developer spends with the hardware is where the design is actually evaluated.
Platform Scale
The S0 and S1 form the open hardware connectivity tier of Abstract Machines' IoT stack.
Both boards are fully open-source: schematics, PCB layouts, BOM, and firmware examples are publicly available. The design targets real field conditions — 24V DC industrial input with an 18–36V operating range, battery backup with clean switchover, DIN-rail mounting, and protocol coverage that does not assume stable cellular or clean lab power.
The S0 targets standalone deployments at the meter and sensor level: Wireless M-Bus meter reading, cellular backhaul, WiFi and BLE commissioning, and offline buffering for sites where connectivity is intermittent. The S1 targets sites that need Linux compute alongside wireless connectivity — edge processing, containerized workloads, Magistrala agents, or local data aggregation before cloud forwarding.
Together, the two gateways cover the hardware layer from individual field devices through industrial edge infrastructure, built around the same modular wireless core.
How I Think
Hardware engineering teaches you to think in constraints.
Most software problems can be retried, patched, or rolled back. A board in a utility cabinet in Nairobi cannot. A bad power path, a firmware that cannot survive a modem reset, a connector pinout with signal integrity issues at speed — these are not problems you fix with a CI pipeline.
That changes what you optimize for.
I care equally about schematics, firmware, and documentation because they all answer the same question: can someone deploy this without expert support? A gateway that requires specialist knowledge for every field deployment is not modular hardware. It is a dependency.
The goal of embedded hardware work is to make the board disappear — to become the invisible, reliable layer that the application stack above it can trust completely, regardless of what the field environment looks like.