
Edge AI Research

Role: R&D Engineer
Year: 2021
Devices: NVIDIA Jetson Nano · NVIDIA Jetson Xavier NX
Status: Research — fed into Computer Vision System (2022)

Computer vision analytics is powerful — but traditionally expensive. Running inference in a data center means cloud infrastructure costs, bandwidth requirements for continuous video streams, and latency that limits real-time use cases. For customers with limited budgets, the math rarely works. Edge AI changes that equation. By running models directly on device — at the camera, not in the cloud — the infrastructure cost drops, latency drops, and the business case opens up for a much wider range of customers.

The Problem

A customer wants to count vehicles entering a parking lot, monitor queue length at a service counter, or track people flow through a building entrance. The analytics are straightforward. But streaming continuous video to a data center for processing is expensive, bandwidth-heavy, and overkill for the use case. Most small and mid-sized customers in Indonesia don't have the infrastructure budget that cloud-based computer vision assumes.

The Question

Can we bring the model to the camera instead of the camera to the model — and still get production-quality results? The Jetson line is NVIDIA's edge AI hardware — GPUs small enough to sit next to a camera, powerful enough to run real neural networks. The Nano targets cost-sensitive deployments; the Xavier NX steps up for more demanding workloads. I ran experiments on both to understand the performance ceiling of each.
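Finding the performance ceiling of each board comes down to measuring sustained inference throughput under realistic load. A minimal sketch of that kind of benchmark harness, with a placeholder callable standing in for the real model session (the actual runtime, TensorRT or otherwise, isn't specified in this writeup):

```python
import time

def benchmark_fps(infer, frames, warmup=5):
    """Time an inference callable over a batch of frames and report FPS.

    `infer` is any callable taking one frame. On a Jetson this would wrap
    the deployed model session; here it is a stand-in so the harness runs
    anywhere.
    """
    # Warm-up passes: first runs often include allocation/initialization cost.
    for f in frames[:warmup]:
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Stand-in "model": sums pixel values, just to exercise the harness.
dummy_frames = [[0.5] * 100 for _ in range(50)]
fps = benchmark_fps(sum, dummy_frames)
print(f"{fps:.1f} FPS")
```

Running the same harness against the same model on both boards gives a direct Nano-vs-Xavier NX comparison, which is what separates cost-sensitive deployments from demanding ones.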

Use Cases Tested

People Counting

Detecting and counting individuals passing through a defined zone. Useful for retail footfall, building access monitoring, and occupancy tracking.
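The counting logic itself is simple once detections carry stable IDs from frame to frame. A sketch of a line-crossing counter, assuming an upstream tracker (the writeup doesn't name which one was used) that supplies per-frame centroids keyed by track ID:

```python
class ZoneCounter:
    """Count tracked objects whose centroid crosses a horizontal line.

    Input per frame: {track_id: (x, y)} in pixel coordinates. The tracker
    that assigns stable IDs is assumed upstream and out of scope here.
    """

    def __init__(self, line_y):
        self.line_y = line_y
        self.prev = {}      # track_id -> last seen y position
        self.in_count = 0
        self.out_count = 0

    def update(self, centroids):
        for tid, (x, y) in centroids.items():
            last = self.prev.get(tid)
            if last is not None:
                if last < self.line_y <= y:      # crossed downward -> "in"
                    self.in_count += 1
                elif y <= self.line_y < last:    # crossed upward -> "out"
                    self.out_count += 1
            self.prev[tid] = y
        return self.in_count, self.out_count

counter = ZoneCounter(line_y=200)
counter.update({1: (50, 150)})            # person above the line
ins, outs = counter.update({1: (52, 240)})  # same person now below it
print(ins, outs)
```

The same crossing test works for vehicles at a lot entrance; only the detector class and the line placement change.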

Vehicle Counting

Detecting and counting vehicles by entry/exit or traffic flow. Relevant for parking management, toll monitoring, and traffic intelligence.

Queue Detection

Identifying and measuring queue length at service counters or entry points. Helps operations teams respond to wait time spikes in real time.

What Edge AI Made Possible

Lower cost: No cloud compute bill for continuous inference. The device is a one-time hardware cost.
No bandwidth dependency: Video stays local. Only detection results (counts, events, timestamps) are sent to the backend — a fraction of the data.
Lower latency: Inference happens at the source. Real-time detection without round-trip network delay.
Viable for limited-budget customers: The economics work for customers who couldn't justify a cloud-based analytics deployment.
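The bandwidth point is easy to make concrete: what leaves the device is a small event payload, not a video stream. A sketch of such a payload (field names are illustrative; the real backend schema isn't described in this writeup):

```python
import json
import time

def detection_event(camera_id, event_type, count):
    """Build the compact event payload sent upstream instead of video.

    A payload like this is tens of bytes. By comparison, a continuous
    1080p H.264 stream runs on the order of megabits per second.
    """
    return json.dumps({
        "camera": camera_id,
        "event": event_type,
        "count": count,
        "ts": int(time.time()),
    })

payload = detection_event("lot-entrance-01", "vehicle_count", 42)
print(len(payload.encode()), "bytes")
```

The backend only ever sees these events, so a site with a slow or metered uplink can still run the full analytics workload.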

How This Led to the 2022 Computer Vision System

The edge AI research validated that on-device inference was production-viable — not just a research curiosity. The detection workloads that ran on Jetson in 2021 became the foundation for the full computer vision platform built in 2022: three custom-trained YOLO models for vehicle detection, object detection, and license plate recognition, deployed in a production system with a full operator dashboard.

The architectural decision to keep inference close to the camera — rather than centralizing everything in a data center — came directly from this research.

Outcome & Impact

Validated that on-device inference was production-viable for real detection workloads, not just a research curiosity
People counting, vehicle counting, and queue detection — all running on Jetson hardware without cloud dependency
Research became the direct foundation for the Computer Vision System built in 2022
Architectural decision to run inference close to the camera — not in a data center — came directly from this work
Opened up a viable path for smaller Indonesian customers who couldn't justify cloud-based analytics costs

What I Learned

Research is only valuable if it changes the architecture of what comes next.

The Jetson experiments weren't a side project — they were a deliberate attempt to answer a real question before committing to an architecture. That question shaped every decision in the 2022 Computer Vision System: where inference runs, how data flows, and what the cost structure looks like for customers. Good R&D doesn't produce papers — it produces better systems.