- 02/19/2026
embedded award 2026: Embedded vision nominees
Embedded vision systems handle very high data volumes and data rates. Consequently, the demands on these systems, their components, and their interconnects are far higher than for classic sensor systems, and embedded vision is driving the entire industry forward. The nominees in this category are among those drivers.

A compact CAN FD interface card, an on-site intelligent operations tool, and an integrated ST platform combining vision, decision, and motion.

CAN FD Expansion Card (MEC-CAN-2F14i) for Compact Embedded Vision Systems
Exhibitor: Cervoz Technology
Hall/Booth: 1-401
The Cervoz CAN FD expansion card, MEC-CAN-2F14i, brings reliable multi-peripheral synchronization to compact embedded vision systems by providing four fully isolated CAN FD channels through a single M.2 PCIe x1 slot. It eliminates the need for external USB-to-CAN converters and excess cabling, reducing system complexity, improving EMI robustness, and ensuring precise timing between cameras, lighting, triggers, and motion equipment.
Its innovative design integrates four independent CAN FD / CAN 2.0 channels with 2.5 kV galvanic isolation per port, enabling clean signal separation for lighting control, motion feedback, and trigger distribution. A 3-in-1 breakaway format (M.2 2242/2260/2280) and switch-controlled split termination allow mechanical flexibility and quick field-level bus tuning without hardware rework.
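On a Linux host, a native CAN interface card like this would typically surface its channels through SocketCAN rather than a USB bridge. As a rough illustration of what that buys the integrator, the sketch below packs a standard Linux CAN FD frame and opens a raw CAN FD socket; the interface name `can0` and the trigger payload are assumptions for illustration, not Cervoz specifics.

```python
import socket
import struct

# Linux canfd_frame layout: can_id (u32), len (u8), flags (u8),
# two reserved bytes, then a fixed 64-byte data field (72 bytes total).
CANFD_FRAME_FMT = "=IBBBB64s"
CANFD_BRS = 0x01  # bit-rate switch: send the payload at the faster data rate

def build_canfd_frame(can_id: int, data: bytes, flags: int = CANFD_BRS) -> bytes:
    """Pack a CAN FD frame as the kernel expects it on a raw CAN socket."""
    if len(data) > 64:
        raise ValueError("CAN FD payload is limited to 64 bytes")
    return struct.pack(CANFD_FRAME_FMT, can_id, len(data), flags, 0, 0,
                       data.ljust(64, b"\x00"))

def open_canfd_socket(ifname: str) -> socket.socket:
    """Open a raw SocketCAN socket with CAN FD frames enabled (Linux only)."""
    s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FD_FRAMES, 1)
    s.bind((ifname,))
    return s

# Example: a hypothetical trigger broadcast to lighting and camera nodes.
frame = build_canfd_frame(0x123, b"\x01TRIG")
# sock = open_canfd_socket("can0")  # e.g. one of the card's four channels
# sock.send(frame)
```

Because each of the four channels appears as its own interface, lighting, motion feedback, and trigger traffic can stay on separate buses and be debugged independently with standard tools.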
Compared to fixed-length, single-channel solutions, the MEC-CAN-2F14i offers unmatched versatility: one SKU fits multiple platforms, four independent buses simplify fault isolation, and industrial-grade reliability (-40°C to +85°C operation, 15 kV ESD protection) ensures stable operation even in harsh factory or outdoor environments.
With reduced material use, fewer external converters, and a long-lived, platform-flexible design, the card also supports sustainable system architectures. The result is a compact, robust, and future-ready solution for high-precision embedded vision and robotic control.
Memorence Operagents
Exhibitor: Memorence AI
Hall/Booth: 3-439
Memorence Operagents brings real-time intelligence directly to frontline operations. The system assists industrial and service personnel by understanding visual context, user intent, and procedural steps in real time. This ensures reliable execution of complex SOPs, reduces human errors, and adapts instantly to changing conditions without requiring retraining.

At its core is an instant-learning, on-device AI developed by Memorence AI for real-time operational environments. Operagents integrates embedded vision, multimodal sensing, and adaptive learning within a compact embedded platform. The platform also incorporates vision-language models (VLMs) and natural voice interaction, enabling contextual understanding and explainable guidance directly on site. It delivers step-by-step guidance, immediate validation, and contextual explanations, fully offline, with low latency, and compliant with strict data-privacy requirements.
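Memorence's runtime is not public, but the closed loop the description implies (guide a step, validate it against the perceived scene, repeat on failure) can be sketched abstractly. Everything below, including the step names and the scripted perception stub, is an invented illustration, not the Operagents API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SOPStep:
    instruction: str               # guidance shown or spoken to the operator
    check: Callable[[dict], bool]  # validates the perceived scene state

def run_sop(steps: list[SOPStep], perceive: Callable[[], dict]) -> list[str]:
    """Walk an SOP, validating each step against the perceived scene before
    advancing; on a failed check the step is repeated with a correction."""
    log = []
    for step in steps:
        scene = perceive()
        while not step.check(scene):       # error caught before it propagates
            log.append(f"retry: {step.instruction}")
            scene = perceive()             # re-observe after operator corrects
        log.append(f"ok: {step.instruction}")
    return log

# Demo with scripted perception: first check passes, second needs one retry.
scenes = iter([{"cover": "open"}, {"torque": 0.4}, {"torque": 1.2}])
steps = [
    SOPStep("Open the housing cover", lambda s: s.get("cover") == "open"),
    SOPStep("Torque bolt to spec", lambda s: s.get("torque", 0.0) > 1.0),
]
print(run_sop(steps, lambda: next(scenes)))
```

In the real system the `check` role would be played by on-device vision and VLM inference rather than hand-written predicates; the point of the sketch is only the validate-before-advance loop that prevents errors early in the workflow.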
Compared to traditional AI solutions that stop at detection or analytics, Operagents closes the loop between perception and action within operational workflows. Operator corrections are applied immediately, enabling rapid adaptation alongside conventional retraining processes.
The system can be deployed in real production environments without requiring dedicated fixtures or controlled lighting enclosures. Memorence Operagents increases quality and consistency by preventing errors early in the workflow and supporting reliable execution. The result is a practical and scalable human–AI collaboration framework for reliable industrial operations.

Vision, Decision, Motion - ST product chain in a humanoid robot
Exhibitor: STMicroelectronics
Hall/Booth: 4A-148
Vision, Decision, Motion is the first platform to unify AI-based gesture recognition, deterministic control, and ultra-compact multi-axis actuation within a fully integrated ST technology stack. Instead of relying on fragmented systems from multiple vendors, the solution uses ST chipsets end to end, from edge vision AI to kinematic decision-making to a 6-axis drive, drastically reducing integration effort, debugging complexity, and physical footprint.
The entire platform runs on ST hardware: an STM32N6 processes MediaPipe-based hand landmarks at 30 fps directly at the edge, while the STM32MP257 functions as a soft PLC for real-time kinematics and trajectory planning. A complete 6-DoF actuator is integrated on a single 6 × 6 cm PCB, setting a new benchmark for compact end-effector design in robotics.
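ST's gesture-mapping firmware is proprietary, but the general idea of turning MediaPipe hand landmarks into an actuator command can be sketched. The landmark indices are MediaPipe Hands' standard ones (4 = thumb tip, 8 = index fingertip, 0 = wrist, 9 = middle-finger MCP); the palm-length normalization and the 0-1 scaling are illustrative assumptions, not ST's mapping.

```python
import math

THUMB_TIP, INDEX_TIP = 4, 8  # MediaPipe Hands landmark indices
WRIST, MIDDLE_MCP = 0, 9     # used to normalize for hand size/distance

def pinch_to_grip(landmarks: list[tuple[float, float]]) -> float:
    """Map the thumb-index pinch distance to a gripper command in [0, 1],
    where 0 = fully closed and 1 = fully open. Dividing by the palm length
    makes the command roughly invariant to how far the hand is from the
    camera."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    palm = dist(landmarks[WRIST], landmarks[MIDDLE_MCP])
    pinch = dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP])
    # Illustrative scaling: a pinch wider than one palm length = fully open.
    return max(0.0, min(1.0, pinch / palm))
```

In a deployment like the one described, a command such as this would be low-pass filtered and handed to the soft PLC's trajectory planner rather than driving the actuator directly.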
Its core USP is the seamless Vision → Decision → Motion integration: NPU-accelerated perception, heterogeneous real-time control, and highly condensed motion hardware operate in tightly coordinated firmware. This delivers deterministic, industrial-grade behavior without the friction of multi-vendor systems. Proprietary gesture-mapping firmware, PCB integration methods, and protected IP reinforce the platform's technological lead.
This level of actuation compactness is made possible by the latest STSPIN900-series motion-control technology, which allows precise, powerful movement in very cramped spaces.
With reduced material usage, energy-efficient STSPIN drivers, and long-lifecycle ST components, the system also provides strong sustainability benefits. Vision, Decision, Motion forms a scalable foundation for next-generation humanoid robots, combining high performance, minimal footprint, and maximum design efficiency.
