
Machine Vision for AI Edge Applications: DeGirum Enhances Computing at the Brink


A growing community of developers is discovering the power and versatility of RealSense™ depth cameras for creating machine vision applications. This thriving community recently received a boost when RealSense partnered with DeGirum to simplify the development and deployment of AI edge applications using PySDK, DeGirum’s hardware-agnostic software development kit.

Edge applications are software apps that bring computing power closer to the point of action, enabling more efficient, responsive, and secure solutions. Many edge apps incorporate machine learning algorithms to process visual data for object detection, recognition, tracking, and other visual tasks. For example, with autonomous vehicles, machine vision is used to detect pedestrians, traffic signs, and other objects in the environment. In manufacturing, machine vision systems can inspect products for defects, ensuring quality control and reducing waste. In security applications, machine vision is used for facial recognition, surveillance, and intrusion detection. By processing visual data locally, these edge applications can provide real-time insights, improve efficiency, and enhance safety.

DeGirum Corporation was founded in 2017 by semiconductor industry veterans with expertise in machine learning and AI. Their goal was to develop a flexible, powerful, and easy-to-use acceleration platform to enable more capable and complex edge applications. DeGirum’s primary product, the Orca™ Edge AI Accelerator, is complemented by a comprehensive cloud platform for developing and deploying AI models.

With this mature software platform as a foundation, DeGirum has integrated the RealSense SDK with its PySDK, allowing developers to quickly and easily combine RealSense depth cameras with DeGirum’s AI processing capabilities, accelerating deployment times and simplifying the development process.
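To make the integration concrete, here is a minimal sketch of feeding a single RealSense color frame into a PySDK object-detection model. The zoo address, access token, and model name are placeholders, and the connection arguments and result fields should be checked against DeGirum’s PySDK documentation; treat this as an outline rather than a drop-in implementation.

```python
import numpy as np
import pyrealsense2 as rs   # RealSense SDK Python bindings
import degirum as dg        # DeGirum PySDK

# Connect to a DeGirum model zoo and load a detection model.
# The zoo URL, token, and model name below are placeholders.
zoo = dg.connect(dg.CLOUD, "<zoo-url>", token="<your-token>")
model = zoo.load_model("<object-detection-model-name>")

# Start a RealSense color stream.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    # Grab one frame, convert it to a NumPy array, and run inference.
    frames = pipeline.wait_for_frames()
    color_frame = frames.get_color_frame()
    image = np.asanyarray(color_frame.get_data())

    result = model(image)       # run the AI model on the frame
    for det in result.results:  # detections typically carry label, score, bbox
        print(det["label"], det["score"], det["bbox"])
finally:
    pipeline.stop()
```

In PySDK the inference target (cloud, a local AI server, or the host itself) is selected when connecting to the zoo, so the same application code can move from cloud development to edge deployment.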

“Our solutions are designed to enhance object detection and spatial analysis across a wide range of camera systems,” says Bob Porooshani, Head of Business Development at DeGirum. “Our easy-to-use PySDK, integrated with RealSense’s depth-sensing capabilities, has significantly improved our ML-based inferencing and opened new market opportunities in fields such as autonomous systems, robotics, and smart cities.”

According to Porooshani, the RealSense SDK offers a straightforward integration process, allowing developers to easily incorporate advanced depth-sensing features into their applications. This, in turn, significantly enhances DeGirum’s inference capabilities, enabling fast, accurate AI-powered visual processing. DeGirum can run AI models on neural processing units (NPUs), graphics processing units (GPUs), and central processing units (CPUs) via the OpenVINO runtime, with access to all of its available AI models. “Working with RealSense technology also enhances customer trust and boosts our market credibility,” Porooshani adds.

Case in Point

At NRF 2024, RealSense and DeGirum showcased a joint demonstration of an AI model trained for fruit detection. The demo highlighted how the RealSense depth-sensing technology can be easily integrated with DeGirum’s PySDK and Orca Edge AI Accelerator. In just a few steps, developers were able to deploy the AI model to detect fruit in real-time, demonstrating the seamless combination of the two SDKs to enhance AI processing at the edge.

AI model trained for fruit detection

“The DeGirum engineering team highly values the unmatched depth accuracy of RealSense cameras for edge AI applications,” Porooshani concludes. “The integration has not only improved accuracy but also unlocked new functionalities, such as enhanced real-time object tracking and precision spatial analysis. Our platform remains camera-agnostic, supporting the full range of RealSense Depth Camera products.”

Advantages of PySDK for RealSense Developers

  • Develop in the cloud and deploy at the edge with the same code
  • Standardize on one software stack for multiple hardware options, including NPUs, GPUs, and CPUs
  • Leverage custom code for output visualization
  • Connect to APIs for decoding, resizing, pre-processing, inference, and post-processing
  • Utilize advanced features such as batch prediction, tracking, slicing, and compound models (see the sketch after this list)
  • Integrate with a wide range of camera systems, including RealSense products, thanks to PySDK’s hardware-agnostic architecture
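As a rough sketch of how batch prediction can be combined with RealSense depth data for spatial analysis, the example below streams aligned color and depth frames from a RealSense camera and reads the distance at the center of each detected bounding box. It assumes the degirum and pyrealsense2 packages; the zoo address, token, and model name are placeholders, and the predict_batch usage and result-field names should be verified against DeGirum’s PySDK documentation.

```python
import numpy as np
import pyrealsense2 as rs
import degirum as dg

# Placeholders: substitute your zoo address, token, and model name.
zoo = dg.connect(dg.CLOUD, "<zoo-url>", token="<your-token>")
model = zoo.load_model("<object-detection-model-name>")

# Start aligned color and depth streams.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth pixels onto the color image

depth_frames = []  # depth frames queued in the same order as submitted color frames

def frame_source():
    """Yield color frames for inference; keep the matching depth frame for later lookup."""
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frames.append(frames.get_depth_frame())
        yield np.asanyarray(frames.get_color_frame().get_data())

try:
    # predict_batch pipelines pre-processing, inference, and post-processing
    # over the frame stream and yields one result per submitted frame, in order.
    for result in model.predict_batch(frame_source()):
        depth = depth_frames.pop(0)  # depth frame matching this result
        for det in result.results:
            x1, y1, x2, y2 = map(int, det["bbox"])
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            distance_m = depth.get_distance(cx, cy)  # meters at the box center
            print(f'{det["label"]}: {distance_m:.2f} m away')
finally:
    pipeline.stop()
```

Pairing each result with its depth frame in submission order is what lets a single detection stream drive both recognition and distance estimation, which is the kind of depth-augmented inference the fruit-detection demo illustrated.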

“We look forward to continuing our partnership to provide comprehensive edge AI solutions to enhance RealSense technologies.”

– Bob Porooshani, Head of Business Development, DeGirum
