What is the NeuReality Solution?

The NR1™ AI Inference Solution combines a holistic systems architecture, hardware technologies, and a software platform that together make AI easy to install, use, and manage.

NeuReality accelerates the possibilities of AI by offering a revolutionary solution that lowers the overall complexity, cost, and power consumption.

While other companies also develop Deep Learning Accelerators (DLAs) for deployment, no other company connects the dots with a software platform purpose-built to help manage specific hardware infrastructure.

This system-level, AI-centric approach simplifies running AI inference at scale.

NeuReality’s AI-Centric Approach

Software/APIs

An ecosystem of upper-level AI tools for simplified deployment and orchestration

Architecture

Purpose-built and optimized for AI inference

Hardware

New network addressable system-on-a-chip to migrate simple but critical data-path functions from software to hardware

NeuReality Software

Develop, deploy, and manage AI inference

NeuReality is the only company that bridges the gap between the infrastructure where AI inference runs and the MLOps ecosystem.

We’ve created a suite of software tools that make it easy to develop, deploy, and manage AI inference.


Our software stack:

Works with any trained model in any development environment

Includes tools that offload the complete AI pipeline

Connects AI workflows easily to any environment

Any data scientist, software engineer, or DevOps engineer can run any model faster and more easily, with less headache, overhead, and cost.
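For example, a model trained in one framework is typically handed to an inference toolchain through a standard interchange format. The snippet below exports a trained PyTorch model to ONNX; whether ONNX is the specific ingestion path here is an assumption, and the model and file names are purely illustrative.

```python
# Illustrative only: exporting a trained PyTorch model to ONNX, a common
# interchange format accepted by many inference toolchains. Model and file
# names are examples, not tied to the NeuReality SDK.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one RGB image, 224x224
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size
)
```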

NeuReality APIs

Our SDK includes three APIs that cover the complete life cycle of an AI inference deployment:

Toolchain API: For developing inference solutions from any AI workflow (NLP, Computer Vision, Recommendation Engine)
Provisioning API: For deploying and managing AI workflows
Inference API: For running AI-as-a-service at scale
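As a rough sketch of how these three stages might fit together, the code below mocks the life cycle end to end. The function names (compile_workflow, provision, infer), the DeployedService type, and the endpoint format are placeholders, not the real NeuReality SDK.

```python
# Hypothetical sketch of the three-API life cycle described above.
# All names and the endpoint format are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class DeployedService:
    endpoint: str


# Toolchain API: turn a trained model plus its pre/post steps into a deployable workflow.
def compile_workflow(model_path: str, pipeline: list[str]) -> dict:
    return {"model": model_path, "pipeline": pipeline}


# Provisioning API: place the compiled workflow onto inference hardware and manage it.
def provision(workflow: dict, replicas: int = 1) -> DeployedService:
    return DeployedService(endpoint=f"napu://cluster/{workflow['model']}")


# Inference API: send requests to the running AI-as-a-service endpoint.
def infer(service: DeployedService, payload: bytes) -> dict:
    return {"endpoint": service.endpoint, "result": "<model output>"}


workflow = compile_workflow("resnet50.onnx", ["jpeg-decode", "resize", "normalize"])
service = provision(workflow, replicas=4)
print(infer(service, b"<image bytes>"))
```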

NeuReality Architecture

NeuReality has developed a new architecture designed to exploit the full power of DLAs.

We accomplish this through the world’s first Network Addressable Processing Unit, or NAPU.

This architecture enables inference through hardware with AI-over-Fabric, an AI-hypervisor, and AI-pipeline offload.
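To illustrate what "network addressable" means for a client, the hedged sketch below sends an inference request straight to a device endpoint over HTTP. The URL, port, and request shape are invented placeholders; the actual wire protocol is not specified here.

```python
# Conceptual sketch of "network addressable" inference: the client sends a
# request directly to an endpoint exposed by the inference device over the
# fabric, rather than to a host CPU that forwards work to an accelerator.
# The URL, port, and JSON request shape are invented placeholders.
import json
import urllib.request

payload = json.dumps({"model": "resnet50", "input": "<base64-encoded image>"}).encode()
request = urllib.request.Request(
    "http://napu-device.example:8080/v1/infer",  # hypothetical device endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```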

This illustration shows all of the functionality that is contained within the NAPU:

AI-centric vs CPU-centric

Traditional, generic, multi-purpose CPUs perform all their tasks in software, increasing latency and creating bottlenecks.

Our purpose-built, AI-centric NAPUs perform those same tasks in hardware that was specifically designed for AI inference.
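As a schematic illustration (not NeuReality code), the sketch below writes every data-path stage as ordinary host software, the way a CPU-centric server runs it; the comments mark which stages a NAPU would instead execute in dedicated hardware.

```python
# Illustrative contrast, not NeuReality code: in a CPU-centric server every
# stage below runs as host software. On a NAPU, the stages marked "offloaded"
# move into dedicated hardware, taking the host CPU out of the data path.
import numpy as np


def decode_and_preprocess(raw: bytes) -> np.ndarray:
    # Offloaded on a NAPU: decode, resize, and normalize in hardware.
    pixels = np.frombuffer(raw, dtype=np.uint8).astype(np.float32)
    return (pixels / 255.0).reshape(1, -1)


def run_model(x: np.ndarray) -> np.ndarray:
    # The neural-network math itself runs on a DLA in both designs.
    weights = np.random.default_rng(0).standard_normal((x.shape[1], 10))
    return x @ weights


def postprocess(logits: np.ndarray) -> int:
    # Offloaded on a NAPU: e.g. argmax / top-k selection in hardware.
    return int(np.argmax(logits))


print(postprocess(run_model(decode_and_preprocess(bytes(range(256)) * 4))))
```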

The following table compares these two approaches:

AI-centric NAPU vs. Traditional CPU-centric

Architecture Approach: Purpose-built for inference workflows vs. generic, multi-purpose chip
AI Pipeline Processing: Linear vs. "star" model
Instruction Processing: Hardware-based vs. software-based
Management: AI process natively managed by cloud orchestration tools vs. AI process not managed, only the CPU managed
Pre/Post Processing: Performed in hardware vs. performed in software by the CPU
System View: Single-chip host vs. partitioned (CPU, NIC, PCI switch)
Scalability: Linear scalability vs. diminishing returns
Density: High vs. low
Total Cost of Ownership: Low vs. high
Latency: Low vs. high, due to over-partitioning and bottlenecking

NeuReality Hardware

NR1 Network Addressable Processing Unit™

The NeuReality NR1™ is a network addressable inference Server-on-a-Chip with an embedded Neural Network Engine, the world’s first Network Addressable Processing Unit (NAPU). As workflow-optimized devices with specialized processing units, native networking, and virtualization capabilities, NAPUs are purpose-built building blocks for the heterogeneous data center of the future.

NR1-M™ AI Inference Module

The NeuReality NR1-M™ module is a full-height, double-wide PCIe card containing one NR1 Network Addressable Processing Unit™ (NAPU) system-on-chip. It operates as a network addressable inference server and can connect to an external Deep Learning Accelerator (DLA).

NR1-S™ AI Inference Appliance

The world’s first AI-centric server, NeuReality’s NR1-S™ is an inference server design built around NR1-M™ modules with the NR1 NAPU™, enabling truly disaggregated AI service in a scalable and efficient architecture. The system improves cost and power efficiency by up to 50X and doesn’t require IT involvement for end users to deploy.