
Introducing Qualcomm Cloud AI 100 Accelerators for HPE Edgeline Converged Edge Systems

HPE and Qualcomm are collaborating to deliver high-performance, power-efficient solutions for customers deploying ML/DL models at the edge.

AI inference workloads are often larger in scale than training workloads and frequently need to meet specialized requirements, such as low latency and high throughput, to enable real-time results. That's why the best model deployment infrastructure often differs from what's needed for development and training.…
