New Taipei City, Taiwan, Oct. 14, 2024 – Vecow Co., Ltd., a global team of embedded experts, proudly introduces its next-generation edge AI server platforms. Powered by Intel® Xeon® D series, Intel® Xeon® Scalable, and AMD EPYC™ processors, the new lineup brings unmatched performance to AI, edge, and IoT applications.

Edge servers are transforming industries by enabling data processing closer to the source, reducing latency, and boosting operational efficiency. Vecow’s new edge AI server platforms offer specialized features tailored to the evolving needs of embedded and industrial markets. From AI inference to large-scale IT deployments, these solutions are ready to power tomorrow’s innovations.

Optimizing Autonomous Vehicles and Industrial Automation with Edge Computing

Vecow’s edge AI servers play a critical role in autonomous vehicles by enabling real-time data processing for AI models at the edge. By leveraging powerful processors and discrete GPU support, these servers can quickly handle complex AI inference tasks, such as object detection and decision-making, directly on the vehicle, reducing latency and enhancing the safety and efficiency of self-driving operations.

In industrial automation, Vecow’s edge AI servers are well suited to edge AI and real-time control systems. They enable predictive maintenance, real-time monitoring, and process optimization on the factory floor, minimizing downtime and improving production throughput.

2U Form Factor for Space-Constrained Environments

When space is limited, the Vecow RMS-4000 and RMS-3000 provide an ideal solution. Featuring a 2U rackmount chassis design, these systems offer front access to networking and connectivity, ensuring efficient use of space while allowing easy maintenance and serviceability.

Supporting Diverse Applications with Leading CPUs

The RMS-4000 is driven by dual 3rd Gen Intel® Xeon® Scalable processors with up to 40 cores each, designed for applications that demand powerful computing capabilities. The RMS-3000, equipped with an AMD EPYC™ processor with up to 64 cores, offers an alternative source of processing power for customers seeking optimal performance across a variety of use cases, including industrial IoT and large-scale AI workloads.

Converging High-performance Computing and Expansive Storage

Built with Intel® Xeon® D-2800/D-2700 processors, the Vecow ICS-1000 series, including the ICS-1110S and ICS-1000, sets a new standard for compute and storage capabilities.

For businesses handling massive data loads, the ICS-1110S delivers exceptional storage performance and data integrity. With ten U.2 storage devices offering up to 160TB of capacity and RAID 0, 1, 5, and 10 support via Intel® Virtual RAID on CPU (Intel® VROC), plus optional M.2 functionality in its flexible design, it meets the demands of data-intensive applications.

The ICS-1000 is a GPU-accelerated system featuring a 16-core Intel® Xeon® D-2876NT processor and enhanced memory support with 8 DDR4 UDIMM/RDIMM slots at 2933 MHz for up to 512GB, making it an ideal choice for AI-driven applications at the edge. With up to 6 PCIe expansion slots, it supports discrete graphics and provides a maximum power budget of 1800W for dual NVIDIA or AMD 2-slot, full-length graphics cards, well suited to AI inferencing.

“Our latest edge AI server platforms are designed to empower businesses with cutting-edge performance, allowing them to scale and optimize their operations,” said Jerry Chen, Product Manager of the Embedded Systems & Platform Division at Vecow. “With the RMS-4000, RMS-3000, and ICS-1000 series, we offer a comprehensive suite of hardware solutions that can be equipped with additional GPUs, enhancing their capabilities for AI computing and driving next-generation applications for our customers.”

To learn more about Vecow Server-Grade Computing Platforms, please visit the Vecow RMS-4000, RMS-3000, ICS-1110S, and ICS-1000 product pages.