
Lenovo introduces entry-level, liquid cooled AI edge server


Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server designed to make edge AI affordable for enterprises as well as small and medium-sized businesses.

AI systems are not normally associated with being small and compact; they are typically big, decked-out servers with lots of memory, GPUs, and CPUs. But the SE100 is built for inferencing, the less compute-intensive portion of AI processing, Lenovo stated. GPUs are considered overkill for inferencing, and multiple startups are making small PC cards with inferencing chips on them instead of the more power-hungry CPUs and GPUs.

This design brings AI to the data rather than the other way around. Instead of sending data to the cloud or a data center for processing, edge computing uses devices located at the data source, reducing latency and cutting the amount of data that has to be sent upstream, Lenovo stated.
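As a rough illustration of that pattern rather than the SE100's actual software stack, a minimal sketch of on-device inferencing using the open-source ONNX Runtime library might look like the following; the model file and input array are hypothetical placeholders:

    # A minimal sketch of local (edge) inferencing, assuming a model already
    # exported to ONNX format and the open-source onnxruntime package.
    # "defect_detector.onnx" is a hypothetical model, not anything SE100-specific.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("defect_detector.onnx")
    input_name = session.get_inputs()[0].name

    def classify(frame: np.ndarray):
        # Inference runs on the edge device itself; only the small result,
        # not the raw data, would need to travel to the cloud or data center.
        outputs = session.run(None, {input_name: frame})
        return outputs[0]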

Rolled out at the Mobile World Congress show, the SE100 is part of Lenovo’s family of new ThinkSystem V4 servers, with the V4 systems handling on-premises training and the SE100 sitting at the edge for hybrid cloud deployments. Like the V4, the SE100 comes with Intel Xeon 6 processors and the company’s Neptune liquid-cooling technology.

But it is also very compact. Lenovo says the SE100 is 85% smaller than a standard 1U server. Its power draw is designed to stay under 140W, even in a GPU-equipped configuration, according to Lenovo.

The ThinkEdge SE100 is designed for constrained spaces, and because it uses liquid cooling instead of fans, it can go into public places without being exceptionally noisy. The company said the server has been specifically engineered to reduce airflow requirements while lowering fan speed and power consumption, keeping components cooler to extend system health and lifespan.

“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” said Scott Tease, vice president of products, Lenovo Infrastructure Solutions Group, in a statement. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device, to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the Edge.”
