Tachyum Demonstrates Full BF16 AI Support in GCC and PyTorch

Tachyum® today announced that it has successfully integrated the BF16 data type into its Prodigy® compiler and software distribution, which is now available to early adopters and customers as a pre-installed image as part of beta testing.

BF16, or bfloat16, is a truncated 16-bit floating point data type derived from the IEEE 32-bit single-precision format (FP32). It is used to accelerate machine learning by reducing storage requirements and increasing the calculation speed of ML algorithms. Tachyum now fully supports BF16 in GCC 13.2 (GNU Compiler Collection), in the Eigen HPC/linear algebra library optimized for the Prodigy Universal Processor, and in the PyTorch AI framework.
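For illustration, the following minimal PyTorch sketch (generic CPU code, not Prodigy-specific) shows how casting tensors to bfloat16 halves storage relative to FP32, which is the basis of the memory and speed benefits described above:

import torch

x_fp32 = torch.randn(1024, 1024)            # IEEE 754 single precision (FP32)
x_bf16 = x_fp32.to(torch.bfloat16)          # same 8-bit exponent as FP32, mantissa truncated to 7 bits

print(x_fp32.element_size())                # 4 bytes per element
print(x_bf16.element_size())                # 2 bytes per element

# Matrix multiply carried out in BF16; on hardware with native BF16 support
# this maps onto wide vector/matrix units instead of FP32 arithmetic.
y = x_bf16 @ x_bf16
print(y.dtype)                              # torch.bfloat16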

Tachyum’s Prodigy was designed from the ground up to handle matrix and vector processing rather than treating it as an afterthought. Among Prodigy’s vector and matrix features are support for a range of data types (FP64, FP32, TF32, BF16, Int8, FP8, FP4 and TAI); 2x1024-bit vector units per core; AI sparsity and super-sparsity support; and no penalty for misaligned vector loads or stores when crossing cache lines. This built-in support delivers high performance for AI training and inference workloads while reducing memory utilization.

"We continue to strengthen our software distribution package to ensure the greatest breadth of application, framework and library support for Prodigy in advance of its release," said Dr. Radoslav Danilak, founder and CEO of Tachyum. “The use of BF16 improves hardware efficiency by improving performance. Our support of the format is consistent with our goals of having Prodigy provide the performance required of hyperscale, high-performance computing and AI workloads without modifications and affirms our commitment to transforming data centers around the world.”

As a Universal Processor offering industry-leading performance for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) with a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.

A video demonstration of image classification with a ResNet model, using the native PyTorch implementation on Tachyum Linux running on a Prodigy emulation system, is available for viewing at https://youtu.be/BOZ2ZV6Nr48. The demonstrated ResNet model was quantized to the BF16 data type to take advantage of Prodigy’s BF16 vector instructions, particularly in the activation, loss and reduction functions. The next video will demonstrate the completion of FP8 testing.
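As a rough sketch of what such a BF16 inference path looks like in stock PyTorch and torchvision (the demo’s own scripts are not public, so the model variant, input shape and weights below are placeholder assumptions):

import torch
from torchvision import models

# Hypothetical example: ResNet-50 with no pretrained weights, cast to BF16.
model = models.resnet50(weights=None).eval()
model = model.to(torch.bfloat16)

image = torch.randn(1, 3, 224, 224, dtype=torch.bfloat16)   # placeholder input batch

with torch.no_grad():
    logits = model(image)                         # forward pass runs in BF16
    probs = torch.softmax(logits.float(), dim=1)  # reductions are often accumulated in FP32 for stability

print(probs.argmax(dim=1))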

Follow Tachyum

https://twitter.com/tachyum

https://www.linkedin.com/company/tachyum

https://www.facebook.com/Tachyum/

About Tachyum

Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPU, and a TPU in a single processor to deliver industry-leading performance, cost and power efficiency for both specialty and general-purpose computing. As global data center emissions continue to contribute to a changing climate, with projections of their consuming 10 percent of the world’s electricity by 2030, the ultra-low power Prodigy is positioned to help balance the world’s appetite for computing at a lower environmental cost. Tachyum recently received a major purchase order from a US company to build a large-scale system that can deliver more than 50 exaflops performance, which will exponentially exceed the computational capabilities of the fastest inference or generative AI supercomputers available anywhere in the world today. When complete in 2025, the Prodigy-powered system will deliver a 25x multiplier vs. the world’s fastest conventional supercomputer – built just this year – and will achieve AI capabilities 25,000x larger than models for ChatGPT4. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.
