Intel, Marvell, Qualcomm Pledge Support for Glow AI Compiler – ExtremeTech



Earlier this year, Facebook introduced Glow, a new open source machine learning compiler intended for heterogeneous systems. The goal is to deliver better performance and improved energy efficiency by generating more efficient code. Here’s how the team behind Glow described the project in its initial whitepaper:

In the Glow project, we focus on the lower parts of the software stack. We work to provide PyTorch and other frameworks with a low-level graph and a code generator for neural networks. The name Glow is an abbreviation for Graph-Lowering, which is the main technique that the compiler uses for generating efficient code. The Glow low-level graph will not replace the machine learning high-level graph, in the same way that the low-level intermediate representation in compilers does not replace the abstract syntax tree. We aim to provide a useful compiler toolkit that will allow hardware developers to focus on implementing efficient acceleration hardware, each of which likely differ in capabilities, and use Glow for automating compilation tasks such as instruction selection, memory allocation and graph scheduling. The full compiler toolkit is open-source and publicly available.
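Graph lowering, the technique the compiler is named for, is conceptually straightforward: high-level neural network operators are rewritten into simpler linear algebra primitives that a backend can then map onto hardware instructions. Here’s a toy sketch of the idea — the node names, the `lower` function, and the lowering rule are all illustrative, not Glow’s actual IR or API:

```python
# A minimal, hypothetical sketch of "graph lowering": a high-level
# operator is rewritten into simpler primitives before code generation.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                      # e.g. "FullyConnected", "MatMul", "Add"
    inputs: list = field(default_factory=list)

def lower(node):
    """Recursively rewrite high-level ops into low-level primitives."""
    inputs = [lower(i) for i in node.inputs]
    if node.op == "FullyConnected":
        # FC(x, w, b)  ->  Add(MatMul(x, w), b)
        x, w, b = inputs
        return Node("Add", [Node("MatMul", [x, w]), b])
    return Node(node.op, inputs)

# High-level graph: a single fully connected layer.
fc = Node("FullyConnected", [Node("Input"), Node("Weights"), Node("Bias")])
lowered = lower(fc)
print(lowered.op)                # Add
print(lowered.inputs[0].op)      # MatMul
```

The payoff, as the whitepaper describes it, is that a hardware vendor only has to implement the small set of low-level primitives well, and every high-level operator that lowers onto them comes along for free.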

What Facebook is announcing now is a new slate of hardware partners that have pledged to support Glow in their own products. Cadence, Esperanto, Intel, Marvell, and Qualcomm have all committed to supporting Glow in silicon in future products. The software isn’t designed to generate code for just a single architecture — Facebook intends to support a range of specialized machine learning accelerators from multiple companies, with corresponding performance improvements across vendors. This support for hardware accelerators isn’t limited to a single type of operation, either. FB’s press release notes that the hardware-independent parts of the compiler focus on math optimizations that aren’t tied to any specific model. Glow also ships with a linear algebra optimizer, a CPU-based reference implementation (for testing hardware accuracy), and various test suites. The goal is to reduce the amount of time it takes hardware manufacturers to bring new devices to market.
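Those model-independent math optimizations are the kind of rewrite any compiler does regardless of target hardware. As a purely illustrative sketch (this tiny tuple-based IR and `optimize` pass are assumptions for the example, not Glow’s actual code), a peephole pass might cancel out a pair of redundant transposes:

```python
# A hypothetical peephole optimization: rewrite Transpose(Transpose(x))
# back to x. The expression IR is a nested tuple: ("Transpose", sub)
# for an op node, or a plain string for a leaf tensor.
def optimize(expr):
    if isinstance(expr, tuple):
        op, sub = expr
        sub = optimize(sub)          # optimize bottom-up
        if op == "Transpose" and isinstance(sub, tuple) and sub[0] == "Transpose":
            return sub[1]            # Transpose(Transpose(x)) -> x
        return (op, sub)
    return expr

print(optimize(("Transpose", ("Transpose", "x"))))  # prints: x
```

Because rewrites like this operate on the math itself rather than on any one network, they benefit every model and every backend that runs through the compiler.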

Glow versus conventional CPU performance, according to FB.

FB is putting serious effort behind Glow. Earlier this year the company launched version 1.0 of its PyTorch deep learning framework, along with new object detection models, libraries for language translation, and Tensor Comprehensions for automatically synthesizing machine learning kernels. There’s been a tremendous push in recent years to build common frameworks for AI and ML that run on a wide range of hardware, and Glow wants to be part of it.


It’s interesting to note the two companies absent from this list: AMD and Nvidia. Both take a keen interest in AI/ML, AMD as a relative newcomer that wants to make its mark with a 7nm Vega data center product later this year, and Nvidia as the established leader in the AI/ML space. AMD has participated in Facebook’s Open Compute Project before, so it’s possible we’ll see some activity on this front at a later date.

