IBM And MLCommons Show How Pervasive Machine Learning Has Become

  • 📰 ForbesTech


This week IBM announced its latest Z-series mainframe and MLCommons released its latest benchmark series. The two announcements had something in common: machine learning, which is becoming pervasive everywhere, from financial fraud detection on mainframes to wake-word detection in appliances.

MLCommons releases benchmark results in batches and has different publishing schedules for inference and for training. The latest announcement was for version 2.0 of the MLPerf Inference suite for data center and edge servers, version 2.0 for MLPerf Mobile, and version 0.7 for MLPerf Tiny for IoT devices.

To date, the company that has had the most consistent set of submissions, producing results every iteration, in every benchmark test, and by multiple partners, has been Nvidia. Nvidia and its partners appear to have invested enormous resources in running and publishing every relevant MLCommons benchmark. No other vendor can match that claim. The recent batch of inference benchmark submissions include Nvidia Jetson Orin SoCs for edge servers and the Ampere-based A100 GPUs for data centers.

Recently, Qualcomm and its partners have been posting more data center MLPerf benchmarks for the company’s Cloud AI 100 platform and more mobile MLPerf benchmarks for Snapdragon processors. Qualcomm’s latest silicon has proved very power-efficient in data center ML tests, which may give it an edge in power-constrained edge server applications.

Many of the submitters are system vendors using processors and accelerators from silicon vendors like AMD, Andes, Ampere, Intel, Nvidia, Qualcomm, and Samsung. But many of the AI startups have been absent. As one consulting company, Krai, put it: “Potential submitters, especially ML hardware startups, are understandably wary of committing precious engineering resources to optimizing industry benchmarks instead of actual customer workloads.”

The MLPerf Tiny benchmark is designed for very low power applications such as keyword spotting, visual wake words, image classification, and anomaly detection. In this case we see results from a mix of small companies like Andes, Plumeria, and Syntiant, as well as established companies like Alibaba, Renesas, Silicon Labs, and STMicroelectronics.

While IBM didn’t participate in MLCommons benchmarks, the company takes ML seriously.

 



Similar News: You can also read news stories similar to this one that we have collected from other news sources.

OpenAI's DALL-E 2 produces fantastical images of most anything you can imagine: On Wednesday, OpenAI unveiled DALL-E 2, a higher-resolution, lower-latency sequel to its text-to-image machine learning system.
Source: Engadget

LYT CEO & Former Tesla Engineer Talks Using AI To Solve Traffic Flow: LYT is a cloud-based software platform that uses vehicle and machine learning technologies to solve the problem of traffic flow.
Source: CleanTechnica