
Google uses MLPerf to demonstrate performance on a giant version of BERT


Google used the MLPerf competition to demonstrate performance on a giant version of its BERT language model. (BERT, Bidirectional Encoder Representations from Transformers, is a transformer-based machine learning technique for natural language processing developed by Google.)

The deep learning world of artificial intelligence is obsessed with size.

Deep learning programs, such as OpenAI's GPT-3, keep using more and more GPU chips from Nvidia and AMD to build ever-larger software programs. The accuracy of the programs increases with their size, researchers claim.



This obsession with size was on display Wednesday in the latest industry benchmark results reported by MLCommons (a global, open, nonprofit consortium dedicated to improving machine learning for everyone), which sets the standard for measuring how quickly computer chips can train deep learning code.

Google decided not to submit to any of the standard deep learning benchmark tests, which consist of programs that are well established in the field but relatively dated. Instead, Google's engineers submitted a version of Google's natural language program BERT that no other vendor used.

MLPerf, the benchmark suite used to measure competitive performance, reports results for two divisions: the standard "Closed" division, where most vendors compete on established networks such as ResNet-50, and the "Open" division, which allows vendors to try non-standard approaches.


In the Open division, Google showed a computer using 2,048 of Google's TPU chips, version 4. That machine was able to train the BERT program in about 19 hours.

That version of BERT, a neural network with 481 billion parameters, had not been disclosed before. It is more than three orders of magnitude larger than the standard version of BERT known as "BERT Large", which has just 340 million parameters. Having many more parameters typically requires much more computing power.
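A quick back-of-the-envelope check of that size comparison, as a minimal sketch in Python (the only inputs are the two parameter counts quoted above):

```python
import math

giant_bert_params = 481e9   # 481 billion parameters (MLPerf Open submission)
bert_large_params = 340e6   # 340 million parameters (standard BERT Large)

ratio = giant_bert_params / bert_large_params
orders_of_magnitude = math.log10(ratio)

print(f"size ratio: {ratio:,.0f}x")                       # ~1,415x
print(f"orders of magnitude: {orders_of_magnitude:.2f}")  # ~3.15, i.e. "over three"
```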


Google said the new submission reflects the growing importance of scale in artificial intelligence.

The MLPerf test suite is the creation of MLCommons, an industry consortium that issues multiple benchmark evaluations each year covering the two parts of machine learning: so-called training, where a neural network is built by refining its settings over multiple experiments, and so-called inference, where the finished neural network makes predictions as it receives new data.
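To illustrate that distinction in the simplest possible terms, here is a minimal, purely hypothetical sketch in Python: a one-parameter model is fitted by gradient descent (training) and then used to predict on new data (inference). It is illustrative only and has nothing to do with the actual MLPerf workloads.

```python
# Training: repeatedly adjust a model's parameter to reduce error on known data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y (here y = 2x)
w = 0.0            # the single "weight" we are learning
lr = 0.05          # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # gradient of squared error with respect to w
        w -= lr * grad              # gradient-descent update

# Inference: the trained model makes predictions on new, unseen data.
new_x = 5.0
print(f"learned w = {w:.3f}, prediction for x={new_x}: {w * new_x:.3f}")  # ~10.0
```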

Wednesday's report covers the latest benchmark tests for the training phase; it follows the previous report issued in June.

The headline MLPerf results were discussed in a press release on the MLCommons website. The full details of the submissions can be found in tables posted on the site.


Google's Selvan said MLCommons should consider including more large models. Older, smaller networks like ResNet-50 "give us only one proxy" for large-scale training performance, he said.

What is missing, Selvan said, is a measure of the efficiency of systems as they grow larger. Google managed to run its giant BERT model at 63% efficiency, he told ZDNet, as measured by the number of floating-point operations performed per second relative to the hardware's theoretical capacity. That, he said, was better than the next-highest score, the 52% efficiency Nvidia achieved in developing the Megatron-Turing language model with Microsoft.
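That efficiency figure is simply the ratio of the floating-point throughput actually sustained to the hardware's theoretical peak. A minimal sketch of the calculation in Python follows; the peak and achieved numbers are hypothetical placeholders, not Google's actual figures.

```python
def flops_efficiency(achieved_flops_per_sec: float, peak_flops_per_sec: float) -> float:
    """Fraction of theoretical peak floating-point throughput actually used."""
    return achieved_flops_per_sec / peak_flops_per_sec

# Hypothetical example: a system with a 1.0 exaFLOP/s theoretical peak that
# sustains 0.63 exaFLOP/s while training would be running at 63% efficiency.
peak = 1.0e18       # theoretical peak, FLOP/s (placeholder)
achieved = 0.63e18  # sustained throughput, FLOP/s (placeholder)

print(f"efficiency: {flops_efficiency(achieved, peak):.0%}")  # 63%
```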

David Kanter, executive director of MLCommons, said that decisions about large models should be left to the members of MLCommons to make collectively. He pointed out, however, that using smaller neural networks as tests keeps the competition accessible to more parties.


At the same time, the standard MLPerf tests, whose code is available to everyone, are a resource that any artificial intelligence researcher can use to reproduce the results, Kanter said.

Google does not intend to release the new BERT model, Selvan told ZDNet in an email, describing it as "something we only did for MLPerf". The program is similar to designs described in Google research published earlier this year on highly parallel neural networks, Selvan said.

Despite its 481 billion parameters, the BERT program is representative of real-world workloads because it is based on real-world code.

Source of information: zdnet.com

