
Microsoft recently published details of an extensive neural network model that it has been running in production to improve the relevance of Bing search results. The approach, a “sparse” neural network, complements massive Transformer-based networks such as OpenAI’s GPT-3, according to the company.
Transformer-based models have garnered significant attention in machine learning. As Microsoft has previously stated, these models excel at identifying semantic relationships and have been used to improve Bing search.
That is where Microsoft’s new Make Every Feature Binary (MEB) model comes in. The sparse, large-scale model comprises 135 billion parameters, the values a machine learning model learns from historical training data.
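The core idea the name describes, turning every feature into its own binary (on/off) signal and learning one weight per signal, can be sketched as a sparse linear model. The sketch below is illustrative only: the feature names, hashing scheme, and table size are assumptions for the example, not Microsoft’s actual implementation.

```python
# Minimal sketch of the "make every feature binary" idea: each
# (feature_name, feature_value) pair becomes a single binary feature,
# and scoring is a sum of learned weights over the active features.
# Names and sizes are illustrative, not Microsoft's production system.

import hashlib

NUM_SLOTS = 2**24  # illustrative table size; MEB reportedly has 135B parameters


def binarize(features):
    """Map each (name, value) pair to one binary feature index."""
    indices = set()
    for name, value in features.items():
        key = f"{name}={value}".encode("utf-8")
        indices.add(int(hashlib.sha256(key).hexdigest(), 16) % NUM_SLOTS)
    return indices


def score(weights, features):
    """Sparse linear score: sum the weights of the active binary features."""
    return sum(weights.get(i, 0.0) for i in binarize(features))


# Toy usage with hypothetical query/document features:
weights = {}
active = binarize({"query_term": "bing", "doc_domain": "microsoft.com"})
for i in active:
    weights[i] = 0.5  # pretend these weights were learned from click data

print(score(weights, {"query_term": "bing", "doc_domain": "microsoft.com"}))
```

Because every feature is binary, the model stays trivially sparse: only the handful of features active for a given query/document pair contribute to the score, no matter how many parameters the full table holds.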
To Read More: Venture Beat