By Megana Natarajan - October 22, 2020
GPT-3, the latest technology in the AI world, has generated a lot of hype as being better than any existing technology at creating content with a language structure
The new addition to the AI world, GPT-3, has generated high excitement in the enterprise space. The technology is touted to be better than any existing AI at creating content with a language structure, covering both machine and human language. Created by OpenAI, it is described as one of the most useful and important innovations in AI to date.
Generative Pre-trained Transformer 3 (GPT-3) is the third version of the tool. It generates text using pre-trained algorithms; the data it needs to carry out its tasks is already built into it.
It has been fed nearly 570GB of text data from the publicly accessible CommonCrawl dataset, along with additional texts selected by OpenAI, including texts from Wikipedia. CIOs say the tool is the biggest artificial neural network developed to date, capable of providing the most relevant answer to any query posed to it.
GPT-3 can write essays, translate languages, take memos, create computer code, answer questions, and summarize long texts. The tool is quite revolutionary and is expected to remain useful and usable in the long run.
It will have a significant impact on apps and the software development process in the future. The code has yet to be disclosed to the general public and is accessible only to certain developers via an API maintained exclusively by OpenAI.
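For developers who have been granted access, a request to the API is a short piece of code. The sketch below, using OpenAI's Python client as it was offered at the time, is only illustrative; the engine name, prompt, and parameter values are assumptions, not a prescribed setup.

```python
# Minimal sketch of a GPT-3 completion request via OpenAI's Python client,
# assuming the developer has been granted API access. The engine name,
# prompt, and parameter values are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # key issued by OpenAI to approved developers

response = openai.Completion.create(
    engine="davinci",            # one of the GPT-3 engines exposed by the API
    prompt="Summarize the key benefits of GPT-3 for enterprises:",
    max_tokens=100,              # cap on the length of the generated text
    temperature=0.7,             # higher values produce more varied output
)

print(response.choices[0].text.strip())
```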
Enterprise leaders say that the tool fits well within the broad segment of AI applications as a language prediction model. It is an algorithmic architecture designed to take an input piece of language and transform it into what it predicts will be the most useful piece of language for the end user.
This is possible because of the analysis carried out on the massive volume of text used to pre-train it. Unlike earlier algorithms that ship untrained in their raw state, GPT-3 has already consumed a huge volume of compute resources from OpenAI to analyze how languages are structured and how they work.
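In plain terms, a language prediction model works by repeatedly guessing the most likely next piece of text given everything it has seen so far. The toy sketch below illustrates that loop only; it is not OpenAI's implementation, and the `model` and `tokenizer` objects are hypothetical placeholders standing in for a pre-trained network and its vocabulary.

```python
# Toy illustration of the language-prediction idea behind GPT-3: the model
# repeatedly predicts the most likely next token given everything seen so far.
# NOT OpenAI's actual implementation; `model` and `tokenizer` are hypothetical.
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 20) -> str:
    tokens = tokenizer.encode(prompt)            # split the input text into tokens
    for _ in range(max_new_tokens):
        next_token = model.predict_next(tokens)  # pick the most probable next token
        tokens.append(next_token)                # feed it back in and continue
    return tokenizer.decode(tokens)              # turn the token sequence back into text
```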
While CIOs are excited about GPT-3's capability to produce language, they are still wary on a few points. The biggest challenge is that it is quite expensive for small and medium enterprises, mainly due to the high level of compute power required to carry out its tasks.
The tool functions as a black box or closed system, and its complete working details have yet to be officially announced by OpenAI. Organizations looking to implement the tool to generate solutions or create products remain unsure about its true algorithm.
Output quality can still be improved. GPT-3 easily handles the creation of basic applications or short texts, but produces less useful output, or outright gibberish, when asked for complex applications or longer texts.
Enterprise leaders agree that these disadvantages will be addressed with time, as compute power becomes cheaper, AI platforms get standardized, and algorithms are fine-tuned and streamlined to handle higher data volumes. The tool remains leaps ahead of anything currently implemented in the AI world.
Megana Natarajan is a Global News Correspondent with OnDOt Media. She has experience in content creation and has previously created content for agriculture, travel, fashion, energy and markets. She has 3.9 years’ experience as a SAP consultant and is an Engineering graduate.