OpenAI Signals that the AI Underpinning GitHub’s Copilot Could be Biased

Copilot, a tool developed by GitHub and OpenAI, suggests entire lines of code within programming environments such as Microsoft Visual Studio. Copilot is powered by an AI model known as Codex, which is trained on billions of lines of public code, and the organizations claim Copilot works with a comprehensive set of frameworks and languages, adapts to the edits developers make, and matches their coding styles.
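
For readers curious what a Codex-style completion looks like outside the editor, here is a minimal sketch using the legacy OpenAI Python SDK's completions interface. The model name, prompt, and parameters are illustrative assumptions, not details confirmed by the article or by GitHub.

```python
# Minimal sketch of Codex-style code completion (legacy OpenAI Python SDK < 1.0).
# The model name below is an assumption; available models vary by account and over time.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model to continue a partially written function,
# roughly what Copilot does inline in the editor.
prompt = (
    "# Python\n"
    "def average(numbers):\n"
    '    """Return the arithmetic mean of a list of numbers."""\n'
)

response = openai.Completion.create(
    model="code-davinci-002",   # assumed Codex model name
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["\ndef "],            # stop before the model starts a new function
)

print(prompt + response.choices[0].text)
```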

However, according to a new paper published by OpenAI, Copilot may have substantial flaws, including biases and sample inefficiencies. The study covers only early Codex models, whose descendants power Copilot and, soon, the Codex models in the OpenAI API.

Read more: VentureBeat
