By ET Bureau - August 20, 2020
Adobe has launched a beta of Adobe Character Animator with new AI-powered features. The update lets the desktop software combine live motion capture with recording technology to animate 2D puppets created in Illustrator or Photoshop. Tools such as Lip Sync and Speech-Aware Animation are driven by Sensei, Adobe's cross-platform machine-learning technology, which uses algorithms to generate animation from pre-recorded speech and accurately align a character's mouth movements with the audio.
Speech-Aware Animation was previewed as Project SweetTalk at last year's Adobe MAX conference. It generates eyebrow and head movements that match the animated character's recorded speech. The new Lip Sync feature improves automatic lip-syncing and the timing of mouth shapes, also referred to as "visemes".