
Meta is talking about the metaverse, again

Feb 24, 2022 04:45 PM IST

Meta is making a Builder Bot tool that will allow users to create 3D objects and places in the metaverse simply by describing what they would like to see

Meta has made it quite clear, time and again, that it will be investing heavily in the metaverse vision. The latest step towards that comes with the detailing of several projects that are ambitious in nature and rely heavily on artificial intelligence (AI).

Meta is making a Builder Bot tool that’ll allow users to create 3D objects and places in the metaverse just by describing what they would like to see. (HT photo)

In fact, Meta is making a Builder Bot tool that’ll allow users to create 3D objects and places in the metaverse just by describing what they would like to see. So, if you say, “Let’s go to the beach,” for example, then that’s exactly what will be in store for you in the metaverse.

This is being worked on alongside projects to develop a new conversational AI system for virtual assistants, a universal language translator, efforts to make AI more explainable, and a new open-source library for developing recommendation AI.

Meta also says it will be working with professors and students at universities to make its machine learning curriculum available to more students, with a specific emphasis on reaching out to underrepresented groups.

Say it, create it: Can it really be that simple?

The Builder Bot could be the most exciting project for consumers because that’s a more visible development of the metaverse vision. It all starts with a clean slate in the metaverse (or a clean grid, since things work a bit differently in the new web) and you as a user can simply say things to create a virtual world around you.

“Let’s go to a park” replaces the cold white grids with the serene peace of a virtual park. And on a whim, you can change your mind and go to a beach instead. Can you do that in the real world?

Builder Bot’s ability to add 3D objects around the user could be a big push for machine-generated art; while such features are more accessible than before, most are still restricted to 2D. A lot of refinement beckons, though: the Builder Bot demo exhibited quite a few rough edges.


For instance, in one frame the two friends on the virtual beach are standing in the sand, in the next they are in the water, and then they are back on the sand again. And there is the small matter of keeping all the limbs in place: at times only half a human is rendered, something arcade graphics from a decade ago handled better.

It remains to be seen whether Builder Bot picks from a library of models created and replenished by humans, or whether the AI will be able to create objects based on what it learns. “You’ll be able to create nuanced worlds to explore and share experiences with others with just your voice,” is how Meta CEO Mark Zuckerberg envisions things.

While speaking with HT, Antoine Bordes, managing director for AI at Meta, confirmed that the exact contours are still being worked out in terms of how many human-generated models the AI will pick up and run with. “That is why this is still in the lab and not a final product,” he says.

Translation tool to leave behind challenges?

Meta is also working on a speech-to-speech instant translator, called Universal Speech Translator, which will use AI. But Meta also points out potential challenges: as things stand, AI translation systems cannot cover the thousands of languages spoken globally, nor provide speech-to-speech translation in real time. There is also the need to acquire more training data in more languages, beyond what is already available.

“We’ll also need to overcome the modelling challenges that arise as models grow to serve many more languages. And we will need to find new ways to evaluate and improve on their results,” says Sergey Edunov, Research Engineer Manager at Meta.

A future where conversations are contextual

Conversational AI developments will extend to assistants, or virtual assistants as we know them. Meta now has Project CAIRoke in place, which has already developed a neural model for contextual and personalised conversations. Meta says this is now available to users of the Portal smart displays and will soon be integrated into virtual reality devices for use in immersive interaction scenarios. This will have a direct impact on how you communicate with assistants in the virtual worlds too.

“Researchers and engineers across the industry agree that good conversational systems need a solid understanding layer powered by AI models. But many feel interaction is an engineering problem, rather than an AI problem,” Alborz Geramifard, who is a Research Scientist at Meta, points out.

Why AI does what it does: Now you’ll know

There have often been questions about transparency in the decisions AI makes, such as how it determines what you see on your Facebook or Instagram feed. Meta says it is publishing an AI System Card tool, which will give a better explanation of an AI system’s architecture and workings. At this time, the why-it-does-what-it-does explanation is available for the Instagram feed ranking.

The details of how the ranking works suggest that the system starts by gathering potential posts from accounts you follow (these include friends and creators but exclude advertisements at this stage) and filters out any with reported violations.

Then machine learning models attempt to predict how likely you are to interact with each of the shortlisted posts; how often you have interacted with similar posts, or with the author, has a bearing. At this stage, each likelihood is given a numerical score.

Meta says the same three steps are then repeated for posts that include shopping, videos, Reels and hashtags. Then a fact-checking layer comes into play, for misinformation and repeat offenders.
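The gather-filter-score-sort flow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Meta’s actual system: the `Post` fields, the linear scoring blend, and all weights are invented stand-ins for what would, in reality, be large ML models.

```python
# Hypothetical sketch of the three-step feed-ranking flow: gather candidates,
# filter out reported violations, score by predicted interaction likelihood.
# Field names and weights are illustrative, not Meta's actual implementation.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    reported: bool          # has an unresolved violation report
    past_interactions: int  # times the viewer engaged with this author
    similarity: float       # 0..1, resemblance to posts the viewer liked

def gather_candidates(posts):
    # Step 1: candidates come from followed accounts (ads excluded earlier);
    # posts with reported violations are filtered out here.
    return [p for p in posts if not p.reported]

def score(post):
    # Step 2: predict interaction likelihood as a single numerical score.
    # Real systems use ML models; this linear blend is a stand-in.
    return 0.7 * post.similarity + 0.3 * min(post.past_interactions, 10) / 10

def rank_feed(posts):
    # Step 3: order the surviving candidates by score, highest first.
    return sorted(gather_candidates(posts), key=score, reverse=True)

feed = rank_feed([
    Post("alice", False, 8, 0.9),
    Post("bob", True, 5, 0.8),    # dropped: reported violation
    Post("carol", False, 1, 0.4),
])
print([p.author for p in feed])   # ['alice', 'carol']
```

Per the article, the same steps would then be re-run for other content types (shopping, videos, Reels, hashtags) before the fact-checking layer is applied.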

Diversity in AI education

Meta also talks about the new Artificial Intelligence Learning Alliance (AILA), which is seen as an attempt to be more inclusive. Meta has worked with the Georgia Institute of Technology to develop a deep learning course curriculum – this was made available in Fall 2020 and Meta indicates more than 2,400 students have been part of the online course.

“Now, we are making the course content available free to all and are working with professors at historically Black colleges and universities (HBCUs), Hispanic-serving institutions (HSIs), and Asian-American and Native American Pacific Islander-serving institutions (AANAPISIs) in our newly established consortium to further develop and teach the curriculum,” says Denise Hernandez, Meta AI programme manager.

This curriculum will now be offered at more educational institutions, including University of California Irvine, North Carolina A&T State University and Morgan State University.
