
AI companies can be innovative and responsible: Markham Erickson, Google executive

New Delhi
Dec 07, 2023 06:24 AM IST


Regulating artificial intelligence (AI) does not need to be a binary choice between self-regulation by companies and strict legal guardrails prescribed by governments, and AI companies can be innovative while behaving responsibly, a top Google executive said on Wednesday. In an interview with HT, Markham Erickson, vice president, government affairs and public policy at Google’s Centers for Excellence, shared the company’s learnings and approach to keeping AI safe, including efforts underway to help differentiate between synthetic, computer-generated text and media, and those created by humans. Edited excerpts:

Markham Erickson

Approaches to tech regulation have varied between voluntary mechanisms on one extreme, and hardcoded legal protections on the other. When it comes to AI, which model would work best?

I think it’s both. It’s not going to be binary; there’ll be a spectrum. And I think it depends on the issue. It should be risk-based and proportionate as we think about what the governance structure of any part of the AI ecosystem should be. I think there is an understanding that it’s probably a combination of self-regulation, voluntary codes of conduct, interoperable technology standards, and then hard regulation. Four and a half years ago, our CEO, Sundar Pichai, announced that we were an AI-first company. He has since also said that AI is too important not to be regulated, and too important not to be regulated well. Regulation still needs to incentivise innovation. We’ve learned you can be innovative and responsible at the same time.

There have to be rules that ensure safety, and those should sit within a risk-based and proportionate system. I like to say that if you have a smartphone, you have AI in your pocket, and we engage with AI apps every day, whether it’s Gmail recommending a response to an email or our maps product suggesting the most eco-friendly route. We probably don’t need a heavy regulatory structure over that. But there are things that are high risk, that have implications for health and safety, for example.

One challenge is how you start regulating AI, especially the most advanced models, known as frontier AI models. AI systems and LLMs (large language models) are often a proprietary black box. How do you give policymakers, regulators and independent researchers visibility into how these systems work and the challenges they pose?

I am encouraged by the global conversation on this. The UK held its AI Safety Summit a number of weeks ago, and a big topic of conversation was this very issue: how do we ensure that governments are aware of the capabilities of highly capable frontier AI models, and that they are in a position to protect their citizens against negative externalities? And we’ve leaned into those conversations. Earlier this summer, we created the Frontier Model Forum, which brings together the big large language model companies, including Anthropic, Google DeepMind, Google, OpenAI and others. The purpose is to have a place where companies, technologists, academics and governments can have a conversation about this exact issue. I think there’s widespread agreement that we should protect IP, protect citizens and ensure governments have awareness and visibility; those are not mutually exclusive.

Can you tell us a little about what some regulatory approaches could look like? One model you discussed earlier was the hub-and-spoke approach.

The OECD, I think, was the first organisation that suggested a hub-and-spoke approach to regulating AI. We think that’s a good framework because it is efficient to give the government some central expertise on AI rather than try to duplicate AI expertise at every agency. So, a central hub can assist the various government regulators. And, looking through the other end of the telescope, that central team doesn’t have to have expertise in health care or transportation safety; those expert agencies will continue to have their own expertise. It’s a way to get the most out of government without creating redundancy and inefficiency.

Frontier models tend to be the most advanced ones, and potentially the most dangerous in terms of societal harm such as discrimination. So, is there an emerging incentive for companies not to say that their advanced model is actually a frontier model, because that may attract greater regulation?

It’s early days and I haven’t seen that, at least from the LLM companies. I think there tends to be agreement about working with governments in a very constructive way. This is where there will have to be some awareness about the capabilities of frontier models, and we will have to have definitions in place. I haven’t seen that yet, but it’s still early days.

One of the discussions that’s been happening in India about AI regulation is a question of responsibility and allocation of liability. How should the responsibility for what a model or an AI application does be allocated?

I think the developing consensus here is that the entities best positioned to control the use of any particular AI application should generally be responsible for the uses of that application. There are going to be foundation model APIs, and there are going to be a lot of companies building their own applications through those APIs. So, we want to be responsible for the fact that our products and services, like Bard, are capable of being inaccurate, of having hallucinations. And we want the feedback when that happens, and then we want to mitigate it. Examples where people highlight issues with an LLM are helpful for us so that we can improve it.

I’m also pretty positive that if someone [a user] did something harmful, however they did it, the government is still going to want to hold that person accountable for what they do.

When it comes to AI policy, YouTube, for instance, announced that any synthetic imagery created using AI has to be disclosed. How effective do you find disclosure-based compliance from creators or advertisers?

The north star here is to ensure transparency and awareness. So, we want people to know when something that has been created is synthetic. We’ve built tools internally so that if people are using our tools to create synthetic content, it can be watermarked. That technology can be used not just for videos but also for speech. We are working in cross-industry conversations to have interoperability, so that other companies that are also providing [AI] tools and content can screen [for synthetic media], and if users come across it on our system, they will have the same kind of awareness [that it is synthetic media]. So, there’ll be some interoperability. And then there will have to be efforts to identify synthetically created content from entities that don’t want to participate in that kind of sharing. That technology is developing and is going to have to develop further.

Which other companies are involved in this cross-industry collaboration?

There are a number of companies involved in the conversation, and there are some forums. When we started doing the work here, we reached out to other companies to start a conversation about interoperability, and many companies at that point said they were not ready to have the conversation yet because they were not far enough along in their own work. But many of those same companies have now progressed to the point that they are joining these kinds of industry conversations.
