How EY is using generative AI in its business


Aug 08, 2023


Generative artificial intelligence is all anyone can talk about these days. And chief innovation officers are certainly keeping a close eye on it.

Jeff Wong, the global chief innovation officer at EY, advises CEOs and EY’s own employees on understanding emerging technologies and how to implement them into business processes. Wong is also on the board of AI4ALL, a nonprofit organization that aims to increase diversity and inclusion in AI.

Like many companies, EY is experimenting with generative AI and fine-tuning datasets for specific business tasks. The giant accounting and consulting firm started building on top of OpenAI’s GPT technology last year. But EY, which has a quarter of a million clients and works with most of the Fortune Global 500, says it is taking a cautious, deliberate approach to the novel technology.

Wong recently spoke with Quartz about how EY is using generative AI, how AI is changing jobs, ethical considerations for companies using the technology, and how to spot hype versus reality in the tech world. The conversation has been lightly edited for clarity and length.

Quartz: How is EY currently using generative AI, and what are the next uses you see for it?

Jeff Wong: Well, one is we’re using it for our own purposes. And the second is, there’s obviously a series of clients around the world that are very, very interested in the space, and so we’re building generative AI tools for them.

Internally, there are hundreds of different projects being proposed. We've been very careful about which ones we're moving forward with because we want to make sure we're doing this in a secure and thoughtful way. One example is our EY payroll. There's a whole series of questions that people ask about payroll, particularly when people are overseas and have questions about which tax filings they should file. So we loaded all the payroll tax laws into a generative AI system sitting on top of one of the mega-platforms. Then we're able to answer those payroll questions directly in a chat function like ChatGPT, instead of relying on people, who aren't available 24/7. That is still an early beta experiment for us, but the time savings and the accuracy gains are quite significant.

Quartz: How does EY ensure data privacy and ethical use of generative AI in its operations and client services?
So, you know, this actually is not a generative AI question. This is a data policy question. We have a customer clause written into our contracts that allows us to use customer data in a privatized way for things like training our AI systems. We have explicit permission to do that, not implicit. The second thing is, we have a rigorous information security review team, so we have a full technical and legal understanding of the actual approach [vis-a-vis] the data, information, security, and privacy.

From an ethical standpoint, we have said we will not do things where we are uncomfortable with the bias associated with the outcome. So if we cannot understand how we're controlling for bias, we will not undertake that effort this early in AI's development. When it comes to generative AI, you can ask objective questions like: Does this law apply in this circumstance? That's an objective question; bias is less of a concern there. But if bias were more of a concern, we would hesitate to implement the project.

Quartz: What is EY’s stance on AI regulation? How do you keep up with the evolving regulatory landscape?

We have a close relationship with all the regulators because we are a regulated business. And what’s been fantastic with advanced technologies, including AI, is that we’ve been asked by governments around the world to comment on our thoughts and theories around how advanced technologies, in general, should be regulated. So we do this for blockchain, we do this for quantum, and we also do it for AI.

For us, the regulatory frameworks are not optional. They are required. We have teams who keep track of regulatory frameworks around the world, including for advanced technologies such as artificial intelligence. So we follow it very closely everywhere.

Quartz: Is there a valid concern that generative AI could automate white-collar jobs and cause significant workforce disruption?

We think this presents an amazing opportunity to automate a lot of the tasks and job functions that we have around the world, and we expect and plan to use it to the fullest extent of the technology.

We have applied AI and automation technologies over my entire time here at EY. Every single time we do it, our teams are extraordinarily excited and welcoming of these technologies. And when it’s not coming fast enough, they ask and demand it. The same thing is happening in generative AI. We’re getting so many requests from around the world from people saying, “Hey, we think we can do our part of the business better, faster; we think we can use it in this way to help answer better questions for our clients.”

Is there a risk that [AI] causes disruption in terms of job changes and job loss? We haven’t seen it yet. But do we understand the risks associated with that? Yes, we understand, which is why we have always had an obligation to our employees to make sure we’re giving them the training tools to constantly up-skill themselves. That’s everything from our badges system, through which they can learn about AI, blockchain, and quantum at multiple levels, to the few different master’s degrees we offer for free.

I think companies around the world need to be thoughtful about how they approach this. And frankly, I think governments around the world should be thinking about this as well. How do they consistently provide opportunities for their citizens to up-skill?

Quartz: You’re on the advisory board of a nonprofit that’s trying to increase diversity and inclusion in AI. How do you assess D&I efforts in AI to date? And do the recent attacks on D&I in education and elsewhere mean that these efforts to build D&I into AI risk getting derailed?

Yes, there needs to be more diversity in important technologies, inclusive of and probably especially for AI. We’ve seen in our own recruiting efforts that it is very difficult to recruit a diverse set of engineers, product people, and policy experts who are steeped in artificial intelligence and have that context.

We recognized that in order to make this pipeline work, we had to work our way all the way back to high school in order to make sure that the pipeline was robust for having a diverse set of voices entering the field. Without that diversity, we’ve seen some of the challenges of the past where folks just don’t recognize when their algorithm has an output that’s biased. We have to find a way to encourage people of different backgrounds and ethnicities and genders, and different ways that people identify themselves, to be participants in these world-defining technologies. Otherwise, we risk having some of the inherent biases in our history be repeated.

Quartz: Let’s picture what working in an office will look like a year from now, when apps like Microsoft Office and Slack have been revamped with generative AI.

I think a year from now, there will be certain activities and things that work amazingly well—things like systems that can listen to this conversation here, take notes, pull out the key points, and summarize it for you so you don’t have to do it on your own. Some of the other things that we’ve seen—where I can verbally tell my machine to pull all the anomalies in this dataset, show me where revenue fell off, or explain to me why—those examples need more work. So I think we’ll see some remarkably cool tools that work for us in a year, but not necessarily the broad-based promise and change.

I will also emphasize: This is really normal. The hype of what it can do is ahead of the reality, which is almost always the case—and that’s fine. I just think it’ll take more than a year to come together. Five or eight years from now, the office will probably look remarkably different, with AI able to write and respond to emails really competently. Or maybe your email system and my email system have an entire exchange on their own, where your system knows, “Hey, there’s something in your article—you’re missing a point, and you should ask this question,” and it asks my system. And they go back and forth several times, and maybe we only review the end negotiation. You could see that advancement, but I think the time frame will be more like five years than one.

Quartz: How do you distinguish real value versus mere hype in generative AI products?

That’s actually very straightforward. The hype curve is associated with people’s imaginations about the promise of what’s possible, and it’s amazing. I love things at the hype level. The reality curve is the products that I see and that I have touched. The great benefit of where I work is we can see almost anything in the world, as people like to show us stuff. And so I can see what’s real.

Quartz: Now, let’s do a quick lightning round of (hopefully!) fun questions. Tell me whether you think the following things are under-hyped, overhyped, or appropriately hyped, starting with AI chatbots.

The promise is overhyped, but it is really, really cool. The reality is also really cool.

Quartz: Cryptocurrencies?

Under-hyped. Overhyped over time, but today under-hyped.

Quartz: Delivery drones?

Oof, this is a good one. Overhyped in the sense of, I don’t know if we’ll see them for a while. But we can do them now.

Quartz: Portugal (as a destination for sabbaticals, vacations, or relocations)?

Under-hyped. Portugal is awesome.

Quartz: Embracing failure?

Under-hyped. Failure is so integral to the learning process. Everything needs to be about constant learning and growing, and if you’re not willing to risk failure in that, I don’t know how much you can grow.

Quartz: Quantum computing?

Oh, this is a good one. It’s under-discussed for the implications. So I would say under-hyped. In a world where 1% of all your encryption is broken, that’s a catastrophic change. So if you’re not thinking about it now, I think you’re not understanding the risks associated with these black swan catastrophic change events, and these are things that you need to be thinking about.

