Experian’s tech chief defends credit scores: ‘We’re not Palantir’

Today, I’m talking with Alex Lintner, who is the CEO of technology and software solutions at Experian, the credit reporting company. Experian is one of those multinationals that’s so big and convoluted that it has multiple CEOs all over the world, so Alex and I spent quite a lot of time talking through the Decoder questions just so I could understand how Experian is structured, how it functions, and how the kinds of decisions Alex makes actually work in practice.

There’s a lot there, especially since Alex is in charge of the company’s entire tech arm. That means he oversees big operations like security and privacy, and now, of course, AI — all of which is always important, but is even more critical when you factor in what kind of information Experian collects and stores about, well, literally everyone.

See, if you want to participate in the economy in the way the vast majority of us do — renting an apartment, buying a car, getting a job, or applying for a mortgage or a student loan — you’re part of Experian’s ecosystem, whether you like it or not. You’ll hear Alex talk about “consent” a whole lot in this episode, and he’ll argue that you can opt out, but the reality is, interacting with Experian is pretty much non-negotiable in the economy we live in today. It’s hard to do basically anything involving money without a credit score.

That’s really the tension at the heart of a company like Experian: Credit scores dominate so many aspects of our lives, and they are controlled and calculated in ways we feel we have very little direct influence over. At its core, Experian’s service is data — data about people, about their money and what they do with it, about the decisions they make, the bills they pay or don’t pay. And this extremely valuable data weirdly makes Experian a part of your life — a life that becomes much smoother if the data the company collects about you tells a good story. So Alex and I spent a good chunk of time talking about the responsibility Experian feels toward the people it serves, not just on a security and privacy level, but also a moral one.

A lot of people don’t like the power Experian has, and by extension, they don’t like the company, either. I asked Alex pretty directly about that, and I found his answer to be a little surprising. Maybe one of the most memorable answers we’ve ever gotten on Decoder, really.

I also asked Alex pretty directly about the other big, messy question in the room: generative AI, and how exactly we can trust nondeterministic systems when they start interacting with really sensitive data.

You’ll hear Alex talk a lot about AI oversight, and how it’s being woven into the systems Experian uses for everything from risk assessment to predictive financial modeling. But the AI systems themselves are inherently risky — they get things wrong, they hallucinate, they might draw incomplete or incorrect conclusions about very real human beings in ways that drastically affect lives.

So I really dug into how Experian sees AI technology being used internally and within the broader scope of credit reporting. And I also pressed Alex on the capability gap between what AI might be able to do today, what we think it can do or what AI executives tell us it can do, and then the reality of what it actually does and how well it does it.

The stakes for this stuff are very, very high at a company like Experian, and more than just its reputation relies on people thinking it’s being a responsible steward of their personal data and that the institutions it hands that data over to are using it to make responsible, fair decisions.

This was a really in-depth conversation about a really knotty set of subjects, and I really appreciated Alex’s willingness to get into the complexity with me.

Okay: Experian CEO of technology and software Alex Lintner. Here we go.

This interview has been lightly edited for length and clarity.

Alex Lintner, you are the CEO of Software and Technology at Experian. Welcome to Decoder.

Thank you, Nilay, for having me.

I am very excited to talk to you. There’s a lot to talk about. Experian is a fascinating company. A lot of people have a lot of feelings about Experian, which I want to talk to you about. Every company that comes on the show lately, every executive tells me that they’re an AI company. I think Experian wants to be known as an AI company. We’re going to get into that. Why don’t you tell me what you think Experian is today and what it has been and what you think it should be in the future?

Experian is a global data and technology company. We help consumers and businesses to make financial decisions and protect their data and identities. On the B2B [business-to-business] side, we have four verticals: financial services, healthcare, automotive, and marketing services. On the D2C [direct-to-consumer] side, we provide consumers with information that helps them understand, protect, and manage their financial lives. So we help them build credit and qualify for their next desired loan.

My favorite example is helping someone get their first mortgage, which is a hard thing to do in America, but a major wealth builder for Americans. We give them the ability to compare financial products so they can lower their borrowing costs. We protect them from fraud and identity theft, like I mentioned earlier, and we help them save when they buy car insurance. So that’s Experian to me.

This is going to be very reductive, and I’m saying it on purpose because I’m curious if it really is this simple or if there’s more complexity there. That sounds like Experian maintains a big database of information about people, mostly about their credit.

When you say it protects that information, that’s because having all that data is very important and very powerful and very valuable, but it’s also the information that mortgage lenders use. It’s the information that car insurance brokers use. How do you think about the core product? Is it just a database or do you think about it differently?

Maybe we should back up a little bit. AI is a platform capability; it’s not a feature. We use AI primarily to help embed governance, to provide explainability — which is required by law and desired by the consumer — and to actually facilitate human oversight.

When you then back up into where we came from and your question at the core about all the data that we hold, from a technology perspective — and I’m the tech guy, so I’m going to talk about the technology — that means that we put data, analytics, and AI into the hands of decision-makers. Those can be businesses, financial institutions, and mortgage companies like you just said, but we also supply it to the consumer directly.

And the objective is the same. The objective is to turn complex data, complex information into easy-to-understand, actionable guidance so that either the lender or the consumer can make a confident decision. That’s the objective. You need the same data for that and both sides need to see it because the data is the objective truth and then the consumer can make a decision and the lender can make a decision, if you’re talking about financial services in particular, which you’re examining.

I’m way at the bottom, I’m at the primitives here. The main thing is a big database of financial information about consumers and their credit history and their ability to pay for things. Is that the main thing or is there another core element of the product?

It’s a core element, but I think you’re overemphasizing the financial information. Financial services is one of the sectors, but like I said earlier, we have a lot of other information that is useful that has nothing to do with the core lending information we hold — the history of people’s lending behaviors — and that other information is just as useful.

If you look in the automotive vertical, for example, we have an equivalent to CARFAX called AutoCheck. It has vehicle history, ownership history, maintenance and repair history, accident history. So there is a lot of other information that is actually relevant for these decisions. It’s not only the financial information that we have about people.

And by the way, when people say “financial information,” often it’s interpreted as we have account numbers, et cetera. We do need account numbers to match the accounts to people, but it never goes out. And it’s double encrypted, so super protected. We don’t use any of that information for any of the services that we provide, except for the pinning so that we can match it to a person.

Can I offer you my feature suggestion for AutoCheck? I do a lot of idle shopping for cars I’m never going to buy, and I love feeding the AutoCheck report into ChatGPT, and then ChatGPT tells you a little story about the car. If you find a particularly sketchy AutoCheck report, it tells you a story about how the car was obviously stolen and is being laundered.

You should just put it in the product.

I’ve got to try that. That sounds like fun.

It’s a good time. If you’re holding a crying baby and you’re like, “I’ve got to sit here for another hour,” it’s a very good way to spend the time.

I just had my third grandchild and she’s two weeks old, so that’s actually very, very current. I love holding her and now I know what to do while I do that.

I’ll send you some specific models of cars where it’s like all of them are stolen for some reason. It’s very good.

The reason I keep asking about the database is that I have a thesis for 2026, that maybe what we’re all discovering is that all of our lives are captured in databases, that there are these huge stores of information held by various companies, held by various governments, held by various agencies inside the government.

Maybe what AI is going to do is make those databases more legible. And maybe what it’s also going to do is make the holders of those databases far more powerful, right? Because you suddenly have more access to the data, you can use it in different ways, you can connect all these databases in different ways.

I hear this pitch from a lot of people. You have the biggest database, right? Experian is one of the most powerful databases in American life. So there’s a reason I’m starting with that. I’m curious how you think about that power, as it becomes easier to express that power, it becomes easier to share the contents of that database with people, and it becomes easier to query that database. How do you think about that responsibility at Experian?

It’s a giant responsibility and we take it very seriously. There are a couple of aspects to that. Our business is based on consumer trust. Once the consumer starts losing trust, the brand goes nowhere. Investors start losing faith and everything goes down the drain. So if we don’t do that part of our business well, all the other stuff that I could talk about — and maybe we’ll talk about that in a little while — goes away.

You talk about it as a database. Nilay, the way I would talk about it is that our largest businesses are on modern cloud-native and AI-enabled platforms. These platforms let us securely ingest massive amounts of data, like you’re saying, in real time, then apply advanced analytics and machine learning while we keep privacy, consent, and security at the center. That’s how I think about it. The database as a function has morphed into data lakes, and now I would refer to it as a platform.

I’ll start with the last part that I talked about: keeping privacy, consent, and security at the center. What you really need to think about is, how do you do that? How do you do it better than anybody else? And how do you do it in light of the fact that the bad actors know everything that you just said? What you just said is that we’re one of the largest data companies in the world, and therefore we’ve got a lot of information, and bad guys like information. So to keep it secure, you need to have — I’m going to call it a bulletproof setup, from front to back, of every application.

Most people talk only about encryption, but it goes way beyond that. It goes to access rights — I named that consent earlier. It goes to, how do you store the information? You can shard it, which I really like. Break it up. So when people find Nilay’s information, they find maybe only your first name, not your last name. They maybe find your street address stored somewhere else and your account information stored again in another place. In other words, if you break it up into 25 shards, they’d have to break 25 encryption keys and know how to pin it back together to one individual in order to really understand Nilay. And that’s complicated.

So the game is we need to have security systems that stay ahead of the bad guy. And we need to have at the core of our mission, the core of our purpose as a company, that every employee needs to act to a purpose that says what I now say for the third time: keep privacy, consent, and security at the center of everything we do.

Let me ask you an existential question about that: What if I don’t want you to know me? I mean, what we’re talking about is you’re collecting a massive amount of data on regular people. I hear from our audience every day, “Why is this happening to me? Why do these companies already know so much about me? How come when I use my loyalty card at the grocery store, that gets mashed up with a bunch of purchase data on Instagram that gets combined with a bunch of other data? Why is my phone listening to me?” That is basically the end result of that.

I’m like, I don’t know that it’s listening to you. I think there’s a lot of data about you that makes it appear that the phone is listening to you and that is more scary and less legible than “the phone is listening to you.” Have we opted into Experian? Do you think about that level of, maybe we should ask everybody if we want to be tracked in this way or track as many people as we do?

I have two answers to that. The first answer is that privacy laws are such that you can opt out anytime. So if you, Nilay, don’t want your information stored, you can do that. You could do it on your phone so it doesn’t listen to you, and you can do it with us. The bigger answer I have is the following, and this is based on research — this is an absolute truth proven over many decades — and that is that prosperity in an economy, prosperity for a family, and prosperity for an individual is strongly linked to access to credit. In other words, you can look at countries that don’t have access to credit like we have here in America, and you can see that their economic evolution lags behind that of the United States.

You can look at families where maybe the parents didn’t have access to credit and therefore they couldn’t do what now their children can do who have access to credit. Or you can look at an individual on how fast they advance because credit allows them to pay forward their earning power and their ability to repay a loan and therefore make investments that then can be accretive to their wealth.

So put another way, if a lender did not have information about an individual, Alex or Nilay, they could not make a decision about whether they’re going to lend money to you. And let’s be clear, lending is one of the riskiest businesses there are. Let me describe it in the following way. I look at you, Nilay, and I ask you a couple of questions: “Hey, have you had a couple of loans before? What do you want to do with the money? How are you going to pay me back?” And then I decide whether you’re a good guy, worthy of getting this loan or not.

If I give you the money, at that point, I’m at risk because the money leaves my account, the lender’s account, goes into your account, and you can do with the money as you please. It’s a very high-risk business. The lender needs to have the information in order to make the decision. You, the consumer, need access to credit because it will advance your standard of living, your quality of life, and your wealth creation. So privacy laws allow you to opt out, but it is actually in your interest as a consumer that you make the information available for lending.

One of the questions I have about that — and I think, again, this is going to be a theme of 2026 in our coverage — is that AI enables these things to happen at a different kind of scale. You can automate the systems in different ways, you can query the systems in different ways, you can extract value from the data in different ways. And I wonder… I agree with you, right? Lenders need to mitigate their risk in some way. They need to know who they’re lending to. They need to manage whether or not they think they’re going to get paid back.

But being able to do that at scale and saying all of these should be centralized stores of information and not more local… It’s my local bank and my local community that needs to evaluate my risk profile. There’s something about that scale that feels different, and obviously Experian enables massive scale. Do you think your responsibility is different with scale?

That’s a really interesting question, but I’m the tech guy here, and from a technology perspective, I don’t want to make a sort of macroeconomic or regulatory statement. From a technology perspective, it’s definitely true because if you have scale, you hold more information, and as you hold more information, you need to deal with it responsibly. And again, it gets me back to those three tenets. We need to protect privacy, consent, and security. And if you have more information, you better do it really, really, really well.

To get back to your local or national or global scale — first of all, there are very few global financial players. So let’s start there. We can probably count them on two hands. And even then, I know those companies from the inside, they don’t always act globally. They often act locally. I don’t want to name any names, but large international banks born in Europe, large international banks born in New York City where you’re at, they have an American business strategy and then they have a British and Australian business strategy. It’s actually different and our models are different and lending criteria are different and the lending products are different. Global presence is rare.

Now, let’s talk locally versus nationally or super regionally. In North America, we have the good luck that we have 7,000 financial institutions. That is a model that’s unique in the world. We don’t have that anywhere else. And if you go all the way from the top to the bottom, at the bottom, you would find the credit unions. Credit unions are typically very local, though there are now large ones like Navy Federal Credit Union, which serves the armed forces everywhere, or USAA, which serves armed forces members and their relatives with insurance and banking around the country.

There are some exceptions, but largely credit unions are very local. They do not have access to capital like the large super regional or national lenders have. And access to capital is important because it is a volume game. The more capital you buy as a reseller, which is what a bank is, the better the terms you get, and therefore you have the potential of offering better terms to your borrowers, to the consumer. So I think the mix of local and national is a good mix. It has worked here in the US.

It’s definitely worked to make the cost of capital come down in various ways, although who knows what’s going on right now. Every day it could be different, but I think my question is—

It’s less predictable than it’s been in a long time.

But I wonder if the trade-off is a feeling of disempowerment for the actual consumer, right? And that’s one of those hard trade-offs. Yes, there are some privacy laws in the United States, but not very many. Yes, there’s some recourse against a financial institution if there’s a data breach, but not much. And so I’m just interested in that trade-off and your perspective on it.

I will say that you have perfectly teed up the Decoder questions by describing the structure of multinational banks, because Experian’s org chart, from the outside, looks bananas to me. Straightforwardly, there’s Brian Cassin, who is the CEO of Experian, then there are CEOs of regions — a CEO of Latin America, one for North America — and then there’s you, the CEO of technology and software. So explain to me how that all works.

All right, let’s get into that. So the way we work is that we are a federated system, and it’s not unusual. Maybe our titles aren’t super intuitive and don’t explain it, but let me try to explain it. You have central functions where everything is the same regardless of where you are in the world. Think of finance, think of HR, and think of technology. So you want to have technology standards, you want to have security standards that you apply everywhere in the world.

There are economic reasons for that. You don’t want to have a slew of vendors. You want to have golden pathways. That is what keeps everybody secure, and that’s how you manage consent, and that’s how you manage privacy. And all of that should be done in the same way so that we have control over it. Our governance can look at it. Auditors can look at it because we are auditable by the SEC, and they all can say, okay, we apply those standards the same way, regardless of where you are in the world.

Now, if you look at the context — call it the economic context, call it the socioeconomic context, how much people make, et cetera — that differs everywhere in the world. It differs whether you’re in the United States or my native Germany or India or Australia. We’re active in all these countries and the context is different. And therefore our go-to-market oriented business units have CEOs that look over the region, understand that context really well, and then the product is applied appropriately for that country.

And by the way, regulation varies. We do have to adjust some of our security and privacy dials to comply with country-specific regulations, and that’s why we have the matrix structure. The central functions look at achieving scale and clear governance — doing everything the same way — and the regions are specific to consumer needs and the context of a specific country. That’s how you should think about it.

How many people are at Experian overall, and how many are in your division?

I have a direct reporting line of 4,000, and we have 11,000 technologists. So think of my function as the 4,000 on my direct reporting line. They roll up to me.

I was going to say that 4,000 direct reports is a little over the guidelines, I think.

In my direct reporting line. So all the way down.

[Laughs] Yeah, I was joking.

I have seven direct reports and then it goes down. The other 7,000 are in technology organizations. I still set the standards and the policies, our technology policy that everybody needs to work by, but they’re not in my direct reporting line.

And is that structured so it meets the needs of the regions or how does that work?

We’re trying to walk that fine line exactly like I explained. My job is to build a backend that is superb and make our platforms the most secure and least expensive way for us to deploy software to our customers. The regions’ and the business units’ job is to build products that respond to consumer needs. There are functional needs depending on the use case — that’s the business unit — and there are regional needs that are based on the context I just talked about, which can vary by country.

I’m fascinated by the structure.

It’s working well enough, but we’re evolving. We’re growing as a company, which is a nice thing to do. And I would say — I’ve worked at other large corporations, as you know — the pendulum swings. Sometimes you do things a little more centrally, sometimes you do a little more locally, and you always reevaluate and see what’s working. In the AI world, I would tell you doing more centrally is probably a good idea because like I said earlier, I think about AI as a platform capability, not a feature, and therefore you have to have that capability everywhere and you have to allow reuse of models and you have to govern it very carefully. And I think doing that once rather than 23 times in 23 countries is a good idea.

It does seem that whenever there’s a technology shift, the push towards centralization appears. “We need to get a hold of this. We need to understand how to use it and we can spread it back out to the divisions.” I’m just curious, you describe yourself as a provider of backend solutions. That’s your job. Your title is CEO. Do you think of yourself as the CEO of an infrastructure provider inside of Experian? Are you a vendor to the other divisions?

Well, think about it this way. My title is CEO for Experian Software and Technology. The Software stands for all the software we sell to our clients. On that side, I’m in charge of what the product looks like. Is it evolving the way it should? Do we have a competitive advantage versus everybody who competes with us? The product needs to be the best. Certainly we try to always be the most innovative — the first, the best, and, in some cases, the only product that can do what our products do. That’s how we make money, and that’s how we grow those businesses. It’s a typical go-to-market role.

The other part of my title, Technology, stands for our technology infrastructure, and that’s a little bit of what we have talked about so far. That’s empowering all the business units with all the services that they need. And we do have platform builds. The way I think about it is, we want to put data, analytics, and AI into the hands of all of the business units that build our products. So the question is, what can I build centrally that enables them to do that faster, so that we can stay innovative and they can stay innovative?

So you have shared data foundations and shared backend services. You have modular services that people can use. And then you have AI models that can also be reused if they access the same type of information — typically, that’s appropriate when it’s depersonalized information, not personalized information. If you put the three together — shared data foundations and backend services, modular services, and AI models — you don’t have to build one-off apps anymore. You can reuse a lot and focus on the feature functionality that’s specific to that industry, that vertical, or that country.

One more question here, and then I want to ask the other Decoder question. You mentioned the divisions making products. Do they have their own engineers, designers, or is that all in your group?

So 4,000 plus 7,000 equals 11,000. Of the 23,000 employees that we have at Experian, 11,000 work in technology organizations. 4,000 work in the central group that’s mine, and the other 7,000 work in the business units.

So how do you align those roadmaps? You can very quickly see how you might have one division working in one product that another division is also working on, and that is redundancy you might not need, or you might decide actually they need to be more different than similar. How do you align that?

I mean, that’s the work every day, Nilay. It’s not always easy. People think, “Oh, my division can build it better or faster or differently and therefore we should.” So we communicate. We have what we call a technology executive board, which I run — I’m the chair of that. All the CTOs sit on it and we disclose roadmaps. We talk about standards and make sure that once we have a standard defined, there is no rebuilding; then it’s all about reuse. So that’s our governance model for coordinating everybody: the technology executive board.

Tell me about that meeting. Just take me inside that room. Very, very few people will ever be in that room, right? Who makes the agenda? Is it you? How does that work?

I have a right-hand person, a group CTO, Rodrigo [Rodigues]. He works with the CTOs to say, “What do you think we should talk about?” And then he makes a decision on what’s on the agenda. It gets to me, call it a week before the meeting. I say, “Yeah, I like it,” or “I don’t want to talk about this. I want to talk about that.” He goes back out and then it’s sent to them so that everybody can prepare.

Everybody dials in. It’s a worldwide meeting — you know how complicated that makes early mornings for me, because I sit here in Colorado and time differences go both ways. We try to do this in the early morning hours for California, 6:00 AM or 7:00 AM my time, and then everybody dials in. Altogether, I think we have 20 people dialing in: 10 CTOs and CIOs, plus our CISO and our risk officer.

We have some people who drive specific topics — for example, the person who drives our AI initiatives and coordinates them across the company, et cetera. When we did the cloud migration — we’re at the tail end of that — there was a person on the call who was responsible for it. They’re all high-level people, so I’m going to call it an expensive meeting, with real decision-makers. The meeting lasts about three hours and we have it monthly.

This is going to lead right into the next question. What’s the spiciest thing you had to make a decision on in that meeting?

Well, there are so many. Spicy is when it comes to enforcing a standard where people need to decommission a tool that they love, a tool that their developers love, a tool that’s embedded with all the customers. Adopting the standard then means a migration, at a minimum for our internal technology teams and maybe even for the clients, and that becomes an effort that takes time, an effort that costs money, an effort that clients don’t like.

And therefore making such a decision is long contemplated and requires detailed plans, because you don’t only need to think about, well, is it the right standard or not, but what are the consequences, the secondary and tertiary consequences of the decision? That gets spicy. And we’re not an autocratic organization, so we err on the side of letting everybody speak their piece and hearing everybody out. And if that takes several meetings, then we let that happen. But at the end, we all align, even those who would have preferred a different decision. Those are the spiciest of all decisions. And there are many examples.

And it’s always migrations. It’s never anything but migrations. It’s lurking in the background of every company. This is the other question I ask everybody who comes on Decoder: You’re describing the kinds of decisions you make and the manner in which you make them. How do you make decisions? What’s your framework?

I think God has given us two ears and one mouth because we should listen twice as much as we talk. So as a leader, what you need to do is hire world-class teams and people who are better at what they do than you are, and then you need to let them do their work and you need to let them speak. At the end of the day, I try to surround myself with people who can scrutinize what people have in their brains and what’s being shared. And if they come to a consensus, I usually go with the consensus. You can probably count on a couple of fingers how often in a year I will go against what that group of CTOs would want to do. And if that happens, it is usually because I refer to a principle that they did not take into account and I try to be a principle-based leader.

I have a clear hierarchy of how I make decisions. I talked earlier about privacy, consent, security: those are at the top of my list and it’s not always the most economic decision, and therefore my CTOs might suggest something that makes more sense from an economic perspective, but maybe isn’t as tight from a security perspective. And then I veto it and I say, “Well, we’re going to pay the extra money and we’re going to do it anyway.” But it happens very, very rarely because people know the principles that we work by.

So if you have clear principles, you listen to people, you surround yourself with strong people, you make room for a debate that is open, transparent, and very inclusive. Everybody can speak. There is no hierarchy in the room. You take your time for it and then you make the best call you can with the information available.

Let’s put this into practice. Let’s talk about how AI might be changing your business and what you’re doing. The foundation here is that even the idea of the credit score is relatively recent. This is a creation of basically the late 1980s and a lot of people can have a lot of feelings about their credit scores. I would say Experian, TransUnion, Equifax, you can have a lot of feelings about whether or not those companies are responsive to you if you have feelings about your credit score and where they come from.

In a world of AI, you have vastly more opportunity to make something richer in the data because you can query it differently. You have vastly more opportunity to collect information because you can ingest more unstructured information and provide predictions. And then you have vastly more risk because the models might hallucinate the data or they might reflect some underlying bias in the data set as a whole. Or you might have huge security problems as we build out how the AI models might talk to each other in databases. How do you evaluate all of that risk and still be trusted as Experian? Because that seems like an awful lot of new risk as the technology shifts.

Nilay, a great question and really perfectly articulated. Let me give you two answers to that. One is just explaining how we think about the credit score. You called it relatively recent from the ‘80s. So if it’s okay, I’m going to provide a different perspective to that. And then I’m going to talk about just how we apply AI.

Let me start first with the history of our company. We have a guy in our history, his name was Sy Ramo, an Indian immigrant into the United Kingdom, and he ran a large merchant store. He sold everything between Nottingham and Birmingham there in the Midlands of England. And he had a big heart. And one of the things that he did was, when there were people who he knew well, he gave them drugs and pharmaceuticals on loan. They came and said, "Look, I'm sick. I have this. I can't pay for it. Can I just have the medicine so I can get better and I'll pay you in the future?" And he trusted and did that.

Then his immediate relatives, people he knew well, told other people, “Hey, Sy Ramo does this.” And then people started coming who he knew less well. And he said, “Well, who are you? I know your brother or your employer or this or that person.” And he expanded it. Fast forward a bit, there was a line outside of his general merchandising store with people who he didn’t know anymore. People coming to him because he had a big heart. He gave away pharmaceuticals, drugs without any securitization. And he was a smart man. And so he started writing down on paper what the attributes were of those people who he gave drugs and pharmaceuticals to, who paid him back and who didn’t.

And that, to me, to us, was the beginning of credit scores. He just looked at how people behaved and what people had in common who were good loan risks, because he gave away the pharmaceuticals without having money in his hand, and who were bad lending risks. That is part of how our company started and that’s still how we practice our business. If you understand how people behave, you don’t have to know their age, their gender, their ethnic background, their sexual preferences, all the stuff that’s written down in law anyway. We should all think about that and our business should work like that.

There’s plenty of regulation that stipulates that it is. That’s our very heritage. You look at people’s behavior. What we do with the data is that usually the data is depersonalized, because what I just described, you can do that without knowing it’s Nilay, it’s Alex. You don’t have to know you live in New York, I live in Colorado, you don’t have to know your background, my background, you just look at how we behave. It’s depersonalized data on which all those services are provided.

Then let me move to the second part of the question, which was about AI. And you implied in how you asked the question that there is access to that data. So let me first say, our data is not accessible by any public AI or gen AI models. And we currently don’t see a way that we’re going to go there. What we use AI for primarily is to make sure that governance is done correctly, explainability is provided, and human oversight is better than it was before. Let me give you an example. The way that financial services are creating their products is basically through a model. The model says, “I have this loan product and I think the acceptable risk is this type of person that behaves in the following way.”

Our data feeds that. It can be the credit score. It can be where you reside, if it’s a local or a regional bank, it can be your lending history. Do you have the capacity to take on another car loan? It can be your income. Has it increased over time and therefore is it projected to continue to increase? Et cetera, et cetera. There’s a whole bunch of data that goes into those models, none of which need to know whether it’s Nilay or Alex or who specifically we are. It’s all about, do we fit that model? The lenders need to file what those models look like and how the models are supposed to behave, meaning what kind of person qualifies, how many loans do they think they have, what would the loan losses be with the regulator?

So they do that, they develop the model, the lending product goes out, people start applying, the bank starts paying out the loans, and then loan losses start coming in. People start missing payments. That's part of the model's behavior, because there's a prediction of how much of that there will be. If those variables come off, the industry term for that is "model drift." Maybe the loan losses are higher. Maybe we're not getting as many people of those age groups. Maybe late payments are more than we thought. If any of those kinds of metrics come off, that's model drift. We use artificial intelligence when those models drift to prompt the person who has created the model, or the oversight department in the financial institution, that there is model drift.

Not only do we tell them that there is model drift, we also tell them what variables in their model are the reason for the drift. You're missing a data element, you set it too low, you set it too high, you need to open your funnel to people with lower credit scores. And then we allow them to adjust the model so that it behaves the way that they had filed it with the regulator. What I'm trying to tell you is that it's not that we use AI to access all of the personal information of people. We use AI to look at outcomes, derive the data, and interpret it, and then make it available to humans so that they can use it the way it needs to be used: in this example, human oversight of model performance.
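The monitoring Lintner describes — comparing the metrics a lender filed for a model against its observed performance and flagging the variables responsible for the gap — can be sketched in a few lines. This is an editor's illustration, not Experian's system; every variable name and threshold here is hypothetical.

```python
# Hypothetical sketch of model-drift monitoring: compare the metrics a
# lender filed for a model against observed outcomes, and flag the
# variables that have drifted past a tolerance. Names and thresholds
# are illustrative only, not Experian's actual system.

def detect_drift(filed, observed, tolerance=0.10):
    """Return the variables whose observed value deviates from the
    filed (expected) value by more than `tolerance`, relative."""
    drifted = {}
    for name, expected in filed.items():
        actual = observed.get(name)
        if actual is None:
            drifted[name] = "missing data element"
            continue
        deviation = (actual - expected) / expected
        if abs(deviation) > tolerance:
            direction = "too high" if deviation > 0 else "too low"
            drifted[name] = f"{direction} ({deviation:+.1%} vs. filed)"
    return drifted

# Example: the model was filed expecting a 3% loss rate, a 5% late-payment
# rate, and a 40% approval rate; observed losses have drifted well past that.
filed = {"loss_rate": 0.03, "late_payment_rate": 0.05, "approval_rate": 0.40}
observed = {"loss_rate": 0.045, "late_payment_rate": 0.052, "approval_rate": 0.41}

print(detect_drift(filed, observed))
```

The point of the sketch is the shape of the task: the comparison itself is deterministic bookkeeping; the AI value Lintner claims is in surfacing *which* variable drifted and prompting a human to adjust the model back toward what was filed with the regulator.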

By the way, that happens today, but it happens with slews of people, not automated, not real time, not as accurate as AI can do. And so we think there’s a real improvement of the process there because it makes lending fairer, more accurate. It allows the lending products to behave the way that the regulator intends them to behave, and therefore it’s AI for good, just like we try to make data available for good. And that’s important for people to understand. A data company like ours, like I said, currently I cannot see that we make our data accessible to any public AI provider and therefore let them build their large language model based on our data. By the way, the large language models are much better at text than they are at math.

This was going to be my next question. You’re describing a lot of math. My experience with every LLM is they’re pretty bad at math. Are you using LLMs? When you say AI here, are you using a different kind of AI?

No, LLMs. We’ve built our own large language model, we built SLMs, small language models for smaller tasks. We have about 200 agents built into our products already now. There are different ways in which we use AI, but yeah, we built an LLM based on information we have.

But when you’re calculating model drift, that’s an LLM doing it, or what kind of technology is doing it?

Yeah, that would be a small language model because basically what the model does is, it reports out what’s happening and just one number is smaller than the other. That’s not math. It doesn’t do the calculation, it just recognizes it.

We wrote an entire story about how ChatGPT can't tell time. Sometimes even telling which of two numbers is bigger is quite difficult for these models, and so are increments. You think that's trustworthy? I'm asking you very directly because the problems of hallucination here compound, right? They get exponentially worse as you add more and more AI tools to the system. The problems of reflecting biases in the data get exponentially worse as you add scale, as we've talked about. How are you making sure the AI systems aren't either hallucinating or reflecting an underlying bias that you can't see?

Human oversight through data scientists. I think we’re too early in the journey that we can let it run on its own. We need to all practice responsible use. For a data company, it means we lean on some of the strongest human assets that we have, and those are our data scientists. They need to look at the output and they need to look at whether it’s accurate or not. And if it’s not accurate, we turn it off and we fix it. Or if it’s not fixable, we would throw it away. We haven’t run across that, by the way, but we would do that.

Have you run into this situation yet where the data scientists have said, “We can’t use this tool yet”?

Oh yeah, because we test everything before we put it in production. So it happens all the time. Nothing goes into production without going through that kind of process. We have synthetic data and we have depersonalized data that we use for testing new models, new agents, and we don’t put anything into production until we know it works. Nobody should, right?

That makes sense to me. What’s been the biggest gap between a capability you want an AI system to have and the one that you tested? I’ll give you this example. I think of Siri and Alexa and Google Assistant, right? Everyone knows what they want them to do.

And then I’m watching all of these companies try to add AI into the mix with their voice assistants and they’re not there. They just can’t quite do it. And Apple had to start over and Google is pushing out in stages and however that’s working, it’s working. What’s been your experience of, “Okay, we’re going to ship an AI tool and we want it to work this way, but it’s not quite good enough”?

I think it has to do with the interaction of AI with humans. The way I look at AI, and I think a lot of people do, is it’s a digital teammate or a digital workforce. So if it is that, then that teammate or that team would perform a certain task and it would contribute to the work of the overall team.

So we assume, hey, if we provide the following information to a team, to a person as an assistant in their workflow, they’re going to use it that way and therefore it’s a good thing. Well, we’re not always right. I sometimes compare it with — I drive a Mercedes car and I can talk to the car and it has a map that I can talk to and say, “Hey, Mercedes, tonight I’m seeing the Colorado Avalanche play hockey. Take me to the Ball Arena in Denver.” And so it will put in the directions from where I’m at and I will be taken there. I got so used to the tool that I now listen to the tool all the time. Though I know the area really well and sometimes it doesn’t give me the right route.

Are you describing a good outcome?

No, it’s not a good outcome and that is the outcome you want to avoid. It’s the answer to your question. If you trust AI to the point where you blindly trust it and always follow it, and you don’t check yourself through the data scientist in the example that we discussed a couple minutes ago, it bears risk. So the real job that we have is to make sure that doesn’t happen and the interaction with the human still happens. You can force it in rather than AI automatically doing what it does.

We’ve had [CEO] Ola [Källenius] from Mercedes on the show and I think he’ll be happy to know that you’re the single customer of the Hey Mercedes voice assistant in his cars, because I’ve been dying to know who else is using this thing.

In my car, I can turn on the lights, I can turn on the radio, I can switch radio stations, I can turn on my seat heater. You tell him I like it.

The next time he’s on, we’ll be like, “We found one.”

Let me ask you, this is going to be the hardest question. When I hear from our readers, when I hear from people about what AI might do, the idea that a company like Experian can make decisions that affect their lives using AI is terrifying. There’s not a lot of hope when people think about this outcome, that there’s an all-knowing AI that can generate scores about you based on your behavior and allow other people to make decisions.

And we see this in countries like China, where there are reputation scores, there are other kinds of centralized data providers that very directly affect people’s lives. You’re in the position to do that. So I’m going to ask this question in two parts. First, do you think people like Experian today? Do you think you have the foundation to build this next generation of products?

First of all, we’re not Palantir, so we don’t do reputation scores. We are very much in, like I said earlier, financial services, healthcare, automotive, and digital marketing. So that’s where we play. And I think I answered that question earlier. Why is it in the interest of people that their data gets used? It’s so that they get access to credit, access to healthcare, so that they know the vehicle history of the car they’re going to purchase, et cetera, et cetera. We try to use data for good. We do not make decisions. You used this phrase, “do you think people are comfortable that Experian can make decisions?” We don’t do that. We provide information.

You provide the tools that allow others to make decisions. Sure, I understand.

That’s right. To lenders, yes. They will make a decision anyway, won’t they? I told you the story about Sy Ramo, who is long gone. He made decisions. People will make decisions about you and about whether they lend to you. And the more you have to do that at scale — In North America, we have 247 million Americans. If you want the economy to blossom, if you want people to have access to credit, you need a scalable model.

I'm not saying that our system's perfect, but you can draw a worldwide comparison and you still have to say it is the best credit economy in the world. It really is. And there's lots of statistical data around it. We are part of that connected ecosystem. We're not all of it. We are part of that and we try to perform our role within that connected ecosystem responsibly and the best we can. If somebody has an idea on how to make it better, we'll be first in line.

Sure. But let me just try it again. If the answer to the question, “Do you think people like Experian?” is, “We’re not Palantir,” that sets a very low floor in a very specific way. You’ve talked a lot about trust. I’m saying right now, the way individual Americans encounter the brand of Experian is not always positive. And in many cases, it’s a faceless entity that controls a downstream decision that yes, a financial institution’s making, but the recourse is low.

This is the trade-off we’ve been talking about with scale this whole time. AI might allow you to change the amount of recourse people have. It also might allow, I don’t know, a bunch of bad guys to launch ever more sophisticated attacks and get that data out. There’s more trade-offs here than not. So I’m just asking about the foundation of trust that you’re working with to begin with. Do you think enough people like and trust Experian for you to build this next set of capabilities, which might make you even more powerful?

I think enough people do. Let me maybe answer the question not with one sentence, but be a little more granular. I could point to data of the consumers who give us their data. So we have a direct-to-consumer business, and in the various countries that we are active with our direct-to-consumer business, we have hundreds of millions of consumers who proactively make their data available to us. We protect their identity. We do everything that I described earlier. We give them access to comparing financial products so they can lower their cost of borrowing. We give them access to lower cost car insurance, et cetera. And those consumers like us. And I know that because we ask them and we get a net promoter score and we look at that religiously every month to see how we are doing. Are we doing right by all these people? Et cetera, et cetera.

Now there is another population that may not have that relationship with us that have, through life’s circumstances, a bad credit score, and those people sometimes don’t like us. And I want to make it really personal, Nilay, Alex was one of them. I’m an immigrant. I came here just about exactly 30 years ago and when you’re an immigrant, you don’t have a credit score. You don’t have access to credit. Life’s really hard. Really, really hard for us immigrants in the beginning years. And I wish there were a system that the law would allow to make life easier for people like us, but there isn’t. And my life became difficult because I wanted to stay here. I went to school here, that’s initially how I came here, and then I wanted to stay here and get a job and all of that. And if you don’t have credit, you’re riding public transportation to work, et cetera, et cetera.

I had an hour and a half commute for years and years because I couldn’t afford a car, couldn’t buy the car because I didn’t have enough cash. Life’s hard. And in those situations, there are much worse stories than my personal story, but I just want you to know I’ve felt it before. What we try to do is we try to do away with people having low credit scores by giving them tools to improve their credit score. The way that the initial formula was written, it allowed for all recurring financial transactions to become part of the score.

I don’t want to pick on our competition, so I’ll phrase it this way. We’re the only ones who allow that. Credit bureaus, other credit bureaus, they only take lending history. So have you had a loan before taken into account? Well, there are other recurring financial payments, your streaming service, your cell phone bill, et cetera, et cetera. There are so many payments that you make, your utility bills, that you make every month and if you make it reliably every month, that should be part of your score and therefore increase your score.

We’ve created a system called Boost, Experian Boost, where people can upload that information and their credit score goes up. So they don’t have to go through that period that I did because I did rent an apartment, I did pay all my utilities, et cetera, et cetera, and I wanted to have access to credit. So we tried to lower the hurdle and therefore have fewer of those people who are impacted by life circumstances. I don’t think people don’t like Experian. They don’t like what that score expresses at the time. And if we have issued it to whatever lender they talk to, then the finger gets pointed at us.

Sure. I just think there’s a feeling of helplessness that comes with that score sometimes, right? There’s a feeling of lack of recourse, particularly if you feel that score is wrong, right? And that’s where I think a lot of the —

But that’s why Boost, right?

Well, sure. But Boost is like an interesting set of incentives, right? For you, it’s a product you sell. It might help some under-banked or low-credit people immediately —

We don’t sell it. It’s free. We don’t sell it. We provide it for free because it’s the right thing to do. It’s free for the consumer, it’s free for the bank.

I didn’t realize it was also free for the bank. I assumed the bank paid. So there’s no economic incentives for Boost at all?

No, no, no. It’s the right thing to do because what you’re pushing on, Nilay, is you are expressing in your own words what kind of company we are. I would probably express it differently, but directionally, you’re describing it right. When you’re in that business, you need to have a really clear, ethical compass on how you conduct business. We have that at Experian. Boost is an expression of that. Let’s help the consumer get it right. Let’s help the consumer fix their score if the score is wrong. It’s not okay if the score is wrong because it makes life really difficult. And therefore we have provided the mechanism to do that.

By the way, for that, you need a real-time bureau. We're the only real-time bureau in the world. Nobody else is real-time. Your delay at other companies is 30 days. So if our competition had a functionality like that, you put in your information, and 30 days later you get your score updated. It's useless. We built it in real time. You put in your data and it changes right then, and you can go back in the door, not that people still go to the branches, but back in the door, talk to the lending officer and say, "Hey, take a look at my score. It's not what it was 10 minutes ago."

I’m curious about that because again, there are the trade-offs as you attract more scale, as you provide more products, as you use AI to build even more scale. Down at the bottom, the individual consumer, the thing I’m pushing on is, will they feel more recourse or more control or less? And over time, I would say increasing centralization and scale in the economy has led to feeling less empowered.

I’ll slightly change the subject here because I want to end on security. You have a lot of data. I know you’re moving a bunch of your data to AWS, you’re moving to the cloud that will help you with security and in some ways it’ll help you with AI in other ways.

Sometimes the only way people hear about companies like Experian is because of data breaches. Your competitor, Equifax, had a massive data breach. How do you think about that? “As we collect more data, we’re a much richer target, and then the bad guys are going to use AI to launch automated attacks”? We’ve seen the studies from the frontier labs already. It’s like this is going to start happening.

That’s another place where the consumer basically has to trust you, right? That’s just how it’s going to be.

How do you think about the cost of mitigating against the increased attack surface of your scale, the increased capability of the attackers, and all of the products that you want to provide to people?

It's the first dollar we should spend. If we don't do that well, we don't have a reason for existing, because a bad actor will get in. Just to say it for a second, I've been here 10 years. The last breach we had occurred two weeks into my tenure at Experian, so 10 years ago. We are in a business where we actually protect the identities of people whose identities were stolen, because we have access to the dark web and we know how to clean it up. When Equifax had their breach, they paid us to protect the consumers whose information was stolen.

So I’m not saying we’re perfect at it, but we’re pretty darn good at it, so good that even our competitors give us their business. It’s job number one, Nilay. There are no two ways about it. That is the biggest risk in this sector. That is the biggest risk for anybody who has a business similar to us. It’s the biggest risk for us and therefore it’s the first dollar we’re going to spend.

When you say first dollar we’re going to spend, do you think about that in terms of return on the investment specifically or just “This is the enabling cost of all of the other investments we’re going to make”?

This is the enabling cost of all the other investments we’re going to make. So I’m going to buy all the tooling, I’m going to hire all the people that we need to keep us safe, we’re going to deploy the technologies that do that the best, and we’re going to try to stay ahead of the bad actors who do deploy AI, who do, now, as you said, use bots to get in. We bought a company called Neuro-ID, which detects bots in a much better way than anything else that we have seen and banks are eating it up. There’s an economic incentive, by the way, to do that well because it’s a service we provide and we got to stay on it.

Experian’s a public company, obviously there’s some amount of pressure to deliver increasing profits. Enabling costs, especially big enabling costs, can come under pressure. Is that just you who has to defend it? Is it the ethos of the company? How does that work?

It’s the ethos of the company, for sure.

So if you show up and say “We need to double the cost of security,” it’s just going to be fine? Because I hear from our listeners who have similar situations as you that the incremental cost of security sometimes is hard to defend.

Not at Experian. I don't know who you're referring to, but not at Experian. And I will tell you this. The good thing about the business model that we have is that it's a scale model. We talked about scale a lot and you talked about the risk of scale, but the benefit of scale is that some costs are fixed, and as you scale, they get distributed over a greater amount of business. So you have natural scale benefits: your fixed costs become a smaller share of the cost of each unit of business as you grow.

So when it comes to security, what does that mean? That means if today we have 200 million consumers that give us their information and tomorrow we have 300 million, there’s not a 50% increase in security costs, even if I buy the leading-edge technology. And therefore our scale I think actually allows us to buy all the best tools, hire all the best brains in the industry to defend against bad actors.
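The arithmetic behind that claim is worth making concrete. If security spend is mostly fixed, growing the consumer base pushes the cost per consumer down even as total spend rises. The dollar figures below are entirely hypothetical, chosen only to illustrate the shape of the argument:

```python
# Back-of-the-envelope illustration of the scale argument: mostly-fixed
# security costs spread over more consumers mean a lower cost per
# consumer. All dollar figures are hypothetical.

def per_consumer_cost(fixed_cost, variable_cost_each, consumers):
    """Total cost divided by the number of consumers served."""
    return (fixed_cost + variable_cost_each * consumers) / consumers

FIXED = 500_000_000   # tooling and staff: roughly flat as you grow
VARIABLE = 0.50       # per-consumer cost (storage, monitoring)

at_200m = per_consumer_cost(FIXED, VARIABLE, 200_000_000)
at_300m = per_consumer_cost(FIXED, VARIABLE, 300_000_000)

print(f"${at_200m:.2f} per consumer at 200M")  # $3.00
print(f"${at_300m:.2f} per consumer at 300M")  # $2.17
```

In this toy version, serving 50 percent more consumers raises total spend from $600M to $650M, roughly 8 percent, which is the "not a 50% increase in security costs" point Lintner is making.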

Let me wrap up by just trying to tie all this together. I’ve talked a lot about the individual consumer. That’s a lot of our audience, people who build things, people who think about the kinds of products AI might help them build, the kinds of scale that you might operate at. Some people who just want the kind of scale that you might operate at, right? That’s the ambition.

As you see us go into this next era where there is more legibility of data — that’s what I would call it, right? That’s really what the AI that you’re describing will provide to financial institutions — how do you make sure that Experian actually empowers consumers, not just in access to credit, which is what you’ve come back to over and over again, but increases the feeling that our agency as individuals in the economy is going up instead of down? Because I would say right now, a lot of people feel like their agency in the economy is actually going down.

I don’t want to make any political statements, but that is… Unfortunately, I would say you’re correct with that. We try to have our own compass of what’s right and what’s wrong, and we try to empower consumers. So opting out needs to be easy. Opting back in needs to be easy. We have several ways of doing that. I was going to call it stages. So more severe [ways], like a credit freeze, that’s harder to undo, or to create a lock, [which is] easier to do and undo, depending on what happened to you, identity theft or not, or just as a precaution or just because you don’t like it.

So we allow you to lock your data away and we should make that easy. We should make that easy in whichever way you want to contact us, whether you want to do it online — which is economically better for us, it costs us less per interaction with the consumer — whether you want to call us. We have a call center with thousands of people. It is a US-based call center. A lot of people complain about, “Oh, I talked to a person in country X in an accent I couldn’t understand.” We don’t do anything like that because we want to do right by the consumer. We are, even in our B2B business, really it’s a B2B2C business, because at the end we affect our consumer, which is what you keep emphasizing. And we are very conscious of that responsibility and try to show it in how we continue to evolve our services.

Alex, this has been great. Thank you for being so candid. Thank you for being on Decoder. We’ll have to have you back soon.

Nilay, thank you so much for the invite. It’s good to talk to you.

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.
