
Artificial Intelligence Treats Risk Like Cancer

An embarrassing thing happened to me in Amsterdam. I’d just finished dinner with a new partner at a nice restaurant. OK … more expensive than nice, but you know what I mean. I grade food in Amsterdam on a curve.

We were getting to know each other, talking about where we came from and where we’re going. After the dessert the waiter brought the check. We split the bill: 167.35 euros for me, 167.35 euros for him. His card worked. Mine didn’t. WTF!


Bear in mind there was wine with each course … so I wasn’t at my sharpest when the bill arrived. I checked my balance on my bank’s mobile app. There was plenty of money in the account. Whatever. Not one of those euros was helping me.

I gave the waiter my Amex. It went through because … it always goes through.

It’s probably happened to you, too. A risk system prevents you from making a purchase. You go from enjoying yourself to rapid problem-solving mode. Not fun.

One of the biggest complaints I hear from our new partners is, “My old biller was scrubbing too hard!” In other words, the biller was stopping good transactions and preventing sales. It can happen. It was the reason my card wasn’t accepted at the restaurant in Amsterdam.

This summer Visa changed its rules. If “scrubbing too hard” to stay under a 2 percent limit was annoying, scrubbing to stay under 1 percent can kill your business. How does a biller know which transactions to accept and which to block?

The early approach involved looking for patterns in data. Specialists would look at their data and come up with ideas to identify risk. “It looks like people in France chargeback a lot.” Programmers would query databases to find patterns. “Yes, it’s true. People in France chargeback more than average.” Then the programmers would write algorithms to identify and block those transactions.
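As a rough illustration, a rule from that era might look like the sketch below; the field names and the single-country block are hypothetical, purely to show the shape of the old approach.

```python
# Hypothetical sketch of an old-style, hand-written risk rule: analysts
# noticed one country charging back more than average, so programmers
# hard-coded a block for it. Field names and the country are made up.

def old_style_rule(transaction: dict) -> bool:
    """Return True if the transaction should be blocked."""
    high_chargeback_countries = {"FR"}  # pattern spotted by the analysts
    return transaction.get("card_country") in high_chargeback_countries

print(old_style_rule({"card_country": "FR", "amount": 49.95}))  # True
print(old_style_rule({"card_country": "NL", "amount": 49.95}))  # False
```

The obvious weakness is that a rule like this blocks every cardholder from that country, good or bad, which is exactly the “scrubbing too hard” problem.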

Large billers also have risk analysts who manually review transactions looking for suspicious signs. Perhaps they could see that the same IP had been used to make 10 transactions with different cards in a short period of time. Then they could check to see if those users had opened the confirmation email with the login data. If the emails had not been opened, the risk analyst could cancel those transactions.
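In code, that kind of review heuristic might look roughly like the sketch below; the transaction fields, the ten-card threshold and the one-hour window are assumptions for illustration, not anyone’s real rules.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical sketch of the velocity check an analyst might run: flag IPs
# that used many different cards in a short window, then keep only the
# transactions whose confirmation email was never opened. Fields are made up.

def suspicious_transactions(transactions, max_cards=10, window=timedelta(hours=1)):
    by_ip = defaultdict(list)
    for tx in transactions:
        by_ip[tx["ip"]].append(tx)

    flagged = []
    for txs in by_ip.values():
        txs.sort(key=lambda t: t["timestamp"])
        distinct_cards = {t["card_fingerprint"] for t in txs}
        span = txs[-1]["timestamp"] - txs[0]["timestamp"]
        if len(distinct_cards) >= max_cards and span <= window:
            # candidates for cancellation if the login email was never opened
            flagged.extend(t for t in txs if not t["email_opened"])
    return flagged
```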

The goal of a risk department is this: Find the smallest group with the highest percentage of bad guys. That may not make immediate sense. Risk wants to block as few transactions as possible. Ideally risk systems find all of the risky transactions in less than 1 percent of the total. Then they would not be blocking any good, non-risky transactions. Ideal. No one has reached that ideal but the best risk teams are moving closer towards it each day.
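One way to picture that goal is to score every transaction, block from the top of the ranking and ask how small the blocked group can be while still containing every risky sale. A toy sketch, with invented scores and labels:

```python
# Toy illustration of "find the smallest group with the highest percentage
# of bad guys": block transactions from the top of a risk-score ranking and
# measure how small that group can be while still catching every risky sale.
# The scores and labels below are invented.

def smallest_blocked_share(scores, is_risky):
    total_risky = sum(is_risky)
    if total_risky == 0:
        return 0.0
    ranked = sorted(zip(scores, is_risky), reverse=True)
    caught = 0
    for blocked, (_, risky) in enumerate(ranked, start=1):
        caught += risky
        if caught == total_risky:
            return blocked / len(ranked)  # fraction of all sales blocked
    return 1.0

scores   = [0.9, 0.8, 0.7, 0.2, 0.1, 0.1, 0.05, 0.05, 0.01, 0.01]
is_risky = [1,   1,   0,   0,   0,   0,   0,    0,    0,    0]
print(smallest_blocked_share(scores, is_risky))  # 0.2: all risk sits in the top 20%
```

The closer that fraction gets to the risky share itself, the closer the system is to the ideal of blocking nothing but the bad transactions.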

One of our partners at Vendo comes from a long line of innovative doctors. His great-grandfather invented a dye that surgeons use to identify cancerous cells during an operation. It’s called “Terry's Polychrome Methylene Blue.” Before this dye, doctors would start cutting and they would cut out too much healthy tissue … just to be sure they had removed all of the cancer. Once they applied the dye, however, the cancer cells would identify themselves by changing color. The surgeon could make sure to cut out only the cancer, leaving as much of the healthy body as possible. That’s what risk is trying to do. Only cut out the cancer.

A false positive is identifying a good transaction as risky and either blocking or refunding it. You want to do that as little as possible. That’s me in Amsterdam not being able to buy with my regular card and switching to Amex. That’s the surgeon before the dye. That’s the biller that is doing risk the old way in a world that has changed completely.

A friend of mine died of cancer a few years ago. Her doctor told me that we don’t yet understand the disease. He said, “Once we do then we will be able to write down the cure on a single sheet of paper.” Today we have lots of treatments for risk. Many different approaches. But we don’t really understand it well enough to write the solution on one sheet of paper. Or do we?

Perhaps we do have a way of managing it that is as inexplicable and difficult to understand as the thing itself. A large insurance company recently spent tens of millions of dollars, hundreds of thousands of man-hours and no small amount of computing power to find a better way of evaluating medical risks and setting prices for their customers. A machine learning technique produced 20 percent better results than the next best approach.

In the end they went with the second best approach. Why? Because they wanted to be able to understand their model and they couldn’t understand what the machine was doing. It used a kind of alien intelligence. The humans couldn’t figure it out. So they destroyed the machine they feared. In the process they turned their backs on a 20 percent increase that would have made them the market leader.

How does artificial intelligence (AI) become intelligent? How does machine learning learn?

Just like a child. It senses its environment and tries to get what it wants. A baby wants food. It cries. It gets food. It learns that crying brings food.

In contrast, AI doesn’t want anything naturally. It has to be told what to want. You could think of this like instilling values in a child. We teach kids the golden rule, “Do unto others as you would have them do unto you.”

We tell the risk AI that it should maximize revenue within constraints: low reversals (refunds, chargebacks, stolen card alerts, etc.) and high throughput of good transactions. It learns by trying different approaches. When it finds one that works it does more of it.
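A heavily simplified sketch of what such an objective could look like, assuming a hard ceiling on the reversal ratio; the 1 percent ceiling and the penalty are placeholders, not production values.

```python
# Rough sketch of "maximize revenue within constraints": reward approved
# revenue, but make exceeding the reversal ceiling unacceptable. The 1%
# ceiling and the penalty weight are placeholders, not production values.

def score_policy(approved_revenue, reversal_count, approved_count,
                 reversal_ceiling=0.01, penalty=1_000_000):
    reversal_ratio = reversal_count / max(approved_count, 1)
    if reversal_ratio > reversal_ceiling:
        return approved_revenue - penalty  # constraint violated: huge penalty
    return approved_revenue                # otherwise, more throughput is better

# The learning loop then keeps whichever decision policy scores highest and
# "does more of" the behavior that produced that score.
```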

What are some of the ways we trained the risk AI to perform risk tasks?

We started with linear regression. This one is familiar to anyone who has sold their home. A linear regression model compares your house with recent homes that have been sold. It gives you the value of your house based on its features.

If your house has three bedrooms, was built less than 10 years ago and you have recently renovated your kitchen, then your house would be worth X. Improving your landscaping would increase the price of your house by $20,000. If that only costs $10,000, you would do it. If you add a fourth bedroom it would add $30,000 to the value, but the cost would be $50,000. Linear regression tells you not to do it.
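As a rough sketch, that kind of model takes only a few lines with a library such as scikit-learn; the features and prices below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: [bedrooms, age_in_years, kitchen_renovated]
X = np.array([
    [3, 8, 1],
    [2, 15, 0],
    [4, 5, 1],
    [3, 20, 0],
    [5, 2, 1],
])
y = np.array([410_000, 290_000, 520_000, 330_000, 640_000])  # sale prices

model = LinearRegression().fit(X, y)

# Estimated value of a three-bedroom, nine-year-old house with a renovated
# kitchen, and the model's estimate of what one extra bedroom is worth.
print(model.predict(np.array([[3, 9, 1]])))
print(model.coef_[0])
```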

The primary advantage of the linear regression model is that it is understandable. However, the results weren’t that good when we tried the algorithm on past data. There were too many clean transactions that were seen as risky. When we used linear regression on 18 months of transactions it found 50 percent of risk in 30 percent of transactions.

That means that if you had a chargeback ratio of 1.4 percent (over the limit) and wanted to be at 0.7 percent (comfortably under the limit), then linear regression would cut your sales by 30 percent. Do you have 100 sales a day? With this approach you would be left with only 70 sales a day. No, that wasn’t going to work. The results on historical data were so bad we never even tested it on live transactions. We had to keep looking for smarter solutions.

We tried gradient boosting machines. Here’s Wikipedia on gradient boosting: “Gradient boosting is a machine learning technique for regression and classification problems which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.”

Sounds good, and complicated (it is!), but it still didn’t produce the results we wanted.
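For readers who want to see what trying it looks like, a minimal scikit-learn sketch is below, with stand-in data in place of real transaction features and reversal labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data: in reality the columns would be transaction attributes
# (amount, country, card type, ...) and the label "reversed or not".
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of weak decision trees, built stage by stage.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```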

Next we tried random forest. This also uses a collection of decision trees. You’ve seen decision trees before. They have goofy ones in the back of every issue of Wired magazine. Your customer support people use them to decide when to give a refund or escalate. Here’s Wikipedia’s definition: “A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.”

The “random” part is designed to avoid overfitting the data, that is, making an algorithm that works really well on past data but isn’t street smart about new transactions. We want a system that is constantly learning, and random forest looks at the results of collections of different decision trees to be more flexible in dealing with the changing reality of risk.
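A minimal random forest sketch in the same vein, again with stand-in data rather than real transaction features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in data again; real inputs would be transaction features and
# reversal labels drawn from historical billing data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=1)

# Many trees, each trained on a random slice of the data and a random subset
# of features; averaging their votes is what fights overfitting.
forest = RandomForestClassifier(n_estimators=300, max_features="sqrt",
                                oob_score=True, random_state=1)
forest.fit(X, y)

print(forest.oob_score_)            # out-of-bag estimate of accuracy
print(forest.feature_importances_)  # which features the ensemble leans on
```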

A decision tree relies on patterns that a human can spot. Having large numbers of decision trees that are built by the machine enables the AI to identify patterns that no human could ever see. This is the approach to AI we use today. However, it competes with other approaches and will certainly be replaced with new, improved AI driven solutions in the future. It’s a never ending process. We’ve been investing deeply in our risk AI for over three years and we’re still learning a lot. It’s a very long learning curve.

It is very costly to build a system that goes beyond human intelligence. There are three upfront costs. You have to gather large amounts of relevant data. You have to build teams that can work with it. You have to create tools and access tremendous amounts of computing power. All of those costs can be understood upfront, before starting the project. However, there is a fourth cost that is hidden. It is the cost of ignorance, of giving up control.

But how much conscious control do we exercise generally? Our brains perform a massive number of unconscious calculations each day. When we are driving a car we look at oncoming traffic and decide whether to enter the lane. We measure the speed of oncoming cars, we estimate our car’s ability to accelerate, etc. We do all of this unconsciously. A self-driving car also does millions of calculations before deciding to enter traffic. We can’t fully explain the information we are processing … and neither can the AI driving the self-driving car.

No one fully understands how AI makes each decision. We can’t understand it because it is beyond human understanding. We design it, we feed it data and we measure the results it produces. What happens inside the servers where the AI lives is a black box, literally and figuratively.

It’s nerve-wracking. We would much rather work with a system that we can understand fully. Other billers have simpler systems that they can understand. However, those systems produce inferior results. In today’s world of tighter risk restrictions we cannot afford the comfort of old ways.

Google has gone through a similar transition. They used to rely on algorithms that they could understand. In recent years they switched to AI. Why? Search results were 15 percent better. The choice was clear. Switch to AI or no longer be the king of search, dethroned by an AI upstart.

Why do we feel comfortable sharing our hard-won intellectual property? Because there’s little risk in sharing. Billers always keep their risk rules close to their chest. Have you wondered why? Because they don’t want fraudsters figuring their rules out and going around them to defraud clients.

Our head of analytics is French. He lives in Barcelona. Recently he had to make a payment for his French mobile phone account. He tried from Barcelona with a French credit card and was blocked. He used a proxy so that he would appear to be in France, re-attempted the transaction, and it was successful. Clearly the risk algorithm used by his French mobile carrier checked for card/location mismatch but not for proxy. That is exactly the kind of thing that billers don’t want you to know.
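A naive version of that carrier’s rule might look like the sketch below; the flaw is that it trusts the IP’s apparent country and never asks whether the IP belongs to a known proxy (the function and fields are hypothetical).

```python
# Hypothetical sketch of the kind of rule the carrier appears to use: block
# when the card's issuing country does not match the IP's apparent country.
# The flaw: it never checks whether the IP is a known proxy, so routing
# traffic through a French proxy defeats it. Everything here is made up.

def card_location_rule(card_country: str, ip_country: str) -> bool:
    """Return True if the transaction should be blocked."""
    return card_country != ip_country

print(card_location_rule("FR", "ES"))  # blocked: French card, Spanish IP
print(card_location_rule("FR", "FR"))  # approved: a proxy makes the IP look French
```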

An AI doesn’t have fixed rules so we’re happy to talk about it. We used to have those rules. Back then we kept our mouths shut about what we were doing, for obvious reasons. Fraudsters focus their energies on systems they can reverse engineer. That’s only possible with simple, understandable risk systems. The best way for our industry to advance is with cutting-edge treatments for maximum health.

Thierry Arrondo is the managing director of Vendo, which develops artificial intelligence systems that allow merchants to dynamically set prices for each unique shopper.
