It’s not an overstatement to say that detecting risk is a tough task, even with the best technology available. Risk leaders have to track the changing face of fraud, supplier risks, cyber crime, and a host of other threats, which means the approach to detecting them must be equally multi-faceted. Yet we seem to be operating on the assumption that if it ain’t broke, don’t fix it, a valid mindset in some situations, but a woefully incorrect one here.
Why? Risk isn’t a static concept. In 1961, the Merchandise National Bank of Chicago made a promotional film touting its newest cutting-edge weapon against fraud: a computer that could scan checks for signs of fraud in seconds and process nearly 30,000 transactions in a “speedy” three to four hours. In the 60s, the ability to scan checks automatically was the height of fraud detection.
It’s safe to say our computer systems have come a long way, offering processing capacity entire orders of magnitude greater than that of their 1960s predecessors, in a fraction of the space. We’ve also built risk and security systems that can successfully detect many of the emerging types of fraud. Many, but not all.
Rules-based risk detection models today scan millions of potentially fraudulent transactions per second and make decisions that can affect a person’s ability to get credit, make a deposit, or obtain insurance. These models check each transaction against a list of pre-determined rules, different indicators and red flags, to decide whether it looks fraudulent. The system works well, but it isn’t without limitations, as we’ll see.
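To make the idea concrete, here’s a minimal sketch in Python of how such a rules-based check might work. The fields, rule names, and thresholds are all hypothetical; a production rule engine would carry hundreds of rules tuned by risk teams over years.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float        # value in dollars
    country: str         # country code the transaction originates from
    hour: int            # hour of day (0-23) it was initiated
    daily_count: int     # number of transactions on this account today
    counterparty: str    # account on the other side of the transaction
    is_recurring: bool   # part of a known recurring schedule?

# Illustrative rules and thresholds only -- not any real bank's rule set.
RULES = [
    ("large_amount",     lambda t: t.amount > 10_000),
    ("high_risk_region", lambda t: t.country in {"XX", "YY"}),
    ("odd_hours",        lambda t: t.hour < 5),
    ("high_velocity",    lambda t: t.daily_count > 20),
]

def evaluate(txn: Transaction) -> list[str]:
    """Return the name of every rule the transaction trips."""
    return [name for name, check in RULES if check(txn)]

txn = Transaction(amount=12_500, country="XX", hour=3, daily_count=2,
                  counterparty="ACME-PAYROLL-001", is_recurring=True)
print(evaluate(txn))   # ['large_amount', 'high_risk_region', 'odd_hours']
```

Each transaction either trips a rule or it doesn’t; anything that trips one gets flagged, regardless of the wider context around it.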
The real question is, could it be better? Continuing to rely on and tinker with a system that’s good but not great may sound like a smart course of action, but it could be putting banks and financial institutions behind the eight-ball when it comes to fraud prevention.
Risk models aren’t broken, but they could be better
To be fair, rules-based models are not a bad idea. Fraud detection systems today are capable of spotting and mitigating risk at a great rate, but they come with some downsides. One of the biggest is that while rules-based systems are great at spotting the outward signs of fraud and risk, they don’t always catch the more subtle ones. Unless a specific instance of fraud shows clear signs according to the model, it may be waved through as a normal transaction. To counter this, many risk leaders add more detection rules in an attempt to cover as many bases as they can, and here’s where we run into issue number two.
Rules-based systems aren’t fast, and adding more layers and checks tends to add more time to the process. For banks and financial services providers dealing with transaction volumes in the millions per day, creating bottlenecks is a sure way of also creating unhappy customers. No one wants to wait three hours for a deposit to clear, or two days for a check to be processed.
Additionally, a common complaint about the rules-based models still widely in use today is that they tend to produce a high number of false positives, which is understandable given their baked-in rigidity. Because such a model is only concerned with whether a transaction matches a rule or not, it won’t differentiate between similar transactions that are actually distinct based on other factors; all that matters is that a rule was triggered.
It may seem that this is a price worth paying if it gets the job done, but that argument lacks nuance. The unseen cost of false positives, especially when a system raises them too often, is the impact on the overall customer experience. Too many false positives create bottlenecks as banks must manually review transactions, interact with customers, and resolve issues one at a time.
The problem is that resolving these issues isn’t as simple as removing a rule; after all, each rule was put in for a reason. Instead, new rules must be added, which layers more complexity and time onto the detection process. This solves one problem but creates another: the system can now distinguish between those transactions, but every check takes longer, and a sudden spike in volume creates a bottleneck.
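As a rough illustration of that trade-off, here is a hypothetical continuation of the earlier sketch (it reuses the `Transaction` class, `evaluate` function, and `txn` example from above). Rather than removing the `large_amount` rule, exception rules are layered on top of it, so every transaction now passes through two rounds of checks.

```python
# Illustrative exception layer only -- not any real bank's rule engine.
PAYROLL_ACCOUNTS = {"ACME-PAYROLL-001"}   # hypothetical whitelist

# (exception name, flag it can clear, condition under which it clears it)
EXCEPTION_RULES = [
    ("payroll_whitelist",  "large_amount", lambda t: t.counterparty in PAYROLL_ACCOUNTS),
    ("scheduled_transfer", "odd_hours",    lambda t: t.is_recurring),
]

def evaluate_with_exceptions(txn: Transaction) -> list[str]:
    flags = set(evaluate(txn))                    # pass 1: run every base rule
    for _, cleared, applies in EXCEPTION_RULES:   # pass 2: run every exception rule
        if cleared in flags and applies(txn):
            flags.discard(cleared)                # suppress the false positive
    return sorted(flags)

print(evaluate_with_exceptions(txn))   # ['high_risk_region']
```

Fewer false positives, but more work per transaction, and each new exception added over time makes that second pass a little longer.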
As the current situation wreaks havoc on existing banking risk detection systems and renews the popularity of rules-based models, it’s worth remembering that just because something worked before doesn’t mean we should run back to it. And just because banking is facing a tough time right now doesn’t mean fraudsters are giving the sector a breather. To keep up with the ever-changing face of fraud and risk, financial services providers need to think more dynamically about how they approach fraud. We like to think our financial services are great at preventing crime while functioning efficiently, but with the systems we currently use, something’s gotta give. So, what can we do about it?
How do we fix it? Dynamic models hold the key
It’s unfair to claim this is the result of intransigence on the part of risk leaders. They spend their days fighting forces that are constantly changing, and they have led the way on innovation throughout banking’s history. When it comes to arguing for wholesale changes to systems that are working, though, the discussion is tougher to move forward. Adapting existing systems that work, warts and all, to become more dynamic through potentially costly initiatives is a tricky conversation to have. But to keep up with increasingly sophisticated crime, it’s a necessary one.
Today, most fraud attempts aren’t overt. Money laundering won’t show obvious signs if it’s done right. Insurance and claims fraud have become far more sophisticated as it gets easier to impersonate people online.
Establishing long, complex rules systems eventually becomes inefficient, so we must think of risk detection more dynamically. The answer isn’t more rules, but rather to reconceptualize our models as smarter, more flexible systems that can look for more than just established metrics. Instead of checking for telltale signs, dynamic models should scan transactions to pick up patterns and smaller indicators that something may be amiss. What we’re looking for isn’t necessarily a smoking gun, but a trail of crumbs that could lead to bigger, undetected crime.
The idea behind dynamic risk models, especially anomaly detection and anti-money-laundering models, is that crime often doesn’t show the clear signs we’ve come to expect from it. A money launderer today won’t simply try to shove through a million-dollar, or even thousand-dollar, transaction. Instead, they’ll filter in their cash slowly: a few hundred dollars here, a few hundred dollars there. A rules-based system may think everything’s fine; after all, none of these transactions break the established rules.
However, a dynamic system might spot an interesting pattern. These seemingly normal transactions are all a little too exact. For a business that deals in cents and dollars, six $400 transactions a week, every week, might be enough to flag it for observation by human eyes. Similarly, transactions that jump through a lot of banks and come from anonymous bank accounts may not technically break any rules, but they could show signs of suspicious behavior.
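To make that concrete, here’s a minimal sketch of the sort of soft signal a dynamic model might surface. It uses a simple frequency-and-variance check in Python with made-up deposit figures; real anomaly detection and AML models are far more sophisticated, but the principle of looking for patterns rather than rule violations is the same.

```python
from collections import Counter
from statistics import mean, pstdev

# Hypothetical week of deposits for one small cash business (dollars).
deposits = [412.37, 400.00, 187.90, 400.00, 400.00, 921.14, 400.00,
            400.00, 254.60, 400.00]

def structuring_signals(amounts: list[float]) -> list[str]:
    """Collect soft indicators of structuring: not rule violations,
    just patterns worth a look by human eyes."""
    signals = []

    # 1. Too many identical, suspiciously round amounts.
    most_common_amount, freq = Counter(amounts).most_common(1)[0]
    if freq >= 5 and most_common_amount == round(most_common_amount, -2):
        signals.append(f"{freq} deposits of exactly ${most_common_amount:,.2f}")

    # 2. Deposit amounts clustered far more tightly than a cash business
    #    usually manages.
    if pstdev(amounts) < 0.2 * mean(amounts):
        signals.append("unusually low variance in deposit amounts")

    return signals

print(structuring_signals(deposits))   # ['6 deposits of exactly $400.00']
```

None of these deposits would trip a threshold rule on its own; it’s the repetition across the week that makes the account worth a second look.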
Keeping up with the (criminal) Joneses
Financial services providers love to tout their devotion to the cutting edge of technology, and with good reason: most crimes, after all, have a large financial component. However, that passionate commitment can sometimes be a lot of sizzle without much substance behind it. It’s easy to become complacent when we have risk models in place that are good, even if they aren’t great. After all, why risk change when what we have isn’t broken?
The problem is that unlike the good guys, criminals aren’t so comfortable resting on their laurels. They’re constantly looking for better ways to commit crimes that will go undetected, and the scary truth is that they’re surprisingly good at it. To keep up, risk leaders must be unafraid of taking the technological leap and looking for ways to scan even the most innocuous parts of their operations. Using the right technology, and pairing it with the right data sources, can give organizations a much better chance to find fraud and stop it before it happens.