By Nick Cook
For several years, experts have heralded network-based defense as a potential game-changer in the fight against financial crime. By promoting greater collaboration and information sharing between financial institutions, law enforcement, regulators, and governments, the theory goes, fraud and other crime could be detected and mitigated across the entire financial ecosystem, rather than within the silos of individual firms. The promise of such a system is clear: by pooling data, sharing intelligence, and leveraging advanced technologies, institutions could address systemic fraud and AML threats before they spiral out of control.
Yet despite these high hopes and numerous efforts to foster intelligence-sharing between financial institutions, between public and private sector entities and across sectors, meaningful progress has been slow. While some initiatives have emerged, they remain largely piecemeal, failing to deliver the comprehensive solutions initially envisioned. This lag is not due to a lack of technological capability—after all, the necessary infrastructure and software systems have matured significantly in recent years—but rather due to the absence of sufficient incentives and regulatory accommodations that would support the necessary level of experimentation and innovation.
For a network-based defense system to reach its potential, stakeholders need both the freedom and the motivation to experiment, collaborate, and innovate. This requires a concerted effort to address the barriers currently standing in the way: regulatory constraints, organizational silos, and a lack of alignment between the interests of financial institutions, regulators, and consumers.
The underlying rationale for network-based defense is compelling: financial crime schemes do not respect institutional boundaries. Fraudsters and other criminals exploit weak points across multiple organizations, often moving funds through a labyrinth of accounts and institutions to obscure their trail. A collaborative, network-based system would allow institutions to better identify and share real-time insights into suspicious patterns and emerging threats, creating a more unified defense.
However, despite the logical appeal of this approach, the reality has been far more complex. Several factors have contributed to the limited adoption and effectiveness of network-based financial crime defenses.
The slow adoption of network-based defense systems stems, in large part, from a lack of appropriate incentives across key stakeholders—financial institutions, regulators, and consumers. For this model to succeed, all parties must have clear motivations to engage and contribute meaningfully to a more integrated and efficient financial crime prevention system.
Financial institutions face a delicate balance when it comes to adopting new detection systems. The benefits of greater collaboration and data sharing are clear: few, if any, would argue that sharing high-quality, relevant, and timely information and insight will not markedly improve crime threat response. Even so, institutions remain wary of the potential downsides, particularly the overload of alerts that would inevitably result from a more effective network-based system.
Institutions currently spend billions on systems and employ thousands of compliance staff to manage the flow of alerts generated by their siloed systems. If a network-based system were to suddenly provide a flood of new alerts, even the most well-resourced firms might struggle to manage the additional investigative load. This creates a fundamental disincentive for firms to embrace such systems without the right safeguards in place. If an institution is legally required to investigate and validate alerts within a specific timeframe, a significant increase in alerts from a new network-based detection system could overwhelm its capacity to respond. This surge would almost inevitably result in the institution failing to process all alerts on time, leading to immediate regulatory or legal noncompliance.
To encourage firms to adopt network-based defenses, regulators must offer safe harbor provisions—legal protections that allow institutions to experiment with new models without the immediate threat of regulatory penalties for increased alerts or historically missed cases. Additionally, cost-sharing mechanisms between firms and regulators could offset the financial burden of building and maintaining these collaborative systems, incentivizing participation by reducing the upfront costs.
Finally, regulators could offer penalty reductions or other financial incentives for institutions that adopt network-based systems and demonstrate a proactive and progressive approach to financial crime detection. By aligning compliance costs with participation in a shared defense infrastructure, regulators can create a more compelling case for institutions to collaborate.
One of the most significant barriers to network-based defense adoption lies with regulators themselves. As the architects of the existing systems (either through intent or as a result of institutions complying with policy), regulators are understandably reluctant to acknowledge that the models they designed are flawed or ineffective. Admitting that existing financial crime prevention frameworks are outdated or insufficient requires a level of institutional humility that is often difficult to achieve. This situation is only exacerbated by the lopsided appraisal that regulators are subject to by government, the media, and industry and society at large.
Regulators are rarely, if ever, applauded or celebrated when they succeed. But they are routinely, publicly, and often viciously critiqued, lambasted, and maligned when they err or fail. Any organization operating in such a one-sided performance evaluation environment will surely trend toward fear of failure, organizational inertia, and a reluctance to openly admit its errors.
To make matters worse, regulators rarely face formal, objective evaluations of their own performance. While they regularly assess risk management strategies within the institutions they oversee, there are few mechanisms for evaluating the success—or failure—of their regulatory frameworks. This paucity of evidence-based accountability creates little incentive for regulators to innovate or to actively support new models like network-based defense.
To address this, there must be a push for external objective performance evaluations of regulatory frameworks. By measuring the effectiveness of regulatory systems and policy in reducing financial crime, improving financial stability, and fostering innovation, regulators can be incentivized to support new models and reward experimentation within the institutions they supervise.
Regulators also need to be given a more active role as facilitators of innovation, helping firms navigate the complexities of data sharing, privacy, and system integration. By creating effective sandboxes for experimentation—where institutions can test new financial crime detection models without fear of regulatory reprisals—regulators can foster a more dynamic and collaborative approach to prevention.
Consumers have long been another weak link in financial crime prevention, particularly in relation to fraud and scams. While institutions and regulators can develop sophisticated detection systems, the final point of contact—where fraudsters often exploit human vulnerabilities—remains difficult to protect. Consumers, despite decades of financial education campaigns, often fail to recognize fraudulent schemes or to take the necessary precautions to protect themselves.
Initiatives like the UK’s Authorized Push Payment (APP) fraud compensation model shift liability from consumers to financial institutions, arguably creating a more consumer-friendly framework for handling fraud losses. Even so, the pain, shame, inconvenience, and loss of security that fraud inflicts will still be felt intensely by consumers, and the costs of fraud will ultimately continue to be borne by them through higher fees and/or denial of access on the basis of “de-risking.” Additionally, this shift in short-term liability may lead to consumer complacency, as individuals no longer feel a strong personal responsibility for preventing fraud.
To counteract this complacency, fraud prevention systems must offer tangible incentives for consumers to engage actively in their own protection. One potential solution is the introduction of micro-financial rewards for consumers who take proactive steps to avoid fraud. For example, if a consumer-facing tool flags a potentially suspicious transaction and the consumer chooses not to proceed, they could receive a modest financial reward from their bank. This creates a positive feedback loop, where consumers are encouraged to adopt responsible behaviors through financial reinforcement.
Beyond financial incentives, systems could also leverage moral and ethical appeals to engage consumers. Intelligent fraud prevention tools could remind consumers of the broader societal impact of their actions, for example by stating, “You are about to authorize a payment that is highly likely to be fraud. You will not receive the product or service you are seeking to purchase. This type of fraud is often linked to human trafficking or terrorist financing.” By connecting individual behavior to larger societal consequences, these tools can encourage consumers to act in the best interests of society, not just their own financial well-being.
If network-based defense is to deliver on its promise, institutions need to move beyond sharing plain-text narratives of fraud typologies and risk scores. While these forms of data have been useful in creating broad categories of fraud risk, they lack the granularity and specificity needed to create actionable insights in real time.
Plain-text descriptions of fraud typologies are difficult to translate into structured, machine-readable formats. This limits their utility in automated fraud detection systems, which rely on more precise data to identify emerging patterns. Additionally, broad fraud typologies fail to capture the complex relationships and subtleties that exist within modern fraud schemes, often leading to a focus on low-hanging fruit (the so-called “stupid criminals”).
To overcome these limitations, institutions need to embrace more sophisticated forms of data sharing, such as graph representations of fraud networks and labeled synthetic data. Graph representations capture the relationships between accounts, devices, and transactions in machine-readable form, while labeled synthetic data lets institutions share realistic fraud patterns without exposing real customer records.
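As a minimal sketch of what graph-based sharing could look like (the account identifiers and edge types here are entirely hypothetical), a fraud network can be expressed as typed edges between entities rather than a plain-text narrative, making it directly queryable by automated systems:

```python
from collections import defaultdict

# Hypothetical, simplified mule network: nodes are accounts, edges are
# typed relationships such as fund transfers or shared devices.
edges = [
    ("acct_A", "acct_B", "transfer"),
    ("acct_B", "acct_C", "transfer"),
    ("acct_B", "acct_D", "shared_device"),
    ("acct_C", "acct_D", "transfer"),
]

graph = defaultdict(list)
for src, dst, kind in edges:
    graph[src].append((dst, kind))

def downstream_accounts(start):
    """Follow transfer edges to find every account funds could reach from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dst, kind in graph.get(node, []):
            if kind == "transfer" and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

# Funds originating at acct_A can reach B, C, and D via transfer edges.
print(downstream_accounts("acct_A"))
```

The same structure could not be recovered reliably from a prose description of the typology; the graph makes the money trail a first-class, machine-readable object.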
As we look toward the future of fraud prevention, the role of consumer-facing tools becomes increasingly important. Many fraud schemes are designed specifically to exploit consumers’ trust, emotional responses, or lack of awareness. However, with the rise of Generative AI (GenAI), there are growing fears that fraudsters could use this technology to launch hyper-scale, highly personalized fraud schemes, targeting individuals with low-cost, persuasive communications.
The threat is real. With AI, fraudsters could generate tailored messages that manipulate specific vulnerabilities, based on an individual’s online behavior, preferences, or personal data. Imagine, instead of an email notifying you of an unexpected inheritance from a previously unknown relative, you receive a video message from your child asking for money to help pay for recovery of their vehicle after a breakdown on a busy road with unreliable network connectivity. How many more of us would fall victim to such attempts to part us from our money? The combination of personalization and automation has the potential to significantly increase the effectiveness of these scams, overwhelming existing consumer protections.
But this same technology—Generative AI—may be harnessed for good. If fraudsters can use AI to manipulate and persuade consumers, so too can we leverage these tools to create meaningful, personalized interventions that guide consumers toward better decision-making. The key lies in the ability of AI to adapt to each individual and provide tailored advice and persuasion.
It’s important to recognize that not every consumer will respond to the same type of intervention. Some consumers may indeed be motivated by social impact—the idea that their actions can prevent fraud that funds human trafficking or other crimes. For others, financial incentives, such as micro-rewards for avoiding fraudulent transactions, will be more effective. Others may simply want to protect their own financial interests or avoid the inconvenience of dealing with fraud investigations and compensation processes.
Rather than adopting a one-size-fits-all approach, AI tools should leverage their ability to understand individual needs, wants, and preferences. By analyzing a consumer’s behavior over time, AI systems can develop a nuanced understanding of what motivates each person and use that insight to deliver tailored interventions.
The ability to adapt messaging based on the individual’s unique profile increases the likelihood that interventions are not just effective, but also persuasive. This personalized approach can help ensure that hyper-targeted fraud prevention can outmaneuver hyper-targeted fraud attempts.
AI tools embedded in consumers’ financial ecosystems could act as real-time fraud prevention advocates, intervening at the point of transaction. These tools would analyze a range of signals—transaction history, device data, behavioral patterns, website history, open-source intelligence—to flag potential risks, offering personalized advice on how to proceed.
For example, if a consumer attempts a transaction that mirrors known fraud patterns, the AI bot could intervene, offering a tailored message such as: “This transaction resembles a scam pattern often linked to stolen credentials. Proceeding could put your account at risk.” Alternatively, for the socially motivated consumer, the message might read: “Transactions like this often fund illegal activities such as human trafficking. Avoiding this transaction helps reduce the impact of organized crime.”
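A sketch of how such profile-aware messaging might be selected (the motivation profiles, messages, and threshold here are illustrative assumptions, not any production system):

```python
# Illustrative mapping from a consumer's inferred motivation profile to an
# intervention message; a real system would learn these profiles over time.
MESSAGES = {
    "self_protection": (
        "This transaction resembles a scam pattern often linked to stolen "
        "credentials. Proceeding could put your account at risk."
    ),
    "social_impact": (
        "Transactions like this often fund illegal activities such as human "
        "trafficking. Avoiding this transaction helps reduce the impact of "
        "organized crime."
    ),
    "financial_reward": (
        "Declining this flagged transaction qualifies you for a small "
        "account credit."
    ),
}

def choose_intervention(profile, risk_score, threshold=0.8):
    """Return a tailored warning when risk exceeds the threshold, else None."""
    if risk_score < threshold:
        return None  # low risk: don't interrupt the consumer
    # Fall back to a self-protection framing for unknown profiles.
    return MESSAGES.get(profile, MESSAGES["self_protection"])

print(choose_intervention("social_impact", 0.92))
```

The design choice worth noting is the fallback: when the system has no confident profile, it defaults to the most universally relevant framing rather than staying silent.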
The AI bot would operate in real-time, offering dynamic interventions that reflect not only the consumer’s preferences but also the latest fraud trends and typologies emerging from the broader network-based defense system. This creates a powerful feedback loop: as the AI learns from each consumer interaction, it refines its ability to persuade and protect, using the same tools fraudsters might leverage but in service of fraud prevention.
By positioning AI bots as advocates for the consumer, these systems can create meaningful, actionable experiences that go beyond generic fraud alerts, engaging consumers with contextual, personalized insights.
While the potential benefits of a network-based defense system are clear, implementation poses several significant challenges. Addressing these challenges will be critical to ensuring that such a system can be both effective and sustainable.
One of the foremost challenges in creating a collaborative, network-based defense system is the issue of data privacy. Sharing sensitive information—such as transaction details, customer data, and device fingerprints—across institutions requires rigorous protections. Financial institutions must comply with data protection laws like GDPR or CCPA, which impose strict requirements on how customer data is handled, shared, and stored.
Encryption and data obfuscation techniques can help mitigate some of these concerns by ensuring that information is shared in a privacy-preserving manner. For instance, adopting privacy-preserving computation methods, such as homomorphic encryption, could allow institutions to collaborate on financial crime detection while keeping sensitive data secure. However, implementing such solutions can be costly and technically complex, which creates a further barrier to widespread adoption.
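Homomorphic encryption itself is far beyond a short sketch, but a much simpler (and cryptographically weaker) technique illustrates the underlying idea of privacy-preserving matching: two institutions can compare salted hashes of flagged account identifiers to learn only which accounts they both flagged, without exchanging the raw identifiers. This is an illustrative stand-in only; a production system would use vetted protocols such as private set intersection:

```python
import hashlib

def blind(identifier, shared_salt):
    """Hash an identifier with a salt pre-agreed between the two institutions,
    so neither side reveals raw account IDs on the wire."""
    return hashlib.sha256((shared_salt + identifier).encode()).hexdigest()

SALT = "per-exchange-secret"  # hypothetical pre-agreed value

# Each bank blinds its own flagged accounts before sharing.
bank_a_flagged = {blind(x, SALT) for x in ["acct_123", "acct_456"]}
bank_b_flagged = {blind(x, SALT) for x in ["acct_456", "acct_789"]}

# Each side learns only which of its OWN flagged accounts the other bank
# also flagged -- the intersection of the blinded values.
overlap = bank_a_flagged & bank_b_flagged
print(len(overlap))  # both banks flagged one common account
```

Even this toy example shows the shape of the trade-off the article describes: the privacy machinery adds engineering cost at every exchange, which is part of why adoption has lagged the underlying technology.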
Financial systems across different countries and even within the same jurisdiction are often fragmented, with institutions using different technology stacks, standards, and APIs. The lack of a common framework for data sharing and financial crime detection makes it difficult to create a unified, cross-institutional defense system.
A network-based defense model will likely require a degree of standardization of data-sharing protocols, with clear guidelines on how financial crime-related data is formatted, transmitted, and used. Additionally, regulators may need to step in to coordinate efforts between financial institutions, ensuring that collaboration doesn’t break down due to competing interests or incompatible systems.
While the benefits of a network-based defense system are clear in theory, institutional resistance remains a major hurdle. As mentioned earlier, financial institutions are concerned about the potential for increased financial crime alerts, which could overwhelm compliance teams. While one should expect a profound reduction in false positives that, in due course, should offset the increase in true positives, it will take time for the system’s performance (and institutions’ processes) to be optimized. Moreover, sharing data and intelligence across institutions may raise concerns about competitive advantage or reputational risk.
To overcome this, regulators must establish clear incentives for institutions to participate, such as safe harbor provisions, reduced penalties, or shared infrastructure costs. Equally important is creating an environment where institutions feel comfortable experimenting with new models without the fear of regulatory reprisals if early iterations are imperfect.
Many financial institutions operate on legacy systems that are not easily adaptable to new detection models. Integrating a network-based defense system with these older infrastructures could prove time-consuming and expensive. Upgrading systems to support real-time data exchange, graph analysis, and synthetic data simulations may require significant investment in technology and personnel.
Governments and regulators could play a role in mitigating this challenge by offering grants or financial incentives for institutions that modernize their systems and infrastructure in support of a more collaborative approach to financial crime detection.
The growing sophistication of fraud and other financial crime schemes demands an equally sophisticated response. Traditional, siloed prevention systems are not sufficient to protect consumers and institutions from the threat landscape. A network-based defense model, augmented by personalized AI tools for consumers, presents a powerful solution to this problem—one that fosters collaboration and leverages cutting-edge technology.
However, realizing this vision will require overcoming significant challenges – not least the issue of inadequate incentives for innovation. Regulators must provide the legal and financial frameworks that enable institutions to share data without fear of regulatory backlash. Institutions, in turn, must embrace a mindset of collaboration, recognizing that only through pooled resources and shared intelligence can they stay ahead of increasingly sophisticated criminals.
Consumers, too, have a critical role to play. By leveraging personalized AI-driven tools that are tailored to their needs, preferences, and behaviors, they can take a more active role in fraud prevention. These tools must go beyond simple fraud alerts, delivering interventions that persuade consumers to act in their best interests and, when appropriate, for the benefit of society.
The future of financial crime prevention lies in creating a multi-layered defense that integrates institutions, regulators, and consumers into a unified ecosystem—one that is agile, data-driven, and capable of responding in real time to the threats we face today and those we cannot yet foresee. By aligning incentives, embracing innovation, and creating space for experimentation and collaboration, we can build a more resilient financial system and ensure that criminals find fewer cracks to exploit.