Intelligent graph computing approaches are at the fore of numerous mission-critical financial services use cases, and with good reason. Given the surplus of data banks have at their disposal about customers, market forces, and industry trends, relationship-savvy graph techniques are ideal for surfacing patterns that easily elude other approaches.
Some of these applications generate revenue, like blue ocean opportunities to decrease credit risk on loans and mortgages, worth millions of dollars to graph-aware financiers. Others address conventional risk mitigation needs like fraud detection, a vast area of concern that includes payment fraud among other subsets.
Graph technologies are also invaluable for assisting with Anti-Money Laundering (AML) use cases. This application is notable because it’s just one example of how graph computing helps with the litany of regulations with which operators in this space must comply.
Each of these use cases is characterized by relationship-sensitive computations at which graph methods excel. They also occur at enterprise scale, at high velocity, and on variegated data well suited for graph structures.
However, not all graph solutions are equally adept at mastering these applications. The best ones use a single platform for all of them, which maximizes operational efficiency and effectiveness. They do so with a pipeline for implementing different computational workloads, including graph query, graph mining, graph analytics, and graph AI, which combine to facilitate graph intelligence.
Finally, such a solution must perform every task for these applications, from ingesting raw data to training, deploying, and running inference with AI models. By natively integrating with the most ubiquitous tools for everything from data engineering to data science, the overall graph intelligence of this pipeline readily addresses the above use cases to master fraud detection, AML, and credit risk.
The general consensus in finance is that graph-based methods present some of the best approaches for managing fraud detection. This use case epitomizes the pipeline approach of transitioning between the graph intelligence pillars of graph query, graph analytics, and graph AI, an approach that is universally applicable across verticals. Typically, fraud detection requires query capabilities to filter relevant data, analytics to engineer features, and AI to build machine learning models that detect, for example, payment fraud. Since there are different mechanisms for exploring data and understanding it in relation to fraud detection, this use case relies on a data flow approach to move data between the tools and computational needs of querying, analytics, and AI.
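A minimal sketch of that query-to-analytics-to-AI hand-off, using hypothetical transaction records and an illustrative scoring rule standing in for a trained model:

```python
# Sketch of the query -> analytics -> AI hand-off for payment fraud.
# All records, thresholds, and the scoring rule are hypothetical.

transactions = [
    {"id": 1, "account": "A", "amount": 120.0, "country": "US"},
    {"id": 2, "account": "A", "amount": 9800.0, "country": "KY"},
    {"id": 3, "account": "B", "amount": 45.0, "country": "US"},
    {"id": 4, "account": "A", "amount": 9900.0, "country": "KY"},
]

# 1. Graph query: filter the subgraph of interest (high-value cross-border payments).
suspects = [t for t in transactions if t["amount"] > 5000 and t["country"] != "US"]

# 2. Graph analytics: engineer per-account features (count and total of flagged payments).
features = {}
for t in suspects:
    f = features.setdefault(t["account"], {"count": 0, "total": 0.0})
    f["count"] += 1
    f["total"] += t["amount"]

# 3. Graph AI: score each account; a simple rule substitutes for the ML model here.
def score(f):
    return min(1.0, 0.3 * f["count"] + f["total"] / 50000)

risk = {acct: round(score(f), 2) for acct, f in features.items()}
print(risk)
```

In a real deployment each stage would run on the graph platform's own engines; the point is that one data flow carries the data from filtering through feature engineering to model scoring.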
Nevertheless, the data flow pipeline underpinning this methodology actually starts before graph query. It natively integrates with a rich array of data management toolsets that enhance the extensibility and utility of leveraging a single graph solution for all aspects of fraud detection. For example, integration with a distributed processing framework like Dask is critical for preprocessing data and putting it into the right format for this use case. Dask distributes the execution of Python User Defined Functions (UDFs) and can run on the same compute resources as the aforementioned graph intelligence platform.
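The per-partition UDF pattern described here can be sketched with the standard library's thread pool standing in for Dask's scheduler; the raw records and the cleaning UDF are hypothetical, and a real pipeline would submit the same function through Dask:

```python
# Stdlib sketch of distributed UDF execution over data partitions.
# A thread pool stands in for Dask's scheduler; the records are made up.
from concurrent.futures import ThreadPoolExecutor

raw = [" 1200,usd ", "950,USD", " 80,eur"]

def clean_udf(partition):
    # UDF: normalize one partition of raw payment records.
    out = []
    for rec in partition:
        amount, ccy = rec.strip().split(",")
        out.append({"amount": float(amount), "currency": ccy.strip().upper()})
    return out

# Split records into partitions and apply the UDF to each in parallel.
partitions = [raw[i::2] for i in range(2)]
with ThreadPoolExecutor() as pool:
    cleaned = [rec for part in pool.map(clean_udf, partitions) for rec in part]
print(cleaned)
```

The same `clean_udf` could be handed to Dask unchanged; only the scheduler that fans it out across partitions differs.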
Another advantage of managing fraud detection with a single pipeline for graph query, graph analytics, and graph AI is the flexibility for performing the feature generation process. The pipeline itself needn’t be real-time to provide real-time functionality. Users can generate features offline and implement them in real-time fraud detection systems. For instance, firms can detect credit card fraud by generating large-scale features on huge datasets with this data flow pipeline, then push the features to real-time AI models as needed: whether hourly, daily, or weekly.
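A sketch of that offline/online split, assuming a hypothetical in-memory feature store: aggregates are computed in batch, then consulted by a real-time scorer when a new payment arrives:

```python
# Sketch of offline feature generation feeding a real-time scorer.
# The feature store, feature names, and threshold are illustrative assumptions.

# Offline (batch): compute per-card aggregates from historical transactions.
history = [("card1", 50.0), ("card1", 75.0), ("card2", 20.0)]
feature_store = {}
for card, amount in history:
    f = feature_store.setdefault(card, {"n_txn": 0, "avg_amount": 0.0})
    f["n_txn"] += 1
    # Running mean, updated incrementally per transaction.
    f["avg_amount"] += (amount - f["avg_amount"]) / f["n_txn"]

# Online (real time): a new payment is scored against the pushed features.
def score_payment(card, amount, store):
    f = store.get(card, {"n_txn": 0, "avg_amount": 0.0})
    # Flag payments far above the card's historical average; unseen cards are flagged.
    return amount > 3 * f["avg_amount"] if f["n_txn"] else True

print(score_payment("card1", 500.0, feature_store))  # well above the card's average
```

The batch step can run hourly, daily, or weekly, as the article notes; only the lookup and comparison happen in the real-time path.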
As previously indicated, financial institutions must adhere to a plethora of regulations, more than most other verticals do. Although it’s technically an aspect of fraud detection, AML is a universal mandate financial organizations must follow. Specifically, AML compels these firms to monitor, detect, and report money laundering activities — both domestically and internationally. Fulfilling this demand requires scrutinizing transaction data at scale while evaluating individual transactions and their relationships to others. These graph applications typically focus on pattern recognition of events, actions, and actors — both in relation to individuals and the organizations they potentially represent. Most financiers deploy their own internally developed algorithms for AML, which is less a real-time use case than one assessed weekly or monthly.
Still, AML deployments illustrate the multitude of advantages of accessing a comprehensive graph intelligence pipeline to shift between workloads. The pipeline supports Python UDFs, so companies can run their own proprietary algorithms or rely on pre-built ones. This characteristic reveals the flexibility of the approach, owing to the solution’s native integrations with external tools users can engage from within the graph platform. The result is that data movement is minimized while the data flow pipeline is extended. Additionally, AML involves complex, nuanced pattern detection based on relationship awareness between nodes. For example, banks might identify a complicated money laundering scheme via a bipartite graph involving myriad parties, accounts, and countries. Such relationships and patterns are all but impossible to capture with relational methods or even traditional machine learning.
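One way to sketch such relationship-aware detection on a bipartite graph, with hypothetical accounts and a simple fan-in heuristic standing in for a bank's proprietary algorithm:

```python
# Sketch of AML pattern detection on a bipartite graph of source accounts
# and beneficiaries. Edges, names, and the fan-in threshold are hypothetical.
from collections import defaultdict

# Edges: (source account, beneficiary) transfers.
edges = [
    ("acct1", "shellA"), ("acct2", "shellA"), ("acct3", "shellA"),
    ("acct1", "shellB"), ("acct4", "vendorX"),
]

# Build one side of the bipartite adjacency: beneficiary -> set of source accounts.
fan_in = defaultdict(set)
for src, dst in edges:
    fan_in[dst].add(src)

# Flag beneficiaries receiving funds from unusually many distinct accounts,
# a structuring pattern that row-oriented queries struggle to express.
suspicious = {b for b, srcs in fan_in.items() if len(srcs) >= 3}
print(suspicious)
```

A production scheme would traverse many more hops, across parties, accounts, and countries, but the principle is the same: the signal lives in the shape of the relationships, not in any single row.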
The final consideration is that since AML is predicated on transactional data, the scale of these deployments is astronomical. Firms must assess billions of transactions, which is why parallelism, a by-product of a data flow architecture, and High-Performance Computing (HPC) are foundational to pipelines processing these workloads.
Graph technologies for credit risk are a relatively new development in finance, so this is still regarded as a blue ocean deployment, and adoption rates of graph computing remain far greater for fraud detection and AML. However, firms availing themselves of graph approaches to credit risk frequently see immediate, undeniable business value. Progressive organizations are improving credit scoring models with Graph Neural Networks (GNNs) to launch processes yielding multi-million dollar impacts on revenue. Credit risk applications are the current vanguard of graph technologies in finance, and data flow methods are pioneering the way for this use case.
Even a casual review of the benefits of GNNs for this use case suggests adoption rates will soon increase. These advanced analytics models, frequently deployed with the computing power of deep learning, improve the accuracy of predicting credit risk. Greater model accuracy benefits lenders in two ways: fewer false positives mean more loan originations, while fewer false negatives mean lower credit losses. Thus, GNNs provide greater throughput for financiers while simultaneously delivering more accurate credit decisions for loan and mortgage applicants. The combination of mounting revenues and declining losses is one no credit lender would overlook, and it is attributable to the use of graph computing, and GNNs in particular, for this important financial services application.
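The core GNN idea, enriching each borrower's representation with its neighbors' features, can be sketched in a few lines. The borrower graph, the single feature, and the fixed mixing weights below are illustrative assumptions, not a production model; a real GNN (a GraphSAGE layer, for instance) would learn those weights:

```python
# One round of mean message passing over a hypothetical borrower graph.
# Edges might represent co-signers or shared addresses; weights are fixed
# at 0.5/0.5 purely for illustration, where a trained GNN would learn them.

neighbors = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
# One raw feature per borrower: on-time payment rate.
features = {"alice": 0.9, "bob": 0.3, "carol": 0.8}

def message_pass(feats, adj):
    """Mix each node's own feature with the mean of its neighbors' features."""
    out = {}
    for node, nbrs in adj.items():
        mean_nbr = sum(feats[n] for n in nbrs) / len(nbrs)
        out[node] = 0.5 * feats[node] + 0.5 * mean_nbr
    return out

embeddings = message_pass(features, neighbors)
print(embeddings)
```

After one pass, a reliable borrower connected to a risky one carries some of that risk signal, which is exactly the relationship awareness a scorecard built on isolated rows cannot capture.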
Finance is one of the biggest adopters of graph computing technologies for its core use cases. Consequently, it’s also one of the biggest winners from this approach, particularly when it’s employed within the graph intelligence framework of a data flow system with parallel processing and HPC. The merit of graph technologies for credit risk, fraud detection, and AML applications is beyond dispute, as these use cases respectively allow companies to generate profits, mitigate risk, and ensure regulatory compliance.
Moreover, these benefits are delivered by one platform that intelligently positions data between different workflow components as needed. There’s also the added boon of extending the platform to external tooling deployed within this overall graph framework, so data moves only when it has to. The result is a broadening of the data flow pipeline to include some of the most popular solutions for data preparation and analytics, within the familiar confines of a dynamic workflow engine that consistently benefits mission-critical finance use cases.