
How Inscribe uses LLMs to reduce false positives & improve document automation


Here at Inscribe, we’ve been busy building state-of-the-art AI Fraud Analysts and helping our customers integrate them into their workflows.

Our AI Fraud Analyst, like AI Agents in general, has been made possible by huge advances in large language models (LLMs) trained and provided by companies like OpenAI, Meta, and Anthropic. By now, you've probably heard all about our AI Agents, but we'd love to share how else Inscribe is using LLMs to fight fraud and automate workflows!

Before we get into the details, you may be wondering about the difference between traditional machine learning models and LLMs. Traditional models and LLMs differ mainly in complexity and their approach to handling data. Traditional models, like decision trees or logistic regression, are designed for specific tasks and require structured input data, often needing manual feature engineering to get good results. In contrast, LLMs, like GPT, are more generalized and trained on vast amounts of unstructured data, like text, without needing hand-crafted features. LLMs can handle more complex tasks like understanding context and generating text, which makes them particularly well suited to taking on tasks generally performed by humans. See more on how we make the decision to use LLMs.
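The contrast above can be sketched in a few lines of code. The features and prompt below are purely illustrative, not Inscribe's actual pipeline: the traditional model needs raw text boiled down into fixed numeric fields first, while the LLM takes the unstructured text as-is.

```python
# Traditional model: raw text must first be reduced to hand-crafted,
# structured features before a model like logistic regression can use it.
def extract_features(document_text: str) -> dict:
    """Manual feature engineering: turn raw text into fixed numeric fields."""
    return {
        "char_count": len(document_text),
        "line_count": document_text.count("\n") + 1,
        "digit_ratio": sum(c.isdigit() for c in document_text)
        / max(len(document_text), 1),
    }

# LLM approach: the raw, unstructured text goes straight into a prompt;
# no feature engineering is required.
def build_llm_prompt(document_text: str) -> str:
    return (
        "You are a fraud analyst. Review the following document text and "
        "describe anything suspicious:\n\n" + document_text
    )

sample = "Pay stub\nNet pay: 2,450.00"
features = extract_features(sample)   # structured input for a classic model
prompt = build_llm_prompt(sample)     # unstructured input for an LLM
```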

Reducing false positives

Inscribe is able to uncover fraud that is invisible to the human eye. We do this in part by analyzing huge amounts of metadata and comparing it to our vast database of similar documents to identify any aberrations. However, as with all systems, there are sometimes false positives. Previously, these had to be reviewed and dismissed by one of your human analysts; by leveraging LLMs, Inscribe can now run this check for you before we ever raise an alert. We have eliminated an estimated 25% of our false positives in this way. Higher accuracy, less noise!
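One way to picture this second-pass check is below. The prompt format and the FRAUD/BENIGN verdict convention are assumptions for illustration, not Inscribe's actual API; in production the prompt would be sent to an LLM, and here we only exercise the routing logic around it.

```python
def build_review_prompt(signals: list[str]) -> str:
    """Ask the model to second-guess a fraud flag before it reaches the customer."""
    return (
        "A document was flagged for possible fraud based on these signals:\n"
        + "\n".join(f"- {s}" for s in signals)
        + "\nAnswer FRAUD if the signals indicate real tampering, "
        "or BENIGN if they have an innocent explanation."
    )

def should_alert(llm_verdict: str) -> bool:
    """Only surface the alert if the second-pass review still says FRAUD."""
    return llm_verdict.strip().upper().startswith("FRAUD")

# Hard-coded verdicts stand in for real LLM responses:
alert_a = should_alert("FRAUD: font metadata inconsistent with issuer")
alert_b = should_alert("benign: re-saving in a PDF editor explains the metadata")
```

The point of the helper is the dismissal path: a verdict of BENIGN means the flag never reaches you, which is how the estimated 25% of false positives gets filtered out.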

Improved parsing coverage

Leading fintechs and banks rely on Inscribe to parse important information from customer documents, including pay stubs and bank statements, to help them make safe and fair underwriting decisions. Traditional parsing models work well for common document formats that are clearly structured and labeled, but they struggle with documents from smaller issuers or with unusual formatting. Enter LLMs! For any documents our traditional model can't handle, we can now use LLMs to review them like a human would and pick out those crucial details that may previously have been missed.
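The fallback flow can be sketched like this. The parser functions are hypothetical stand-ins, assuming the structured parser signals an unrecognized layout by returning `None`:

```python
def parse_with_fallback(document_text, structured_parser, llm_parser):
    """Try the fast structured parser first; route to the LLM only
    for formats it cannot handle."""
    result = structured_parser(document_text)
    if result is not None:
        return result, "structured"
    return llm_parser(document_text), "llm"

# Stub parsers standing in for the real models:
def stub_structured(text):
    # Only understands one well-known layout.
    if text.startswith("ACME BANK"):
        return {"issuer": "ACME BANK"}
    return None

def stub_llm(text):
    # An LLM can read unusual layouts "like a human would".
    return {"issuer": text.split("\n")[0]}
```

Routing this way keeps the cheap, deterministic parser on the hot path and reserves the LLM for the long tail of unusual formats.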

Enriching transactions

Finding transaction risk signals is tedious, time-consuming work. But Inscribe categorizes transactions so you can easily identify income, NSFs, withdrawals, self-transfers, loan repayments, and other transaction types in just seconds. Inscribe also provides merchant names to help you understand who your customers are already doing business with. This is made possible by LLMs, which, having been trained on such a phenomenally broad set of data, can determine which category a transaction belongs to faster and more reliably than a human. By leveraging the power of LLMs, Inscribe can accurately categorize 90% of bank statement transactions.
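A toy version of the categorization step and the coverage metric might look like the following. The keyword rules here are a stand-in for the LLM call, and the category names are illustrative:

```python
def categorize(description: str) -> str:
    """Stand-in for the LLM call that maps a raw transaction
    description to a category."""
    text = description.lower()
    if "payroll" in text or "salary" in text:
        return "income"
    if "nsf" in text or "insufficient" in text:
        return "nsf"
    if "atm" in text:
        return "withdrawal"
    return "other"

def coverage(descriptions: list[str]) -> float:
    """Fraction of transactions assigned a category other than 'other' --
    the same shape of metric as the 90% figure above."""
    labels = [categorize(d) for d in descriptions]
    return sum(label != "other" for label in labels) / len(labels)
```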

Want to learn more about these updates? Speak to one of our AI experts today.

  • About the author

Andy Bernard is a Senior Product Manager at Inscribe AI. He previously served as the product lead for financial crime compliance at Checkout.com, in addition to roles at Hotels.com and HSBC. Andy holds a bachelor's degree from the University of Warwick and a professional degree in banking practice and management from the IFS School of Finance. He currently resides in London, England.

Deploy an AI Risk Agent today

Book a demo to see how Inscribe can help you unlock superhuman performance with AI Risk Agents and Risk Models.