Australian dollar banknotes fanned out on a table

APRA warns mortgage brokers on AI risk as CBA scales fraud-detection agent

Yusra Ahmadi

APRA has told lenders and brokers AI-driven fraud is a board-level prudential risk, after Commonwealth Bank scaled an in-house fraud-detection agent and industry mortgage fraud hit about $A3 billion.

The Australian Prudential Regulation Authority has put mortgage lenders and brokers on notice that AI-driven fraud is now a board-level prudential risk. The warning came as Commonwealth Bank scaled up an in-house AI agent and the industry's mortgage-fraud tally hit about $A3 billion.

"AI adoption is accelerating, but governance, oversight, and risk management frameworks are not keeping pace," APRA member Therese McCarthy Hockey wrote in a letter to regulated entities.

The regulator stopped short of issuing AI-specific rules. Instead, it told lenders existing prudential and consumer-protection obligations apply in full, including the best-interests duty for brokers and information-verification standards for lenders.

CBA scales fraud agent

Commonwealth Bank, Australia's largest mortgage lender, said an in-house AI agent now monitors more than 80 million signals each day across transactions, card payments, online activity and digital banking. The agent flags emerging fraud patterns, then proposes new detection rules for review by CBA's fraud analytics team.
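CBA has not published technical details of the agent. As a rough illustration of the pattern described above (flag an anomalous signal, then draft a rule for human sign-off rather than enforcing it automatically), here is a minimal sketch; all feature names, thresholds, and figures are hypothetical:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class ProposedRule:
    """A detection rule drafted by the agent, pending analyst review."""
    feature: str
    threshold: float
    rationale: str
    approved: bool = False  # flips only after human sign-off

def propose_rules(signals: dict[str, list[float]], z_cut: float = 3.0) -> list[ProposedRule]:
    """Flag features whose latest value drifts far from baseline and draft
    a rule for each; nothing is enforced until an analyst approves it."""
    proposals = []
    for feature, values in signals.items():
        baseline, recent = values[:-1], values[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(recent - mu) / sigma > z_cut:
            proposals.append(ProposedRule(
                feature=feature,
                threshold=mu + z_cut * sigma,
                rationale=f"{feature} at {recent:.1f} vs baseline {mu:.1f} (z > {z_cut})",
            ))
    return proposals

# Example: a spike in transfers to new payees drafts a rule; steady card spend does not.
queue = propose_rules({
    "new_payee_transfers_per_hour": [4, 5, 3, 6, 4, 5, 40],
    "card_spend_per_hour": [120, 115, 130, 125, 118, 122, 121],
})
for rule in queue:
    print(rule.feature, "->", rule.rationale)  # human review happens here
```

The key design point, consistent with APRA's accountability message, is that the agent only proposes: the `approved` flag stays false until a human analyst acts.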

CBA said the system has contributed to a more than 20 per cent reduction in fraud losses in the first half of 2026 against the year-earlier period. Executive general manager James Roberts said the work was part of a $A1 billion fraud and security commitment.

The bank has separately disclosed it is investigating more than $A1 billion in suspected home-loan fraud across its book, with industry-wide mortgage fraud estimated at about $A3 billion, the Australian Financial Review reported. CBA has confirmed it is reviewing the conduct of about 10 mortgage brokers.

What brokers face

The AI tools brokers use day to day now include lender-policy search engines, automated compliance assistants, and document-summarisation tools. APRA said many of those tools sit outside approved frameworks at the brokerages running them.

The regulator's headline question for brokers: who is accountable when an AI tool returns the wrong answer? APRA's position is unambiguous: professional and legal responsibility for client outcomes stays with the human broker.

For lenders, the same logic applies on receipt of AI-assisted applications. APRA wants verification frameworks that catch synthetic payslips, deepfake video know-your-customer sessions and AI-generated supporting documents, all of which have been spotted by Australian lenders in the past 12 months.

The Mortgage and Finance Association of Australia has not commented publicly on the APRA letter.

What happens next

APRA said it would not initially mandate AI-specific reporting but would test governance during routine prudential reviews. Banks that cannot show clear oversight of AI tools used in lending decisions face follow-up.

CBA's agentic detection system, the first of its kind disclosed by an Australian major bank, is likely to be matched within months. NAB, ANZ and Westpac have all signalled increased fraud-AI investment in recent investor presentations. None has disclosed a fully autonomous detection agent in production.

Tags: regulation, AI, fintech, APRA, CBA, mortgage fraud
Yusra Ahmadi

Fintech reporter on neobanks, payments rails, Stripe AU, and the crypto regs catching up. Reports from Sydney.