The US financial sector faces a “significant” data gap between big and small banks as they deploy AI to fight fraud, the Treasury Department said Wednesday, noting that smaller institutions are disadvantaged.
Big banks have more internal data with which to develop AI models to prevent fraud, an advantage smaller institutions lack, the Treasury said.
There is a need to narrow this divide, the Treasury said, pointing out there is “insufficient data sharing among firms.”
The latest report and recommendations come after President Joe Biden unveiled an executive order on regulating AI in October, with the Treasury now taking steps to identify risks and challenges in the financial sector.
The order directed federal agencies to set new safety standards for AI systems, while requiring developers to share their safety test results and other critical information with the US government.
“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector,” said Nellie Liang, Treasury under secretary for domestic finance.
Liang added that the Treasury’s report sets out a vision for how financial institutions can “safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”
The report said cybersecurity information sharing had matured but “little progress has been made to enhance data sharing related to fraud.”
It said the US government could help contribute to a “data lake of fraud data” that would be available to train AI.
The department also called for “labels” allowing the sector to clearly identify what data was used to train vendor-provided AI systems and where that data originated.
Other steps the Treasury identified included “explainability solutions” for advanced machine learning models, and more consistency in defining what artificial intelligence is.