The U.S. Treasury Department has begun releasing guidance to help financial-services companies use artificial intelligence securely while meeting their regulatory obligations.
“Treasury will release a series of six resources developed in partnership with industry and federal and state regulatory partners to enable secure and resilient AI across the U.S. financial system,” the agency said on Wednesday.
On Thursday, Treasury released the first two resources in the series: an AI lexicon defining key terms — “with a focus on frequently used terms that have a specific meaning in the context of AI use in the financial sector” — and a version of the National Institute of Standards and Technology’s AI Risk Management Framework focused on the financial-services sector. The latter publication contains a questionnaire to gauge companies’ AI maturity, a matrix linking AI-related risks to available security controls, and guidance for implementing those controls.
Treasury said the resources are the result of consultations between state and federal regulators, financial executives and “other key stakeholders” as part of the department’s Artificial Intelligence Executive Oversight Group (AIEOG). The financial-services sector’s coordinating council partnered with a similar body to create the group.
Cory Wilson, Treasury’s deputy assistant secretary for cybersecurity and critical infrastructure protection, said the resources would focus on helping small and medium-sized financial institutions “harness the power of AI to strengthen cyber defenses and deploy AI more securely.”
AIEOG members tackled several work streams — including governance, fraud prevention, identity management and transparency — that will inform Treasury’s guidance.
“By focusing on practical implementation rather than prescriptive requirements,” Treasury said, “the resources are intended to help financial institutions adopt AI more confidently and securely, strengthening resilience and cybersecurity while supporting innovation across the sector.”
Treasury’s resources are intended to meet the financial-services sector’s growing hunger for AI automation.
Banks want to use AI to improve fraud prevention, insurers want to use it to evaluate risk and securities markets want to use it to analyze transactions. Roughly one-third of the work that capital markets, insurers and banks perform “has high potential to be fully automated” by AI, according to a January 2025 World Economic Forum report.
But the technology also poses serious dangers. Faulty AI models could leak sensitive financial data, and biased models could perpetuate systemic discrimination. AI aberrations could also quickly destabilize fast-paced, interconnected markets. Widespread use of the same AI models across institutions could produce “synchronized market movements and amplified volatility patterns that extend beyond traditional algorithmic trading risks,” the RAND Corporation said in a September 2025 report. “The situation becomes more complex as AI systems advance in sophistication, potentially developing behaviors that prove difficult to predict or effectively audit.”
The RAND report noted that “regulators face the challenging task of monitoring and assessing these systems’ collective behavior.” But, according to an October 2025 report from the G20’s Financial Stability Board, few regulators are aggressively overseeing the financial industry’s use of AI — in many cases because they lack the capacity or expertise to do so.
