Both regulators and the wide swath of municipal issuers, roughly 40,000 in total across the United States, will likely have to decide in the coming months and years how large a role artificial intelligence will play in the development of standards mandated by the Financial Data Transparency Act.
The FDTA was passed in December 2022 with the intent of creating uniform standards for how municipal issuers submit machine-readable information to the Municipal Securities Rulemaking Board's EMMA system, and the Securities and Exchange Commission is tasked with developing standards and rulemaking related to the Act. Joint standards developed alongside other Financial Stability Oversight Council members, such as Treasury, the Federal Reserve and other banking regulators, are expected by December 2024, and specific rulemaking is due by December 2026.
Regulators have so far been quick to dismiss fears about the Act's implementation, noting that no disclosure requirements, standards or timelines are on the horizon and that the Commission is still gathering information from industry stakeholders on how the standards should work. But the SEC and the Internal Revenue Service have already touted their use of AI to aid in enforcement, and many market participants are beginning to consider how AI could ease some of the challenges of implementation.
“We at the SEC also could benefit from staff making greater use of AI in their market surveillance, disclosure review, exams, enforcement and economic analysis,” SEC chairman Gary Gensler said during a speech in July. “AI opens up tremendous opportunities for humanity, from healthcare to science to finance. As machines take on pattern recognition, particularly when done at scale, this can create great efficiencies across the economy.”
Gensler also said he believes AI is the most transformative technology of our time, on par with the internet and mass production of automobiles, and that the Commission is already putting it to use.
“We already do, in some market surveillance and enforcement actions, to look at patterns in the market,” Gensler said in response to a question by Sen. Catherine Cortez Masto about how he envisioned the regulator’s use of the technology. “It’s one of the reasons why we’ve asked Congress for greater funding this year, in 2024, to help build up our technology budget for the emerging technologies.”
The Commission's budget for 2024 is $2.4 billion, $194 million more than was enacted for 2023. But Gensler remains wary about certain aspects of the technology's future, arguing that because the United States will likely end up with only two or three foundational AI models, the technology will be at the center of future financial crises.
The IRS is also leaning on AI in its efforts to “restore fairness to the tax system with the Inflation Reduction Act,” the regulator said, and the technology will “help IRS compliance teams better detect tax cheating, identify emerging compliance threats and improve case selection tools to avoid burdening taxpayers with needless ‘no change’ audits.”
No specific references have been made to how regulators will apply AI to the muni market, but they have encouraged market participants to offer comments and advice on how they'd like the FDTA to be handled. The muni market isn't exactly averse to deploying the technology, either: the Government Finance Officers Association recently announced a partnership with Rutgers University that seeks to leverage AI to extract select data from local government financial reports.
“Through advanced natural language processing algorithms and machine learning techniques, the project will test AI technologies to extract specific financial data from local government financial reports, such as revenues, expenditures, budget variances, and debt levels,” GFOA said, also noting that the technology will reduce human error. “GFOA and Rutgers will identify approximately ten county governments in a single state to work with the team to test whether data can be extracted and to participate in extensions of this work, such as integrating information from the county into a queryable large language model. The work will focus on extracting a small number of the most critical pieces of information, including quantified and qualitative information. The project will also consider the possible integration of non-financial and financial information.”
Many market participants have noted that AI could help state and local governments comply with the uniform standards coming out of the FDTA, but a taxonomy first needs to be created to give the technology a reference point.
“If you’re going to feed PDFs into some kind of intelligent reader, there has to be a target output format, and that’s what a taxonomy will do for us and give us a target that the input data in the form of the PDF has to be structured into,” said Marc Joffe, federalism and state policy analyst at the Cato Institute.
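The idea of a taxonomy as a "target output format" can be sketched in a few lines of code. The element names and the sample extraction below are hypothetical, not drawn from any actual FDTA taxonomy; the point is that text pulled from a PDF only becomes usable data once it is forced into a fixed schema:

```python
# Illustrative sketch: a taxonomy acts as the target schema that raw
# values extracted from a PDF must be structured into. The element
# names here are hypothetical placeholders.

TAXONOMY = {
    "TotalRevenues": float,
    "TotalExpenditures": float,
    "LongTermDebt": float,
}

def structure(extracted: dict) -> dict:
    """Coerce raw extracted values into the taxonomy's schema,
    rejecting anything the taxonomy does not define."""
    result = {}
    for field, expected_type in TAXONOMY.items():
        if field not in extracted:
            raise KeyError(f"extraction missing taxonomy element: {field}")
        # Strip thousands separators before converting to a number.
        result[field] = expected_type(str(extracted[field]).replace(",", ""))
    return result

# A raw extraction from an "intelligent reader" -- values still strings.
raw = {"TotalRevenues": "12,400,000",
       "TotalExpenditures": "11,900,000",
       "LongTermDebt": "45,000,000"}
print(structure(raw))
```

Without an agreed schema on the right-hand side, each issuer's extraction would come out in a different shape, which is precisely the comparability problem the taxonomy is meant to solve.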
In a comment letter sent to the SEC in October, National Federation of Municipal Analysts chair Mark Capell outlined how NFMA wants to help develop that taxonomy, and made some recommendations as to what should be included in it.
“The taxonomy should encompass all financial statement line items for analyzing a municipal issuer/Obligated Person using generally accepted accounting principles (GAAP) as promulgated by the Governmental Accounting Standards Board (GASB) supplemented as necessary to encompass cash-basis, regulatory, and other non-GAAP frameworks,” the NFMA letter said. “The taxonomy should fully incorporate management’s discussion and analysis, basic financial statements, and notes, required supplementary information, and supplementary information. No less than what is currently provided as PDF documents should be provided in machine readable format.”
There has yet to be significant progress on the development of a taxonomy, but Joel Black, chair of the Governmental Accounting Standards Board, said the board is beginning to develop one and is currently searching for a taxonomy specialist to help. The taxonomy will be needed before any other movement on AI can begin.
“The hardest part of dealing with the FDTA is the creation of a broad taxonomy in the muni space and that, from my perspective, is not an AI challenge. That’s a challenge that has to do with the complexity of our market,” said Gregg Bienstock, senior vice president, group head, municipal market at SOLVE, formerly Lumesis, which has used AI for pricing information. “The challenge of creating a taxonomy where you’re comparing apples to apples to me is most significant,” he added. “Once that is done, the ability to use machine reading and to teach the machines how to read and how to extract, I think that’s where the opportunity is for AI to be very valuable in terms of getting the data.”
Inconsistencies in the market often center on the different labels issuers use to refer to the same figures in a financial report. The SEC's Division of Economic and Risk Analysis recently analyzed how filers tag reported items over multiple periods, noting that inconsistent element labeling undermines the comparability of data across reporting periods.
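One common mitigation for that labeling inconsistency is a normalization layer that maps the many labels issuers use onto one canonical taxonomy element. The synonym table below is invented purely for illustration:

```python
# Hypothetical mapping of issuer-specific labels to canonical taxonomy
# elements. A real table would be far larger and curated by analysts.
CANONICAL = {
    "total revenues": "TotalRevenues",
    "revenues, total": "TotalRevenues",
    "total governmental revenues": "TotalRevenues",
    "long-term debt": "LongTermDebt",
    "bonds payable": "LongTermDebt",
}

def normalize_label(label: str):
    """Return the canonical element for an issuer's label, or None
    when the label is unknown and needs human review."""
    return CANONICAL.get(label.strip().lower())

print(normalize_label("Revenues, Total"))  # maps to TotalRevenues
print(normalize_label("Net position"))     # unknown -> None, flag for review
```

The hard part, as the analysts quoted here note, is not the lookup but agreeing on the canonical column: deciding which of an issuer's "apples" corresponds to another issuer's.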
Whether the taxonomy uses XBRL, a language used for tagging financial data, or another language, the tagging process is still largely manual. An auto-tagging technology that uses machine learning, a form of AI, to automate that process has been introduced by Iris Business Services in a product called Carbon, and it could help reduce the added labor required of state and local governments.
But even within that product, accuracy remains a major concern. When Carbon is not 100% sure of the tags it automatically populates, it instead provides a set of suggestions. The lack of accuracy in AI, a major concern around the technology generally, could also undermine the entire effort.
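The confidence-threshold behavior described above can be sketched simply. This is not Iris's actual implementation; the scores are hard-coded stand-ins for real model output, and the 0.95 threshold is an assumed value:

```python
# Sketch of confidence-gated auto-tagging: commit to a tag only when
# the model's confidence clears a threshold; otherwise return ranked
# suggestions for a human reviewer. Scores are illustrative stand-ins.

def auto_tag(scores: dict, threshold: float = 0.95):
    """scores maps candidate taxonomy tags to model confidence (0-1)."""
    best_tag, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return {"tag": best_tag}                # auto-populate the tag
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {"suggestions": ranked[:3]}          # defer to a human

print(auto_tag({"TotalRevenues": 0.98, "OperatingRevenues": 0.02}))
print(auto_tag({"TotalRevenues": 0.55, "OperatingRevenues": 0.45}))
```

The design choice matters for regulators: where the threshold sits determines how much of the tagging burden actually shifts away from issuers and onto review staff.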
“I’ve worked with AI technology before, specifically for the purposes of trying to automatically process annual comprehensive financial reports and I found that there was a high error rate,” Joffe said. “We have this concept called hallucinating, where you enter a question and it provides a highly confident answer. But then when you research that answer, it turns out sometimes that it’s just making it up and that’s not really going to work for financial data. We have to have a high sense of assurance that the financial data that’s coming out of a piece of software is correct.”
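One simple guard against the hallucination problem Joffe describes is to accept an extracted figure only if that exact number can be traced back to the source document's text. The sketch below is a minimal illustration of that idea, with hypothetical field names:

```python
import re

# Hallucination guard: trust an extracted financial figure only if the
# same number literally appears somewhere in the source document.
# Figures that cannot be traced to the source are flagged, not trusted.

def verify_extraction(source_text: str, extracted: dict) -> dict:
    numbers_in_source = {
        float(n.replace(",", ""))
        for n in re.findall(r"\d[\d,]*(?:\.\d+)?", source_text)
    }
    return {field: (value in numbers_in_source)
            for field, value in extracted.items()}

source = "Total revenues were $12,400,000 and long-term debt was $45,000,000."
extracted = {"TotalRevenues": 12_400_000.0,   # present in the source
             "LongTermDebt": 44_000_000.0}    # NOT in the source
print(verify_extraction(source, extracted))
```

A check like this cannot prove an extraction is correct, but it catches the specific failure mode Joffe describes: a confidently reported number that the underlying report never contained.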
This process is slated to continue into 2026, and many still believe it will take longer than that to create uniformity in how all 40,000 U.S. issuers present information. Eighteen states currently require specific formats for how municipal issuers within their borders present information, and learning from those states, and sharing their experiences with regulators, could be one path forward.