SEC’s AI Rule Could ‘Weaken’ Advisors’ Fiduciary Duty, IAA Attorney Argues
by Patrick Donachie | As seen on wealthmanagement.com…
The advocacy group for advisors argued the rule is overly broad. In contrast, the outgoing director of the SEC’s Investment Management Division said the degree of risk in AI’s proliferation is “obvious.”
The SEC’s proposed new AI rule threatens to weaken advisors’ fiduciary duty, according to a head attorney for the Investment Adviser Association.
The danger of the new rule lies in its proposal of a “brand new framework for handling conflicts” in connection with technology tools, IAA General Counsel Gail Bernstein told WealthManagement.com during the association’s annual compliance conference this week.
“What’s going to be very challenging is that everyone understands what the fiduciary framework means, and by creating a new rule that overlays something on top of it, I think they’re potentially weakening the fiduciary duty,” she said. “It’s almost like you’re proposing a rule for the sake of proposing the rule, as opposed to, ‘Is there a gap and do we need to fill it?’”
SEC officials contend the proposed rule would limit conflicts of interest arising when brokerage firms or asset managers use AI tools to make investment recommendations or trading decisions. SEC Chair Gary Gensler has argued that investors desperately need the rule for a world where they can be micro-targeted with products and services.
However, the IAA argued the proposed solution was far too broad for the problem. In an unusual step for the organization, the IAA recommended that the commission scrap the rule.
A final version of the rule is expected to be released this spring.
William Birdthistle, in his last week as director of the SEC’s Division of Investment Management, said during a conference discussion with Bernstein that regulators should not wait until a crisis arrives before responding.
“If anyone here is a parent, you don’t wait until the child is in the street. You can act beforehand if you see what’s coming very well,” Birdthistle said. “Clairvoyance and prognostication are difficult, and no one gets it right all the time. But this is one where I think the degree of risk is very obvious.”
As evidence, Klass cited existing regulations and guidance impacting advisors’ use of AI, including their fiduciary duty, 2017 staff guidance on robo advisors and the marketing rule, among others.
Examiners are also looking into firms’ disclosure and marketing procedures regarding AI, as well as policies and procedures for compliance and conflicts. In her final week as deputy director of the IA/IC Exam Program in the SEC’s Examination Division, Natasha Vij Greiner noted that many advisors were “getting it wrong” when it came to AI-related disclosures (Greiner will succeed Birdthistle at the helm of the Investment Management Division).
Bernstein said that even if an SEC regulation focused on the actual technology of generative AI, the IAA would want to see more analysis before a rule was proposed. Instead, she said the association could support guidance detailing the need for a principles-based risk governance framework.
“Our view is if this is about conflicts, you don’t need a rule,” she said. “If you feel like advisors need to understand better how to think about conflicts with certain frontier technology, think about giving guidance.”
Birdthistle acknowledged that whether the commission withdrew or changed the rule, the underlying problem would remain. As evidence, he cited the “conundrum” he faced following meetings with AI engineers about their products.
“I ask, ‘How does it work?’” he said. “‘Stuff goes in, ‘box’ does magic, stuff comes out.’ That’s not a reassuring answer.”
But while some in the industry believed disclosures could soothe situations like this, Birdthistle had trouble imagining that disclosure alone could solve the issue raised in that meeting.
“What are you disclosing? You can’t disclose that, that the algorithm performs in ways unknown to its engineers,” he said. “That doesn’t sound like meaningful disclosure.”