Editor in Chief Sarah Wheeler sits down with Brooke Anderson-Tompkins to discuss responsible AI and what it takes to implement it in the enterprise. Anderson-Tompkins served as president of 1st Priority Mortgage for 15 years and is a former chairman of the Community Mortgage Lenders of America (now the Community Home Lenders of America). She is now founder and CEO of BridgeAIvisory and will be speaking at HousingWire’s Artificial Intelligence Summit.
This interview has been edited for length and clarity.
Sarah Wheeler: You went from leading a mortgage company to launching an AI consulting firm. What prompted you to make that leap?
Brooke Anderson-Tompkins: A leap is a great way to put it! The short answer to your question is that I am driven by a passion for innovation and a desire to leverage artificial intelligence to create impactful solutions for the industry.
There is certainly no shortage of hype surrounding the topic, but those who know me will tell you that when I see something I’m passionate about, I gravitate toward it. The opportunity to incorporate artificial intelligence into the mortgage ecosystem, with what I call “the human at heart,” definitely caught my attention.
SW: How does your background influence how you approach BridgeAIvisory’s clients?
BAT: Having spent almost two decades in real estate-related fields, I have first-hand knowledge of the drivers, starting on the real estate side and then cascading through mortgage and the collective core of servicing. Being in New York, I spent a fair amount of time on the regulatory and compliance side, and over time that translated into many years of advocacy work in Washington, D.C. All of those things, in large part, equate to business.
Many elements of business span business types. That is especially true with artificial intelligence: its components have broad impact, and roughly 80% of them apply to business in general. Staying engaged in advocacy efforts is a key component. We don’t want another Dodd-Frank bill and the associated cost impacts that come with it. The BridgeAIvisory approach is similar in many ways, because I don’t think AI is a panacea.
It does have huge potential when it is thought through from a strategic perspective, then implemented, trained and monitored against benchmarks and ROI. When those principles are incorporated from the outset, it has a better chance of delivering results.
SW: What conversations are you having right now about artificial intelligence?
BAT: In the months since I launched BridgeAIvisory, it has been fascinating to me how the conversations have coalesced, coinciding with the Artificial Intelligence Summit coming up in a few weeks! The first is level-setting on the language of AI. I call it “from the boardroom to the break room.” It’s not enough to hold meetings around the language of AI; it’s also about taking that language and incorporating it into developing a comprehensive strategy and determining what value you can bring. I call this the “clean paper process,” a concept that Elizabeth Warren introduced to me a few years ago.
I learned that the same words can have different meanings in different contexts and still be accurate. Identifying from the beginning what those definitions are for the project at hand, and repeating them often, can be the key to successful execution, because language becomes part of the culture, and culture is a critical component of success.
SW: We are excited that you will be speaking on responsible AI at our AI Summit. What does that term mean?
BAT: My answer comes from the training I received at Mila, the artificial intelligence institute in Montreal. Mila is a globally recognized deep learning research institution founded by Yoshua Bengio in 1993.
There is currently no globally accepted definition of responsible artificial intelligence. For BridgeAIvisory, I adopted Mila’s definition: “an approach in which the life cycle of an artificial intelligence system must be designed to uphold (if not enhance) a set of fundamental values and principles, including internationally agreed human rights frameworks, as well as ethical principles.” It goes on to note the importance of thinking thoroughly and carefully about the design of any AI system, regardless of its application domain or goals: “It is therefore the collection of all implicit or explicit choices in the life cycle design of an AI system that makes it either responsible or irresponsible.”
We’re used to, “Okay, that’s the definition. Give me the task and let’s go.” But artificial intelligence is a life scenario; it goes far beyond business. We’re used to Dodd-Frank, which affects financial services. We hone in on that, but AI addresses something much bigger.
So I think we need to be intentional and keep these things in mind when we create solutions. Ultimately, if you look at the definition, you’ll find that the core principles are things we’re all very familiar with: ethics and values, transparency and explainability, accountability and governance, safety and robustness, privacy and data protection, inclusion and diversity, and environmental sustainability. The good news is that we already do this.
However, I don’t think we necessarily consider all of those pieces when working on a particular project. That is part of responsible AI: treating those elements as part of the foundation of the project.
This is part 1 of this interview. Look for part two next week.