Realizing the promise and avoiding the perils of AI in impact finance

By Allie Burns and Jasper van Brakel

Artificial intelligence will permeate every aspect of finance.

AI tools can analyze investment decisions, identifying opportunities and risks. AI will become more efficient and more consistent than people at underwriting against a credit policy. It could replace whole departments focused on analysis. Some larger banks already extend credit lines without applications, based on AI analysis of spending patterns and income.

To some, this looks like straightforward technological progress, with maybe just a few kinks to work out. But for many of us it raises an array of thorny questions: What will the effects of this technology shift be on people who are disadvantaged by current practices? How can we mitigate the downsides? Can better reasoning and analysis lead to better-quality relationships and outcomes? How do we overcome the systemic biases already being incorporated in AI?

These questions are particularly salient for impact investors and advocates of inclusive finance. Much of impact investing is still in a nascent stage, and there’s little investment history to mine in many of our focus areas—particularly sectors that were heavily nonprofit and now are attracting for-profit enterprises. 

Given this situation, AI adoption may happen more slowly in the impact finance field, creating space to think through how investors, lenders and others can use this technology to advance their mission goals.

Yet we feel a sense of urgency. AI tools are spreading fast in finance broadly. There’s no guarantee that change won’t accelerate, potentially baking in biases that impact finance leaders have been working to overcome.

Screening: The cookie-cutter conundrum

AI’s most obvious, immediate appeal for impact finance—and for finance in general—is as a screening tool for lending, equity funding and accelerator candidates. 

Data-driven decisions on their face seem more equitable than relying on humans: AI can analyze more factors and their interplay faster than a human can; it’s free from personal bias; and it’s consistent. AI will make it easier to make cookie-cutter decisions, but we’ve already seen that if an AI tool is learning from datasets based on the assumptions of humans past and present, it will perpetuate existing inequities.

Lenders using conventional credit-scoring models exclude many creditworthy loan applicants because the models omit the most relevant factors in their financial history. Equity investors often rely too heavily on pattern recognition as a shortcut in making decisions. They’ll look for business and founder types they know have succeeded in the past, which is how the preference for a white guy in a hoodie who dropped out of a top-tier university emerged. And then there’s the overweighting of the entrepreneur as the secret to success, versus the business model, timing, the market and other determinative factors. 

If we want AI to help us make more equitable decisions and more accurately assess risks and opportunities, we’ll need to use or create tools that don’t rely solely on data generated by our current faulty evaluation processes. Individual funders have their own frameworks; we should test them as the basis for widely deployed AI. For example, Village Capital has a framework for reducing bias in the evaluation process based on a two-year study of experimental and control groups across eight accelerators. Turning that into an AI tool and collecting data from its use across hundreds of funds could verify its effectiveness and lead to further improvements. 

Relationships: Can the bot touch be better than the human touch?

Much of the value we deliver to our portfolio companies is relationship based: coaching, advising and making introductions to other advisers, potential partners and contacts who can open doors. It’s hard to imagine AI replacing all that. But we can see some ways in which AI tools not much more capable than those available now could enhance relational work and free up our teams to do the nuanced human-to-human work.

Research suggests that an AI business coach could help entrepreneurs deal with everyday frustrations – but wouldn’t be able to help them fix fundamental issues. Similarly, an AI tool could help with the basics of building a management team or creating an organizational structure. But at some point most entrepreneurs would still need human coaching and intelligence to apply what they’re learning to their situation.

AI’s sheer data-crunching capacity could make it more valuable than human advisers in some situations. For example, a CEO might want to know how many days of inventory a business of a certain size in a certain category should have available. Most people with questions like this ask an adviser or a board member; those people will reply based on their personal experience. An AI could give an answer based on broad industry performance data.

Not everyone will warm up to AI coaching, so we’ll need to calibrate our efficiency expectations to real-world results. A larger concern is the need to avoid magnifying current disadvantages. If we are not careful about how we implement AI business intelligence tools, we could create a technology access gap that affects the same entrepreneurs who are already burdened by a lack of access to capital.

Innovation: The promise of modeling systemic change

Generating and evaluating new ideas is among the most compelling potential uses for AI. One of the ways humans generate new ideas is by focusing an unexpected lens on a problem—the way, say, biomimicry solutions look to plants and animals as inspiration for urban and product designs. Generative AI could help us fully articulate and evaluate system-scale ideas that result from applying principles from one field to the problems of another. For example, we could ask it to apply the principles of regenerative natural systems to finance, investing and the whole economy, and then model the changes that would imply along with the expected results. 

Part of the challenge in a project like that: how can AI evaluate positive or negative systemic effects when current information sources largely do not address these effects, and when historical data could lead us seriously astray? Again, as long as the knowledge base AI tools use is confined to what “worked” in the past, there will be serious limits on its utility for evaluating new ideas.

No doubt someone is working on developing an innovation AI. We would love to see researchers focus on how it could help us build an economic system that produces shared prosperity and regenerates resources. 

Impact measurement: Will AI finally crack this hard nut?

When we’re in the business of financing change and changing finance at the same time, how do we define that systemic impact? Answering that question could be AI’s most important role in the impact sector, if we can define the parameters correctly (always the big if).

It makes sense to train AI on what we view as meaningful social and environmental impact. Those guidelines would have to be quantitative and analytical—and that is not the current state of impact reporting. Assessing aggregate impact for a portfolio of small- to medium-size enterprises across disparate sectors remains difficult and frustrating. Even within a single sector, it’s hard to get resource-strapped smaller companies to collect data to a specific standard, or at all. And we can’t blame them for balking when companies with multiple investors often get requests for different metrics using different reporting systems. Consequently, many investors end up reporting only on the performance of example companies or reporting only on outputs, not on outcomes.

Assuming we can conquer that problem—and we know companies are working on it—the next one is how to measure results that are difficult to quantify. For example, if you’re supporting community wealth building, how do you account for enhanced cultural capital and self-determination? Can AI provide a defensible estimate of how many people benefited from a project or enterprise that has multiple ripple effects? And if relationships and the quality of collaboration matter, how do we use AI in a way that guards against purely mechanical decision-making? We’ll need to be wary of the default toward making decisions based on what we can most easily measure rather than on the full spectrum of factors that matter.

Putting impact at the center of AI

Optimistically, AI tools could enable investors to support more high-impact companies more efficiently and give the whole impact ecosystem a deeper, clearer picture of how our companies and investments are affecting the world. But realizing that vision will require people with roots and expertise in the impact ecosystem to play an active role in shaping AI tools. And those tools must be transparent about their parameters, modeling, datasets, assumptions and limitations.

When we began thinking about this article, we asked ChatGPT how AI will influence impact investing. Unsurprisingly, we got a sunny list of uncomplicated benefits across categories including enhanced data analysis and decision-making, automated impact assessment, trend and opportunity analysis, risk assessment and mitigation, and portfolio optimization. Those capabilities could indeed improve impact finance—but only if we heed the bot’s vague caution about “ethical considerations.” 

Before we widely adopt AI tools, we need to understand what their consequences might be, lest we end up perpetuating systemic biases or misusing information we’ve been trusted to hold and employ with care.


Allie Burns is CEO of Village Capital, which supports early-stage impact startups through investments and accelerator programs. Jasper van Brakel is CEO of RSF, which provides diverse forms of capital to for-profit and nonprofit social enterprises.