StepStone Group - Thu, 08/08/2024 - 16:13

DO NO HARM - How GPs and LPs can use Responsible AI to build trust

The internet changed everything. Although few actually understood how it worked, nearly everyone understood that it would revolutionize life. Fewer still contemplated the negative side effects it would one day produce. Without much in the way of restraint, the internet’s growth was swift and unbridled. As the modern world embraces another game-changing technology, armed with the benefit of hindsight and years of study of the internet’s harmful side effects, society is by and large much more cautious about the development and application of artificial intelligence (AI), especially generative AI.

Though generative AI is still in its infancy, we seem to have a much greater understanding of its inherent risks than we had of the internet’s when it was a similarly fledgling technology. Our collective anxieties about where an adaptive and autonomous technology might lead us have steered us toward the emerging field of “Responsible AI.” Because most existing processes and tools, from code development to risk management, were designed for traditional software systems, they struggle with generative AI systems and are ineffective at managing emergent risks and preventing harmful outcomes. Responsible AI will be critical to meeting these challenges and delivering trustworthy AI systems.

Because GPs and LPs are involved both in developing AI and in applying it to the companies and assets they invest in, they have a vested interest in ensuring that AI is developed and deployed responsibly. We recognize the immense benefits, financial and otherwise, that these systems could deliver. And while the collective focus is largely on this upside, Responsible AI practices need to mature for those benefits to materialize. Today, Responsible AI is a nascent field, and our firm, through this paper and related efforts, hopes to contribute to its development. Leveraging existing ESG frameworks and expertise in value creation will be helpful to this end.

This paper provides an overview of the scope, history, global initiatives, and regulatory developments surrounding AI, broadly defined to encompass generative AI. It introduces the concept of Responsible AI, which seeks to deliver trustworthy AI systems. The risks endemic to these systems are explored, as are the leading new AI risk-management frameworks. The paper seeks to contribute to the nascent practice of Responsible AI in private markets, providing examples and suggested best practices at the GP, LP, and asset levels. We explore how ESG practices dovetail with Responsible AI and pay close attention to “high-risk sectors.”
