ROBECO - Tue, 10/31/2023 - 14:00

AI, meet SI: Using artificial intelligence for sustainable investing

Artificial intelligence is being hailed as the greatest advancement of the 21st century – but can it help sustainability? Yes, says quant expert Mike Chen, who believes it may turn into a ‘moon landing moment’ for solving the world’s greatest problems.

  • AI can help with climate modeling and developing renewable energy
  • Potential uses for health care, education and meeting the SDGs
  • Regulation is needed to counter the downsides and potential for misuse

The main drawback with AI meeting SI is that artificial intelligence is essentially a computer algorithm that operates in the digital/virtual space, while most aspects of sustainable investing are physical, such as building wind turbines or protecting biodiversity.

Machine learning is fundamentally about picking up patterns. The computer extrapolates data to spot anomalies, see where efficiencies can be found, and ultimately predict future trends. A simple use of this is in predictive ‘autocomplete’ text used on smartphone messaging that learns from your typing habits, or spam detectors on email programs.
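As a toy illustration of this kind of pattern learning (the training messages are hypothetical, and this is nothing like a production spam filter), a classifier can label new messages simply by counting which words it has seen more often in spam than in legitimate mail:

```python
from collections import Counter

# Toy training data: messages labeled spam or ham (hypothetical examples)
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to noon", "ham"),
    ("see you at lunch", "ham"),
]

# Count how often each word appears per label -- the "pattern" being learned
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(message: str) -> str:
    """Label a message by which class its words have appeared in more often."""
    scores = {
        label: sum(counts[w] for w in message.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize money"))    # -> spam
print(classify("lunch meeting today")) # -> ham
```

Real spam filters use far richer statistics, but the principle is the same: the label is inferred from patterns in previously seen data.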

Although these algorithms may be limited to the digital world, they have massive potential applications in SI, says Chen, Head of Alternative Alpha Research at Robeco. AI can be used in areas such as climate modeling, spotting deforestation, advancing health care and education, and meeting the Sustainable Development Goals (SDGs).

In fact, it has applications across the entire environmental, social, and governance (ESG) spectrum, Chen says. And it’s easy enough to start with ‘E’ for the environment.

E: Saving the environment

“Climate change is one of the biggest challenges facing humanity,” Chen says. “The Apollo program which first landed a man on the moon was a massive challenge that required an army of scientists and engineers working with the government to solve. Climate is the same – you need the government to take the lead but you also need scientists, academics and NGOs to participate.”

“AI can detect complicated patterns in very large data sets in global climate modeling. That’s probably the number one thing it can help us with. It can look at ocean currents and their solar reflection into the atmosphere or jet streams – it’s all a big interconnected system.”

“If you have satellite photos or drone photos of rainforests or other protected areas, AI can also be used to analyze the patterns, spot anomalies, and detect illegal activities.”

Developing renewable energy

Chen believes that AI can also play a behind-the-scenes role in developing renewable energy, and may therefore be able to offset the emissions generated when training and running AI algorithms. Computing is responsible for around 2% of global CO2 emissions – more than the airline industry – and most AI is currently powered by electricity generated from fossil fuels.

“Machine learning and natural language processing (NLP) cannot literally go out and build a wind turbine,” he says. “But before you build it, you need to decide where to put it, how to build it, what technology to use, and the anticipated benefits and pitfalls. You need to know what kind of generator or magnets to put into those wind turbine motors. That is a pretty complicated analysis that AI can assist humans to optimize.”

Building wind turbines – AI can help maximize their efficiency.

Then there is the issue that wind turbines won’t work when it isn’t windy, and solar power won’t work at night, so storage is needed for spare electricity during downtimes. Building a new grid accounts for up to 30% of the investment needed to reach net zero by 2050.

“We need to create a kind of battery system, because renewable energy is not as reliable as fossil fuels,” Chen says. “So, what are the optimal placements for these batteries? How do you build an optimal battery network? Meeting the energy demands of society and overcoming the sporadic and periodic nature of renewable energy generation are just some of many issues where AI can help.”
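The battery-placement question Chen raises is, at its core, a coverage optimization problem. A minimal sketch using a greedy heuristic, with made-up candidate sites and towns (real grid planning involves many more variables, such as capacity, cost, and transmission losses):

```python
# Illustrative sketch: greedily place batteries so every town is within
# reach of at least one storage site. Sites and coverage are hypothetical.

# Which towns each candidate battery site could serve
coverage = {
    "site_A": {"town1", "town2"},
    "site_B": {"town2", "town3", "town4"},
    "site_C": {"town4", "town5"},
    "site_D": {"town1", "town5"},
}
towns = {"town1", "town2", "town3", "town4", "town5"}

chosen = []
uncovered = set(towns)
while uncovered:
    # Pick the site that covers the most still-uncovered towns
    best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
    chosen.append(best)
    uncovered -= coverage[best]

print(chosen)  # two sites suffice to cover all five towns
```

The greedy approach is not guaranteed to be optimal, but it is a standard, well-understood approximation for this class of set-cover problem.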

Googling it

One practical example of this lies within Google’s data centers, where the giant servers that power Google products such as Search, Gmail and YouTube generate a lot of heat, and need to be cooled with energy-gobbling chillers. An AI program created by DeepMind, a Google subsidiary, used machine learning to mitigate this environmental impact.1

The DeepMind program gathered the historical data collected from thousands of sensors within the data center, such as temperatures and cooling pump speeds, and fed it into an ensemble of neural networks. Energy use factors were then added so that the machine could predict the future temperature of the data center over the next hour. This could then calculate the optimum energy needed to keep everything properly cooled.
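The underlying idea can be sketched in a few lines. This toy example stands in for the neural-network ensemble with a simple least-squares fit, and uses hypothetical sensor readings: predict temperature from cooling pump speed, then choose the lowest-energy setting that keeps the predicted temperature within a safe limit.

```python
# Illustrative sketch (not DeepMind's actual model): fit a linear
# relationship between pump speed and data-center temperature from
# historical sensor data, then pick the cheapest safe setting.

# Hypothetical historical data: (pump speed %, observed temperature in C)
history = [(20, 30.0), (40, 27.0), (60, 24.0), (80, 21.0)]

# Least-squares fit of temperature = a * speed + b (a stand-in for the
# neural-network ensemble described in the article)
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in history) / sum(
    (x - mean_x) ** 2 for x, _ in history)
b = mean_y - a * mean_x

def predict_temp(speed: float) -> float:
    """Predicted data-center temperature at a given pump speed."""
    return a * speed + b

# Choose the lowest pump speed (least energy) that keeps temp <= 25 C
SAFE_TEMP = 25.0
optimal = min(s for s in range(0, 101, 5) if predict_temp(s) <= SAFE_TEMP)
print(optimal)
```

The real system predicted temperatures an hour ahead from thousands of sensors, but the control loop is conceptually the same: model, predict, then optimize the setting against a constraint.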

As a result, the program achieved a 40% reduction in the energy used for cooling, and a 15% reduction in overall power usage effectiveness (PUE) overhead. Google has planned further applications of the technology, including cutting energy use in semiconductor manufacturing and saving water.

Taking the temperature

At a wider level, DeepMind’s technology can be used in homes, offices, and factories to study occupancy levels to maximize energy efficiency. In the same way that sensors can turn off the lights if nobody is sitting underneath them, machine learning incorporates detected levels of body heat to determine times when the heating or air conditioning needs turning up or down. Smart devices such as Amazon’s Alexa do a similar thing in homes.2

Other uses include assisting the growth of electric cars: AI-enabled navigation programs learn where new charging stations have been installed to help drivers find the nearest one. A bigger use has been in developing self-driving cars, both petrol-driven and electric, where the machine learns the local topography, driving conditions, and traffic light locations to navigate cities safely.

S: Improving health care and education

AI’s use in the social aspect of ESG lies primarily in its applications in health care and education, such as in finding a cure for cancer, molecular engineering, or facilitating personalized education. AI could also lower the cost of health care in parts of the world where it is lacking, since a machine can ultimately deliver services at scale, more cheaply.

“I live in Boston, one of the biotech hubs of the world,” Chen says. “People involved in the biotech industry are telling me that they're actually using a lot of AI in their drug designs and therapeutic discovery in things like mRNA. They're using AI to help simulate effects of drugs on various pathologies.”

Powering advances in health care

“So it’s already ingrained in the R&D process in simulating how various biological processes or mechanisms will react and interact with various therapies. There’s a huge amount of applications for it. It’s on the frontier of finding solutions to diseases that have plagued humankind for a long time.”

One proven use is in cancer testing that looks for protein markers indicating certain abnormalities. If a patient’s blood sample contains a particular protein marker, pattern-matching algorithms can compare it against known markers to detect abnormalities – often with greater accuracy than human doctors.
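A heavily simplified sketch of this kind of marker screening, using made-up concentration values rather than clinical data: flag a reading as abnormal when it lies far outside the distribution observed in a healthy reference population.

```python
import statistics

# Illustrative sketch: flag a patient's protein-marker reading as abnormal
# if it falls far outside the healthy reference distribution
# (hypothetical values, not clinical data).

healthy_levels = [4.1, 3.8, 4.4, 4.0, 4.2, 3.9, 4.3, 4.1]  # marker concentration
mean = statistics.mean(healthy_levels)
stdev = statistics.stdev(healthy_levels)

def is_abnormal(reading: float, threshold: float = 3.0) -> bool:
    """Flag readings more than `threshold` standard deviations from the mean."""
    z = abs(reading - mean) / stdev
    return z > threshold

print(is_abnormal(4.0))  # typical reading
print(is_abnormal(9.5))  # far outside the healthy range
```

Clinical systems combine many markers and far more sophisticated models, but anomaly detection against a reference distribution is the basic pattern-matching idea the article describes.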

Affording access to health care

This can also massively speed up diagnosis times, which for life-threatening diseases such as cancer can be critical in determining whether the tumor is treatable in time. But perhaps the biggest use of AI is in allowing access to health care in the first place.

“The reason why you have to wait so long to see a doctor is because there are not enough of them around, and this is where AI can make a huge difference,” Chen says. “Doctors, for the most part, are basically looking at your symptoms and then using their knowledge to tell you what are the likely causes. They’ll run diagnostics and tests on you, and then they interpret the tests. A lot of that can be fully automated.”

This could prove immensely useful for people living in remote villages and in developing and frontier countries, where medical care might not be readily available or is too expensive to provide by traditional means. The patient would report his or her symptoms, the data would be crunched in a nearby city, and treatment would be organized and dispatched from there.

But would you trust a robot?

But herein lies the first major problem with AI – what if it gets it wrong? And would you trust a robot, or someone in a white coat, with your diagnosis? “AI could get it wrong of course, but then so can human doctors – they make misdiagnoses as well,” Chen notes.

“Of course, we’re conditioned to trust people more than a robot, and that’s the way it should be. I’m not saying that AI can replace humans, but it can at least augment human specialists and do much of the preliminary work.”

And who would you sue if a machine did get it wrong, given that you can’t take legal action against a computer? That’s not an issue, since a patient would simply take action against the health care supplier or insurer, in the same way as before. “There is still an overarching legal framework backed by the health system and government to remedy complaints,” Chen adds. “That doesn’t change even if the diagnostic system does.”

Revolutionizing education

Besides health care, AI could also revolutionize education by improving access for everyone in the same way. “Knowledge is probably our most valuable national resource,” Chen says. “But developing the knowledge of a population through high-quality education is very expensive because it needs individual attention.”

“Every child is different, and every child learns differently. But how do you cater to that? It’s about providing access both in big cities where you need individualization, and in remote places where it's just about basic access.”

“People are worried about misusing AI by getting ChatGPT to write an essay for someone to pass their exams, but I think the upside of AI in education is massive. Private schools can give you more individualized or tailored attention but public schools don’t have the resources. With AI, we can level the playing field and enhance equality.”

Making education available to all

Machines can already create individualized educational programs that teach children at a personalized level rather than taking a blanket approach, albeit primarily in the developed world. Many schools are now investing in iPads that focus and personalize children’s learning, rather than trying to recruit more teachers for ever-larger classes.

For the ‘S’ in ESG overall, AI could therefore prove helpful – or even pivotal – in meeting many of the Sustainable Development Goals. These include goals targeting the environment, such as SDG 13 (Climate action); health care, such as SDG 3 (Good health and well-being); learning, such as SDG 4 (Quality education); and inequality, led by SDG 5 (Gender equality) and SDG 10 (Reduced inequalities).

G: Use in active ownership

So, with this kind of firepower, what about AI’s use in governance? Robeco is developing programs to help wade through billions of data fragments from companies to find patterns when framing active ownership policies.

Active ownership requires analysis of company performance for voting decisions, or to design an engagement strategy if problems are found. This is followed by human interaction with company representatives during the subsequent dialogues.

“The use of AI can therefore support stewardship professionals to become more focused in their analysis and more efficient in their outreach to companies,” says Michiel van Esch, Head of Voting at Robeco. “Increasingly, AI can assist in doing prep work, data collection, making reporting more efficient, and detecting changes in corporate disclosures.”

And since machines are very good at identifying outliers among large volumes of data, AI can also help with corporate compliance and auditing. Its potential for compliance officers to spot undesirable human traits was recently parodied in a cartoon about Alex, a fictitious City of London investment banker who appears in the Daily Telegraph newspaper.3

From polygraphs to Chinese slang

This ability to predict patterns in human emotions is not new – it has long been used in polygraph tests to detect if someone is lying. Chen once used it to create an evolving dictionary for Chinese investment slang.

Much of the Chinese A-shares market is sentiment-driven by less sophisticated retail investors. An NLP program was used to read Chinese-language investment blogs, in which slang from the local vernacular is commonly used, to better understand which stocks were favored, and why.4

“As with the Chinese slang, NLP can tell the sentiment being expressed to interpret whether somebody is being very vague or being very concrete,” Chen says. “We have a project going on right now to detect greenwashing in this way.”
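Lexicon-based sentiment scoring of this sort can be sketched very simply. The slang entries below are invented English stand-ins, not the actual Chinese slang dictionary Chen’s team built:

```python
# Illustrative sketch of lexicon-based sentiment scoring: map words,
# including slang terms, to sentiment weights and score a post by summing.
# The lexicon entries are made-up stand-ins for the real slang dictionary.

lexicon = {
    "moon": +2,       # slang: expecting a big rally
    "rocket": +2,
    "solid": +1,
    "dump": -2,       # slang: heavy selling
    "bagholder": -2,
    "weak": -1,
}

def sentiment(post: str) -> int:
    """Sum lexicon weights of known words; unknown words score zero."""
    return sum(lexicon.get(word.strip(",.!?"), 0)
               for word in post.lower().split())

print(sentiment("this stock will moon, solid earnings"))  # positive
print(sentiment("total dump, weak guidance"))              # negative
```

An evolving dictionary like the one described would keep adding newly observed slang terms and re-estimating their weights as usage shifts.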

Rise of the machines

But the real issue here is not so much how AI could improve corporate governance, or in detecting greenwashing, but rather how the technology itself is governed. One fear is that it becomes sentient and develops a consciousness that threatens humanity. Could it ever become a lifeform? In March, tech leaders including Elon Musk even called for work on AI to be paused for six months until all the risks are known.5

The notion of AI rising up against its human creators makes for great movies, but it can be safely kept to Hollywood scripts, say both Chen and an acknowledged expert in global defense – the Pentagon. “I guess what we call creating an ‘artificial general intelligence’, or AGI, where it becomes sentient is a possibility, but certainly not right now,” Chen says.

“People have been talking about uploading human consciousness into some kind of computing framework and doing things like turning your dreams into audiovisual for decades. But no one has ever figured out how to do it.”

No Pandora’s box

This view that the machines won’t rise against us is echoed by the US Department of Defense, which has launched ‘Taskforce Lima’ to study the safety of the latest developments in AI in case it does ever become a threat to humanity. But we don’t need to start worrying anytime soon, says Dr. Craig Martell, digital intelligence chief at the Pentagon.

“It’s neither a panacea nor Pandora’s box,” he said during an interview at an arms fair in London in September.6 “The value of that technology is going to be completely dependent upon the amount and quality of the data that we have. A large amount of it is uncurated, unlabeled data. It’s not information, it’s noise.”

The Pentagon – only using AI for writing memos

Martell said that AI was currently too unreliable to be used for anything in the Defense Department “other than writing the first draft of a memo”. Still, Vladimir Putin once declared the nation that achieves dominance in AI “will rule the world”, and China is openly developing military applications for the technology – which has led the US Congress to demand that the Pentagon masters AI before America’s enemies do.

It’s moving at light speed, but we aren’t

We also should not get carried away that AI could usher in the stuff of science fiction such as light speed or time travel, Chen says. “Certain things are a constant – there are universal physical constants,” he says. “I don't think you can beat the laws of physics just yet.”

“What has been surprising though is the light speed at which AI is progressing – there’s a new innovation coming out every other day, it seems. The latest innovation is that AI can read computer code or a spreadsheet and then tell you what it’s doing in words. That is really quite amazing.”

Deepfake – the dark side of the force

Regulation could lie at the heart of moves to stop any threat – real or imagined – that AI poses. One of the known abuses of the technology is in ‘deepfake’ – the creation of videos, images and speech that are frighteningly realistic.

Many people cannot tell the difference between AI-generated content and the real thing. In October 2023, the technology was used to create audio in which Sir Keir Starmer, leader of the UK’s main opposition Labour party, appeared to be swearing at a colleague. It was widely circulated on social media, gathering 1.5 million views on the X platform (formerly Twitter), before it was eventually exposed as a deepfake.7

“It’s like anything else: there are positives and negatives,” Chen says. “Take the internet: it can be used for good and bad. Overall, the internet has been a great thing for people. We can communicate much easier, it has lower barriers to access of information and brought us e-commerce. But there are also a lot of bad things, such as e-commerce scams, online predators, abuse on social media, and the instantaneous spread of misinformation.”

“The ability to create convincing deepfakes or alternative facts is undeniably a huge problem. But it is all about the people using it, and I don’t think we will ever stamp out people doing the wrong thing.”

“There’s no black and white – there’s a lot of shades of gray. It depends on culture, context, and different value systems. 3D printing is fantastic until someone uses it to make a gun.”

No nuclear option

Chen says the advent of nuclear weapons, which have never been used since they were dropped on Japan in 1945 – despite their global proliferation – shows that mankind is capable of managing threats that could bring about its own destruction.

“I do think that humanity has the capability to regulate AI to prevent it from going catastrophically wrong,” Chen says. “I am cautiously optimistic that people will figure out a way to be able to regulate this.”

What’s really important, however, is for the world’s governments to work in sync – just like with climate change. “We do need common leadership, which might be a tall order these days, but there needs to be some kind of global framework, such as was seen with the Paris Agreement, in achieving some sort of unified regulatory regime,” he says.

This, too, should have its limits, so that regulation does not stifle the amazing progress so far seen with AI, Chen says. Virtually all the technological progress of the last half century, from the advent of smartphones to the more recent development of Covid vaccines, was driven by the profit motive in the private sector.

From here to the moon

And there will always be a backlash. Steam engines replaced horse power, much to the chagrin of blacksmiths, while digital media has killed off newspapers, video tapes and most of the photographic film industry. But technological progress also has its limits: despite the original moon landing becoming humanity’s greatest achievement, no one has been back since, and we are certainly nowhere near colonizing other planets.

AI may overall be best described as a transformative technology awaiting a beneficial end use, in the same way that electricity is simply the means to power anything from house lighting to your hairdryer – or any future moonshots.

“What is exciting about AI is not just what it can do by itself, but what it enables and empowers, and what it can achieve in conjunction with other innovations – just like what electricity did,” Chen says.

“The steam engine was ultimately a labor-saving device; AI can take us to the next level by constantly improving itself, as machine learning implies.”

“We just need to make sure we control it… in case it does end up controlling us.”

1 DeepMind AI reduces energy used for cooling Google data centers by 40%
2 How AI accelerates the energy transition | Open Innovability
3 Alex cartoon, Daily Telegraph
4 Teaching Machines to Understand Chinese Investment Slang
6 US defence chief insists world 'nowhere close' to existential AI threat | Sky News

