How the collapse of Sam Bankman-Fried's crypto empire has disrupted AI

SAN FRANCISCO – In April, a San Francisco artificial intelligence lab called Anthropic raised US$580 million (S$781 million) for research involving AI safety.

Few in Silicon Valley had heard of the one-year-old lab, which is building AI systems that generate language. But the amount of money promised to the tiny company dwarfed what venture capitalists were investing in other AI start-ups, including those stocked with some of the most experienced researchers in the field.

The funding round was led by Mr Sam Bankman-Fried, the founder of FTX, the cryptocurrency exchange that filed for bankruptcy in November. After FTX's sudden collapse, a leaked balance sheet showed that Mr Bankman-Fried and his colleagues had fed at least US$500 million into Anthropic.

Their investment was part of a quiet and quixotic effort to explore and mitigate the dangers of AI, which many in Mr Bankman-Fried's circle believed could eventually destroy the world and damage humanity.

Over the past two years, the 30-year-old entrepreneur and his FTX colleagues funnelled more than US$530 million through either grants or investments into more than 70 AI-related companies, academic labs, think-tanks, independent projects and individual researchers to address concerns over the technology, according to a tally by The New York Times.

Now some of these organisations and individuals are unsure whether they can continue to spend that money, said four sources close to the AI efforts who were not authorised to speak publicly.

They said they were worried that Mr Bankman-Fried's fall could cast doubt over their research and undermine their reputations.

And some of the AI start-ups and organisations may eventually find themselves embroiled in FTX's bankruptcy proceedings, with their grants potentially clawed back in court, they said.

The concerns in the AI world are an unexpected fallout from FTX's disintegration, showing how far the ripple effects of the crypto exchange's collapse and Mr Bankman-Fried's vaporised fortune have travelled.

"Some might be surprised by the connection between these two emerging fields of technology," Mr Andrew Burt, a lawyer and visiting fellow at Yale Law School who specialises in the risks of artificial intelligence, said of AI and crypto. "But under the surface, there are direct links between the two."

Mr Bankman-Fried, who faces investigations into FTX's collapse and who spoke at The Times' DealBook conference last Wednesday, declined to comment.

Anthropic declined to comment on his investment in the company.

Mr Bankman-Fried's attempts to influence AI stem from his involvement in effective altruism, a philanthropic movement in which donors seek to maximise the impact of their giving for the long term. Effective altruists are often concerned with what they call catastrophic risks, such as pandemics, bioweapons and nuclear war.

Their interest in AI is particularly acute.

Many effective altruists believe that increasingly powerful AI can do good for the world, but worry that it can cause serious harm if it is not built in a safe way. While AI experts agree that any doomsday scenario is a long way off – if it happens at all – effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies and governments should prepare for it.

Over the last decade, many effective altruists have worked inside top AI research labs, including DeepMind, which is owned by Google's parent company, and OpenAI, which was founded by Tesla chief executive Elon Musk and others.

They helped create a research field called "AI safety", which aims to explore how AI systems might be used to do harm or might unexpectedly malfunction on their own.

Effective altruists have helped drive similar research at Washington think-tanks that shape policy. Georgetown University's Centre for Security and Emerging Technology, which studies the impact of AI and other emerging technologies on national security, was largely funded by Open Philanthropy, an effective altruist giving organisation backed by a Facebook co-founder, Mr Dustin Moskovitz. Effective altruists also work as researchers inside these think-tanks.

Mr Bankman-Fried has been a part of the effective altruist movement since 2014. Embracing an approach called "earning to give", he told The Times in April that he had deliberately chosen a lucrative career so he could give away much larger amounts of money.

In February, he and several of his FTX colleagues announced the Future Fund, which would support "ambitious projects in order to improve humanity's long-term prospects". The fund was led partly by Associate Professor Will MacAskill, a founder of the Centre for Effective Altruism, as well as other key figures in the movement.

The Future Fund promised US$160 million in grants to a wide range of projects by the beginning of September, including research involving pandemic preparedness and economic growth. About US$30 million was earmarked for donations to an array of organisations and individuals exploring ideas related to AI.

Among the Future Fund's AI-related grants was US$2 million to a little-known company, Lightcone Infrastructure. Lightcone runs the online discussion site LessWrong, which in the mid-2000s began exploring the possibility that AI would one day destroy humanity.

Mr Bankman-Fried and his colleagues also funded several other efforts that were working to mitigate the long-term risks of AI, including US$1.25 million to the Alignment Research Centre, an organisation that aims to align future AI systems with human interests so that the technology does not go rogue. They also gave US$1.5 million for similar research at Cornell University.

The Future Fund also donated nearly US$6 million to three projects involving large language models, an increasingly powerful breed of AI that can write tweets, e-mails and blog posts and even generate computer programs. The grants were intended to help mitigate how the technology might be used to spread disinformation and to reduce unexpected and unwanted behaviour from these systems.

After FTX filed for bankruptcy, Prof MacAskill and others who ran the Future Fund resigned from the project, citing "fundamental questions about the legitimacy and integrity of the business operations behind it". Prof MacAskill did not respond to a request for comment.

Beyond the Future Fund's grants, Mr Bankman-Fried and his colleagues directly invested in start-ups, most notably the US$500 million financing of Anthropic. The company was founded in 2021 by a group that included a contingent of effective altruists who had left OpenAI. It is working to make AI safer by developing its own language models, which can cost tens of millions of dollars to build.

Some organisations and individuals have already received their funds from Mr Bankman-Fried and his colleagues. Others got only a portion of what was promised to them. Some are unsure whether the grants will have to be returned to FTX's creditors, said the four sources with knowledge of the organisations.

"Charities are vulnerable to clawbacks when donors go bankrupt," said Mr Jason Lilien, a partner at the law firm Loeb & Loeb who specialises in charities. Companies that receive venture investments from bankrupt companies may be in a somewhat stronger position than charities, but they are also vulnerable to clawback claims, he said. NYTIMES