As the world moves deeper into an uncharted era of electronic wizardry and robotic control, one unsettling question continues to arise.
Who is to blame when a robot or artificial intelligence, operating independently of any direct human involvement, goes awry?
Who is to blame for any devastating consequences?
How do you prosecute a case against AI?
An historic court case, set down for hearing in England between two warring multi-millionaires, may go some way towards setting a legal precedent.
The case is being brought by Samathur Li Kin-kan, son of a Hong Kong-based property tycoon, who is linked to extensive property holdings in London's Chinatown and Covent Garden.
Li, 45, is suing wealthy Italian Raffaele Costa, 49, for the return of around $30m he lost by allowing a supercomputer named K1 to control his trading on the London Stock Market.
The association began in 2017 when, meeting for the first time, Costa spoke of the robot hedge fund his London company had developed to manage and invest money on the stock exchange.
The fund relied solely on artificial intelligence for its decisions on U.S. stock futures.
The AI evaluated social sentiment and media stories to try to anticipate the intentions of public investors.
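In broad strokes, "evaluating social sentiment" means scoring text for positive and negative signals. The sketch below is a deliberately naive illustration with made-up word lists; it bears no relation to K1's actual, far more sophisticated methods.

```python
# Naive sentiment scoring sketch (hypothetical word lists, for illustration only).
POSITIVE = {"rally", "growth", "beat", "surge"}
NEGATIVE = {"loss", "crash", "miss", "slump"}

def sentiment_score(headline: str) -> int:
    """Count positive words minus negative words in a headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("Tech stocks rally on earnings beat"))  # 2
print(sentiment_score("Markets slump after surprise loss"))   # -2
```

A real system would aggregate millions of such signals across news feeds and social media before translating them into trading decisions.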
An initial trial was encouraging, and Li provided more than $30m of his own money plus tens of millions of dollars more from a bank.
The AI computer, which was able to send buy and sell instructions to brokers, began controlling Li's money in December. In two months it lost money regularly, and it showed no love at all on February 14, when it lost a horrific $30m in a single day, mostly through a stop-loss order a human might not have made.
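For readers unfamiliar with the mechanism, a stop-loss order automatically sells a position once its price falls below a preset threshold. A minimal sketch of the logic, with entirely hypothetical prices and a hypothetical 5% threshold:

```python
def check_stop_loss(entry_price: float, current_price: float, stop_pct: float) -> bool:
    """Return True if a stop-loss rule says the position should be sold."""
    threshold = entry_price * (1 - stop_pct)
    return current_price <= threshold

# Hypothetical position bought at $100 with a 5% stop-loss (triggers at $95).
print(check_stop_loss(100.0, 94.0, 0.05))  # True: price fell through the stop, sell
print(check_stop_loss(100.0, 96.0, 0.05))  # False: still above the stop, hold
```

The risk in volatile markets is that a brief dip triggers the sale at the worst possible moment, which is the kind of mechanical decision a human trader might have overridden.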
Li is now taking Costa to court, claiming the computer’s abilities were overstated.
The legal world appears divided about who is to blame and arguments are being raised to support all sides in the dispute.
One obvious strategy is to argue that if no one knew the computer's intentions, or what it was deciding, then no human could be at fault.
AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.
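In broad strokes, "learning automatically from patterns in the data" can be as simple as iteratively fitting a parameter to minimise prediction error. The toy example below uses made-up numbers and a plain gradient-descent loop; it is a generic illustration, not K1's real system.

```python
# Toy illustration of "learning from data": fit y ≈ w * x by gradient descent.
# The (x, y) pairs are hypothetical and follow a trend of roughly y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

w = 0.0    # the parameter the algorithm "learns"
lr = 0.01  # learning rate
for _ in range(1000):  # fast, iterative processing
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges to ~1.99, close to the data's slope of ~2
```

No human tells the program that the slope is 2; it extracts that pattern from the data itself, which is the sense in which the software learns "automatically".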
Human intervention was involved when the computer was programmed, but after that it made its own decisions, armed with material and knowledge it gathered from its own sources.
Bloomberg carries a quote from an executive at London firm Aspect Capital, which is also seeking to make an AI-managed hedge fund viable but has not yet added clients' finances to the mix.
He says that once AI develops individual thinking, even the people who gave it life will not be able to predict its decision-making or the reasoning behind it.
“You might be in a position where you just can’t explain why you are holding a position (in the market),” he said.
Experts say the world's fascination with AI started when a computer named Deep Blue, programmed by experts, beat the then world chess champion, Garry Kasparov, in 1997.
The furore over K1’s misdirection opens vast new channels of debate over exactly where AI is headed and how dangerous it might yet prove to be.
For example, if a driverless car is the guilty party in a fatal accident, who is to blame?
Who, if anyone, gets charged or convicted?
Bloomberg says US criminal prosecutors “let Uber Technologies off the hook” for the death of a 49-year-old pedestrian killed by one of its autonomous cars earlier this year.
More failed prosecutions may follow unless the courts provide a clear template for how blame is allocated when the perpetrator behind the wheel is a machine.