
In the great race to win world markets, stock exchanges and tech companies are teaming up. A year ago, the Chicago Mercantile Exchange struck a deal with Google and the Nasdaq agreed to tie up with Amazon Web Services. Now it is the turn of the London Stock Exchange (LSE), which this week agreed to make Microsoft its dancing partner for the next 10 years: the LSE will spend £2.3 billion on Microsoft services and, in return, Microsoft is buying a four per cent stake in the exchange (£1.5 billion).
What all these deals have in common is that they involve financial companies trying to become data companies and, to that end, looking for a tech partner that really knows what it is doing. This is particularly urgent for the LSE, which spent billions on the data company Refinitiv in 2019 and has struggled to digest it ever since. But essentially all exchanges are trying to do the same thing: move all their data to the cloud, make it all work properly, and then build an artificial intelligence framework around it.
Stock exchanges used to exist simply to find the right price for a stock as efficiently as possible. Price was the data point that really mattered. But over the years these humble price-discovery mechanisms have expanded, using and producing a flood of data about market and business behavior, risk, compliance, valuation and so on.
For example, among the datasets held by Refinitiv are a database of 2.7 million senior company executives, with 24 years of salary and employment history; a platform tracking every project China has built across 70 countries through its $500 billion "Belt and Road" investment scheme; and a record of the purchasing behavior of thousands of American grain elevators (the modern version of a granary), updated daily.
But how well does any of this really work, and what on earth should be done with all this data? That is where the tech companies come in. Their job is to create tools that make all of this data accessible and useful in real time, from remote locations. Simple, right? What could possibly go wrong? Still, let's not blame them for trying. We need investment and efficiency improvements, and this is the kind of deal that generates them.
The race to crunch data is now a feature of almost every modern industry, including medicine. I recently heard Professor Rick Stevens, a computer scientist at the University of Chicago, explain how he is using supercomputers to identify new cancer treatments. Computers at Argonne National Laboratory are fed all kinds of data about cancerous tumors and the results of clinical drug trials. From this, scientists are trying to write algorithms that can match specific types of cancer with new drugs or drug-combination treatments that may not yet have been tried on them. Over the past five years, Professor Stevens told me, more than 100 research papers have been published using a similar approach.
However, there are several serious factors limiting this work. The first is computing power. Few computers are fast enough to process the volumes of data fed into the machines at Argonne, and very few facilities can make the advanced semiconductors they require. Beyond that, there are not many people able to build and use such machines. And if someone could suddenly build thousands of these computers and train millions of people to use them, we would run into a new limiting factor: the energy they consume.
This, however, is the future. Both the US and the EU have announced multi-billion-dollar plans to build new, onshore semiconductor supply chains and reduce dependence on Taiwan. The UK is lagging behind. We could, in theory, be a powerhouse of computing and engineering talent, given our strong university sector. But the relevant courses seem to appeal almost exclusively to Chinese students, most of whom go straight home after their degrees. Meanwhile, British students spend about as much studying computer science as they cough up for drama and theater studies courses. It shouldn't take a super-duper advanced algorithm to figure out that this doesn't make sense.