
Incentivized Distributed Computation

Blockchain and AI

Cryptocurrencies built on blockchains rely on a form of widely distributed computing, sometimes called parallel computing. These systems rely on a network of computers interacting with one another, sending messages back and forth over the internet according to a protocol (a set of rules). For crypto, the computation being distributed concerns the state of the blockchain, i.e. validating the current block. Parallel computation is a large sub-field of math and computer science, and cryptocurrency blockchains provide a live test of how to distribute computation most efficiently. This goes hand in hand with crypto systems because they create a built-in value mechanism by which to pay the devices involved in the computation. Incentivized distributed computation provides a beautiful platform on which artificial intelligence can thrive.

An AI system's inner workings vary depending on the task at hand. A simple number classifier in PyTorch can be built by a young child with a laptop, while Google's DeepMind builds state-of-the-art algorithms that are quite complicated. No matter the innards of the system, all it is doing is running an equation on certain inputs, often recursively, and eventually returning a prediction. AlphaGo, an AI which beat the world champion of the board game Go, relied in part on a very efficient search algorithm which took the current state of the board, considered many candidate moves, and simulated thousands of games forward from each one. Imagine the computation going on for AlphaGo at any given move. Generating that much data is a huge constraint for any company that wishes to use AI at a similar level for other tasks, as they would incur the massive overhead costs of owning and maintaining the servers which hold it. If that data happened to be private, such as medical records for an AI which helps doctors more accurately diagnose disease, storing it on a central server is essentially an invitation to attackers.

It is obvious to those who are familiar with crypto where blockchain comes in here. Incentivized distributed computation can be grafted directly onto any AI system, and the AI would receive massive benefits: increased computational efficiency, reduced memory storage costs, and much more robust (even anti-fragile, in a causally invariant system) security for data privacy. I will describe each as the article goes on. Keep in mind that AI systems are simply lines of code run on a computer. They take input and produce output like every other computer program; the magic is in the inner workings, but the details of that magic are unnecessary for understanding the point of this idea.

Faster

Currently, when I run that digit classification AI program on my slightly outdated MacBook, it takes about 20 minutes for the model to ‘converge’, meaning it gets the right prediction consistently. And that is a very small, simple program with only a few thousand parameters. Imagine a future AI with trillions of weights that needs to compute quantum states in order to simulate super-human intelligence; there’s no way I can run that on my MacBook and benefit from it. However, if that AI were to distribute its computational needs across a large network of computers that can run the computation in parallel, then an interface could easily be created through which my old MacBook exposes me to the huge potential of such a super-human AI.

Where does this extra speed come from? The task of computing a prediction with an AI requires a set amount of computation. In computer science, the growth of that requirement is described with Big-O notation, written O(f(n)), where f(n) is some function of the input size n. It describes the worst-case running time of an algorithm as the input grows. An algorithm with O(1) (pronounced ‘O of 1’) time is said to be constant: it takes about the same amount of time no matter the size of the input. An algorithm with O(n) time is linear: its running time grows proportionally with n. If it takes 5 minutes when n is 1,000, it will take 50 minutes when n is 10,000, and so on.
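A toy sketch of the distinction, counting steps rather than wall-clock time (illustrative only; the function names are mine):

```python
def constant_lookup(d, key):
    # O(1): a hash-table lookup takes roughly the same time
    # whether the dict holds ten entries or ten million.
    return d[key]

def linear_scan(items, target):
    # O(n): in the worst case we examine every element.
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

# Worst case: the target sits at the end, so steps grow with n.
print(linear_scan(list(range(1_000)), 999))     # 1000 steps
print(linear_scan(list(range(10_000)), 9_999))  # 10000 steps
print(constant_lookup({i: i * i for i in range(10_000)}, 9_999))  # one step
```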

Let’s say this super-human AI runs in constant time, O(1), and on state-of-the-art computer equipment the process takes 3 hours to generate an output. If you break that computation down into smaller pieces, each piece puts far less load on the machine running it. These smaller equations, when put back together, must produce the same output as the single larger computation. So, let’s say we distribute these smaller equations as packets, say 5,000 of them, to random computers all across the world, and let’s assume only 2,500 completed equations are actually needed. The excess of 2,500 packets is a buffer we never need to receive, in case of any Byzantine faults. The first 2,500 computers to return an answer with a verifiable proof of its correctness are rewarded with some amount of cryptocurrency. That 3-hour runtime can be dramatically reduced, depending on the size of the network to which the packets are distributed. The number of packets sent out would need to be proportional to the overall size of the network, so the example above would fit a network of roughly 7,500 computers. The reduction in time would also scale with the size of the network, because the number of computations that must be returned is fixed; in the example above it is 2,500, but it could be any other fixed number. As the network grows, more packets are sent out but the same number of answers is needed, so the time required gradually shrinks as the network itself grows.
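A minimal sketch of that dispatch scheme, using the packet counts from the example and an assumed 80% honest-response rate (the function and its parameters are mine, not any real protocol):

```python
import random

random.seed(0)  # deterministic for the example

def distribute_and_collect(num_packets=5000, needed=2500, honest_rate=0.8):
    """Toy model: send out num_packets pieces of work and accept the
    first `needed` verified answers. The excess packets are a buffer
    against Byzantine faults (nodes that fail, stall, or lie)."""
    accepted = []
    for packet_id in range(num_packets):
        # Assume each node returns a verifiably correct answer with
        # probability honest_rate; otherwise we simply ignore it.
        if random.random() < honest_rate:
            accepted.append(packet_id)
        if len(accepted) == needed:
            break  # stop as soon as enough proofs have arrived
    return accepted

answers = distribute_and_collect()
print(len(answers))  # 2500: we never waited for the slow or faulty nodes
```

The key property is the early break: the requester only ever waits for the fastest 2,500 provers, so adding nodes to the network shortens the wait rather than lengthening it.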

Parameter Reduction

When an AI is making a computation, it is doing some form of large-scale matrix multiplication and addition. These operations occur in ‘layers’ of the network, and at any given layer a very large number of parameters can interact with one another. In a system with super-human intelligence, however, there will likely be fewer parameters than in modern AI systems. I make that prediction because humans are smarter than Neanderthals even though our brains are actually smaller. So, possibly, a future version of AI will nullify this paragraph, but it will hold true for any AI that uses the current model of computation involving weights, activations, and biases, the names given to those matrices, called tensors in AI, mentioned above.
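For intuition, one layer of that computation is just a matrix multiply plus a bias, passed through an activation function; a bare-bones sketch with made-up numbers:

```python
def relu(x):
    # A common activation: clip negative values to zero.
    return max(0.0, x)

def layer_forward(weights, biases, inputs):
    """One layer: output_j = relu(sum_i inputs[i] * weights[i][j] + biases[j])."""
    outputs = []
    for j in range(len(biases)):
        total = sum(inputs[i] * weights[i][j] for i in range(len(inputs)))
        outputs.append(relu(total + biases[j]))
    return outputs

# A tiny 2-input, 3-output layer (illustrative values only).
W = [[1.0, -1.0, 0.5],
     [2.0,  0.5, -0.25]]
b = [0.0, 1.0, -0.5]
print(layer_forward(W, b, [1.0, 2.0]))  # [5.0, 1.0, 0.0]
```

In a real model W might have millions of entries per layer, which is exactly the memory bottleneck discussed next.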

So at any given layer in an AI there can be hundreds of thousands of values, each representing some key portion of the information. This creates a bottleneck in the form of memory storage. I may be beating a dead horse, but I will say it again: hosting the amount of data a super-human AI would create at any single step of its computational process is on an order of magnitude that only giants such as IBM, Google, Huawei, or the U.S. Treasury’s wallet could afford. As for myself, I would like a world where, if such AIs exist, they are ubiquitously accessible, not just to über-rich Americans, Europeans, and Asians.

With a distributed network handling those small packets of computation, the time shrinks and the number of parameters per machine shrinks with it. Not only can you split the AI into packets one layer at a time, you can slice the layers themselves into sub-packets, reducing the size of the tensors and therefore the memory they hog on any one machine.
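A minimal sketch of that slicing: each row of the input matrix becomes its own sub-packet, computable independently of the others (dispatch to real nodes omitted; plain Python stands in for real tensor math):

```python
def matmul(A, B):
    # The full product, as a single machine would compute it.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sub_packet(row, B):
    # One slice of the work: a single row of A times B. A node only
    # needs this row and B, never the whole input tensor.
    return [sum(row[k] * B[k][j] for k in range(len(B)))
            for j in range(len(B[0]))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# Each row could be shipped to a different node...
pieces = [sub_packet(row, B) for row in A]
# ...and reassembling the pieces reproduces the full product exactly.
assert pieces == matmul(A, B)
print(pieces)  # [[19, 22], [43, 50]]
```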

Distributed Security

The security advantage of distributing information rather than storing it centrally has been consistently demonstrated over the last 12 years of crypto. Think of all the times you have heard of a big-ticket company facing some form of adversity over data security; the list is very long, from Facebook to Google to Sony and so on. Now think of all the times you’ve heard of a cryptocurrency suffering a meaningful breach in the security of its blockchain. I believe the count is zero. One could mention IOTA having some difficulties with its algorithm a couple of years back, however at no point was the network close to compromised, nor did it end IOTA; they went on to secure their algorithm further and make it stronger as a result. As a side note, that is anti-fragility: a negative event occurred when someone found a potential security hole in IOTA, and what came out the other side was an improved product, whereas a fragile crypto would have cracked under the pressure.

The reasons distributed systems improve security have multiple prongs, one of which is data redundancy. At the 2016 MIT Bitcoin Expo I spoke briefly with David Vorick; this was at the beginning of my venture into crypto. David created Sia, a crypto file-storage system, and I asked him: what keeps the people storing my data on Sia from blackmailing me for more money before returning it? His answer was simple: a lot of people are storing a lot of fragments of your data, and the chance that every single one blackmails you is near zero. Data redundancy prevents malicious actors from holding your data hostage. It also provides a buffer against dropped packets, failed nodes, and many other events with harmful outcomes yet no malicious intent.
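A toy model of David’s argument, with hypothetical fragment and node counts (real systems like Sia use erasure coding rather than the plain replication sketched here):

```python
import random

random.seed(1)  # deterministic for the example

def store_with_redundancy(num_fragments, nodes, copies=3):
    """Toy sketch: place each fragment on `copies` distinct nodes."""
    return {frag_id: random.sample(nodes, copies)
            for frag_id in range(num_fragments)}

def recoverable(placement, uncooperative):
    # The file survives if every fragment has at least one holder
    # who is still willing to hand it back.
    return all(any(node not in uncooperative for node in holders)
               for holders in placement.values())

nodes = list(range(100))
placement = store_with_redundancy(20, nodes, copies=3)
# Holding the data hostage requires EVERY holder of some fragment to
# join the blackmail; with all 100 nodes cooperating, recovery is
# trivial, and only total collusion is guaranteed to block it.
print(recoverable(placement, set()))        # True
print(recoverable(placement, set(nodes)))   # False
```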

David’s answer touched on data redundancy, but if you look deeper it also has to do with outright stripping the data holder of the power to harm. If there are enough nodes in a network, then the network will be secure simply because it wouldn’t be feasible to gather the resources necessary to attack it. The sheer size of a network provides a form of security that a single server can’t.

The Dichotomy

What we have here is the framework of a beautiful system which highlights the power of distributed networks. It provides a platform on which we can exponentially improve the AI we have now, however inefficient it may look compared with the efficiency we will see in the future. It levels the playing field so that when a vastly useful AI program is invented, it will be highly accessible, creating social impacts that could completely change the structure of society itself. Imagine a world with open access to a resource that could empower anyone with a bright mind or a good idea to make a real difference, all done in a secure way in which users can be sure their privacy is minded at the core of the system itself through cryptography.

To bring cryptoeconomics into the picture, let’s imagine that system where computers are rewarded for taking part in the process. Currently, the human who owns the computer will be the one deciding what to do with their new currency. This currency could become tied to the real world in some way: shops may accept it in exchange for goods, and service providers such as plumbers or drivers may accept it as payment for their work. Or that human could go into this new AI network and employ a specific type of AI for a specific task, paying the AI to automate something for the user, whether it be marketing his band online, teaching him five-star cooking, or building him a web application. The possibilities are endless.

And even one step further: imagine a network of AIs communicating with one another, sharing the results of tasks to improve each other and employing one another for tasks outside their scope. There will eventually be a network of AIs very much like this, and they will need some way to gauge who is worth helping and who isn’t. Enter cryptocurrencies, the natural way for computational entities such as AIs to transact, much as the natural way for humans to transact through the years has been to barter services in return for some form of value. Since our reality consists of physical objects, until recently we traded almost exclusively in objects and actions to represent value; but since zeros and ones are the reality of AIs, they need a fitting way to represent value, and thankfully cryptocurrencies live in that same dimension.