In relatively recent news, Google’s quantum computer completed, in just 200 seconds, a complex calculation that would have taken modern supercomputers more than 10,000 years to solve. This novel achievement has only come to light in the last few months, but the research to make this science fantasy a possibility has been a continuous exploration since the 1980s.
A quantum computer, similar to the one developed by Google
Used for complex calculations beyond the common, daily-life sphere, supercomputers have taken over much of the repetitive computation required in professional settings. However, as modern science and mathematics have shown, these supercomputers have also begun reaching their limits. Fundamentally, a computer is a binary machine. When it receives information and the command to process it in some manner, the computer breaks this information down into a series of codes consisting entirely of 0s and 1s. These codes form a list of binary integers, which are numbers expressed in base 2. Each of these digits works as a miniature on-off switch: the digit 1 may represent an “on” signal for a particular circuit, while the digit 0 represents the “off” signal for the same circuit. Thus, to process an input, the computer breaks it down into a series of 0s and 1s, turns on each circuit represented by a 1, and turns off each circuit represented by a 0. The specific set of circuits that are turned on then works to churn out the appropriate output.
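To make the encoding concrete, here is a minimal Python sketch; it is an illustration of the idea, not how any particular processor is wired. It turns a short string into the base-2 digits that a computer would actually store:

```python
def to_bits(text: str) -> str:
    """Encode a string as the base-2 digits of its bytes."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

# Each 1 corresponds to a circuit switched on, each 0 to one switched off.
print(to_bits("Hi"))  # -> 01001000 01101001
```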
For the mundane household desktop or laptop, this format of data processing is no issue. However, for supercomputers required to perform intricate tasks, the conventional binary model poses a hefty problem. For supercomputers to execute multi-step calculations, they must contain a very large number of these on-or-off circuits. To squeeze this many binary pathways into a single machine, scientists have worked to shrink computer components. That effort, however, has reached its physical ceiling, with individual components approaching the size of an atom.
In the average computer, a bit is the smallest unit of information. A bit is a number that can be either 0 or 1, and series of bits serve as the information carriers for computers. In quantum computers, however, the bit is replaced by the qubit. The qubit works in a similar manner to the bit, with just one major difference: while a bit must be set as either a 0 or a 1, a qubit need not be. In the quantum world, which often defies the logic that sets the rules in the mundane scientific realm, a qubit does not have to settle on either of these two values; it can exist in any proportion of both states at once, a phenomenon called superposition. As soon as the value of a qubit is measured, however, it collapses to a fixed value. Therefore, as long as a qubit is unobserved, it remains in superposition and retains the potential to be either a 0 or a 1.
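No short classical program can reproduce real quantum hardware, so the following is only a toy Python sketch of the idea: a qubit held as a pair of amplitudes, with measurement collapsing the superposition to a fixed 0 or 1. The `Qubit` class and its interface are illustrative assumptions, not any real device’s API.

```python
import random

# A toy qubit: its state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement returns 0 with probability
# |alpha|^2 and 1 with probability |beta|^2, after which the
# superposition collapses to the measured value.

class Qubit:
    def __init__(self, alpha: complex, beta: complex):
        norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
        self.alpha, self.beta = alpha / norm, beta / norm

    def measure(self) -> int:
        outcome = 0 if random.random() < abs(self.alpha) ** 2 else 1
        # Collapse: the qubit is now definitely 0 or definitely 1.
        self.alpha, self.beta = (1, 0) if outcome == 0 else (0, 1)
        return outcome

q = Qubit(1, 1)     # equal superposition of 0 and 1
print(q.measure())  # 0 or 1, each with probability 1/2
print(q.measure())  # the same value again: the state has collapsed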
This is interesting and all, but why does this particular characteristic of quantum mechanics provide such a remarkable breakthrough in the world of computers? The reason is that qubits let the amount of information a machine can represent grow exponentially with the number of units. A register of five bits can hold only one of its 2 to the power of 5, or 32, possible patterns at any moment, while a group of five qubits can exist in a superposition of all 32 patterns at once. Increase the number of qubits to a few hundred, and the amount of information that the collection of qubits can represent becomes exponentially larger than what the same number of bits can represent. Thus, instead of endeavoring to increase computer capacity by making individual components smaller, we can extend the capabilities of computers by exploiting quantum mechanics, which essentially reduces the amount of space and time that information processing takes up in a computer.
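The bookkeeping behind this claim can be made concrete with a short Python sketch (using NumPy; the variable names are illustrative): fully describing n qubits takes a vector of 2 to the power of n complex amplitudes, while n classical bits hold just one of those patterns at a time.

```python
import numpy as np

# The number of amplitudes needed to describe n qubits grows as 2**n.
for n in (5, 10, 50, 300):
    print(n, "qubits ->", 2 ** n, "amplitudes")

# Full state vector for 5 qubits: 2**5 = 32 complex amplitudes.
n = 5
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # the single definite pattern |00000>, like classical bits

# An equal superposition gives weight to all 32 patterns at once;
# a classical 5-bit register holds exactly one of them at a time.
state[:] = 1 / np.sqrt(2 ** n)
print(state.shape)  # (32,)
```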