In anticipation of SEC approval of its filing to become a listed stock on the NYSE, IonQ previously sponsored a virtual symposium for key analysts in the investment community. The objective was to provide the analysts with a better understanding of quantum computing, IonQ’s potential, and its technology and business differentiators.
I previously published part one of a two-part analysis about that event. Part one focused on IonQ’s management structure, funding history, and business strategy. Part one can be read here.
Part two provides an explanation of IonQ’s technical information and architectural strategy.
Stand-out technical strengths
Empowered by Unique Technological Advantages
In addition to marketplace advantages in the quantum computing space, IonQ has strong technical benefits derived from its trapped-ion approach. Peter Chapman’s unique blend of commercial and technical leadership has also benefited IonQ, yielding an accelerated multi-generational strategy for hardware and engineering improvements. Some of the advantages IonQ presented to the analysts included:
- Some companies building quantum computers use manufactured quantum bits (qubits) as the basis of their hardware. Manufactured qubits are prone to inherent variations that introduce computing errors and limit a quantum computer’s performance. In contrast to these manufactured approaches, IonQ’s trapped-ion technology exploits natural quantum systems: charged ions of the rare-earth metal ytterbium. Because its qubits are natural rather than manufactured, every ion is perfect and identical to every other ion in the system. The same stability is why trapped ions underpin humanity’s most accurate timepiece, the atomic clock.
- IonQ’s architecture allows each qubit to communicate directly with every other qubit in the ion trap. This capability gives IonQ advantages in gate efficiency, speed, and scalability. In some other quantum computing architectures, qubits can only communicate with their immediate neighbors, so information must be routed around the computer until the relevant qubits are co-located. That leaves fewer qubits available to perform logic computations, because some are consumed making networking connections. Significantly, eliminating the intermediate hops between qubits also reduces computational noise, further differentiating the trapped-ion architecture.
- A quantum computer’s ability to complete complex computations depends on several factors. Two of the most important are the length of time a qubit can maintain its quantum state (coherence) and the quality (fidelity) of the qubit. IonQ’s trapped-ion qubits have a longer coherence time and higher fidelity relative to many other qubit types.
Why Do You Need Quantum Error Correction?
Scientists have been unable to build a large fault-tolerant quantum computer because it is not yet possible to implement error correction at scale in quantum computers. Unlike classical computers, which statistically rarely make errors, quantum computers are error-prone in several specific ways. During the presentation to analysts, Dave Bacon explained the significance of these errors and why qubit gate fidelity is so important.
The chart above illustrates the fundamental problem quantum error correction is trying to solve. The diagram assumes a quantum computation in which each gate fails once in a thousand operations (0.1 percent probability). Algorithmic qubits, represented along the chart’s vertical axis, measure the “useful” qubits in a quantum computer; the more algorithmic qubits, the larger the quantum computation one can perform. The chart’s horizontal axis represents the number of actual physical qubits (ions, in IonQ’s case). The chart shows that without error correction, adding more qubits to a quantum computer eventually stops increasing its power. For a failure rate of one in a thousand, performance saturates at 22 algorithmic qubits.
IonQ has published an algorithmic qubit calculator that demonstrates the relationship between fidelity, physical qubits, and algorithmic qubits.
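The saturation behavior in the chart can be approximated with a simple error-budget model. The sketch below is my own illustration, not IonQ’s published calculator: it assumes an n-qubit benchmark circuit contains roughly n² two-qubit gates and remains useful while the total expected error stays under a budget of about 50 percent. Under those assumptions it reproduces the 22-algorithmic-qubit ceiling for a 0.1 percent gate error rate; the published figures for higher fidelities come from a more detailed model.

```python
import math

# Toy model (my assumption, not IonQ's exact #AQ definition): an
# n-qubit benchmark circuit uses roughly n*n two-qubit gates, and the
# computation stays "useful" while total expected error is under a
# budget of ~0.5.
def algorithmic_qubit_ceiling(gate_error, error_budget=0.5):
    # Largest n satisfying n*n * gate_error <= error_budget.
    return math.floor(math.sqrt(error_budget / gate_error))

print(algorithmic_qubit_ceiling(1e-3))  # 0.1% gate error -> 22
```

With a 0.1 percent gate error, the ceiling lands at 22 algorithmic qubits, matching the chart’s saturation point.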
Error correction – Using many qubits to get one good qubit
Quantum Error Correction
Scientists have researched many error-correction protocols. All use multiple physical qubits to obtain one error-corrected qubit. Depending on the qubit technology employed, error correction could require as many as 1000 or more physical qubits to obtain one error-corrected qubit. This means a quantum computer with 1000 error-corrected qubits would require a million physical qubits. Considering that we might eventually need machines with millions of error-corrected qubits to create new materials, new pharmaceuticals, and fully address climate change, certain approaches might require quantum data centers the size of football fields.
Dave Bacon explained to the virtual audience of analysts an error-correction protocol he co-developed several years ago. It is called the Bacon-Shor code and is well-known within the quantum research community. Although the details are complex, the general idea is simple: use 16 raw physical qubits with 99.9% gate fidelity to produce one error-corrected qubit with a 99.99% gate fidelity. That ten-fold reduction in the error rate makes many more qubits available for actual logic computations instead of just contributing “noise.”
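The fidelity improvement Bacon described is consistent with how a small distance-3 code behaves: the logical error rate scales roughly with the square of the physical error rate. The sketch below is a hedged illustration of that scaling, not IonQ’s actual analysis; the prefactor is a hypothetical value chosen so that a 0.1% physical error maps to the 0.01% logical error cited above.

```python
# Sketch of distance-3 error suppression: p_logical ~ A * p_physical**2.
# The prefactor A = 100 is a hypothetical choice, not an IonQ figure;
# it is picked so that p = 1e-3 maps to the ~1e-4 logical error rate
# (99.9% -> 99.99% fidelity) described in the text.
def logical_error_rate(p_physical, prefactor=100.0):
    return prefactor * p_physical ** 2

p = 1e-3                      # 99.9% gate fidelity
print(logical_error_rate(p))  # ~1e-4, i.e. ~99.99% logical fidelity
```

The quadratic scaling is the key point: once physical fidelity crosses a threshold, each improvement in raw qubits is amplified at the logical level.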
Error correction + high fidelity = more qubits
Quantum Error Correction For More Quantum Power
The chart above shows a quantum computer’s computational capability based on the relationship between physical qubits, algorithmic qubits, and gate fidelity.
a. With no error correction and 99.9% gate fidelity, the quantum computer cannot use more than 22 algorithmic qubits for computation, regardless of how many physical qubits are available.
b. By employing a 16:1 error-correction protocol, gate fidelity increases to 99.99%, allowing the use of 65 algorithmic qubits. Producing those 65 usable qubits with a 16:1 protocol would require 1,040 physical qubits.
c. Reaching an even higher number of algorithmic qubits requires even higher gate fidelity. At 32:1 on IonQ hardware, the gate fidelity can be boosted to 99.999%, allowing 324 algorithmic qubits, but at the cost of 10,368 (324 × 32) physical qubits.
IonQ’s hardware allows error correction to be introduced as needed, or “just in time,” if you will.
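The physical-qubit costs in points (b) and (c) are straightforward multiplications; the snippet below is my own arithmetic check of the figures above, not IonQ code, and it also restates the earlier 1000:1 scenario for a machine with 1,000 error-corrected qubits.

```python
# Physical-qubit overhead for a given error-correction encoding ratio.
def physical_qubits_needed(algorithmic_qubits, encoding_ratio):
    return algorithmic_qubits * encoding_ratio

print(physical_qubits_needed(65, 16))      # 16:1 protocol -> 1040
print(physical_qubits_needed(324, 32))     # 32:1 protocol -> 10368
# The earlier worst-case scenario: 1,000 error-corrected qubits at a
# 1000:1 ratio implies a million physical qubits.
print(physical_qubits_needed(1000, 1000))  # -> 1000000
```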
Large chambers in 2016 to small integrated chips by 2023
Smaller Every Generation: Quantum Core
According to Jungsang Kim, achieving indefinite scaling of trapped ion qubits requires developing a quantum computer with a modular architecture. IonQ plans to pave the way to modularity by creating smaller, lighter, and cheaper quantum processing units (QPUs) that can be networked together to form a larger computer. Modular trapped-ion quantum computers also scale better because they are easier to manufacture.
Kim said, “If you want to get to scalable quantum computers, it has to be modular, no matter what physical qubit architecture we use.”
Scaling a quantum computer is fundamentally different from scaling a classical one. Almost everyone has heard of Moore’s law. Even though it is fast approaching its limits, it allows for an interesting comparison between transistors and qubits.
Moore’s law predicts that a classical computer doubles in power every two years by doubling the number of transistors on its chips (now in the millions and billions, depending on the chip). A quantum computer, on the other hand, doubles its power with the addition of a single qubit. To keep pace with classical machines, a quantum computer need only add one qubit every two years.
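The comparison can be made concrete: the dimension of a quantum computer’s state space is 2^n for n qubits, so each added qubit doubles it, the same factor Moore’s law attributes to a full two-year doubling of transistor counts. A quick sketch:

```python
# Each additional qubit doubles the dimension of the quantum state
# space -- the same growth factor as one Moore's-law doubling period.
def state_space_dim(n_qubits):
    return 2 ** n_qubits

for n in (10, 20, 30):
    print(n, state_space_dim(n))
# Adding one qubit doubles the state space:
assert state_space_dim(23) == 2 * state_space_dim(22)
```

This exponential growth per qubit is why a modest-looking qubit roadmap can still outpace classical scaling.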
The graphic above shows how IonQ’s computers have changed since 2016. Initially, the ion trap was in a large vacuum chamber that looked like a diving helmet. It has progressively gotten smaller. The small unit that can be held in your hand is a working prototype that exists today.
MIT Lincoln Lab recently constructed the first trapped-ion chip system with fully integrated photonics. A similar chip is shown in the 2023 graphic. It integrates all the laser systems needed to trap and cool the ions, and IonQ hopes to perfect it in the future. You can read more about this chip in a Forbes.com article I wrote here. The chip is a proof of concept showing that all the features and functions can be integrated into a single chip today.
Single and multi-core quantum processing units
IonQ’s current quantum computers each consist of a single quantum processing unit (QPU). Much like a CPU in classical computing, a QPU is a single machine that can perform operations independently. It can also be networked together with other QPUs to form an even more powerful machine. The QPU will accommodate hundreds of physical qubits. IonQ’s plans include a future architecture that chains numerous QPUs together.
Modular Networking of Multiple QPUs
In the early 2000s, when Jungsang Kim worked for Bell Labs, he helped build the world’s largest optical switch, with more than 1000 optical ports. It is no surprise that a large optical switch will be part of a future advanced architecture for a highly scalable quantum computer. Such a switch may one day provide connectivity between thousands of quantum processing units in a quantum data center. In 2007, Chris Monroe demonstrated for the first time that photons can be used to link remote trapped-ion qubits. Over the past 15 years, Monroe and other researchers have further developed this research to productize it for networking quantum computers. This technology will serve as the basis for high-bandwidth connections between QPU devices.
In the next few years, IonQ plans to integrate this technology with its quantum computers. With full-scale modular connectivity, such a system will allow connections between any qubits inside the system, located across any number of QPUs. Moreover, the link will not require more than one hop through the photonic network. It will be a fully connected configuration in a modular architecture. This versatile and highly scalable architecture will be extremely powerful for executing large-scale, complex quantum computational problems, regardless of the structure of the problem. It is key to IonQ’s scaling goals.
IonQ year-by-year qubit roadmap
IonQ estimates that quantum computers will need 72 algorithmic qubits to handle computations that exceed the ability of classical computers. (As a reminder, algorithmic qubits are the ones available to perform computational calculations. The actual number of qubits can be much higher, and claims that cite merely the number of raw qubits say little about what a quantum computer can actually do.) The graphic above represents IonQ’s year-by-year roadmap for algorithmic qubit growth. Likely applications are shown above the qubit counts.
For 2025, IonQ forecasts a big jump in algorithmic qubit count, from 35 to 64, because it will begin using 16:1 error correction then. Creating 64 algorithmic qubits with gate fidelities of 99.99% will require 1,024 physical qubits with gate fidelities of 99.9%.
- If the 2026 milestone of 256 algorithmic qubits is achieved, it will likely allow us to solve many mysteries of science that exist today. At that point, quantum will begin to disrupt many technologies. It is reasonable to expect quantum computers will have the capability to create some new drugs and materials or to become table-stakes equipment in the finance industry.
- Error correction is a crucial capability for scaling qubits. For that reason, error correction is a hotly researched topic. The Bacon-Shor code for error correction is one of the lower-cost codes in terms of physical qubits. Quantum error correction using Steane’s seven-qubit color code also looks promising. However, it will take more research to incorporate any error correction into future designs that contain higher qubit counts. Based on IonQ timelines, that should not be a problem.
- Many factors contribute to errors in quantum systems, including the environment, noise from internal components, background radiation, cabling, and even noise caused by qubits themselves. Enclosing an ion trap inside a vacuum chamber does help shield qubits from some external influences. But despite advances in error correction, the software and hardware designs in use today have not yet proven it is possible to scale to higher numbers beyond a few hundred qubits.
- Superconducting qubits have higher gate speeds than trapped-ion qubits. Although IonQ has not disclosed if it is researching the use of lighter species to obtain faster gates, it would make sense to see species experimentation in future IonQ architectures.
- Some applications will require an on-premises, room-temperature quantum computer with a small form factor. Still, the majority of quantum computing access in the near term will be via the cloud. For cloud access, a quantum machine with a large footprint will not be a significant disadvantage.
- The trapped-ion chip with a complete set of integrated photonics built by MIT Lincoln Laboratory is a significant step towards proving trapped-ions can move to a modular architecture. Also helpful is the GlobalFoundries and PsiQuantum partnership that recently disclosed it is manufacturing a very sophisticated 25-layer stack silicon photonic wafer. You can read my Forbes article about that chip and the process here.
- It cannot be ignored that in the interim between publication of part one and part two, Honeywell announced it was spinning off Honeywell Quantum Solutions (HQS) so that it could merge with Cambridge Quantum Computing. This positions HQS to pursue a public offering if desired. IonQ and HQS both use trapped-ion technology but with differing long-term architectures. Two well-funded trapped-ion companies would not only be interesting but also beneficial for the entire quantum ecosystem.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including 8×8, Advanced Micro Devices, Amazon, Applied Micro, ARM, Aruba Networks, AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google (Nest-Revolve), Google Cloud, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Ion VR, Inseego, Infosys, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MapBox, Marvell, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesophere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nuvia, ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Poly, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak, SONY, Springpath, Spirent, Splunk, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zebra, Zededa, and Zoho which may be cited in blogs and research.