IBM has announced a 10-year, $100 million initiative with the University of Tokyo and the University of Chicago to develop a quantum-centric supercomputer powered by 100,000 qubits.
Quantum-centric supercomputing is an entirely new – and as of now, unrealised – era of high-performance computing. A 100,000-qubit system would serve as a foundation to address some of the world’s most pressing problems that even the most advanced supercomputers of today may never be able to solve.
Scalable photonic quantum computing architectures require photonic processing devices. Such platforms rely on low-loss, high-speed, reconfigurable circuits and near-deterministic resource state generators. In a new report published in Science Advances, Patrik Sund and a research team at the Center for Hybrid Quantum Networks at the University of Copenhagen and the University of Münster developed an integrated photonic platform based on thin-film lithium niobate. The scientists integrated the platform with deterministic solid-state single-photon sources based on quantum dots in nanophotonic waveguides.
They processed the generated photons in low-loss circuits at speeds of several gigahertz and experimentally realized a variety of key photonic quantum information processing functionalities on high-speed circuits, including a four-mode universal photonic circuit. The results illustrate a promising direction for scalable quantum technologies: merging integrated photonics with deterministic solid-state photon sources.
Quantum technologies have advanced rapidly in the past several years, to the point where quantum hardware can compete with, and in some tasks surpass, the capabilities of classical supercomputers. However, it remains challenging to control quantum systems at scale for practical applications and to build fault-tolerant quantum technologies.
China has slipped to number two, but is that intentional?
The U.S. has edged past China when it comes to being home to the world’s fastest supercomputers. The number of machines in the U.S. is now 150, up from 126 last year, while the number of supercomputers from China fell from 162 to 134, according to Techspot.
Utilizing the computational prowess of one of the world’s top supercomputers, scientists have achieved the most accurate simulation to date of objects consisting of tens of millions of atoms, thanks to the integration of artificial intelligence (AI) techniques. Previous simulations that delved into the behavior and interaction of atoms were limited to small molecules due to the immense computational power required. Although there are methods to simulate larger atom counts over time, they heavily rely on approximations and fail to provide intricate molecular details.
A team led by Boris Kozinsky at Harvard University has developed a tool named Allegro, which leverages AI to perform precise simulations of systems containing tens of millions of atoms. To demonstrate the capabilities of their approach, Kozinsky and his team employed Perlmutter, the world’s eighth most powerful supercomputer, to simulate the complex interplay of 44 million atoms constituting the protein shell of HIV. Additionally, they successfully simulated other vital biological molecules such as cellulose, a protein associated with haemophilia, and a widespread tobacco plant virus.
Kozinsky emphasizes that this methodology can accurately simulate any atom-based object with exceptional precision and scalability. The system’s applications extend beyond biology and can be applied to a wide array of materials science problems, including investigations into batteries, catalysis, and semiconductors.
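The core idea behind tools like Allegro is to replace expensive quantum-chemistry force calculations with a fast learned model inside an otherwise classical molecular dynamics loop. The sketch below is not Allegro's actual method; it is a generic velocity-Verlet integration step with a placeholder force function (a simple harmonic well) standing in for the learned interatomic potential, just to show where such a model plugs in.

```python
def predicted_forces(positions):
    """Placeholder for a learned interatomic force model.
    In ML-accelerated MD, a trained network would be evaluated here."""
    k = 1.0  # assumed spring constant pulling each atom toward the origin
    return [tuple(-k * c for c in p) for p in positions]

def velocity_verlet_step(positions, velocities, dt=0.01, mass=1.0):
    """One velocity-Verlet step, the standard MD integrator."""
    f_old = predicted_forces(positions)
    # x(t+dt) = x(t) + v*dt + f/(2m)*dt^2
    positions = [
        tuple(x + v * dt + f / (2 * mass) * dt * dt
              for x, v, f in zip(p, vel, fo))
        for p, vel, fo in zip(positions, velocities, f_old)
    ]
    f_new = predicted_forces(positions)
    # v(t+dt) = v(t) + (f_old + f_new)/(2m)*dt
    velocities = [
        tuple(v + (fo + fn) / (2 * mass) * dt
              for v, fo, fn in zip(vel, fo_t, fn_t))
        for vel, fo_t, fn_t in zip(velocities, f_old, f_new)
    ]
    return positions, velocities

# A single "atom" released from x=1 oscillates in the harmonic well.
pos = [(1.0, 0.0, 0.0)]
vel = [(0.0, 0.0, 0.0)]
for _ in range(100):
    pos, vel = velocity_verlet_step(pos, vel)
print(pos[0][0])  # ≈ cos(1.0) ≈ 0.54 after t = 1.0
```

The payoff of the learned-potential approach is that `predicted_forces` can approach quantum-chemistry accuracy at a tiny fraction of its cost, which is what makes tens of millions of atoms tractable.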
May 22 (Reuters) — Nvidia Corp (NVDA.O) on Monday said it has worked with the U.K.’s University of Bristol to build a new supercomputer using an Nvidia chip that would compete with offerings from Intel Corp (INTC.O) and Advanced Micro Devices Inc (AMD.O).
Nvidia is the world’s top maker of graphics processing units (GPUs), which are in high demand because they can be used to speed up artificial intelligence work. OpenAI’s ChatGPT, for example, was created with thousands of Nvidia GPUs.
But Nvidia’s GPU chips are typically paired with what is called a central processing unit (CPU), a market that has been dominated by Intel and AMD for decades. This year, Nvidia has started shipping its own competing CPU chip called Grace, which is based on technology from SoftBank Group Corp-owned (9984.T) Arm Ltd.
It was an attempted projection of strength from Meta, which has historically been slow to adopt AI-friendly hardware systems — hobbling its ability to keep pace with rivals such as Google and Microsoft.
“Building our own [hardware] capabilities gives us control at every layer of the stack, from datacenter design to training frameworks,” Alexis Bjorlin, VP of Infrastructure at Meta, told TechCrunch. “This level of vertical integration is needed to push the boundaries of AI research at scale.”
No one will ever be able to see a purely mathematical construct such as a perfect sphere. But now, scientists using supercomputer simulations and atomic resolution microscopes have imaged the signatures of electron orbitals, which are defined by mathematical equations of quantum mechanics and predict where an atom’s electron is most likely to be.
Scientists at UT Austin, Princeton University, and ExxonMobil have directly observed the signatures of electron orbitals in two different transition-metal atoms, iron (Fe) and cobalt (Co), present in metal-phthalocyanines. Those signatures appear in the forces measured by atomic force microscopes, which often reflect the underlying orbitals and can be interpreted in terms of them.
In case anyone is wondering how advances like ChatGPT are possible while Moore’s Law is dramatically slowing down, here’s what is happening:
Nvidia’s latest chip, the H100, can do 34 teraFLOPS of FP64, the 64-bit floating-point format in which supercomputers are ranked. But the same chip can do 3,958 teraFLOPS with its FP8 Tensor Cores. FP8 uses one-eighth as many bits as FP64, trading numerical precision for throughput. Tensor Cores also accelerate matrix operations, particularly matrix multiplication and accumulation, which are used extensively in deep learning calculations.
So by specializing in the operations that AI cares about, the chip’s effective speed increases more than 100-fold (3,958 / 34 ≈ 116)!
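The arithmetic behind that claim, plus a toy illustration of the precision trade-off, can be checked directly. Python has no FP8 type, so IEEE half precision (16-bit, via the `struct` module's `'e'` format) stands in to show how a small increment that FP64 preserves gets rounded away at low bit widths.

```python
import struct

# Throughput figures quoted above for the H100.
fp64_tflops = 34
fp8_tflops = 3958
print(f"speedup: ~{fp8_tflops / fp64_tflops:.0f}x")  # ~116x

def to_half(x: float) -> float:
    """Round a Python float through IEEE 754 half precision (16-bit)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# At 64 bits, 1.0 + 1e-4 is distinguishable from 1.0.
print(1.0 + 1e-4)            # 1.0001
# Squeezed into 16 bits, the increment is below the spacing of
# representable values near 1.0 (~0.001) and rounds away entirely.
print(to_half(1.0 + 1e-4))   # 1.0
```

Deep learning tolerates this kind of rounding far better than scientific simulation does, which is why AI workloads can cash in the low-precision speedup while supercomputer rankings stay pinned to FP64.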