The term supercomputer is generally applied to the fastest high-performance systems available at any given time. Today's supercomputers deliver massive computing power.
Common applications for supercomputers are those in which large amounts of data must be calculated and processed very quickly, such as testing mathematical models for climate research and weather forecasting, space research, quantum mechanics, cryptology, and the development of new compounds.
In contrast to traditional computers, which have one central processing unit (CPU), supercomputers are likely to have several CPUs, capable of performing trillions of complex calculations per second. To support this extremely high computational speed, supercomputers have to be capable of rapidly retrieving stored data and instructions. This requires very large storage capacity, as well as rapid input/output capability.
Another characteristic of supercomputers is their use of vector arithmetic, which is the ability to operate on pairs of lists of numbers rather than on mere pairs of numbers.
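As a toy illustration in plain Python (not an actual vector unit), the difference is between operating on one pair of numbers and applying the same operation element-wise across whole lists:

```python
# Scalar arithmetic: one operation on one pair of numbers.
x, y = 3.0, 4.0
scalar_sum = x + y  # 7.0

# Vector arithmetic: the same operation applied across whole lists of
# numbers; a hardware vector unit performs this as a single instruction.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
vector_sum = [ai + bi for ai, bi in zip(a, b)]
print(vector_sum)  # [11.0, 22.0, 33.0, 44.0]
```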
Today, the world's fastest supercomputers are Linux-based systems.
Cited as the first supercomputer, the Livermore Atomic Research Computer (LARC) was built by UNIVAC and delivered to Lawrence Livermore National Laboratory in 1960; a second unit went to the US Navy's David Taylor Model Basin. Another early supercomputer was the IBM 7030 Stretch, which was built by IBM for the Los Alamos National Laboratory in 1961, where it was used in atomic weapons research. The IBM 7950 Harvest was used for cryptanalysis by the US National Security Agency from 1962 to 1976.
The third supercomputer of the early 1960s was the Atlas Computer, developed by the University of Manchester. Atlas was a second-generation computer, as it used transistors rather than vacuum tubes. Two other Atlas computers were built, one shared by British Petroleum and the University of London, the other for the Atlas Computer Laboratory at Chilton.
The CDC 6600 was designed by Seymour Cray. Completed in 1964, it was the first to use silicon rather than germanium transistors, as silicon switched faster. Refrigeration technology was used to reduce problems with overheating.
In 1972, Cray left Control Data Corporation to form Cray Research, which developed the Cray-1 in 1976 and the Cray-2 in 1985. The Cray-2 was the world's second-fastest supercomputer, after the M-13 supercomputer in Moscow.
Developed in the late 1960s, the ILLIAC IV became the first massively parallel computer. Designed at the University of Illinois at Urbana-Champaign for ARPA and built by the Burroughs Corporation, the ILLIAC IV was planned with 256 parallel processors, although only 64 were completed. After a few years of modifications, it was connected to the ARPANET for distributed use in 1975, becoming the first network-available supercomputer.
Early supercomputers used operating systems designed specifically for the machines they ran on, but the recent trend has been to adopt more generic software, such as Linux. Since massively parallel supercomputers usually separate computations from the other services they provide, they generally run different operating systems on different nodes, using a lightweight kernel on compute nodes but a larger OS on server and input/output nodes. Although most supercomputers run Linux, each manufacturer has its own specific Linux derivative.
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to LINPACK benchmark results. As of early 2020, the fastest supercomputer is Summit. Built by IBM, it is located at Oak Ridge National Laboratory. The next fastest systems include Sierra (IBM), Sunway TaihuLight (NRCPC), Tianhe-2A (NUDT), Frontera (Dell EMC), Piz Daint (Cray), Trinity (Cray), AI Bridging Cloud Infrastructure (Fujitsu), SuperMUC-NG (Lenovo), and Lassen (IBM).
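The LINPACK benchmark measures how fast a system solves a dense linear system Ax = b, reporting the result in floating-point operations per second. A minimal pure-Python sketch of the same idea, timing Gaussian elimination against the standard 2/3·n³ + 2·n² operation count (the problem size and test matrix here are arbitrary choices for illustration, and real LINPACK runs use heavily optimized libraries):

```python
import time

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting on Python lists."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# A well-conditioned test system: ones on the diagonal,
# small Hilbert-like entries off the diagonal.
n = 150
A = [[1.0 if i == j else 1.0 / (i + j + 2) for j in range(n)] for i in range(n)]
b = [1.0] * n

start = time.perf_counter()
x = solve_dense(A, b)
elapsed = time.perf_counter() - start

# LINPACK's nominal operation count for an n-by-n solve.
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"{flops / elapsed / 1e9:.4f} GFLOP/s")
```

The ratio of counted operations to elapsed time is the figure of merit; TOP500 rankings apply the same principle at vastly larger problem sizes.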
Topics about supercomputers in general are appropriate for this category, as are those relating to any specific supercomputer, past or present.
 
 
Recommended Resources
A subsidiary of Hewlett Packard Enterprise since 2019, Cray is known for its supercomputers, although the company also builds systems for data storage and analytics, as well as artificial intelligence technologies. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world. Its products are highlighted on its site, including its supercomputing-as-a-service programs, support services, and a Cray user group.
https://www.cray.com/
The OSC partners with Ohio universities, labs, and industries to provide students and researchers with high-performance computing, advanced cyber-infrastructure, research, and computational science education services. Its mission and governance are outlined, and an overview of its client services is featured, along with case studies, available software, technical support, and facilitation. A staff directory, sales information, and client support requests are posted.
https://www.osc.edu/
Named for the Piz Daint mountain peak in the Swiss Alps, Piz Daint is a supercomputer in the Swiss National Supercomputing Centre. It is a hybrid Cray XC40/XC50 system and the flagship computer system for the national HPC Service in Switzerland. Its specifications and upgrade history are outlined. Also included is a fact sheet presented in PDF format, with English, German, and Italian versions, that goes into more detail about the system, including photographs.
https://www.cscs.ch/computers/piz-daint/
Sierra is an ATS-2 supercomputer built by IBM for the Lawrence Livermore National Laboratory for use by the National Nuclear Security Administration, where it is used for predictive applications in nuclear stockpile stewardship. The capabilities of its High-Performance Computing (HPC) Innovation Center are detailed, and Sierra's processor architecture, RHEL operating system, and other specifications are listed, with an overview of its development and history.
https://computing.llnl.gov/computers/sierra
Built in partnership with Lenovo and Intel, SuperMUC-NG consists of 6,336 thin compute nodes, each with 48 cores and 96 GB of memory, as well as 144 fat compute nodes, each with 48 cores and 768 GB of memory per node. Situated at the Leibniz Supercomputing Centre, SuperMUC-NG replaces the now-decommissioned SuperMUC. Documentation is provided, including user guides, courses, training, and events, a system overview, and other resources.
https://doku.lrz.de/display/PUBLIC/SuperMUC-NG
SC is the annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. Attendees include researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. Its technical program is the focus of the annual conference, covering nearly every area of scientific and engineering research.
http://www.supercomputing.org/
Decommissioned on August 2, 2019, Titan was a hybrid Cray XK7 system with a theoretical peak performance exceeding 27,000 trillion calculations per second. Financed primarily by the US Department of Energy, Titan was available for any scientific purpose; its workloads included molecular-scale physics, climate models, and simulations of nuclear reactions. Its specifications and features are set forth, and articles on some of the projects it was used for are included.
https://www.olcf.ornl.gov/olcf-resources/compute-systems/titan/
Since 1993, the TOP500 project has ranked the most powerful non-distributed computer systems in the world, updating the list twice a year. Lists since 2015 are posted to the site, and an overview of the High-Performance Conjugate Gradients (HPCG) Benchmark project is provided. Another list, the Green500, measures and rates the energy efficiency of the world's top supercomputers. Its methodology is defined, and related tutorials, publications, and presentations are included.
https://top500.org/
Managed and operated by the Los Alamos National Laboratory and Sandia National Laboratories, under the Alliance for Computing at Extreme Scale partnership, the Trinity supercomputer is located at the Nicholas Metropolis Center for Modeling and Simulation in Los Alamos. The system was built by Cray on a Cray XC40 architecture. The technical specifications for the system are stated, and various presentations and papers are available.
https://lanl.gov/projects/trinity/