Parallel computing is a form of computation in which many calculations are carried out at the same time, accomplished by dividing the workload among multiple processors.
The focus of this category is on parallel computing, which is sometimes referred to as parallel processing.
In parallel computing, a task is split into several subtasks, which are assigned to multiple processors and coordinated so that a result is obtained more quickly.
In serial programming, a single processor (CPU) executes instructions one after another. This works fine for most purposes, but some operations consist of multiple steps that can be separated into independent tasks and executed simultaneously. In a matrix operation, for example, the elements of the matrix can be allocated to several processors, and the results are available sooner than if all of the operations had been performed serially.
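As a minimal sketch of this idea in Python (an illustration, not drawn from any of the resources below), the rows of a matrix are distributed across worker processes using the standard multiprocessing module; the Pool.map call is the kind of parallel-library call, mentioned below, that handles the coordination of the workers and the collection of their results.

from multiprocessing import Pool

def row_sum(row):
    # The unit of work given to one processor: sum a single row.
    return sum(row)

if __name__ == "__main__":
    matrix = [[i * j for j in range(1000)] for i in range(8)]

    # Serial version: one CPU walks through every row in turn.
    serial = [row_sum(row) for row in matrix]

    # Parallel version: the rows are distributed across 4 worker processes,
    # and Pool.map coordinates the workers and gathers their results.
    with Pool(processes=4) as pool:
        parallel = pool.map(row_sum, matrix)

    assert serial == parallel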
Parallel computations can be performed on shared-memory systems with multiple CPUs, or on distributed-memory clusters made up of smaller shared-memory systems or single-CPU systems. The coordination of the concurrent work of the multiple processors, and the synchronization of the results, are handled through program calls to parallel libraries.
Groups of networked computers that share a common goal for their work are known as distributed systems, although such a system might also be described as a parallel system, since the processors in a distributed system run concurrently. Parallel computing may thus be seen as a form of distributed computing, while distributed computing may be considered a loosely coupled form of parallel computing.
Nevertheless, the two can be differentiated. In parallel computing, all processors have access to shared memory, which they use to exchange information, while in distributed computing each processor has its own private memory, and information is exchanged by passing messages between the processors.
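The difference can be sketched in Python (again an illustration, not drawn from the resources below): threads within one process share memory and must synchronize access to it, whereas separate processes keep private memory and exchange results by message passing over a queue.

import threading
from multiprocessing import Process, Queue

# Shared-memory style: threads write their results into a common list,
# using a lock to synchronize access to the shared data.
shared_results = []
lock = threading.Lock()

def shared_worker(n):
    with lock:
        shared_results.append(n * n)

# Message-passing style: each process has its own private memory and
# sends its result back over a queue instead of touching shared state.
def message_worker(n, queue):
    queue.put(n * n)

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    queue = Queue()
    procs = [Process(target=message_worker, args=(i, queue)) for i in range(4)]
    for p in procs:
        p.start()
    passed_results = [queue.get() for _ in procs]
    for p in procs:
        p.join()

    print(sorted(shared_results), sorted(passed_results))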
Other forms of parallel computing include multicore computing, in which a single chip contains multiple execution units (cores). Symmetric multiprocessing (SMP) refers to a computer system with multiple identical processors that share memory and are connected through a bus; due to the limitations of bus architecture, SMPs generally do not include more than thirty-two processors. Cluster computing refers to a group of loosely coupled computers that work together so closely that, in many respects, they can be considered a single computer, and many supercomputers are built as clusters.
Another type of supercomputer uses a large number of networked processors, an approach known as massively parallel processing (MPP). MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks, while clusters use commodity hardware for networking. Grid computing relies on middleware, software that sits between the operating system and the application and manages network resources.
Different types of parallelism include bit-level parallelism, data parallelism, and function (task) parallelism.
In computing, word size refers to the maximum number of bits that a CPU can process at a time, and bit-level parallelism is based on increasing the processor word size. During the 1970s and 1980s, advancements in computer architecture came about largely through increasing bit-level parallelism. Data parallelism is the parallelization of computing across multiple processors, with a focus on distributing the data across the parallel computing nodes, each of which performs the same operation on its share. Function parallelism, also called task parallelism, is the parallelization of computing across multiple processors, with a focus on distributing distinct tasks or functions, which may operate on the same or on different data; a short sketch contrasting the two follows.
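A brief Python sketch of the contrast (an illustration, not drawn from the resources below), using the standard concurrent.futures module: map applies the same function to chunks of the data (data parallelism), while submit runs different functions concurrently (function parallelism).

from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

def total(xs):
    return sum(xs)

def maximum(xs):
    return max(xs)

if __name__ == "__main__":
    data = list(range(10_000))

    with ProcessPoolExecutor() as pool:
        # Data parallelism: the same operation is applied to chunks of
        # the data, which are spread across the worker processes.
        squares = list(pool.map(square, data, chunksize=1000))

        # Function (task) parallelism: different operations run at the
        # same time, each in its own worker process.
        f_total = pool.submit(total, data)
        f_max = pool.submit(maximum, data)
        print(f_total.result(), f_max.result(), len(squares))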
Parallel computing allows its users to save time and to solve larger problems. A single computer is limited by its finite memory resources, a limitation that can be overcome by combining the memory resources of multiple computers.
Resources listed in this category should relate to parallel computing.
 
 
Recommended Resources
ACM Digital Library: ACM Transactions on Parallel Computing
TOPC is a forum for novel and innovative work on all aspects of parallel computing, including its foundational and theoretical aspects, parallel computing systems, languages, architectures, tools, and applications. Its authors, editorial board, and reviewers are acknowledged, citations are noted, and an archive of reviews is included. Informational articles and forthcoming articles are noted, and TOPC is available by subscription.
https://topc.acm.org/
Beowulf.org
Maintained by Penguin Computing, the site serves as a resource for those who use and design Beowulf clusters: generally identical, commodity-grade computers networked into a small local area network (LAN), with libraries and programs installed that allow processing to be shared between them, yielding a high-performance parallel computing cluster built from inexpensive personal computer hardware. The site consists of Usenet-style discussion on the topic.
https://www.beowulf.org/
Grid Infoware is an informational site intended to promote the development and advancement of computational grids. Analogous to electric power grids, computational grids enable the sharing, selection, and aggregation of a variety of distributed computational resources, such as supercomputers and computer clusters, as well as the storage systems, data sources, instruments, and people involved in the system. Links to several related resources are included.
http://www.gridcomputing.com/
International Conference on Parallel Computing
ParCo is a continuation of the International Conference on Parallel Computing & HPC that was first held in 1983. The conference seeks to encourage the development and application of parallel computers worldwide. It is now organized by the non-profit ParCo Conferences in conjunction with the Faculty of Mathematics & Physics at Charles University and the Faculty of Information Technologies of the Czech Technical University. Schedules, venues, and registration information are included.
https://www.parco.org/
Affiliated with the University of Illinois' Grainger College of Engineering, the Parallel Computing Institute enables researchers from throughout the campus to come together in application-focused research centers and to achieve their scientific goals through parallel computing technologies. An overview of the program, its operations, management, and goals is put forth, and its director and program manager are identified. A private login is available.
https://parallel.illinois.edu/
Curated and maintained by a group of engineers working in scientific computing, statistical analysis, and quantitative analysis, ParallelR is a platform for on-demand, distributed parallel computing using the R language. Included are Parallel Grep, an open-source enhancement of the standard grep under Linux, as well as Parallel DNN and RcuFFT, each of which is described and made available for download. Presentations, a blog, and contact information are included.
http://www.parallelr.com/
Scalable Parallel Computing Laboratory
The SPCL performs research in all areas of scalable computing, including scalable high-performance networks and protocols, middleware, operating systems, runtime systems, parallel programming languages and constructs, storage, and scalable data access. The key people involved are introduced, job opportunities are posted, and thesis topics, publications, and an overview of research results are included. A schedule of teaching sessions and tutorials is posted.
http://spcl.inf.ethz.ch/