enow.com Web Search

Search results

  1. High-performance computing - Wikipedia

    en.wikipedia.org/wiki/High-performance_computing

    High-performance computing (HPC) as a term arose after the term "supercomputing". [3] HPC is sometimes used as a synonym for supercomputing, but in other contexts "supercomputer" refers to a more powerful subset of "high-performance computers", and "supercomputing" becomes a subset of "high-performance computing".

  2. GPFS - Wikipedia

    en.wikipedia.org/wiki/GPFS

    GPFS (General Parallel File System, brand name IBM Storage Scale and previously IBM Spectrum Scale) [1] is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these.

  3. Granularity (parallel computing) - Wikipedia

    en.wikipedia.org/wiki/Granularity_(parallel...

    In parallel computing, the granularity (or grain size) of a task is a measure of the amount of work (or computation) performed by that task. [1] Another definition of granularity takes into account the communication overhead between processors or processing elements, defining granularity as the ratio of computation time to communication time.
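
    As a rough sketch of that ratio, assuming per-task timings have already been measured (the timing values below are invented placeholders, not measurements):

        #include <stdio.h>

        /* Granularity G = computation time / communication time for one task.
           G >> 1 indicates coarse-grained work (overhead is amortized);
           G near or below 1 indicates fine-grained work dominated by
           communication overhead. */
        int main(void) {
            double t_comp = 9.0e-3;  /* seconds of computation per task (hypothetical) */
            double t_comm = 1.0e-3;  /* seconds of communication per task (hypothetical) */
            double g = t_comp / t_comm;
            printf("granularity = %.2f (%s-grained)\n",
                   g, g > 1.0 ? "coarse" : "fine");
            return 0;
        }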

  4. Task parallelism - Wikipedia

    en.wikipedia.org/wiki/Task_parallelism

    Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors.
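
    A minimal C/OpenMP sketch of the idea (compile with gcc -fopenmp); the two task functions are made-up placeholders:

        #include <stdio.h>
        #include <omp.h>

        /* Task parallelism: two *different* functions run on different
           threads, as opposed to data parallelism (same code, different data). */
        static void index_documents(void) {
            printf("indexing on thread %d\n", omp_get_thread_num());
        }

        static void compress_logs(void) {
            printf("compressing on thread %d\n", omp_get_thread_num());
        }

        int main(void) {
            #pragma omp parallel sections
            {
                #pragma omp section
                index_documents();  /* task A */

                #pragma omp section
                compress_logs();    /* task B, may run concurrently with A */
            }
            return 0;
        }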

  5. Multithreading (computer architecture) - Wikipedia

    en.wikipedia.org/wiki/Multithreading_(computer...

    Even though it is very difficult to further speed up a single thread or single program, most computer systems are actually multitasking among multiple threads or programs. Thus, techniques that improve the throughput of all tasks result in overall performance gains. Two major techniques for throughput computing are multithreading and multiprocessing.
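
    A small C/OpenMP sketch of throughput computing: no single task gets faster, but independent tasks complete in parallel (busy_work is a stand-in for real work):

        #include <stdio.h>
        #include <omp.h>

        #define NTASKS 64

        /* Stand-in for one independent unit of work. */
        static double busy_work(int id) {
            double x = id;
            for (int i = 0; i < 1000000; i++)
                x = x * 1.0000001 + 0.5;
            return x;
        }

        int main(void) {
            double sink = 0.0;
            double t0 = omp_get_wtime();
            /* Each thread picks up different tasks; aggregate throughput
               improves even though each task takes as long as before. */
            #pragma omp parallel for reduction(+:sink)
            for (int i = 0; i < NTASKS; i++)
                sink += busy_work(i);
            double t1 = omp_get_wtime();
            printf("%d tasks in %.3f s, checksum %g\n", NTASKS, t1 - t0, sink);
            return 0;
        }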

  6. Amdahl's law - Wikipedia

    en.wikipedia.org/wiki/Amdahl's_law

    In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of the parallel performance. [10]
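
    For reference, a worked comparison of the two laws (p is the parallelizable fraction, N the processor count; the values p = 0.95, N = 100 are chosen purely for illustration):

        % Amdahl: fixed problem size, so the serial part caps the speedup.
        S_{\text{Amdahl}}(N) = \frac{1}{(1 - p) + p/N}

        % Gustafson: the problem grows with N, so the serial share stays a
        % fixed fraction of the scaled runtime.
        S_{\text{Gustafson}}(N) = (1 - p) + pN

        % With p = 0.95 and N = 100:
        S_{\text{Amdahl}} = \frac{1}{0.05 + 0.0095} \approx 16.8,
        \qquad
        S_{\text{Gustafson}} = 0.05 + 95 = 95.05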

  7. Memory hierarchy - Wikipedia

    en.wikipedia.org/wiki/Memory_hierarchy

    Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference. Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component.
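
    A classic locality-of-reference illustration in C (the matrix size is arbitrary): C stores 2-D data row-major, so row-order traversal walks consecutive cache lines, while column-order traversal strides N doubles at a time and typically runs several times slower:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N 2048

        static double sum_row_order(const double *a) {
            double s = 0.0;
            for (int i = 0; i < N; i++)       /* each row... */
                for (int j = 0; j < N; j++)   /* ...left to right: unit stride */
                    s += a[(size_t)i * N + j];
            return s;
        }

        static double sum_col_order(const double *a) {
            double s = 0.0;
            for (int j = 0; j < N; j++)       /* each column... */
                for (int i = 0; i < N; i++)   /* ...top to bottom: stride of N */
                    s += a[(size_t)i * N + j];
            return s;
        }

        int main(void) {
            double *a = malloc((size_t)N * N * sizeof *a);
            if (!a) return 1;
            for (size_t k = 0; k < (size_t)N * N; k++) a[k] = 1.0;

            clock_t t0 = clock();
            double r = sum_row_order(a);
            clock_t t1 = clock();
            double c = sum_col_order(a);
            clock_t t2 = clock();

            printf("row order: %.3f s   column order: %.3f s   (sums %g, %g)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, r, c);
            free(a);
            return 0;
        }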

  8. Many-task computing - Wikipedia

    en.wikipedia.org/wiki/Many-task_computing

    Many-task computing (MTC) is reminiscent of high-throughput computing (HTC), but it "differs in the emphasis of using many computing resources over short periods of time to accomplish many computational tasks (i.e. including both dependent and independent tasks), where the primary metrics are measured in seconds (e.g. FLOPS, tasks/s, MB/s I/O rates), as opposed to operations (e.g. jobs) per month."
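
    Loosely illustrating the tasks/s metric with C/OpenMP tasks; the workload and task count are invented:

        #include <stdio.h>
        #include <omp.h>

        #define NTASKS 10000

        /* A very short-lived task: the regime MTC targets. */
        static double tiny_task(int id) {
            double x = id;
            for (int i = 0; i < 1000; i++)
                x += i * 0.5;
            return x;
        }

        int main(void) {
            double sink = 0.0;
            double t0 = omp_get_wtime();
            #pragma omp parallel
            #pragma omp single
            for (int i = 0; i < NTASKS; i++) {
                #pragma omp task firstprivate(i)
                {
                    double r = tiny_task(i);
                    /* Tasks are independent here; MTC also covers
                       dependent tasks. */
                    #pragma omp atomic
                    sink += r;
                }
            }
            double t1 = omp_get_wtime();   /* all tasks done at region end */
            printf("%d tasks in %.3f s => %.0f tasks/s (checksum %g)\n",
                   NTASKS, t1 - t0, NTASKS / (t1 - t0), sink);
            return 0;
        }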