# K1.2 Overview Hardware Architectures
  
HPC computer architectures are parallel computer architectures. A parallel computer is built out of
  * A high-speed network.
  
## Learning Outcomes
  * elementary processing elements like CPUs, GPUs, many-core architectures
  * vector systems, and FPGAs
  * the NUMA architecture used for symmetric multiprocessing systems where the memory access time depends on the memory location relative to the processor
  * network demands for HPC systems (e.g. high bandwidth and low latency; see the ping-pong sketch at the end of this section)
  * typical network architectures used for HPC systems, like fast Ethernet (1 or 10 Gbit) or InfiniBand
  
-# Outcomes +  * Comprehend that in traditional **CPUs** - although CPU stands for Central Processing Unit - there is no central, i.e. single, processing unit any more because today all CPUs have multiple compute cores which all have the same functionality
-   elementary processing elements like CPUs, GPUs, many-core architectures +
-  *  vector systems, and FPGAs +
-  *  the NUMA architecture used for symmetric multiprocessing systems where the memory access time depends on the memory location relative to the processor +
-  *  network demands for HPC systems (e.g. high bandwidth and low latency) +
-  *  typical network architectures used for HPC systems, like fast Ethernet (1 or 10 Gbit) or InfiniBand +
- +
-  *  Comprehend that in traditional **CPUs** - although CPU stands for Central Processing Unit - there is no central, i.e. single, processing unit any more because today all CPUs have multiple compute cores which all have the same functionality+
  * Comprehend that **GPUs** (Graphics Processing Units) or **GPGPUs** (General Purpose Graphics Processing Units) were originally used for image processing and displaying images on screens before people started to utilize the computing power of GPUs for other purposes
  * Comprehend that **FPGAs** (Field-Programmable Gate Arrays) are devices that have configurable hardware and configurations are specified by hardware description languages
    * **NUMA** (Non-Uniform Memory Access) combines properties from shared and distributed memory systems, because at the hardware level a NUMA system resembles a distributed memory system (see the first-touch sketch at the end of this section)
  * Comprehend that in general, the effort for programming parallel applications for distributed memory systems is higher than for shared memory systems (see the dot-product sketch at the end of this section)
  * parallelization techniques at the instruction level of a processing element (e.g. pipelining, SIMD processing)
  * advanced instruction sets that improve parallelization (e.g., AVX-512; see the SAXPY sketch at the end of this section)
  * hybrid approaches, e.g. combining CPUs with GPUs or FPGAs (see the offload sketch at the end of this section)
  * typical network topologies and architectures used for HPC systems, like fat trees based on switched fabrics using e.g. fast Ethernet (1 or 10 Gbit) or InfiniBand
  * special or application-specific hardware (e.g. TPUs)
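
The NUMA outcome above can be made concrete with the first-touch placement pattern: on most Linux systems a memory page is physically placed in the NUMA domain of the thread that touches it first. The sketch below is a minimal illustration, assuming an OpenMP compiler (e.g. `gcc -fopenmp`) and the usual first-touch default policy; the array size and variable names are illustrative, not taken from the skill tree.

```c
/* First-touch sketch for a NUMA node: initialize the array with the same
 * thread layout that later does the computation, so each thread mostly
 * accesses memory local to its own NUMA domain. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1 << 24)   /* 16 Mi doubles (~128 MiB), spans many pages */

int main(void) {
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;

    /* First touch: each thread writes the slice it will use later, so the
     * OS places those pages in that thread's local memory domain. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 1.0;

    /* Compute phase with the same static schedule: mostly local accesses. */
    double sum = 0.0;
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f\n", sum);
    free(a);
    return 0;
}
```

If the array were initialized serially by a single thread instead, all pages would land on one NUMA node and the parallel compute phase would mostly see the slower remote-memory access times that this outcome refers to.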
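
The bandwidth and latency demands mentioned in the list are usually measured with an MPI ping-pong micro-benchmark: small messages expose latency, large messages expose bandwidth. This is a minimal sketch, assuming an MPI installation and exactly two ranks; the message sizes and repetition count are arbitrary choices, not values from the skill tree.

```c
/* Ping-pong sketch: ranks 0 and 1 bounce messages of growing size.
 * Build/run idea: mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int reps = 1000;
    for (long bytes = 8; bytes <= (1L << 22); bytes *= 64) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++) {
            if (rank == 0) {        /* send, then wait for the echo */
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) { /* echo the message back */
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;
        if (rank == 0)
            printf("%8ld B  round trip %8.2f us  bandwidth %8.2f MB/s\n",
                   bytes, dt / reps * 1e6, 2.0 * bytes * reps / dt / 1e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```

On an InfiniBand fabric the small-message round trips are typically in the range of a few microseconds, while commodity Ethernet is noticeably slower, which is exactly the latency gap this outcome points at.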
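
To illustrate why distributed-memory programming generally takes more effort than shared-memory programming, the dot-product sketch below computes the same result twice: the shared-memory version needs only one OpenMP pragma, while the MPI version forces the programmer to choose a data decomposition and to combine the partial results explicitly. It assumes an MPI library plus OpenMP support (e.g. `mpicc -fopenmp`); all names and sizes are illustrative.

```c
/* Dot product two ways: shared memory (OpenMP) vs. distributed memory (MPI).
 * Build/run idea: mpicc -fopenmp dot.c -o dot && mpirun -np 4 ./dot */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Shared memory: all threads see the whole arrays; one pragma suffices. */
    if (rank == 0) {
        double *x = malloc(N * sizeof *x), *y = malloc(N * sizeof *y);
        for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
        double dot = 0.0;
        #pragma omp parallel for reduction(+:dot)
        for (long i = 0; i < N; i++)
            dot += x[i] * y[i];
        printf("OpenMP dot = %.0f\n", dot);
        free(x); free(y);
    }

    /* Distributed memory: each rank owns only a slice, so decomposition and
     * the combination of partial results are the programmer's job. */
    long chunk = N / nprocs, lo = rank * chunk;
    long hi = (rank == nprocs - 1) ? N : lo + chunk;
    double *xl = malloc((hi - lo) * sizeof *xl), *yl = malloc((hi - lo) * sizeof *yl);
    for (long i = 0; i < hi - lo; i++) { xl[i] = 1.0; yl[i] = 2.0; }
    double local = 0.0, global = 0.0;
    for (long i = 0; i < hi - lo; i++)
        local += xl[i] * yl[i];
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("MPI dot    = %.0f\n", global);
    free(xl); free(yl);

    MPI_Finalize();
    return 0;
}
```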
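
The instruction-level outcomes (pipelining, SIMD processing, AVX-512) can be demonstrated with a small SAXPY kernel written with AVX-512 intrinsics, which processes 16 single-precision values per instruction. This is a sketch only, assuming an AVX-512-capable x86 CPU and gcc or clang with `-mavx512f`; the array length is a multiple of 16 so no scalar remainder loop is needed.

```c
/* SAXPY (y = a*x + y) with AVX-512 intrinsics: 16 float lanes per instruction.
 * Build idea: gcc -O2 -mavx512f saxpy.c -o saxpy */
#include <stdio.h>
#include <immintrin.h>

#define N 1024   /* multiple of 16 by construction */

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    const float a = 3.0f;
    __m512 va = _mm512_set1_ps(a);            /* broadcast a into all 16 lanes */
    for (int i = 0; i < N; i += 16) {
        __m512 vx = _mm512_loadu_ps(&x[i]);   /* load 16 floats of x */
        __m512 vy = _mm512_loadu_ps(&y[i]);   /* load 16 floats of y */
        vy = _mm512_fmadd_ps(va, vx, vy);     /* fused multiply-add: a*x + y */
        _mm512_storeu_ps(&y[i], vy);          /* store 16 results */
    }

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    return 0;
}
```

In practice the equivalent scalar loop plus an auto-vectorizing compiler often produces the same instructions and is the more portable choice; the intrinsics are used here only to make the SIMD width explicit.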
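
For the hybrid CPU+GPU outcome, one portable way to sketch the idea is OpenMP target offloading: the host CPU maps input data to the device, the GPU executes the loop, and the result is mapped back. This assumes a compiler built with offload support (e.g. a recent clang or gcc configured for a GPU target); CUDA, HIP, or OpenCL would be lower-level alternatives. Names and sizes are illustrative.

```c
/* Vector addition offloaded from the host CPU to a GPU via OpenMP target
 * directives. Build idea (one possibility):
 *   clang -fopenmp -fopenmp-targets=nvptx64 vadd.c -o vadd */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1000000

int main(void) {
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b), *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* a and b are copied to the device, the loop runs there, c comes back;
     * with no device available, implementations typically fall back to the host. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %.1f (expected 3.0), devices available: %d\n",
           c[0], omp_get_num_devices());
    free(a); free(b); free(c);
    return 0;
}
```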

## Subskills
  
  * [[skill-tree:k:1:2:i]]
  * [[skill-tree:k:1:2:e]]