Parallelism and Locality in
Matrix Computations
www.cs.berkeley.edu/~demmel/cs267_Spr09
Introduction
Jim Demmel
EECS & Math Departments, UC Berkeley
[email protected]
Outline (of all lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
Units of Measure
• High Performance Computing (HPC) units are:
- Flop: floating point operation
- Flops/s: floating point operations per second
- Bytes: size of data (a double precision floating point number is 8)
• Typical sizes are millions, billions, trillions…
Mega   Mflop/s = 10^6 flop/sec    Mbyte = 2^20 = 1048576 ~ 10^6 bytes
Giga   Gflop/s = 10^9 flop/sec    Gbyte = 2^30 ~ 10^9 bytes
Tera   Tflop/s = 10^12 flop/sec   Tbyte = 2^40 ~ 10^12 bytes
Peta   Pflop/s = 10^15 flop/sec   Pbyte = 2^50 ~ 10^15 bytes
Exa    Eflop/s = 10^18 flop/sec   Ebyte = 2^60 ~ 10^18 bytes
Zetta  Zflop/s = 10^21 flop/sec   Zbyte = 2^70 ~ 10^21 bytes
Yotta  Yflop/s = 10^24 flop/sec   Ybyte = 2^80 ~ 10^24 bytes
• Current fastest (public) machine ~ 1.5 Pflop/s
- Up-to-date list at www.top500.org
Outline (of all lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
Technology Trends: Microprocessor Capacity
Moore's Law: 2X transistors/chip every 1.5 years.
Microprocessors have become smaller, denser, and more powerful.
Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months.
Slide source: Jack Dongarra
Performance Development
[Figure: TOP500 performance development over time (www.top500.org), log scale from 100 Mflop/s to 100 Pflop/s. The SUM, N=1, and N=500 curves start near 1.17 TFlop/s, 59.7 GFlop/s, and 400 MFlop/s in 1993 and reach roughly 22.9 PFlop/s, 1.1 PFlop/s, and 17.08 TFlop/s today.]
Parallelism Revolution is Happening Now
• Chip density is continuing to increase ~2x every 2 years
  - Clock speed is not
  - Number of processor cores may double instead
• There is little or no more hidden parallelism (ILP) to be found
• Parallelism must be exposed to and managed by software
Source: Intel, Microsoft (Sutter) and Stanford (Olukotun, Hammond)
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
Motivation
• Most applications run at < 10% of the "peak" performance of a system
  - Peak is the maximum the hardware can physically execute
• Much of this performance is lost on a single processor, i.e., the code running on one processor often runs at only 10-20% of the processor peak
• Most of the single-processor performance loss is in the memory system
  - Moving data takes much longer than arithmetic and logic
• To understand this, we need to look under the hood of modern processors
  - We will first look at only a single "core" processor
  - These issues will exist on processors within any parallel computer
• For parallel computers, moving data is also the bottleneck
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- Temporal and spatial locality
- Basics of caches
- Use of microbenchmarks to characterize
performance
- What this means for designing algorithms and software
Memory Hierarchy
• Most programs have a high degree of locality in their accesses
- spatial locality: accessing things nearby previous accesses
- temporal locality: reusing an item that was previously accessed
• Memory hierarchy tries to exploit locality
[Figure: memory hierarchy, from the processor (control, datapath, registers, on-chip cache) through a second-level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage (disk/tape). Speeds range roughly from 1 ns to 10 ns, 100 ns, 10 ms, and 10 s, and sizes from bytes to KB, MB, GB, and TB as you move outward.]
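To make the two kinds of locality above concrete, here is a small illustrative C sketch (not from the slides): sweeping an array in index order exhibits spatial locality, while repeatedly reusing a small table exhibits temporal locality.

```c
#include <stddef.h>

/* Illustrative only: spatial locality (stride-1 sweep over a[])
   and temporal locality (repeated reuse of the small table t[]). */
double locality_demo(const double *a, size_t n, const double *t, size_t tn)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        sum += a[i];        /* consecutive addresses: spatial locality */
        sum += t[i % tn];   /* small table stays in cache: temporal locality */
    }
    return sum;
}
```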
Processor-DRAM Gap (latency)
• Memory hierarchies are getting deeper
- Processors get faster more quickly than memory
[Figure: processor vs. DRAM performance, 1980-2000, log scale from 1 to 1000. Microprocessor performance ("Moore's Law") improves ~60%/year while DRAM latency improves ~7%/year, so the processor-memory performance gap grows ~50%/year.]
Approaches to Handling Memory Latency
• Approaches to address the memory latency problem:
  - Eliminate memory operations by saving values in small, fast memory (cache) and reusing them
    - needs temporal locality in the program
  - Take advantage of better bandwidth by getting a chunk of memory, saving it in small fast memory (cache), and using the whole chunk
    - needs spatial locality in the program
  - Take advantage of better bandwidth by allowing the processor to issue multiple reads to the memory system at once
    - concurrency in the instruction stream, e.g. load a whole array, as in vector processors; or prefetching
  - Overlap computation & memory operations
    - prefetching
• Bandwidth has improved more than latency
  - 23% per year vs 7% per year
  - Bandwidth is still getting slower relative to arithmetic (which improves at 60% per year)
Cache Basics
• Cache is fast (expensive) memory which keeps a copy of data in main memory; it is hidden from software
  - Simplest example: data at memory address xxxxx1101 is stored at cache location 1101
• Cache hit: in-cache memory access (cheap)
• Cache miss: non-cached memory access (expensive)
  - Need to access next, slower level of cache
• Cache line length: # of bytes loaded together in one entry
  - Ex: if either xxxxx1100 or xxxxx1101 is loaded, both are loaded
• Associativity
  - direct-mapped: only 1 address (line) in a given range can be in cache
    - Ex: data stored at address xxxxx1101 is stored at cache location 1101, in a 16-word cache
  - n-way: n >= 2 lines with different addresses can be stored
    - Ex: up to 16 words with addresses xxxxx1101 can be stored at cache location 1101
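As a rough sketch of the address arithmetic implied above (the line size and line count are assumptions for illustration, not a description of any particular processor): the low address bits select a byte within the line, the next bits select the cache location, and the remaining high bits form the tag that must match on a hit.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direct-mapped cache: 16 lines of 16 bytes (assumed sizes). */
enum { LINE_BYTES = 16, NUM_LINES = 16 };

int main(void)
{
    uint32_t addr   = 0x12345ACDu;                       /* example address */
    uint32_t offset = addr % LINE_BYTES;                 /* byte within the cache line */
    uint32_t index  = (addr / LINE_BYTES) % NUM_LINES;   /* which cache location */
    uint32_t tag    = addr / (LINE_BYTES * NUM_LINES);   /* must match for a hit */
    printf("offset=%u index=%u tag=0x%x\n", offset, index, tag);
    return 0;
}
```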
Why Have Multiple Levels of Cache?
• On-chip vs. off-chip
- On-chip caches are faster, but limited in size
• A large cache has delays
- Hardware to check longer addresses in cache takes more time
- Associativity, which gives a more general set of data in cache, also
takes more time
• Some examples:
- Cray T3E eliminated one cache to speed up misses
- IBM uses a level of cache as a “victim cache” which is cheaper
• There are other levels of the memory hierarchy
- Register, pages (TLB, virtual memory), …
- And it isn’t always a hierarchy
Experimental Study of Memory (Membench)
• Microbenchmark for memory system performance
    for array A of length L from 4 KB to 8 MB by 2x
        for stride s from 4 bytes (1 word) to L/2 by 2x
            time the following loop (repeat many times and average)
                for i from 0 to L by s
                    load A[i] from memory (4 bytes)
    (one experiment per array length L and stride s)
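A minimal C sketch of such a strided-load microbenchmark follows; it only mirrors the pseudocode above (the repetition count and timer are arbitrary choices, and a real harness like the original Membench does more to defeat compiler optimization and measure per-access cost precisely).

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Time repeated strided loads over an array of 'len' bytes. */
static double time_strided(volatile char *a, size_t len, size_t stride, int reps)
{
    clock_t t0 = clock();
    volatile char sink = 0;
    for (int r = 0; r < reps; r++)
        for (size_t i = 0; i < len; i += stride)
            sink += a[i];                       /* the load being measured */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC / reps;
}

int main(void)
{
    for (size_t len = 4 * 1024; len <= 8 * 1024 * 1024; len *= 2) {
        char *a = malloc(len);
        for (size_t i = 0; i < len; i++) a[i] = (char)i;   /* touch memory */
        for (size_t s = 4; s <= len / 2; s *= 2)
            printf("len=%zu stride=%zu time=%g s\n",
                   len, s, time_strided(a, len, s, 100));
        free(a);
    }
    return 0;
}
```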
Membench: What to Expect
[Figure: expected plot of average cost per access vs. stride s, one line per array size; arrays with total size larger than L1 pay the memory time, arrays that fit in L1 pay only the cache hit time.]
• Consider the average cost per load
  - Plot one line for each array length, time vs. stride
  - Small stride is best: if a cache line holds 4 words, at most 1/4 of accesses miss
  - If the array is smaller than a given cache, all those accesses will hit (after the first run, which is negligible for large enough runs)
  - Picture assumes only one level of cache
  - Values have gotten more difficult to measure on modern processors
Memory Hierarchy on a Sun Ultra-2i
Sun Ultra-2i, 333 MHz (x-axis: array length)
- Memory: 396 ns (132 cycles)
- L2: 2 MB, 12 cycles (36 ns), 64-byte line
- L1: 16 KB, 2 cycles (6 ns), 16-byte line
- 8 KB pages, 32 TLB entries
See www.cs.berkeley.edu/~yelick/arvindk/t3d-isca95.ps for details
Memory Hierarchy on a Pentium III
Katmai processor on Millennium, 550 MHz (x-axis: array size)
- L2: 512 KB, 60 ns
- L1: 64 KB, 5 ns, 4-way?
- L1: 32-byte line?
Memory Hierarchy on a Power3 (Seaborg)
Power3, 375 MHz (x-axis: array size)
- Memory: 396 ns (132 cycles)
- L2: 8 MB, 128-byte line, 9 cycles
- L1: 32 KB, 128-byte line, 0.5-2 cycles
Stanza Triad – to measure prefetching
• Even smaller benchmark for prefetching
• Derived from STREAM Triad
• Stanza (L) is the length of a unit-stride run
    while i < arraylength
        for each L-element stanza
            A[i] = scalar * X[i] + Y[i]
        skip k elements
(Each step: 1) do L triads, 2) skip k elements, 3) do L triads, ...)
Source: Kamil et al, MSP05
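A minimal C sketch of the stanza pattern (array sizes are assumptions; simplified relative to the actual benchmark in Kamil et al.):

```c
#include <stddef.h>

/* Do unit-stride triads in stanzas of length L, then skip k elements. */
void stanza_triad(double *a, const double *x, const double *y,
                  size_t n, size_t L, size_t k, double scalar)
{
    size_t i = 0;
    while (i < n) {
        size_t end = (i + L < n) ? i + L : n;   /* one L-element stanza */
        for (; i < end; i++)
            a[i] = scalar * x[i] + y[i];
        i += k;                                  /* skip k elements */
    }
}
```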
Stanza Triad Results
• This graph's x-axis starts at a cache line size (16 bytes)
• If cache locality were the only thing that mattered, we would expect flat lines equal to the measured memory peak bandwidth (STREAM), as on the Pentium III
• Prefetching gets the next cache line (pipelining) while using the current one
  - This does not "kick in" immediately, so performance depends on L
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
- This is the main topic of these lectures
What this means for designing algorithms and software
• The design goal should be to minimize the most expensive operation
  - Minimize communication = moving data, either between levels of a memory hierarchy or between processors over a network
  - An algorithm that is good enough today may not be tomorrow
    - Communication cost is increasing relative to arithmetic
  - Sometimes it helps to do more arithmetic in order to do less communication
• The rest of the lectures address the impact on linear algebra
  - Many new algorithms, designed to minimize communication
  - Proofs that communication is minimized
• Actual performance of a simple program can be a complicated function of the architecture
  - Slight changes in the architecture or program change the performance significantly
  - We would like simple models to help us design efficient algorithms and prove their optimality
  - Can we automate algorithm design?
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
The “7 Dwarfs” of High Performance Computing
• Phil Colella (LBL) identified 7 kernels out of which most large-scale simulation and data-analysis programs are composed:
1. Dense Linear Algebra
   - Ex: Solve Ax=b or Ax = λx where A is a dense matrix
2. Sparse Linear Algebra
   - Ex: Solve Ax=b or Ax = λx where A is a sparse matrix (mostly zero)
3. Operations on Structured Grids
   - Ex: Anew(i,j) = 4*A(i,j) - A(i-1,j) - A(i+1,j) - A(i,j-1) - A(i,j+1) (see the sketch after this list)
4. Operations on Unstructured Grids
   - Ex: Similar, but the list of neighbors varies from entry to entry
5. Spectral Methods
   - Ex: Fast Fourier Transform (FFT)
6. Particle Methods
   - Ex: Compute electrostatic forces using the Fast Multipole Method
7. Monte Carlo
   - Ex: Many independent simulations using different inputs
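The sketch referred to in item 3: a plain C version of that 5-point structured-grid update (the grid dimension is an assumption for illustration).

```c
#define N 256   /* assumed grid dimension */

/* One sweep of the structured-grid update Anew(i,j) = 4*A(i,j) - neighbors. */
void stencil_sweep(const double A[N][N], double Anew[N][N])
{
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            Anew[i][j] = 4.0 * A[i][j]
                       - A[i-1][j] - A[i+1][j]
                       - A[i][j-1] - A[i][j+1];
}
```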
Motif/Dwarf: Common Computational Methods
(Red Hot → Blue Cool)
[Figure: heat map of the 13 motifs (1 Finite State Mach., 2 Combinational, 3 Graph Traversal, 4 Structured Grid, 5 Dense Matrix, 6 Sparse Matrix, 7 Spectral (FFT), 8 Dynamic Prog, 9 N-Body, 10 MapReduce, 11 Backtrack/B&B, 12 Graphical Models, 13 Unstructured Grid) vs. application areas (Embed, SPEC, DB, Games, ML, HPC, Health, Image, Speech, Music, Browser), colored from red (hot, common) to blue (cool, rare).]
Programming Pattern Language 1.0 (Keutzer & Mattson)
[Diagram: layered pattern language for parallel applications, spanning a Productivity Layer and an Efficiency Layer:
- Choose your high-level structure - what is the structure of my application? Guided expansion. Structural patterns: Pipe-and-filter, Agent and Repository, Process Control, Event-based/implicit invocation, Model-view-controller, Iterator, MapReduce, Layered systems, Arbitrary static task graph.
- Identify the key computational patterns - what are my key computations? Guided instantiation. Computational patterns: Graph algorithms, Graphical models, Dynamic programming, Finite state machines, Dense linear algebra, Sparse linear algebra, Backtrack/branch and bound, N-body methods, Unstructured grids, Structured grids, Spectral methods, Circuits.
- Choose your high-level architecture - Guided decomposition: Task decomposition <-> data decomposition; group tasks, order groups, data sharing, data access.
- Refine the structure - what concurrent approach do I use? Guided re-organization: Event-based, Data parallelism, Pipeline, Task parallelism, Divide and conquer, Geometric decomposition, Discrete event, Graph algorithms.
- Utilize supporting structures - how do I implement my concurrency? Guided mapping: Distributed array, Fork/join, Shared queue, Shared data, CSP, Shared hash table, Digital circuits, Master/worker, Loop parallelism, BSP.
- Implementation methods - what are the building blocks of parallel programming? Guided implementation: Thread creation/destruction, Process creation/destruction, Message passing, Collective communication, Speculation, Transactional memory, Barriers, Mutex, Semaphores.]
Algorithms for N x N Linear System Ax=b

Algorithm         Serial     PRAM            Memory     #Procs
Dense LU          N^3        N               N^2        N^2
Band LU           N^2        N               N^(3/2)    N
Jacobi            N^2        N               N          N
Explicit Inv.     N^2        log N           N^2        N^2
Conj. Gradients   N^(3/2)    N^(1/2) log N   N          N
Red/Black SOR     N^(3/2)    N^(1/2)         N          N
Sparse LU         N^(3/2)    N^(1/2)         N log N    N
FFT               N log N    log N           N          N
Multigrid         N          log^2 N         N          N
Lower bound       N          log N           N

PRAM is an idealized parallel model with zero-cost communication
Algorithms for 2D Poisson Equation (N = n^2 vars)

Algorithm         Serial     PRAM            Memory     #Procs
Dense LU          N^3        N               N^2        N^2
Band LU           N^2        N               N^(3/2)    N
Jacobi            N^2        N               N          N
Explicit Inv.     N^2        log N           N^2        N^2
Conj. Gradients   N^(3/2)    N^(1/2) log N   N          N
Red/Black SOR     N^(3/2)    N^(1/2)         N          N
Sparse LU         N^(3/2)    N^(1/2)         N log N    N
FFT               N log N    log N           N          N
Multigrid         N          log^2 N         N          N
Lower bound       N          log N           N

PRAM is an idealized parallel model with zero-cost communication
Reference: J.D., Applied Numerical Linear Algebra, SIAM, 1997.
Algorithms for 2D (3D) Poisson Equation (N = n^2 (n^3) vars)

Algorithm         Serial              PRAM                      Memory              #Procs
Dense LU          N^3                 N                         N^2                 N^2
Band LU           N^2 (N^(7/3))       N                         N^(3/2) (N^(5/3))   N (N^(4/3))
Jacobi            N^2 (N^(5/3))       N (N^(2/3))               N                   N
Explicit Inv.     N^2                 log N                     N^2                 N^2
Conj. Gradients   N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3)) log N   N                   N
Red/Black SOR     N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3))         N                   N
Sparse LU         N^(3/2) (N^2)       N^(1/2)                   N log N (N^(4/3))   N
FFT               N log N             log N                     N                   N
Multigrid         N                   log^2 N                   N                   N
Lower bound       N                   log N                     N

PRAM is an idealized parallel model with zero-cost communication
Reference: J.D., Applied Numerical Linear Algebra, SIAM, 1997.
Algorithms and Motifs
• Dense LU: Dense linear algebra
• Band LU: Dense linear algebra
• Jacobi: (Un)structured meshes, sparse linear algebra
• Explicit Inv.: Dense linear algebra
• Conj. Gradients: (Un)structured meshes, sparse linear algebra
• Red/Black SOR: (Un)structured meshes, sparse linear algebra
• Sparse LU: Sparse linear algebra
• FFT: Spectral
• Multigrid: (Un)structured meshes, sparse linear algebra
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
For more information
• CS267
- Annual one semester course on parallel computing at UC Berkeley
- All slides and video archived from Spring 2009 offering
- www.cs.berkeley.edu/~demmel/cs267_Spr09
- Google “parallel computing course”
- www.cs.berkeley.edu/~demmel/cs267
- 1996 version, but extensive on-line algorithmic notes
• Parallelism “boot camp”
- Second annual 3 day course at UC Berkeley
- parlab.eecs.berkeley.edu/2009bootcamp
• ParLab
- Parallel Computing Research Lab at UC Berkeley
- parlab.eecs.berkeley.edu
EXTRA SLIDES
FROM CS267 LECTURE 1
Computational Science - Recent News
"An important development in sciences is occurring at the intersection of computer science and the sciences that has the potential to have a profound impact on science. It is a leap from the application of computing ... to the integration of computer science concepts, tools, and theorems into the very fabric of science." - Science 2020 Report, March 2006
[Image: Nature, March 23, 2006]
Drivers for Change
• Continued exponential increase in computational power → simulation is becoming the third pillar of science, complementing theory and experiment
• Continued exponential increase in experimental data → techniques and technology in data analysis, visualization, analytics, networking, and collaboration tools are becoming essential in all data-rich scientific applications
Simulation: The Third Pillar of Science
• Traditional scientific and engineering method:
  (1) Do theory or paper design
  (2) Perform experiments or build system
• Limitations:
  - Too difficult: build large wind tunnels
  - Too expensive: build a throw-away passenger jet
  - Too slow: wait for climate or galactic evolution
  - Too dangerous: weapons, drug design, climate experimentation
• Computational science and engineering paradigm:
  (3) Use high performance computer systems to simulate and analyze the phenomenon
  - Based on known physical laws and efficient numerical methods
  - Analyze simulation results with computational tools and methods beyond what is used traditionally for experimental data analysis
[Diagram: Theory, Experiment, Simulation as the three pillars]
Computational Science and Engineering (CSE)
• CSE is a widely accepted label for an evolving field concerned with the science of and the engineering of systems and methodologies to solve computational problems arising throughout science and engineering
• CSE is characterized by
  - Multi-disciplinary
  - Multi-institutional
  - Requiring high-end resources
  - Large teams
  - Focus on community software
• CSE is not "just programming" (and not CS)
• Fast computers necessary but not sufficient
• New graduate program in CSE at UC Berkeley (more later)
Reference: Petzold, L., et al., Graduate Education in CSE, SIAM Rev., 43 (2001), 163-177
SciDAC - First Federal Program to Implement CSE
• SciDAC (Scientific Discovery through Advanced Computing) program created in 2001
  - About $50M annual funding
  - Berkeley (LBNL+UCB) largest recipient of SciDAC funding
[Images: Biology, Nanoscience, Global Climate, Combustion, Astrophysics]
Some Particularly Challenging Computations
• Science
  - Global climate modeling
  - Biology: genomics; protein folding; drug design
  - Astrophysical modeling
  - Computational chemistry
  - Computational material sciences and nanosciences
• Engineering
  - Semiconductor design
  - Earthquake and structural modeling
  - Computational fluid dynamics (airplane design)
  - Combustion (engine design)
  - Crash simulation
• Business
  - Financial and economic modeling
  - Transaction processing, web services and search engines
• Defense
  - Nuclear weapons: test by simulations
  - Cryptography
Economic Impact of HPC
• Airlines:
- System-wide logistics optimization systems on parallel systems.
- Savings: approx. $100 million per airline per year.
• Automotive design:
- Major automotive companies use large systems (500+ CPUs) for:
- CAD-CAM, crash testing, structural integrity and
aerodynamics.
- One company has 500+ CPU parallel system.
- Savings: approx. $1 billion per company per year.
• Semiconductor industry:
- Semiconductor firms use large systems (500+ CPUs) for
- device electronics simulation and logic validation
- Savings: approx. $1 billion per company per year.
• Securities industry (note: old data …)
- Savings: approx. $15 billion per year for U.S. home mortgages.
$5B World Market in Technical Computing
[Figure: stacked percentage breakdown of the ~$5B world market in technical computing, 1998-2003, by application area: Other, Technical Management and Support, Simulation, Scientific Research and R&D, Mechanical Design/Engineering Analysis, Mechanical Design and Drafting, Imaging, Geoscience and Geo-engineering, Electrical Design/Engineering Analysis, Economics/Financial, Digital Content Creation and Distribution, Classified Defense, Chemical Engineering, Biosciences. Source: IDC 2004, from NRC Future of Supercomputing Report]
What Supercomputers Do
Introducing Computational Science and Engineering
Two Examples
- simulation replacing experiment that is too dangerous
- analyzing massive amounts of data with new tools
Global Climate Modeling Problem
• Problem is to compute:
  f(latitude, longitude, elevation, time) → "weather" = (temperature, pressure, humidity, wind velocity)
• Approach:
  - Discretize the domain, e.g., a measurement point every 10 km
  - Devise an algorithm to predict weather at time t+dt given t
• Uses:
  - Predict major events, e.g., El Nino
  - Use in setting air emissions standards
  - Evaluate global warming scenarios
Source: http://www.epm.ornl.gov/chammp/chammp.html
Global Climate Modeling Computation
• One piece is modeling the fluid flow in the atmosphere
  - Solve the Navier-Stokes equations
  - Roughly 100 flops per grid point with a 1-minute timestep
• Computational requirements:
  - To match real time, need 5 x 10^11 flops in 60 seconds = 8 Gflop/s
  - Weather prediction (7 days in 24 hours) → 56 Gflop/s
  - Climate prediction (50 years in 30 days) → 4.8 Tflop/s
  - To use in policy negotiations (50 years in 12 hours) → 288 Tflop/s
• To double the grid resolution, computation is 8x to 16x
• State-of-the-art models require integration of atmosphere, clouds, ocean, sea-ice, and land models, plus possibly carbon cycle, geochemistry and more
• Current models are coarser than this
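A small C check of the arithmetic above: the 5 x 10^11 flops-per-simulated-minute figure comes from the slide, and the derived rates roughly reproduce the slide's rounded numbers.

```c
#include <stdio.h>

int main(void)
{
    double flops_per_sim_minute = 5e11;              /* from the slide */
    double realtime = flops_per_sim_minute / 60.0;   /* ~8 Gflop/s to match real time */
    printf("real time        : %.1f Gflop/s\n", realtime / 1e9);
    printf("7 days in 24 h   : %.0f Gflop/s\n", realtime * 7.0 / 1e9);
    printf("50 yr in 30 days : %.1f Tflop/s\n", realtime * (50.0 * 365.0 / 30.0) / 1e12);
    printf("50 yr in 12 h    : %.0f Tflop/s\n", realtime * (50.0 * 365.0 * 2.0) / 1e12);
    return 0;
}
```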
High Resolution Climate Modeling on NERSC-3 - P. Duffy, et al., LLNL
[Image: simulated U.S.A. hurricane. Source: M. Wehner, LBNL]
NERSC User George Smoot wins 2006 Nobel Prize in Physics
• Smoot and Mather's 1992 COBE experiment showed the anisotropy of the CMB
• Cosmic Microwave Background Radiation (CMB): an image of the universe at 400,000 years
The Current CMB Map
Source: J. Borrill, LBNL
• Unique imprint of primordial physics through the tiny anisotropies in temperature and polarization
• Extracting these Kelvin fluctuations from inherently noisy data is a serious computational challenge
Evolution Of CMB Data Sets: Cost > O(Np^3)

Experiment          Nt        Np       Nb       Limiting Data   Notes
COBE (1989)         2x10^9    6x10^3   3x10^1   Time            Satellite, workstation
BOOMERanG (1998)    3x10^8    5x10^5   3x10^1   Pixel           Balloon, 1st HPC/NERSC
WMAP (2001)         7x10^10   4x10^7   1x10^3   ?               Satellite (4 yr), analysis-bound
Planck (2007)       5x10^11   6x10^8   6x10^3   Time/Pixel      Satellite, major HPC/DA effort
POLARBEAR (2007)    8x10^12   6x10^6   1x10^3   Time            Ground, NG-multiplexing
CMBPol (~2020)      10^14     10^9     10^4     Time/Pixel      Satellite, early planning/design
Which commercial applications require parallelism?
[Figure: the motifs-vs-application-areas heat map (Embed, SPEC, DB, Games, ML, HPC), with the 13 motifs as rows.]
Analyzed in detail in the "Berkeley View" report: www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
• Claim: parallel architecture, language, compiler ... must do at least these well to run future parallel apps well
• Note: MapReduce is embarrassingly parallel; FSM embarrassingly sequential?
Which commercial applications require parallelism?
[Figure: the same motifs-vs-applications heat map (Embed, SPEC, DB, Games, ML, HPC) extended with ParLab applications: Health, Image, Speech, Music, Browser.]
Compelling Laptop/Handheld Apps (David Wessel)
• Musicians have an insatiable appetite for computation + real-time demands
  - More channels, instruments, more processing, more interaction!
  - Latency must be low (5 ms)
  - Must be reliable (no clicks)
1. Music Enhancer
  - Enhanced sound delivery systems for home sound systems using large microphone and speaker arrays
  - Laptop/Handheld recreate 3D sound over ear buds
2. Hearing Augmenter
  - Laptop/Handheld as accelerator for hearing aid
3. Novel Instrument User Interface
  - New composition and performance systems beyond keyboards
  - Input device for Laptop/Handheld
[Image: Berkeley Center for New Music and Audio Technology (CNMAT) created a compact loudspeaker array: a 10-inch-diameter icosahedron incorporating 120 tweeters.]
Coronary Artery Disease (Tony Keaveny)
• Modeling to help patient compliance?
  - 450k deaths/year, 16M symptomatic, 72M high blood pressure
• Massively parallel, real-time variations
  - CFD + FE: solid (non-linear), fluid (Newtonian), pulsatile
  - Blood pressure, activity, habitus, cholesterol
[Images: artery before and after]
Content-Based Image Retrieval (Kurt Keutzer)
[Figure: query by example against an image database of 1000's of images; a similarity metric produces candidate results, refined by relevance feedback into the final result.]
• Built around key characteristics of personal databases:
  - Very large number of pictures (>5K)
  - Non-labeled images
  - Many pictures of few people
  - Complex pictures including people, events, places, and objects
Compelling Laptop/Handheld Apps (Nelson Morgan)
• Meeting Diarist
  - Laptops/handhelds at a meeting coordinate to create a speaker-identified, partially transcribed text diary of the meeting
• Teleconference speaker identifier, speech helper
  - L/Hs used for a teleconference identify who is speaking and give a "closed caption" hint of what is being said
What do commercial and CSE applications have in common?
Motif/Dwarf: Common Computational Methods (Red Hot → Blue Cool)
[Figure: the 13 motifs (Finite State Mach., Combinational, Graph Traversal, Structured Grid, Dense Matrix, Sparse Matrix, Spectral (FFT), Dynamic Prog, N-Body, MapReduce, Backtrack/B&B, Graphical Models, Unstructured Grid) vs. application areas (Embed, SPEC, DB, Games, ML, HPC, Health, Image, Speech, Music, Browser), colored from red (hot, common) to blue (cool, rare).]
Outline
• Why all powerful computers must be parallel processors
  - Including your laptops and handhelds
• Large CSE problems require powerful computers
  - Commercial problems too
• Why writing (fast) parallel programs is hard
  - But things are improving
• Principles of parallel computing performance
• Structure of the course
Principles of Parallel Computing
• Finding enough parallelism (Amdahl’s Law)
• Granularity
• Locality
• Load balance
• Coordination and synchronization
• Performance modeling
All of these things make parallel programming even harder than sequential programming.
“Automatic” Parallelism in Modern Machines
• Bit level parallelism
- within floating point operations, etc.
• Instruction level parallelism (ILP)
- multiple instructions execute per clock cycle
• Memory system parallelism
- overlap of memory operations with computation
• OS parallelism
- multiple jobs run in parallel on commodity SMPs
Limits to all of these -- for very high performance, need user
to identify, schedule and coordinate parallel tasks
Finding Enough Parallelism
• Suppose only part of an application seems parallel
• Amdahl’s law
- let s be the fraction of work done sequentially, so
(1-s) is fraction parallelizable
- P = number of processors
Speedup(P) = Time(1)/Time(P)
<= 1/(s + (1-s)/P)
<= 1/s
• Even if the parallel part speeds up perfectly
performance is limited by the sequential part
• Top500 list: currently fastest machine has P~130K
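A small C sketch of the bound above, tabulating the speedup limit for a few illustrative serial fractions (the fractions are arbitrary; P=130000 echoes the slide's "P~130K"):

```c
#include <stdio.h>

/* Amdahl's law: Speedup(P) <= 1 / (s + (1-s)/P). */
static double amdahl(double s, double P) { return 1.0 / (s + (1.0 - s) / P); }

int main(void)
{
    double fractions[] = { 0.1, 0.01, 0.001 };   /* serial fraction s */
    for (int i = 0; i < 3; i++) {
        double s = fractions[i];
        printf("s=%.3f  P=130000: %.1f   P->infinity: %.1f\n",
               s, amdahl(s, 130000.0), 1.0 / s);
    }
    return 0;
}
```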
Overhead of Parallelism
• Given enough parallel work, this is the biggest barrier to
getting desired speedup
• Parallelism overheads include:
- cost of starting a thread or process
- cost of communicating shared data
- cost of synchronizing
- extra (redundant) computation
• Each of these can be in the range of milliseconds
(=millions of flops) on some systems
• Tradeoff: Algorithm needs sufficiently large units of work
to run fast in parallel (i.e. large granularity), but not so
large that there is not enough parallel work
Locality and Parallelism
[Figure: conventional storage hierarchy for multiple processors: each processor has its own cache and L2 cache, with L3 caches and memories joined by potential interconnects.]
• Large memories are slow, fast memories are small
• Storage hierarchies are large and fast on average
• Parallel processors, collectively, have large, fast caches
  - the slow accesses to "remote" data we call "communication"
• The algorithm should do most work on local data
Processor-DRAM Gap (latency)
Goal: find algorithms that minimize communication, not necessarily arithmetic
[Figure: processor vs. DRAM performance, 1980-2000, log scale from 1 to 1000. Microprocessor performance ("Moore's Law") improves ~60%/year while DRAM latency improves ~7%/year, so the processor-memory performance gap grows ~50%/year.]
Load Imbalance
• Load imbalance is the time that some processors in the
system are idle due to
- insufficient parallelism (during that phase)
- unequal size tasks
• Examples of the latter
- adapting to “interesting parts of a domain”
- tree-structured computations
- fundamentally unstructured problems
• Algorithm needs to balance load
- Sometimes can determine work load, divide up evenly, before starting
- “Static Load Balancing”
- Sometimes work load changes dynamically, need to rebalance
dynamically
- “Dynamic Load Balancing”
Parallel Software Eventually – ParLab view
• 2 types of programmers → 2 layers
• Efficiency Layer (10% of today's programmers)
  - Expert programmers build libraries implementing motifs, "frameworks", OS, ...
  - Highest fraction of peak performance possible
• Productivity Layer (90% of today's programmers)
  - Domain experts / naive programmers productively build parallel applications by composing frameworks & libraries
  - Hide as many details of the machine and parallelism as possible
  - Willing to sacrifice some performance for productive programming
• Expect students may want to work at either level
  - In the meantime, we all need to understand enough of the efficiency layer to use parallelism effectively
Outline
• Why all powerful computers must be parallel processors
  - Including your laptops and handhelds
• Large CSE problems require powerful computers
  - Commercial problems too
• Why writing (fast) parallel programs is hard
  - But things are improving
• Principles of parallel computing performance
• Structure of the course
Improving Real Performance
• Peak performance grows exponentially, a la Moore's Law
  - In the 1990's, peak performance increased 100x; in the 2000's, it will increase 1000x
• But efficiency (the performance relative to the hardware peak) has declined
  - was 40-50% on the vector supercomputers of the 1990's
  - now as little as 5-10% on the parallel supercomputers of today
• Close the gap through ...
  - Mathematical methods and algorithms that achieve high performance on a single processor and scale to thousands of processors
  - More efficient programming models and tools for massively parallel supercomputers
[Figure: peak performance vs. real performance in Teraflops, 1996-2004, showing a growing performance gap on a log scale from 0.1 to 1,000.]
Performance Levels
• Peak advertised performance (PAP)
  - You can't possibly compute faster than this speed
• LINPACK
  - The "hello world" program for parallel computing
  - Solve Ax=b using Gaussian elimination, highly tuned
• Gordon Bell Prize winning applications performance
  - The right application/algorithm/platform combination plus years of work
• Average sustained applications performance
  - What one can reasonably expect for standard applications
When reporting performance results, these levels are often confused, even in reviewed publications
Performance Levels (for example on NERSC-5)
• Peak advertised performance (PAP): 100 Tflop/s
• LINPACK (TPP): 84 Tflop/s
• Best climate application: 14 Tflop/s
- WRF code benchmarked in December 2007
• Average sustained applications performance: ? Tflop/s
- Probably less than 10% peak!
• We will study performance
- Hardware and software tools to measure it
- Identifying bottlenecks
- Practical performance tuning (Matlab demo)
Outline
• Why all powerful computers must be parallel processors
  - Including your laptops and handhelds
• Large CSE problems require powerful computers
  - Commercial problems too
• Why writing (fast) parallel programs is hard
  - But things are improving
• Principles of parallel computing performance
• Structure of the course
Course Mechanics
• Web page: www.cs.berkeley.edu/~demmel/cs267_Spr09
• Normally a mix of CS, EE, and other engineering and
science students
• This class seems to be about:
- 43 grads + 5 undergrads from UCB
- Half CS, rest Biology, BioEng, BioPhys, Chemistry, Civil, EE,
Materials, Mechanical, Physics
- Plus UC Davis, UC Merced, UC Santa Cruz
• Please fill out survey on web page (posted later today)
• Grading:
- Three programming assignments
- Final projects
- Could be parallelizing an application, building or evaluating a tool, etc.
- We encourage interdisciplinary teams, since this is the way parallel
scientific software is generally built
Rough List of Topics
• Basics of computer architecture, memory hierarchies, performance
• Parallel Programming Models and Machines
- Shared Memory and Multithreading
- Distributed Memory and Message Passing
- Data parallelism, GPUs
• Parallel languages and libraries
- Shared memory threads and OpenMP
- MPI
- Other Languages , Frameworks (UPC, CUDA, Cilk, Titanium, “Pattern Language”)
• “Seven Dwarfs” of Scientific Computing
- Dense & Sparse Linear Algebra
- Structured and Unstructured Grids
- Spectral methods (FFTs) and Particle Methods
• 6 additional motifs
- Graph algorithms, Graphical models, Dynamic Programming, Branch & Bound, FSM, Logic
• General techniques
- Load balancing, performance tools
• Applications: Some scientific, some commercial (guest lecturers)
Reading Materials
• What does Google recommend?
• Pointers on class web page
• Must read:
- “The Landscape of Parallel Processing Research: The View from Berkeley”
- http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf
• Some on-line texts:
- Demmel’s notes from CS267 Spring 1999, which are similar to 2000 and 2001.
However, they contain links to html notes from 1996.
- http://www.cs.berkeley.edu/~demmel/cs267_Spr99/
- My notes from Fall 2002
- http://www.nersc.gov/~simon/cs267/
- Ian Foster's book, "Designing and Building Parallel Programs".
- http://www-unix.mcs.anl.gov/dbpp/
• Potentially useful texts:
- “Sourcebook for Parallel Computing”, by Dongarra, Foster, Fox, ..
- A general overview of parallel computing methods
- “Performance Optimization of Numerically Intensive Codes” by Stefan
Goedecker and Adolfy Hoisie
- This is a practical guide to optimization, mostly for those of you who have
never done any optimization
Reading Materials (cont.)
• Recent books with papers about the current state of the
art
- David Bader (ed.), “Petascale Computing, Algorithms and
Applications”, Chapman & Hall/CRC, 2007
- Michael Heroux, Padma Raghavan, Horst Simon (ed.), "Parallel
Processing for Scientific Computing”, SIAM, 2006.
• More pointers will be on the web page
Instructors
• Jim Demmel, EECS & Mathematics
• Horst Simon, LBNL & EECS
• GSI: Vasily Volkov, CS
• Contact information on web page
- Office hours TBD
What you should get out of the course
In depth understanding of:
• When is parallel computing useful?
• Understanding of parallel computing hardware options.
• Overview of programming models (software) and tools.
• Some important parallel applications and the algorithms
• Performance analysis and tuning
• Exposure to various open research questions
Extra slides
Transaction Processing
(March 15, 1996)
[Figure: throughput (tpmC) vs. number of processors (0-120) for Tandem Himalaya, IBM PowerPC, DEC Alpha, SGI PowerChallenge, HP PA, and other systems, scaling up to ~25,000 tpmC.]
• Parallelism is natural in relational operators: select, join, etc.
• Many difficult issues: data partitioning, locking, threading.
SIA Projections for Microprocessors
[Figure: SIA projections of feature size (microns) and millions of transistors per chip vs. year of introduction, 1995-2010, log scale from 0.01 to 1000. Compute power ~ 1/(feature size)^3. Based on F.S. Preston, 1997.]
Much of the Performance is from Parallelism
[Figure: performance gains attributed first to bit-level parallelism, then instruction-level parallelism, and next thread-level parallelism(?).]
Performance on Linpack Benchmark
www.top500.org
[Figure: Rmax per system (max, mean, and min) on the Linpack benchmark, log scale from 0.1 to 100,000 Gflop/s, June 1993 to December 2004, with Earth Simulator, ASCI White, and ASCI Red marked.]
Nov 2004: IBM Blue Gene L, 70.7 Tflops Rmax
Performance Projection
[Figure: TOP500 performance projection, 1993-2015, log scale from 100 Mflop/s to 1 Eflop/s, extrapolating the SUM, N=1, and N=500 curves; annotations mark lags of 6-8 years and 8-10 years between the curves reaching a given performance level.]
Slide by Erich Strohmaier, LBNL
Performance Projection
[Figure: the same projection extended to 2025: SUM, N=1, and N=500 curves climbing from 100 Mflop/s in 1993 toward 1 Eflop/s.]
Slide by Erich Strohmaier, LBNL
Concurrency Levels
[Figure: number of processors per TOP500 system, June 1993 to June 2015, log scale from 1 to 1,000,000.]
Slide by Erich Strohmaier, LBNL
Concurrency Levels - There is a Massively Parallel System Also in Your Future
[Figure: number of processors per system projected from June 1993 through June 2025, log scale from 1 to 100,000,000.]
Slide by Erich Strohmaier, LBNL
Supercomputing Today
• Microprocessors have made desktop computing in 2007 what supercomputing was in 1995.
• Massive parallelism has changed the "high end" completely.
• Most of today's standard supercomputing architectures are "hybrids": clusters built out of commodity microprocessors and custom interconnects.
• The microprocessor revolution will continue with little attenuation for at least another 10 years.
• The future will be massively parallel, based on multicore.
Outline
• Why all powerful computers must be parallel computers
  - Including your laptop and handhelds
• Large important problems require powerful computers
  - Even computer games
• Why writing (fast) parallel programs is hard
  - But things are improving
• Principles of parallel computing performance
• Structure of the course
Is Multicore the Correct Response?
• Kurt Keutzer: “This shift toward increasing parallelism is not a triumphant
stride forward based on breakthroughs in novel software and architectures
for parallelism; instead, this plunge into parallelism is actually a retreat from
even greater challenges that thwart efficient silicon implementation of
traditional uniprocessor architectures.”
• David Patterson: “Industry has already thrown the hail-mary pass. . . But
nobody is running yet.”
Community Reaction
• Desktop/Consumer
  - Move from almost no parallelism to parallelism
  - But industry is already betting on parallelism (multicore) for its future
• HPC
  - Modest growth in parallelism is giving way to an exponential growth curve
  - Have parallel programming tools and algorithms, but driven by experts (unlikely to be adopted by the broader software development community)
• The first hardware is here, but we have no consensus on the hardware details or the software model necessary to program it
  - Reaction: widespread panic!
The View from Berkeley: Seven Questions for Parallelism
• Applications:
1. What are the apps?
2. What are kernels of apps?
• Hardware:
3. What are the HW building blocks?
4. How to connect them?
• Programming Model / Systems
Software:
5. How to describe apps and kernels?
6. How to program the HW?
• Evaluation:
7. How to measure success?
(Inspired by a view of the
Golden Gate Bridge from Berkeley)
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf
Applications
• Applications:
1. What are the apps?
CS267 focus
is here
2. What are kernels of apps?
• Hardware:
3. What are the HW building blocks?
4. How to connect them?
• Programming Model / Systems
Software:
5. How to describe apps and kernels?
6. How to program the HW?
• Evaluation:
7. How to measure success?
(Inspired by a view of the
Golden Gate Bridge from Berkeley)
Much Ado about Dwarves (Motifs)
High-end simulation in the physical sciences = 7 numerical methods:
1. Structured Grids (including locally structured grids, e.g. Adaptive Mesh Refinement)
2. Unstructured Grids
3. Fast Fourier Transform
4. Dense Linear Algebra
5. Sparse Linear Algebra
6. Particles
7. Monte Carlo (→ MapReduce)
• Benchmarks enable assessment of hardware performance improvements
• The problem with benchmarks is that they enshrine an implementation
• At this point in time, we need flexibility to innovate both the implementation and the hardware they run on!
• Dwarves provide that necessary abstraction
Slide from "Defining Software Requirements for Scientific Computing", Phillip Colella, 2004
Do dwarfs work well outside HPC?
• Examine the effectiveness of the 7 dwarfs elsewhere:
  1. Embedded computing (EEMBC benchmark)
  2. Desktop/server computing (SPEC2006)
  3. Database / text mining software
     - Advice from Jim Gray of Microsoft and Joe Hellerstein of UC Berkeley
  4. Games/graphics/vision
  5. Machine learning
     - Advice from Mike Jordan and Dan Klein of UC Berkeley
• Result: added 7 more dwarfs, revised 2 original dwarfs, renumbered the list
Destination is Manycore
• We need revolution, not evolution
• Software or architecture alone can't fix the parallel programming problem; we need innovations in both
• "Multicore": 2X cores per generation: 2, 4, 8, ...
• "Manycore": 100s of cores is the highest performance per unit area and per Watt, then 2X per generation: 64, 128, 256, 512, 1024 ...
• Multicore architectures & programming models good for 2 to 32 cores won't evolve to manycore systems of 1000's of processors
→ Desperately need HW/SW models that work for manycore or we will run out of steam (as ILP ran out of steam at 4 instructions)
Units of Measure in HPC
• High Performance Computing (HPC) units are:
- Flop: floating point operation
- Flops/s: floating point operations per second
- Bytes: size of data (a double precision floating point number is 8)
• Typical sizes are millions, billions, trillions…
Mega   Mflop/s = 10^6 flop/sec    Mbyte = 2^20 = 1048576 ~ 10^6 bytes
Giga   Gflop/s = 10^9 flop/sec    Gbyte = 2^30 ~ 10^9 bytes
Tera   Tflop/s = 10^12 flop/sec   Tbyte = 2^40 ~ 10^12 bytes
Peta   Pflop/s = 10^15 flop/sec   Pbyte = 2^50 ~ 10^15 bytes
Exa    Eflop/s = 10^18 flop/sec   Ebyte = 2^60 ~ 10^18 bytes
Zetta  Zflop/s = 10^21 flop/sec   Zbyte = 2^70 ~ 10^21 bytes
Yotta  Yflop/s = 10^24 flop/sec   Ybyte = 2^80 ~ 10^24 bytes
• See www.top500.org for current list of fastest machines
30th List: The TOP10
Rank  Manufacturer  Computer                           Rmax [TF/s]  Installation Site                               Country  Year  #Cores
1     IBM           BlueGene/L - eServer Blue Gene     478.2        DOE/NNSA/LLNL                                   USA      2007  212,992
2     IBM           JUGENE - BlueGene/P Solution       167.3        Forschungszentrum Juelich                       Germany  2007  65,536
3     SGI           SGI Altix ICE 8200                 126.9        New Mexico Computing Applications Center        USA      2007  14,336
4     HP            Cluster Platform 3000 BL460c       117.9        Computational Research Laboratories, TATA SONS  India    2007  14,240
5     HP            Cluster Platform 3000 BL460c       102.8        Swedish Government Agency                       Sweden   2007  13,728
6     Sandia/Cray   Red Storm - Cray XT3               102.2        DOE/NNSA/Sandia                                 USA      2006  26,569
7     Cray          Jaguar - Cray XT3/XT4              101.7        DOE/ORNL                                        USA      2007  23,016
8     IBM           BGW - eServer Blue Gene            91.29        IBM Thomas Watson                               USA      2005  40,960
9     Cray          Franklin - Cray XT4                85.37        NERSC/LBNL                                      USA      2007  19,320
10    IBM           New York Blue - eServer Blue Gene  82.16        Stony Brook/BNL                                 USA      2007  36,864
New 100 Tflops Cray XT-4 at NERSC
Cray XT-4 “Franklin”
19,344 compute cores
102 Tflop/sec peak
39 TB memory
350 TB usable disk space
50 PB storage archive
NERSC is
enabling new
science
Performance Development
[Figure: TOP500 performance development, 1993-2007, log scale from 100 Mflop/s upward: SUM grows from 1.167 TF/s to 4.92 PF/s and N=1 from 59.7 GF/s to 280.6 TF/s, with N=500 starting at 0.4 GF/s; systems marked along the N=1 curve include the Fujitsu 'NWT' NAL, Intel ASCI Red (Sandia), IBM ASCI White (LLNL), NEC Earth Simulator, and IBM BlueGene/L.]
Signpost System in 2005
IBM BG/L @ LLNL
• 700 MHz
• 65,536 nodes
• 180 (360) Tflop/s peak
• 32 TB memory
• 135 Tflop/s LINPACK
• 250 m2 floor space
• 1.8 MW power
Outline
• Why all powerful computers must be parallel processors
  - Including your laptop
• Large important problems require powerful computers
  - Even computer games
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Why we need powerful computers
New Science Question: Hurricane Statistics
What is the effect of different climate scenarios on the number and severity of tropical storms?
[Table: simulated vs. observed tropical storm counts for the Northwest Pacific Basin and the Atlantic Basin, 1979-1982; the labeled values include >25, ~30, 40, ~6, ~12, and ?.]
Work in progress; results to be published
Source: M. Wehner, LBNL
CMB Computing at NERSC
• CMB data analysis presents a significant and growing computational
challenge, requiring
- well-controlled approximate algorithms
- efficient massively parallel implementations
- long-term access to the best HPC resources
• DOE/NERSC has become the leading HPC facility in the world for CMB data
analysis
- O(1,000,000) CPU-hours/year
- O(10) Tb project disk space
- O(10) experiments & O(100) users (rolling)
source J. Borrill, LBNL
Evolution Of CMB Satellite Maps
Algorithms & Flop-Scaling
• Map-making (in decreasing accuracy and increasing speed):
  - Exact maximum likelihood: O(Np^3)
  - PCG maximum likelihood: O(Ni Nt log Nt)
  - Scan-specific, e.g. destriping: O(Nt log Nt)
  - Naive: O(Nt)
• Power spectrum estimation (in decreasing accuracy and increasing speed):
  - Iterative maximum likelihood: O(Ni Nb Np^3)
  - Monte Carlo pseudo-spectral:
    - Time domain: O(Nr Ni Nt log Nt), O(Nr lmax^3)
    - Pixel domain: O(Nr Nt)
    - Simulations: exact simulation > approximate analysis!
CMB is Characteristic for CSE Projects
• Petaflop/s and beyond computing requirements
• Algorithm and software requirements
• Use of new technology, e.g. NGF
• Service to a large international community
• Exciting science
Parallel Browser
(Ras Bodik)
• Web 2.0: Browser plays role of traditional OS
- Resource sharing and allocation, Protection
• Goal: Desktop quality browsing on handhelds
- Enabled by 4G networks, better output devices
• Bottlenecks to parallelize
- Parsing, Rendering, Scripting
• “SkipJax”
- Parallel replacement for JavaScript/AJAX
- Based on Brown’s FlapJax
More Exotic Solutions on the Horizon
• GPUs - Graphics Processing Units (e.g. NVidia)
  - Parallel processor attached to the main processor
  - Originally special purpose, getting more general
• FPGAs - Field Programmable Gate Arrays
  - Inefficient use of chip area
  - More efficient than multicore now, maybe not later
  - Wire routing heuristics still troublesome
• Dataflow and tiled processor architectures
  - Have considerable experience with dataflow from the 1980's
  - Are we ready to return to functional programming languages?
• Cell
  - Software-controlled memory uses bandwidth efficiently
  - Programming model not yet mature
Performance Development
[Figure: TOP500 performance development, 1993-2007, log scale: SUM grows from 1.167 TF/s to 6.97 PF/s, N=1 from 59.7 GF/s to 478.2 TF/s, and N=500 from 0.4 GF/s to 5.9 TF/s. See www.top500.org for the latest data.]
Why all powerful computers are parallel, circa 1991-2006 (2007)
Tunnel Vision by Experts
“I think there is a world market for maybe five computers.”
Thomas Watson, chairman of IBM, 1943.
“There is no reason for any individual to have a computer in
their home”
Ken Olson, president and founder of Digital Equipment
Corporation, 1977.
“640K [of memory] ought to be enough for anybody.”
Bill Gates, chairman of Microsoft,1981.
“On several recent occasions, I have been asked whether
parallel computing will soon be relegated to the trash heap
reserved for promising technologies that never quite make it.”
Ken Kennedy, CRPC Director, 1994
Slide source: Warfield et al.
Microprocessor Transistors per Chip
[Figures: (left) growth in transistors per chip, 1970-2005, from thousands (i4004, i8080, i8086, i80286, i80386, R2000, R3000, Pentium, R10000) toward 100,000,000; (right) increase in clock rate (MHz), 1970-2000, from under 1 MHz toward 1000 MHz.]
Impact of Device Shrinkage
What happens when the feature size (transistor size) shrinks by a factor of x?
• Clock rate goes up by x because wires are shorter
  - actually less than x, because of power consumption
• Transistors per unit area go up by x^2
• Die size also tends to increase
  - typically another factor of ~x
• Raw computing power of the chip goes up by ~x^4!
  - typically x^3 of this is devoted to either on-chip
    - parallelism: hidden parallelism such as ILP
    - locality: caches
• So most programs run x^3 times faster, without changing them
But there are limiting forces
Manufacturing costs and yield problems limit the use of density:
• Moore's 2nd law (Rock's law): costs go up
  [Image: demo of 0.06 micron CMOS. Source: Forbes Magazine]
• Yield
  - What percentage of the chips are usable?
  - E.g., the Cell processor (PS3) is sold with 7 out of 8 "on" to improve yield
Power Density Limits Serial Performance
Parallelism in 2009?
• These arguments are no longer theoretical
• All major processor vendors are producing multicore chips
  - Every machine will soon be a parallel machine
  - To keep doubling performance, parallelism must double
• Which commercial applications can use this parallelism?
  - Do they have to be rewritten from scratch?
• Will all programmers have to be parallel programmers?
  - New software model needed
  - Try to hide complexity from most programmers, eventually
  - In the meantime, we need to understand it
• The computer industry is betting on this big change, but does not have all the answers
  - Berkeley ParLab established to work on this
More Limits: How fast can a serial computer be?
Consider a 1 Tflop/s, 1 Tbyte sequential machine:
• Data must travel some distance, r, to get from memory to the processor.
• To get 1 data element per cycle, this means 10^12 times per second at the speed of light, c = 3x10^8 m/s. Thus r < c/10^12 = 0.3 mm.
• Now put 1 Tbyte of storage in a 0.3 mm x 0.3 mm area:
  - Each bit occupies about 1 square Angstrom, or the size of a small atom.
• No choice but parallelism
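A small C check of the back-of-the-envelope numbers above (not from the slides; it just redoes the arithmetic):

```c
#include <stdio.h>

int main(void)
{
    double c    = 3e8;    /* speed of light, m/s */
    double rate = 1e12;   /* 1 Tflop/s: one data element per cycle */
    double r    = c / rate;                      /* max memory-to-processor distance, m */
    double bits = 8e12;                          /* 1 Tbyte of storage, in bits */
    double area_per_bit = (r * r) / bits;        /* m^2 per bit */
    printf("r = %.2g m (= %.1f mm)\n", r, r * 1e3);
    printf("area per bit = %.2g m^2 (~ %.1f square Angstroms)\n",
           area_per_bit, area_per_bit / 1e-20);
    return 0;
}
```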
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
Idealized Uniprocessor Model
• Processor names bytes, words, etc. in its address space
- These represent integers, floats, pointers, arrays, etc.
• Operations include
- Read and write into very fast memory called registers
- Arithmetic and other logical operations on registers
• Order specified by program
- Read returns the most recently written data
- Compiler and architecture translate high level expressions into “obvious”
lower level instructions
A=B+C
Read address(B) to R1
Read address(C) to R2
R3 = R1 + R2
Write R3 to Address(A)
- Hardware executes instructions in order specified by compiler
• Idealized Cost
- Each operation has roughly the same cost
(read, write, add, multiply, etc.)
Uniprocessors in the Real World
• Real processors have
- registers and caches
- small amounts of fast memory (more of slow memory)
- store values of recently used or nearby data
- different memory ops can have very different costs
- parallelism
- multiple “functional units” that can run in parallel
- different orders, instruction mixes have different costs
- pipelining
- a form of parallelism, like an assembly line in a factory
• Why is this your problem?
- In theory, compilers understand all of this and can
optimize your program; in practice they don’t.
- Even if they could optimize one algorithm, they won’t know
about a different algorithm that might be a much better
“match” to the processor
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Hidden from software (sort of)
- Pipelining
- SIMD units
- Memory hierarchies
- What this means for designing algorithms and software
What is Pipelining?
Dave Patterson's laundry example: 4 people doing laundry
wash (30 min) + dry (40 min) + fold (20 min) = 90 min latency
[Figure: Gantt chart of four loads A-D overlapping the wash/dry/fold stages, 6 PM to 9 PM.]
• In this example:
  - Sequential execution takes 4 * 90 min = 6 hours
  - Pipelined execution takes 30 + 4*40 + 20 min = 3.5 hours
• Bandwidth = loads/hour
  - BW = 4/6 loads/hour without pipelining
  - BW = 4/3.5 loads/hour with pipelining
  - BW <= 1.5 loads/hour with pipelining, given more total loads
• Pipelining helps bandwidth but not latency (90 min)
• Bandwidth is limited by the slowest pipeline stage
• Potential speedup = number of pipe stages
Example: 5 Steps of MIPS Datapath
Figure 3.4, Page 134, CA:AQA 2e by Patterson and Hennessy
[Figure: the 5-stage MIPS datapath: Instruction Fetch, Instruction Decode / Register Fetch, Execute / Address Calculation, Memory Access, and Write Back, with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB.]
• Pipelining is also used within arithmetic units
  - a floating-point multiply may have a latency of 10 cycles, but a throughput of 1 result per cycle
SIMD: Single Instruction, Multiple Data
• Scalar processing
  - traditional mode
  - one operation produces one result
• SIMD processing
  - with SSE / SSE2 (SSE = streaming SIMD extensions)
  - one operation produces multiple results
[Figure: scalar X + Y produces one result; SIMD (x3,x2,x1,x0) + (y3,y2,y1,y0) produces (x3+y3, x2+y2, x1+y1, x0+y0) in one operation.]
Slide source: Alex Klimovitski & Dean Macri, Intel Corporation
SSE / SSE2 SIMD on Intel
• SSE2 data types: anything that fits into 16 bytes, e.g.,
4x floats
2x doubles
16x bytes
• Instructions perform add, multiply etc. on all the data in
this 16-byte register in parallel
• Challenges:
• Need to be contiguous in memory and aligned
• Some instructions to move data around from one part of register to
another
• Similar on GPUs, vector processors (but many more simultaneous
operations)
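A minimal C sketch using the standard SSE intrinsics from <xmmintrin.h> (the aligned-attribute syntax is GCC-style; compile with SSE enabled, e.g. gcc -msse):

```c
#include <xmmintrin.h>   /* SSE intrinsics */
#include <stdio.h>

int main(void)
{
    /* 16-byte aligned operands, 4 floats each */
    __attribute__((aligned(16))) float x[4] = { 1, 2, 3, 4 };
    __attribute__((aligned(16))) float y[4] = { 10, 20, 30, 40 };
    __attribute__((aligned(16))) float z[4];

    __m128 vx = _mm_load_ps(x);            /* load 4 contiguous, aligned floats */
    __m128 vy = _mm_load_ps(y);
    _mm_store_ps(z, _mm_add_ps(vx, vy));   /* one operation adds all 4 lanes */

    printf("%g %g %g %g\n", z[0], z[1], z[2], z[3]);
    return 0;
}
```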
What does this mean to you?
• In addition to SIMD extensions, the processor may have other special instructions
  - Fused Multiply-Add (FMA) instructions:  x = y + c*z  is so common that some processors execute the multiply and add as a single instruction, at the same rate (bandwidth) as + or * alone
• In theory, the compiler understands all of this
  - When compiling, it will rearrange instructions to get a good "schedule" that maximizes pipelining and uses FMAs and SIMD
  - It works with the mix of instructions inside an inner loop or other block of code
• But in practice the compiler may need your help
  - Choose a different compiler, optimization flags, etc.
  - Rearrange your code to make things more obvious
  - Use special functions ("intrinsics") or write in assembly
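For the FMA case, C99's math library exposes the operation directly as fma(); whether it maps to a single hardware instruction depends on the processor and compiler flags. A minimal sketch:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double y = 1.5, c = 2.0, z = 3.0;
    double x = fma(c, z, y);     /* computes c*z + y, rounded once */
    printf("x = %g\n", x);       /* prints 7.5 */
    return 0;
}
```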