BIL106E
Introduction to
Scientific & Engineering Computing
Hüseyin TOROS, Ph.D.
Istanbul Technical University Faculty of Aeronautics and Astronautics Dept.
of Meteorological Engineering
Voice: 285 31 27
E-mail: [email protected]
http://atlas.cc.itu.edu.tr/~toros
A significant part of our interaction will be via e-mail
Useful Pages:
http://www.be.itu.edu.tr/
http://atlas.cc.itu.edu.tr/~toros/bil106e.htm
http://atlas.cc.itu.edu.tr/~F90/mainindex.html
http://www.fortran.com/
http://www.foldoc.org (Free OnLine Dictionary of Computing)
F Compiler: Read this first, Full installer (3.4Mb): win95nt.exe
For more information, see syllabus_F
History of Computers
The first computers were people! That is, electronic
computers (and the earlier mechanical computers) were
given this name because they performed the work that had
previously been assigned to people. "Computer" was
originally a job title: it was used to describe those human
beings (predominantly women) whose job it was to perform
the repetitive calculations required to compute such things
as navigational tables, tide charts, and planetary positions
for astronomical almanacs. Imagine you had a job where
hour after hour, day after day, you were to do nothing but
compute multiplications. Boredom would quickly set in,
leading to carelessness, leading to mistakes. And even on
your best days you wouldn't be producing answers very fast.
Therefore, inventors have been searching for hundreds of
years for a way to mechanize (that is, find a mechanism that
can perform) this task.
The abacus was an early aid for mathematical computations. Its
only value is that it aids the memory of the human performing the
calculation. A skilled abacus operator can work on addition and
subtraction problems at the speed of a person equipped with a
hand calculator (multiplication and division are slower). The
abacus is often wrongly attributed to China. In fact, the oldest
surviving abacus was used in 300 B.C. by the Babylonians.
ELECTRONIC NUMERICAL INTEGRATOR AND COMPUTER (ENIAC)
• First large-scale electronic digital computer
• Designed and constructed at the Moore School of Electrical Engineering of the University of Pennsylvania
• Since the 1920s, the faculty had worked with Aberdeen Proving Ground's Ballistics Research Laboratory (BRL)
INSPIRATION AND PERSPIRATION UNITE
• 1943: Mauchly and Eckert prepare a proposal for the US Army to build an Electronic Numerical Integrator that can calculate a trajectory in 1 second
• May 31, 1943: Construction of ENIAC starts
• 1944: Early thoughts on stored-program computers by members of the ENIAC team
• July 1944: Two accumulators working
ACCUMULATOR
(28 VACUUM TUBES)
ENIAC AT MOORE SCHOOL, UNIVERSITY OF PENNSYLVANIA
EARLY THOUGHTS ABOUT STORED-PROGRAM COMPUTING
• January 1944: Moore School team thinks of better ways to do things; leverages delay-line memories from war research
• September 1944: John von Neumann visits; Goldstine's meeting at the Aberdeen train station
• October 1944: Army extends the ENIAC contract to include research on the EDVAC and the stored-program concept
• Spring 1945: ENIAC working well
• June 1945: First Draft of a Report on the EDVAC: Electronic Discrete Variable Automatic Computer
MANCHESTER MARK I (1948)
• Freddy Williams and Tom Kilburn
• Developed an electrostatic memory
• Prototype operational June 21, 1948: the first machine to execute a stored program
• Memory: 32 words of 32 bits each
• Storage: single Williams tube (CRT)
• Fully operational: October 1949
• Ferranti Mark I delivered in February 1951
EDSAC
• Maurice Wilkes, University Mathematical Laboratory, Cambridge University
• Moore School Lectures
• Electronic Delay Storage Automatic Calculator (EDSAC) operational May 1949
• J. Lyons Company and the LEO (Lyons Electronic Office) operational fall 1951
NATIONAL PHYSICAL LABORATORY
• Alan Turing
• Automatic Computing Engine (ACE)
• Basic design by spring 1946
• Harry Huskey joins the project
• Pilot ACE working May 10, 1950
• English Electric: DEUCE, 1954
• Full version of ACE at NPL, 1959
MAINFRAME COMPUTERS
REMINGTON RAND UNIVAC
• 43 UNIVACs were delivered to government and industry
• Memory: mercury delay lines, 1000 words of 12 alphanumeric characters
• Secondary storage: metal oxide tape
• Access time: 222 microseconds (average)
• Instruction set: 45 operation codes
• Accumulators: 4
• Clock: 2.25 MHz
1951 UNIVAC
Typical 1968 prices, excluding maintenance and support!
IBM 701 (DEFENSE CALCULATOR)
• Addition time: 60 microseconds
• Multiplication: 456 microseconds
• Memory: 2048 (36-bit) words using Williams tubes
• Secondary memory: magnetic drum (8192 words); magnetic tape (plastic)
• Delivered: December 1952 to IBM World Headquarters (total of 19 installed)
SECOND GENERATION (1958-1964)
• 1958: Philco introduces the TRANSAC S-2000, the first transistorized commercial machine
• IBM 7070, 7074 (1960), 7072 (1961)
• 1959: IBM 7090, 7040 (1961), 7094 (1962)
• 1959: IBM 1401, 1410 (1960), 1440 (1962)
• FORTRAN, ALGOL, and COBOL are the first standardized programming languages
THIRD GENERATION (1964-1971)
• April 1964: IBM announces the System/360
• solid logic technology (integrated circuits)
• family of "compatible" computers
• 1964: Control Data delivers the CDC 6600
• nanoseconds
• telecommunications
• BASIC, Beginner's All-purpose Symbolic Instruction Code
FOURTH GENERATION (1971- )
• Large-scale integrated circuits (MSI, LSI)
• Nanoseconds and picoseconds
• Databases (large)
• Structured languages (Pascal)
• Structured techniques
• Business packages
DIGITAL EQUIPMENT CORPORATION (MINI-COMPUTERS)
ASSABET MILLS, MAYNARD, MA
FLIPCHIP
PDP-8, FIRST MASS-PRODUCED MINI
PDP-11 (1970)
MICROCOMPUTERS: INTEL
• Noyce, Moore, and Andrew Grove leave Fairchild and found Intel in 1968, focusing on random access memory (RAM) chips
• Question: if you can put transistors, capacitors, etc. on a chip, why couldn't you put a central processor on a chip?
• Ted Hoff designs the Intel 4004, the first microprocessor, in 1969, based on Digital's PDP-8
MICROCOMPUTERS
• Ed Roberts founds Micro Instrumentation Telemetry Systems (MITS) in 1968
• Popular Electronics puts the MITS Altair on the cover in January 1975 [Intel 8080]
• Les Solomon's 12-year-old daughter, Lauren, was a Star Trek fan. He asked her what the name of the computer on the Enterprise was. She said, "'Computer', but why don't you call it Altair, because that is where they are going tonight!"
ALTAIR 8800 COMPUTER
INTEL PROCESSORS

Microprocessor          | Year | Speed       | Word length | Transistors      | MIPS
Intel 4004              | 1969 | 108 KHz     | 4-bit       | 2,300            | 0.06
Intel 8008              | 1972 | 200 KHz     | 8-bit       | 3,500            | 0.06
Intel 8080              | 1974 | 2 MHz       | 8-bit       | 6,000            | 0.64
Intel 8086              | 1978 | 4.47 MHz    | 16-bit      | 29,000           | 0.66
Intel 8088              | 1981 | 4.47 MHz    | 16-bit      | 29,000           | 0.75
Intel 80286             | 1982 | 12 MHz      | 16-bit      | 134,000          | 2.66
Intel 80386             | 1985 | 16-33 MHz   | 32-bit      | 275,000          | 4
Intel 80486 (i486)      | 1989 | 20-100 MHz  | 32-bit      | 1.2 million      | 70
Intel 80586 (Pentium)   | 1993 | 75-200 MHz  | 32-bit      | 3.3 million      | 126-203
Intel Pentium Pro       | 1995 | 150-200 MHz | 32-bit      | 5.5 million      | 300
Intel Pentium MMX       | 1997 | 166-233 MHz | 32-bit      | 4.5 million      | -
Intel Pentium II        | 1997 | 233-450 MHz | 32-bit      | 7.5 million      | -
Intel Pentium III       | 1999 | 450-933 MHz | 32-bit      | over 9.5 million | 1,200
Intel Itanium Processor | 2000 | 1 GHz       | 64-bit      | 15,000,000       | -
Computer Processing Speed
Computer processing speed depends on a variety of factors. Three of the most important are:
• Word length (the number of bits that can be processed at one time by the microprocessor)
• Cycle speed (how fast individual events are processed, measured in megahertz)
• Data bus width (determines how much data can be transferred between the CPU and memory)
Other factors include:
• RAM (amount of available random access memory)
• Disk access speed (how fast data can be read from the hard disk)
• Code efficiency (how efficiently the computer code has been designed)
Input → Processing → Output → Storage

What is a computer?
A computer is an automatic device that
• performs calculations
• makes decisions
• has the capacity to store, and instantly recall, vast amounts of information
What is a computer system?
Hardware
Software
• Computer system: a collection of related components that are designed to work together.
• A system includes hardware and software.
What is a computer?
Hardware
• Processor
• Memory
• I/O units (Input/Output Units)
How does a computer work?
• Executes very simple instructions.
• Executes them incredibly fast.
• Must be programmed: it is the software, i.e.,
the programs, that characterize what a
computer actually does.
Computer Structure
CPU = Central Processing Unit
[Diagram] Major components of a computing system: Input Devices → CPU (Control Unit + Arithmetic-Logic Unit) ↔ Main Memory ↔ External Memory → Output Devices
Computer Structure
Registers are a set of special high-speed memory
locations within the CPU
Access speed within the register is thousands of times
faster than access speed in RAM
MEMORY MEASUREMENT
The memory unit of a computer is built from two-state devices, so it is natural to use a binary scheme (using only the two binary digits {bits}, 0 and 1, to represent information in a computer).
1 byte = 8 bits
Memory is commonly measured in bytes, and a block of 2^10 = 1024 bytes = 1 KB.
1 MB = 1024 KB = 1024 · 2^10 = 2^10 · 2^10 = 2^20 = 1,048,576 bytes,
or 2^20 · 2^3 = 2^23 = 8,388,608 bits.
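As a quick check of this arithmetic, here is a minimal Fortran sketch (illustration only) that prints the same powers of two:

program memory_units
  ! Illustrative sketch: powers of two used in memory measurement
  implicit none
  integer :: kb, mb_bytes, mb_bits
  kb = 2**10          ! 1 KB = 1024 bytes
  mb_bytes = 2**20    ! 1 MB = 1,048,576 bytes
  mb_bits = 2**23     ! 1 MB = 8,388,608 bits
  print*, "1 KB =", kb, "bytes"
  print*, "1 MB =", mb_bytes, "bytes =", mb_bits, "bits"
end program memory_units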
What is a computer program?
• The computer program characterizes what a computer
actually does.
• A program (independently of the language in which it is written) consists of two fundamental parts:
• A representation of the information (data) relative to
the domain of interest.
• A description of how to manipulate the
representation in such a way as to realize the
desired functionality: operations.
• To write a program, both aspects have to be addressed.
Program
A list of instructions that are grouped together to
accomplish a task or tasks.
The instructions, called machine code or assembly code, consist of things like reading and writing memory, arithmetic operations, and comparisons.
Program
• Every program must be translated into a machine
language that the computer can understand.
• This translation is performed by compilers, interpreters,
and assemblers.
• When you buy software, you normally buy an executable
version of a program.
• This means that the program is already in machine
language -- it has already been compiled and assembled
and is ready to execute.
Program
•While easily understood by computers, machine languages are
almost impossible for humans to use because they consist
entirely of numbers.
•Programmers, therefore, use either a high-level programming
language or an assembly language. An assembly language
contains the same instructions as a machine language, but the
instructions and variables have names instead of being just
numbers.
• Programs written in high-level languages are translated into
assembly language or machine language by a compiler.
Assembly language programs are translated into machine
language by a program called an assembler.
•Every CPU has its own unique machine language. Programs
must be rewritten or recompiled, therefore, to run on different
types of computers.
Compiler
A program that translates source code into object code. The compiler
derives its name from the way it works, looking at the entire piece of
source code and collecting and reorganizing the instructions. Thus, a
compiler differs from an interpreter, which analyzes and executes each
line of source code in succession, without looking at the entire
program. The advantage of interpreters is that they can execute a
program immediately. Compilers require some time before an
executable program emerges. However, programs produced by
compilers run much faster than the same programs executed by an
interpreter.
Compiler
Every high-level programming language (except
strictly interpretive languages) comes with a compiler.
In effect, the compiler is the language, because it
defines which instructions are acceptable. Because
compilers translate source code into object code,
which is unique for each type of computer, many
compilers are available for the same language. For
example, there is a FORTRAN compiler for PCs and
another for Apple Macintosh computers. In addition,
the compiler industry is quite competitive, so there are
actually many compilers for each language on each
type of computer. More than a dozen companies
develop and sell C compilers for the PC.
[Diagram] Steps of execution of a Fortran program: Source program (high-level language) → Compiler → Object program (machine language) → Run. Compilation errors are reported by the compiler; run-time errors appear during execution.
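As an illustration of these steps, here is a minimal Fortran source program together with a hypothetical build-and-run session. The gfortran command is only an assumption; use whichever compiler is installed on your system (for example, the F compiler linked on the course page).

program hello
  ! A minimal Fortran source program
  implicit none
  print*, "Hello from Fortran"
end program hello

! Hypothetical session (command names depend on your installation):
!   gfortran hello.f90 -o hello    compile: source program -> object/executable program
!   ./hello                        run: execute the machine-language program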
Interpreter
•A program that executes instructions written in a high-level language. There are
two ways to run programs written in a high-level language. The most common is
to compile the program; the other method is to pass the program through an
interpreter.
•An interpreter translates high-level instructions into an intermediate form, which
it then executes. In contrast, a compiler translates high-level instructions directly
into machine language. Compiled programs generally run faster than interpreted
programs. The advantage of an interpreter, however, is that it does not need to
go through the compilation stage during which machine instructions are
generated. This process can be time-consuming if the program is long. The
interpreter, on the other hand, can immediately execute high-level programs. For
this reason, interpreters are sometimes used during the development of a
program, when a programmer wants to add small sections at a time and test
them quickly. In addition, interpreters are often used in education because they
allow students to program interactively.
•Both interpreters and compilers are available for most high-level languages.
However, BASIC and LISP are especially designed to be executed by an
interpreter. In addition, page description languages, such as PostScript, use an
interpreter. Every PostScript printer, for example, has a built-in interpreter that
executes PostScript instructions.
Assembly Language
A programming language that is once removed from a
computer's machine language. Machine languages consist
entirely of numbers and are almost impossible for humans
to read and write. Assembly languages have the same
structure and set of commands as machine languages, but
they enable a programmer to use names instead of
numbers. Each type of CPU has its own machine language
and assembly language, so an assembly language program
written for one type of CPU won't run on another. In the
early days of programming, all programs were written in
assembly language. Now, most programs are written in a
high-level language such as FORTRAN or C. Programmers
still use assembly language when speed is essential or
when they need to perform an operation that isn't possible
in a high-level language.
Why do we use a programming language?
The main reason for learning a programming language is to use the computer to solve
• scientific problems
• engineering problems
Introduction to Scientific & Engineering Computing
Taught in three versions:
Fortran (F)
C
Matlab
Programming language
• Basic skills for scientific/engineering problem
solving using computers:
• Data structures and algorithms
• Programming skills in a (standard) language
• Skills for integrating the computing chain:
• Analyze → Program → Run → Visualize
Engineering simulation of natural/artificial systems
• Build a conceptual, quantitative model (most of the time, write down the appropriate equations)
• Formulate a solution to these equations using numerical methods: data structures + algorithms
• Program these data structures and algorithms in a language
• Run the program and analyze its output using visualization techniques
Scientific & Engineering problems
???
solving a problem : Algorithm - Flowcharts
•Flowcharts are often used to graphically represent
algorithms.
•In mathematics, computing, linguistics, and related
subjects, an algorithm is a finite sequence of instructions,
an explicit, step-by-step procedure for solving a problem,
often used for calculation and data processing.
•It is formally a type of effective method: a list of well-defined instructions for completing a task that, when given an initial state, will proceed through a well-defined series of successive states, eventually terminating in an end-state.
•The transition from one state to the next is not necessarily
deterministic; some algorithms, known as probabilistic
algorithms, incorporate randomness.
Write out the problem statement.
Include information on what you are to
solve, and consider why you need to
solve the problem.
A simple flowchart representing a process for dealing with a broken lamp.
A simple flowchart for computing factorial N (N!)
The Poplar Tree and the Pumpkin
A pumpkin seedling sprouted next to a tall poplar tree. As spring progressed, the plant began to climb by wrapping itself around the poplar. With the rain and the sun it grew at a tremendous pace and soon reached almost the same height as the poplar. One day it could not resist asking the poplar:
"How many months did it take you to grow this tall, tree?"
"Ten years," said the poplar.
"Ten years?" laughed the pumpkin, shaking its flowers. "Look, I reached your height in barely two months!"
"True," said the poplar.
Days followed days, and when the first autumn winds began to blow, the pumpkin started to feel the cold, then to drop its leaves, and as the cold deepened it began to sink downward. It asked the poplar anxiously:
"What is happening to me, tree?"
"You are dying," said the poplar.
"Why?"
"Because you tried to reach in two months the place I reached in ten years."
A point reached without work and effort does not count as success. What is easily gained is easily lost. In every endeavour, sweat and effort are essential.
! Compute the factorial F = N! with a simple loop
! (this program corresponds to the factorial flowchart above)
program check
  implicit none
  integer :: M, N, F
  print*, "Please enter a whole number N (N > 1)"
  read*, N
  M = 1
  F = 1
  do
    if (M == N) then
      print*, "M =", M, " N =", N, " F =", F   ! e.g. N = 5 gives F = 120
      exit
    else
      M = M + 1
      F = F * M
      cycle
    end if
  end do
end program check
Programming and Problem Solving
The program-development process consists of at least five steps:
1) Problem analysis and specification
 The first stage in solving the problem is to analyze the problem and formulate a precise specification of it.
2) Data organization and algorithm design
 Determine how to organize and store the data in the problem.
 Develop procedures to process the data and produce the required output. These procedures are called algorithms.
3) Program coding
 Coding is the process of implementing data objects and algorithms in some programming language.
 A simple program begins with the PROGRAM statement and ends with the END PROGRAM statement.
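For example, a minimal program skeleton might look as follows (the program name and the message are arbitrary placeholders):

program example        ! PROGRAM statement: marks the start of the program
  implicit none        ! require all variables to be declared explicitly
  ! declarations and executable statements go here
  print*, "skeleton only"
end program example    ! END PROGRAM statement: marks the end of the program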
Programming and Problem Solving
4) Execution and testing
 This is the step in which we check that the algorithm and program are correct:
 compile (produce an object file) {compile-time errors} + run {run-time errors}.
 IMPORTANT: Logic errors that arise in the design of the algorithm or in the coding of the program are very hard to find (see the sketch after this list).
5) Program maintenance
 In real-world applications, programs need to be modified to improve their performance.
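To illustrate the logic errors mentioned in step 4, here is a small hedged sketch: the program compiles and runs without any error message, yet the first result is wrong because integer division discards the fractional part. Such mistakes are only found by testing against answers you already know.

program logic_error_demo
  ! Compiles and runs cleanly, but the first average is wrong: a logic error
  implicit none
  integer :: a, b
  real :: average
  a = 3
  b = 4
  average = (a + b) / 2      ! integer division: prints 3.0 instead of 3.5
  print*, "average =", average
  average = (a + b) / 2.0    ! corrected: real division gives 3.5
  print*, "average =", average
end program logic_error_demo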
Some things
grow smaller as they are shared;
KNOWLEDGE and LOVE,
however,
grow larger as they are shared.
Algorithms and Flow Diagrams
The word "algorithm" comes from the name of the Persian scholar Abu Abdullah Muhammad ibn Musa al-Khwarizmi. In the 9th century, al-Khwarizmi made an enormous contribution to mathematics by writing down his work on algebra in a book. This book, "Hisab al-jabr wa'l-muqabala", became the world's first book on algebra and, at the same time, the first book on algorithms.
An algorithm is the process of solving a problem step by step, within definite rules and logic, and writing the solution down. Algorithms must be expressed with finite and definite statements; otherwise, if algorithms are written with open-ended and/or imprecise statements, unwanted errors such as infinite loops and deadlocks will appear.
http://www.ozgurlukicin.com/atolye/algoritma-ve-akisdiyagramlari/
Author: Mehmet PEKGENÇ
Why Are Algorithms Necessary?
In fact, algorithms have always been a part of our lives. Most people carry out some of their daily tasks by means of an algorithm every day without being aware of it. For example, in tasks such as brewing tea, cooking a meal, going to work, or driving a car, a certain sequence is followed under normal conditions. In short, for everything we do there will be a set of rules about what must and/or must not be done. For this reason, all programmers try to build their programs without skipping these rules and without errors. To succeed, they have to set aside a few hours or days to construct these processing steps. If a program is written without first constructing an algorithm, it becomes nearly impossible to find an error later among the hundreds and/or thousands of lines of code. But as long as we have an ordered list of processing steps at hand, our program will be correct no matter which programming language we write it in.
As far as I have seen, especially among students of computer programming, most ignore this point. They believe that the more programming languages we know, the better programs we can write. It should not be forgotten that if we cannot produce a mathematical or logical solution to a problem, knowing a programming language means nothing. Besides algorithms, programmers also use flow diagrams.
Why Are Flow Diagrams Necessary?
Flow diagrams are used instead of text so that an algorithm can be understood more easily. Another important benefit of using them is that every algorithm then reflects a language everyone understands. Otherwise, an algorithm will not be intelligible to someone who does not know the language (Turkish, English, etc.) in which it was written. Therefore, all algorithms should be made independent of the language they were written in by using internationally standard symbols.
This is why flow diagrams entered our lives. To make the point clearer, consider this example: if traffic lights displayed the words STOP, WAIT, or GO instead of colours and/or shapes, that would be meaningful to us but would mean nothing to foreigners. Using colour codes such as RED and GREEN instead of words has made these signals understandable all over the world. In the same way, representing your algorithms with flow diagrams will increase the international intelligibility of your project.
Properties of an Algorithm
Effectiveness: One of the properties we must pay attention to when constructing an algorithm is effectiveness. To achieve it, make sure that the algorithm contains no repeated operations. Effectiveness also allows the algorithm to be reused, if necessary, inside other algorithms.
Definiteness: The values used must always be definite, precise expressions.
Finiteness: Every algorithm has an end. No matter how many operations or loops there are, the algorithm must be terminated at an appropriate step.
Input/Output: Every algorithm must have input and output data. The output data must definitely be correct, because the output of one algorithm may be used as the input data of another.
Performance: The performance value is determined by factors such as how many times the algorithm repeats, the operations it performs, and its running time.
If the algorithm you build has performance requirements, these properties must be taken into account; if such problems exist, the algorithm should be re-examined and corrected.
An Example Algorithm
As an example for beginners, let us build an algorithm and flow diagram that compute the arithmetic mean of the integers from 1 to 100. First we need the sum of the numbers from 1 to 100; the mean is then obtained by dividing this sum by 100. I can almost hear you asking what the formula n*(n+1)/2 is for. :) Of course we could use Gauss's method to find the sum, but since what we are trying to show is how to build an algorithm, we take the long way.
In the light of all this, our algorithm will be as follows:
1. Start
2. Sayac = 1 : Toplam = 0 : Aritmetik_Orta = 0
3. If Sayac = 100 then go to 6
4. Toplam = Toplam + Sayac
5. Sayac = Sayac + 1 : go to 3
6. Aritmetik_Orta = Toplam / 100
7. Write the value of Aritmetik_Orta
8. Stop
To make it easier to follow, the corresponding flow diagram is drawn below (not reproduced here).
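Here is a minimal Fortran sketch of this algorithm (illustration only; the Turkish variable names are kept: Sayac = counter, Toplam = sum, Aritmetik_Orta = arithmetic mean). Note that step 3 above leaves the loop as soon as the counter equals 100, before 100 itself has been added; the sketch below exits only after the counter passes 100, so the printed mean is the intended 50.5.

program ortalama
  ! Arithmetic mean of the integers 1..100, following the algorithm above
  implicit none
  integer :: Sayac, Toplam
  real :: Aritmetik_Orta
  Sayac = 1
  Toplam = 0
  do
    if (Sayac > 100) exit   ! adjusted exit test so that 100 is included in the sum
    Toplam = Toplam + Sayac
    Sayac = Sayac + 1
  end do
  Aritmetik_Orta = Toplam / 100.0
  print*, "Arithmetic mean =", Aritmetik_Orta   ! prints 50.5
end program ortalama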
Remember that, besides being independent of programming languages, an algorithm is also independent of operating systems. From the beginning of your programming career to the end of your life, the one thing that will stay with you and never change is, in fact, the algorithm. For this reason, I advise those who are new to programming, or are about to start, to get on good terms with algorithms. I look forward to your views and suggestions on this topic in the Özgürlükİçin forums.
http://www.ozgurlukicin.com/atolye/algoritma-ve-akisdiyagramlari/
Author: Mehmet PEKGENÇ
week 1