Chapter 1
Mathematical Modeling and
Engineering Problem Solving
• Requires understanding of engineering systems
– By observation and experiment
– By theoretical analysis and generalization
• Computers are great tools; however, without a fundamental understanding of engineering problems, they are useless.
by Lale Yurttas, Texas A&M University
Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
• A mathematical model is represented as a functional relationship of the form

Dependent variable = f(independent variables, parameters, forcing functions)
• Dependent variable: a characteristic that usually reflects the state of the system
• Independent variables: dimensions, such as time and space, along which the system's behavior is being determined
• Parameters: reflect the system's properties or composition
• Forcing functions: external influences acting upon the system
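As a concrete illustration of these roles (an assumed example, not from the slides), consider the classic falling-object model dv/dt = g − (c/m)v used in many numerical methods texts. Its analytical solution maps directly onto the terms defined above; the function name and parameter values below are illustrative assumptions.

```python
import math

def velocity(t, m=68.1, c=12.5, g=9.81):
    """Analytical solution of dv/dt = g - (c/m)*v with v(0) = 0.

    v(t) : dependent variable (the state of the system)
    t    : independent variable (time)
    m, c : parameters (mass and drag coefficient; illustrative values)
    g    : forcing function (gravity acting on the system)
    """
    return g * m / c * (1.0 - math.exp(-(c / m) * t))

print(velocity(10.0))  # velocity after 10 s, roughly 44.9 m/s
```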
Conservation Laws and Engineering
• Conservation laws are the most important and
fundamental laws that are used in engineering.
Change = increases – decreases (1.13)
• Change implies changes with time (transient).
If the change is nonexistent (steady-state), Eq.
1.13 becomes
Increases = Decreases
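A minimal sketch of the steady-state form (an assumed example, not from the slides): a flow balance at a pipe junction, where the flow values are illustrative assumptions.

```python
# Steady-state conservation at a pipe junction: increases = decreases,
# so the flow entering through two pipes must equal the flow leaving
# through the third. The numbers below are illustrative assumptions.
flow_in_1 = 80.0   # m^3/s entering through pipe 1
flow_in_2 = 25.0   # m^3/s entering through pipe 2

flow_out = flow_in_1 + flow_in_2   # conservation: in = out at steady state
print(flow_out)                    # 105.0 m^3/s must leave through pipe 3
```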
Chapter 2
Programming and Software
• The objective is to learn how to use the computer as a tool to obtain numerical solutions to a given engineering model. There are two ways of using computers:
– Use available software, and/or
– Write computer programs to extend the capabilities of available software.
• Engineers should not be tool-limited; it is important that they be able to do both!
Structured Programming
• Structured programming is a set of rules that prescribe good style habits for programmers:
– An organized, well-structured code
– Easily sharable
– Easy to debug and test
– Requires a shorter time to develop, test, and update
• The key idea is that any numerical algorithm can be composed using three fundamental control structures (illustrated in the sketch below):
– Sequence, selection, and repetition
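A minimal Python sketch (not from the slides; the data values are illustrative assumptions) showing all three structures at once: statements in sequence, a loop for repetition, and an if-test for selection.

```python
# Sequence: statements execute one after the other.
values = [3.2, -1.5, 4.7, 0.0, -2.1]
total = 0.0
count = 0

# Repetition: loop over the data.
for v in values:
    # Selection: branch on a condition.
    if v >= 0:
        total += v
        count += 1

average_of_nonnegatives = total / count if count else 0.0
print(average_of_nonnegatives)  # 2.633... (mean of 3.2, 4.7, 0.0)
```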
Modular Programming
• The computer programs can be divided into
subprograms, or modules, that can be developed
and tested separately.
• Modules should be as independent and self-contained as possible (see the sketch after the list below).
• Advantages to modular design are:
– It is easier to understand the underlying logic of
smaller modules
– They are easier to debug and test
– Facilitate program maintenance and modification
– Allow you to maintain your own library of modules
for later use
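A minimal sketch of a module in this spirit (the function name and test values are illustrative assumptions): a small, self-contained routine that can be developed and tested on its own before any larger program uses it.

```python
def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

# The module can be tested independently, before it is used anywhere else.
if __name__ == "__main__":
    assert mean([2.0, 4.0, 6.0]) == 4.0
    print("mean() passed its test")
```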
EXCEL
• Excel is a spreadsheet that allows the user to enter and perform calculations on rows and columns of data.
• When any value on the sheet is changed, the entire calculation is updated; therefore, spreadsheets are ideal for "what if?" analyses.
• Excel has some built-in numerical capabilities, including equation solving, curve fitting, and optimization.
• It also includes VBA, a macro language that can be used to implement numerical calculations.
• It has several visualization tools, such as graphs and three-dimensional plots.
MATLAB
• MATLAB is a flagship software package that was originally developed as a matrix laboratory. A variety of numerical functions, symbolic computations, and visualization tools have since been added to the matrix manipulations.
• MATLAB is closely related to programming.
• Other programming languages: C, C++, Fortran 90, …
Chapter 3
Approximations and Round-Off Errors
• For many engineering problems, we cannot obtain analytical
solutions.
• Numerical methods yield approximate results, results that are close
to the exact analytical solution. We cannot exactly compute the
errors associated with numerical methods.
– Given data are only rarely exact, since they originate from measurements. Therefore there is probably error in the input information.
– The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs, etc.
– The output information will then contain error from both of
these sources.
• How confident are we in our approximate result?
• The question is: "How much error is present in our calculation, and is it tolerable?"
• Accuracy: how close a computed or measured value is to the true value.
• Precision (or reproducibility): how close a computed or measured value is to previously computed or measured values.
• Inaccuracy (or bias): a systematic deviation from the actual value.
• Imprecision (or uncertainty): the magnitude of scatter.
Fig. 3.2
Significant Figures
• Number of significant figures indicates precision. Significant
digits of a number are those that can be used with
confidence, e.g., the number of certain digits plus one
estimated digit.
• 53,800: how many significant figures?
5.38 × 10^4 → 3
5.380 × 10^4 → 4
5.3800 × 10^4 → 5
• Zeros used only to locate the decimal point are not significant figures:
0.00001753 → 4
0.0001753 → 4
0.001753 → 4
Error Definitions
True value = approximation + error

True error: Et = true value − approximation (can be positive or negative)

True fractional relative error = true error / true value

True percent relative error: εt = (true error / true value) × 100%
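These definitions translate directly into code. A minimal sketch (the quantity being approximated, π ≈ 22/7, is an assumed example, not from the slides):

```python
import math

true_value = math.pi      # a quantity we happen to know exactly here
approximation = 22 / 7    # a classic rough approximation (assumed example)

true_error = true_value - approximation                   # Et
percent_relative_error = true_error / true_value * 100    # epsilon_t

print(true_error)              # about -0.00126
print(percent_relative_error)  # about -0.0402 %
```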
• For numerical methods, the true value will be known only when we deal with functions that can be solved analytically (simple systems). In real-world applications we usually do not know the answer a priori. Then
εa = (approximate error / approximation) × 100%

• Iterative approach (e.g., Newton's method):

εa = [(current approximation − previous approximation) / current approximation] × 100%   (may be positive or negative)
• Use absolute values.
• Computations are repeated until the stopping criterion is satisfied:

|εa| < εs

where εs is a pre-specified percent tolerance based on the knowledge of your solution.
• If the following criterion is met,

εs = (0.5 × 10^(2−n)) %

you can be sure that the result is correct to at least n significant figures.
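A minimal sketch of this stopping test in code (the series being summed, the Maclaurin series for e^0.5, is an assumed example, not from the slides):

```python
import math

x = 0.5
n_sig = 3
es = 0.5 * 10 ** (2 - n_sig)   # stopping tolerance in percent (0.05 %)

approx = 1.0                   # first term of e^x = 1 + x + x^2/2! + ...
term = 1.0
k = 0
ea = 100.0
while ea > es:
    k += 1
    term *= x / k              # next series term, x^k / k!
    previous = approx
    approx += term
    ea = abs((approx - previous) / approx) * 100   # approximate percent error

print(approx, math.exp(x), ea)  # 1.6486979..., 1.6487212..., ea below 0.05 %
```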
Round-off Errors
• Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
• Because computers use a base-2 representation, they cannot precisely represent certain exact base-10 numbers.
• Fractional quantities are typically represented in computers using "floating point" form, e.g.,

156.78 → 0.15678 × 10^3 in a floating-point, base-10 system
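A quick Python illustration of the base-2 point above (an assumed example): 0.1 and 0.2 have no exact binary representation, so even a simple sum picks up round-off error.

```python
x = 0.1 + 0.2
print(x)              # 0.30000000000000004: 0.1 and 0.2 are not exact in base 2
print(x == 0.3)       # False
print(f"{0.1:.20f}")  # shows the binary value actually stored for 0.1
```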
Floating-point representation allows both fractions and very large numbers to be expressed on the computer. However,
– floating-point numbers take up more room,
– they take longer to process than integer numbers, and
– round-off errors are introduced because only a finite number of significant figures are kept.
Some machines use chopping rather than rounding, because rounding adds to the computational overhead. If the number of significant figures carried is large enough, the resulting chopping error is negligible.
Chapter 4
Truncation Errors and the Taylor Series
• Functions that cannot be evaluated with a finite number of arithmetic operations, such as trigonometric, exponential, and other functions, are expressed in an approximate fashion using Taylor series when their values, derivatives, and integrals are computed.
• Any smooth function can be approximated by a polynomial. The Taylor series provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.
Figure 4.1
Example: to compute cos(x) for small x:

cos x ≈ 1 − x^2/2! + x^4/4! − x^6/6! + …

If x = 0.5:
cos(0.5) ≈ 1 − 0.125 + 0.0026042 − 0.0000217 + … = 0.877582
From the supporting theory, for this series the error is no greater than the first omitted term:

x^8/8! ≈ 0.0000001 for x = 0.5
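A minimal sketch of this computation (assumed variable names), summing the series term by term and comparing against math.cos to watch the error fall below the first omitted term:

```python
import math

x = 0.5
true = math.cos(x)

approx = 0.0
sign = 1.0
for k in range(0, 8, 2):                 # terms x^0/0!, x^2/2!, x^4/4!, x^6/6!
    approx += sign * x**k / math.factorial(k)
    sign = -sign
    print(k, approx, abs(true - approx))

print("first omitted term:", x**8 / math.factorial(8))  # ~1e-7, bounds the error
```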
• Any smooth function can be approximated as a
polynomial.
f(xi+1) ≈ f(xi)    zero-order approximation; only true if xi+1 and xi are very close to each other

f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi)    first-order approximation, in the form of a straight line
nth-order approximation:

f(xi+1) = f(xi) + f′(xi)(xi+1 − xi) + [f″(xi)/2!](xi+1 − xi)^2 + … + [f^(n)(xi)/n!](xi+1 − xi)^n + Rn

where h = (xi+1 − xi) is the step size (defined first), and the remainder term is

Rn = [f^(n+1)(ξ)/(n + 1)!] h^(n+1)
• The remainder term, Rn, accounts for all terms from (n + 1) to infinity.
• ξ is not known exactly; it lies somewhere between xi and xi+1.
• To evaluate Rn we would need f^(n+1)(ξ), which requires knowing f(x) and its derivatives.
• If we knew f(x), there would be no need to perform the Taylor series expansion in the first place.
• However, Rn = O(h^(n+1)): the truncation error is of order h^(n+1).
– O(h): halving the step size will halve the error.
– O(h^2): halving the step size will quarter the error.
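A small numerical sketch of this behavior (the test function f(x) = e^x is an assumed example): the error of the first-order approximation is O(h^2), so halving h roughly quarters it.

```python
import math

def f(x):
    return math.exp(x)   # assumed test function; f'(x) = e^x as well

xi = 1.0
for h in (0.4, 0.2, 0.1, 0.05):
    first_order = f(xi) + f(xi) * h        # f(xi) + f'(xi) * h
    error = abs(f(xi + h) - first_order)   # truncation error, ~O(h^2)
    print(h, error)
# Each time h is halved, the error drops by roughly a factor of four.
```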
• Truncation error is decreased by addition of terms to
the Taylor series.
• If h is sufficiently small, only a few terms may be
required to obtain an approximation close enough to
the actual value for practical purposes.
Example:
Calculate the following series correct to 3 significant figures:

1 − 1/2 + 1/3 − 1/4 + …
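A minimal sketch of this exercise (assuming the series is the alternating series above, which converges to ln 2): keep adding terms until the approximate percent error drops below the tolerance for 3 significant figures.

```python
import math

es = 0.5 * 10 ** (2 - 3)    # tolerance for 3 significant figures: 0.05 %
total = 0.0
k = 0
ea = 100.0
while ea > es:
    k += 1
    previous = total
    total += (-1) ** (k + 1) / k            # +1, -1/2, +1/3, -1/4, ...
    ea = abs((total - previous) / total) * 100

print(k, total, math.log(2))   # many terms are needed; the sum approaches ln 2 ~ 0.693
```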
Error Propagation
• fl(x) refers to the floating point (or computer)
representation of the real number x. Because a
computer can hold a finite number of
significant figures for a given number, there
may be an error (round-off error) associated
with the floating point representation. The
error is determined by the precision of the
computer (ε).
• Case 1: Addition of x1 and x2 with associated errors εt1 and εt2 yields the following result:

fl(x1) = x1(1 + εt1)
fl(x2) = x2(1 + εt2)
fl(x1) + fl(x2) = εt1·x1 + εt2·x2 + x1 + x2

εt = [fl(x1) + fl(x2) − (x1 + x2)] / (x1 + x2) × 100% = [(εt1·x1 + εt2·x2) / (x1 + x2)] × 100%
• A large error could result from an addition if x1 and x2 are of almost equal magnitude but opposite sign; therefore, one should avoid subtracting nearly equal numbers.
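A quick Python illustration of this warning (an assumed example): subtracting two nearly equal square roots wipes out most of the significant digits, while an algebraically equivalent form does not.

```python
import math

x = 1.0e12
naive = math.sqrt(x + 1) - math.sqrt(x)           # subtracts two nearly equal numbers
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # equivalent form with no subtraction

print(naive)   # roughly 5e-07, but cancellation has destroyed most of its digits
print(stable)  # roughly 5e-07 with essentially full double-precision accuracy
```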
Example (Case 1): ε = 1%

R1 = 1010 Ω: R1(1 + ε) = 1020.1 Ω and R1(1 − ε) = 999.9 Ω, so 999.9 Ω ≤ fl(R1) ≤ 1020.1 Ω
R2 = 990 Ω: R2(1 + ε) = 999.9 Ω and R2(1 − ε) = 980.1 Ω, so 980.1 Ω ≤ fl(R2) ≤ 999.9 Ω

For the difference R1 − R2 (true value 20 Ω):
largest: 1020.1 Ω − 980.1 Ω = 40 Ω
smallest: 999.9 Ω − 999.9 Ω = 0 Ω
so 0 Ω ≤ fl(R1) − fl(R2) ≤ 40 Ω, and the true percent relative error can be as large as εt = 100%.
• Case 2: Multiplication of x1 and x2 with associated errors εt1 and εt2 results in:

fl(x1)·fl(x2) = x1(1 + εt1)·x2(1 + εt2)
fl(x1)·fl(x2) = x1·x2(1 + εt1 + εt2 + εt1·εt2)

εt = [fl(x1)·fl(x2) − x1x2] / (x1x2) × 100% = (εt1 + εt2 + εt1·εt2) × 100%
Example (Case 2): ε = 1%

R1 = 1010 Ω: 999.9 Ω ≤ fl(R1) ≤ 1020.1 Ω
R2 = 990 Ω: 980.1 Ω ≤ fl(R2) ≤ 999.9 Ω

For the product R1·R2 (nominal value (1010 Ω)(990 Ω) ≈ 1.0 × 10^6 Ω^2):
largest: (1020.1 Ω)(999.9 Ω) ≈ 1.02 × 10^6 Ω^2
smallest: (999.9 Ω)(980.1 Ω) ≈ 0.98 × 10^6 Ω^2

Across the full range, εt = (1.02 − 0.98)/1.01 × 100% ≈ 3.96%, i.e., the computed product can deviate from the true value by roughly ±2%, consistent with εt ≈ εt1 + εt2 = 2%.
• Since εt1 and εt2 are both small, the term εt1·εt2 should be small relative to εt1 + εt2. Thus the magnitude of the error associated with one multiplication or division step should be about εt1 + εt2, with εt1, εt2 ≤ ε (upper bound).
• Although the error of one calculation may not be significant, if 100 calculations are done the error is then approximately 100ε. The magnitude of the error associated with a calculation is directly proportional to the number of multiplication steps.
• Refer to Table 4.3
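A small sketch of this accumulation (an assumed example): carrying out the same chain of multiplications in single precision (numpy.float32) and in double precision shows the single-precision error growing with the number of steps.

```python
import numpy as np

factor64 = 1.0001
factor32 = np.float32(1.0001)

x64 = 1.0              # double-precision reference (Python float)
x32 = np.float32(1.0)  # single precision

for _ in range(10_000):    # 10,000 multiplication steps
    x64 *= factor64
    x32 *= factor32        # each step adds a little single-precision round-off

print(x64, float(x32))
print(abs(float(x32) - x64) / x64 * 100, "% accumulated relative error")
```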
• Overflow: any number larger than the largest number that can be represented on a computer will result in an overflow.
• Underflow (hole): any positive number smaller than the smallest positive number that can be represented on a computer will result in an underflow.
• Stable algorithm: in extended calculations, it is likely that many round-offs will be made. Each of these creates an input error for the remainder of the computation, impacting the final output. Algorithms for which the cumulative effect of all such errors is limited, so that a useful result is generated, are called "stable". When the accumulation is devastating and the solution is overwhelmed by the error, such algorithms are called unstable.
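A quick Python illustration of the overflow and underflow limits of IEEE double precision (an assumed example):

```python
import sys

print(sys.float_info.max)       # ~1.7977e+308, the largest representable double
print(sys.float_info.max * 10)  # inf  -> overflow

print(sys.float_info.min)       # ~2.2251e-308, smallest normalized positive double
print(5e-324)                   # smallest positive (subnormal) double
print(5e-324 / 2)               # 0.0  -> underflow: the "hole" around zero
```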
Figure 4.8