CS 5950/6030 Network Security
Class 18 (W, 10/12/05)
Leszek Lilien
Department of Computer Science
Western Michigan University
Based on Security in Computing, Third Edition, by Pfleeger and Pfleeger.
Using some slides courtesy of:
Prof. Aaron Striegel — at U. of Notre Dame
Prof. Barbara Endicott-Popovsky and Prof. Deborah Frincke — at U. Washington
Prof. Jussipekka Leiwo — at Vrije Universiteit (Free U.), Amsterdam, The Netherlands
Slides not created by the above authors are © by Leszek T. Lilien, 2005
Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
3. Program Security
3.1. Secure Programs – Defining & Testing
3.2. Nonmalicious Program Errors
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code (incl. Viruses)
3.3.2. Targeted Malicious Code
(Class 16)
3.4. Controls for Security
a. Introduction
b. Developmental controls for security — PART 1
(Class 17; -- Projects --)
3.4. Controls for Security
How to control the security of pgms during their development and maintenance
Outline:
a. Introduction
b. Developmental controls for security
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
a. Introduction
"Better to prevent than to cure"
 Preventing security flaws
 We have seen many possible security flaws
 How can we prevent (some of) them?
 Software engineering concentrates on developing and maintaining quality s/w
 We'll take a look at some techniques useful specifically for developing/maintaining secure s/w
 Three types of controls for security (against pgm flaws):
1) Developmental controls
2) OS controls
3) Administrative controls
b. Developmental Controls for Security (1)
 Nature of s/w development
 Collaborative effort
 Team of developers, each involved in ≥ 1 of these steps:
 Requirement specification
 Regular req. specs: "do X"
 Security req. specs: "do X and nothing more"
 Design
 Implementation
 Testing
 Documenting
 Reviewing at each of the above stages
 Managing system development thru all above stages
 Maintaining deployed system (updates, patches, new versions, etc.)
 Both product and process contribute to quality, incl. the security dimension of quality
Developmental Controls for Security (4)
 Techniques for building solid software:
1) Peer reviews
2) Hazard analysis
3) Testing
4) Good design
5) Risk prediction & management
6) Static analysis
7) Configuration management
8) Additional developmental controls
... all discussed below ...
[cf. B. Endicott-Popovsky]
Developmental Controls for Security (15)
7) Configuration management
= process of controlling system modifications during development and maintenance
 Offers security benefits by scrutinizing new/changed code
 Problems with system modifications:
 One change interfering with another change (e.g., neutralizing it)
 Proliferation of different versions and releases:
 Older and newer
 For different platforms
 For different application environments
Class 16 Ended Here
3. Program Security
3.1. Secure Programs – Defining & Testing
3.2. Nonmalicious Program Errors
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code (incl. Viruses)
3.3.2. Targeted Malicious Code
(Class 16)
3.4. Controls for Security
a. Introduction
b. Developmental controls for security — PART 1
-- Projects --
(Class 17)
Class 18:
b. Developmental controls for security — PART 2
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
4. Protection in General-Purpose OSs
4.1. Protected Objects, Methods, and Levels of Protection
a. History of protection in OSs
b. Protected objects in OSs
c. Security methods in OSs
d. Levels of protection in OSs
e. Three dimensions of protection in OSs
Developmental Controls for Security (16)
 Reasons for software modification
 Corrective changes
 To maintain control of the system's day-to-day functions
 Adaptive changes
 To maintain control over system modifications
 Perfective changes
 To perfect existing acceptable system functions
 Preventive changes
 To prevent the system's performance from degrading to unacceptable levels
Developmental Controls for Security (17)
 Activities involved in the configuration management process (performed by reps from developers, customers, users, etc.)
1) Baseline identification
 A certain release/version (R/v) selected & frozen as the baseline
 Other R's/v's described as changes to the baseline
2) Configuration control and configuration management
 Coordinate separate but related versions (v's) via:
 Separate files - a separate file for each release (R) or version (v)
 Deltas - main v defined by "full files"; other v's defined by the main v plus deltas (= difference files)
 Conditional compilation - a single source code file F for all v's; uses begin_version_Vx / end_version_Vx brackets or begin_not_version_Vx / end_not_version_Vx brackets; the compiler produces each v from F
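The conditional-compilation approach can be sketched as a tiny preprocessor. This is an illustrative sketch only (the `extract_version` helper is hypothetical); the marker names follow the slide's begin_version / end_version convention:

```python
def extract_version(master_lines, version):
    """Produce the source for one version from a single master file F.

    Lines inside begin_version_Vx / end_version_Vx are kept only for
    version Vx; lines inside begin_not_version_Vx / end_not_version_Vx
    are kept for every version EXCEPT Vx.
    """
    out, skipping = [], False
    for line in master_lines:
        marker = line.strip()
        if marker.startswith("begin_version_"):
            skipping = marker != "begin_version_" + version
        elif marker.startswith("begin_not_version_"):
            skipping = marker == "begin_not_version_" + version
        elif marker.startswith(("end_version_", "end_not_version_")):
            skipping = False
        elif not skipping:
            out.append(line)
    return out
```

A build script would run this once per version and feed each output to the compiler.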
Developmental Controls for Security (18)
3) Configuration auditing
 System must be audited regularly — to verify:
 Baseline completeness and accuracy
 Recording of changes
 Accuracy of software documentation for systems in
the field
 Performed by independent parties
4) Status accounting
 Records info about system components
 Where they come from (purchased, reused, written
from scratch)
 Version
 Change history
 Pending change requests
Developmental Controls for Security (19)
 All 4 activities performed by the Configuration Control Board (CCB)
 Includes reps from developers, customers, users
 Reviews proposed changes, approves/rejects them
 Security benefits of configuration mgmt
 Limits unintentional flaws
 Limits malicious modifications
by protecting the integrity of pgms and documentation, thanks to:
 careful reviewing/auditing and change mgmt
 preventing changes (e.g., trapdoors) to the system w/o acceptance by CCB
Developmental Controls for Security (20)
8) Additional developmental controls
8a) Learning from mistakes
 Avoiding such mistakes in the future enhances security
8b) Proofs of program correctness
 Formal methods to verify pgm correctness
 Logic analyzer shows that:
initial assertions about inputs...
... through implications of pgm statements...
... lead to the terminal condition (desired output)
 Problems with practical use of pgm correctness proofs
 Esp. for large pgms/systems
 Most successful for specific types of apps
 E.g. for communication protocols & security policies
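A runtime flavor of this idea can be written directly into code with Hoare-style assertions; a logic analyzer would verify these implications statically, while Python's assert merely checks them during execution. The `int_sqrt` function below is a hypothetical example, not from the textbook:

```python
def int_sqrt(n):
    """Return the largest r with r*r <= n, annotated Hoare-style."""
    assert isinstance(n, int) and n >= 0      # initial assertion about the input
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n                     # loop invariant, implied by each step
    assert r * r <= n < (r + 1) * (r + 1)     # terminal condition (desired output)
    return r
```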
Even with all these developmental controls (1-8) –
still no security guarantees! [cf. B. Endicott-Popovsky]
c. Operating System Controls for Security (1)
 Developmental controls are not always used
 Even when used, they are not foolproof
=> Need other, complementary controls, incl. OS controls
 Such OS controls can protect against some pgm flaws
Operating System Controls for Security (2)
 Trusted software
– code rigorously developed and analyzed so we can trust that it does all and only what the specs say
 Trusted code establishes the foundation upon which untrusted code runs
 Trusted code establishes the security baseline for the whole system
 In particular, the OS can be trusted s/w
Operating System Controls for Security (3)

Key characteristics determining if OS code is trusted
1) Functional correctness
 OS code consistent with specs
2) Enforcement of integrity
 OS keeps integrity of its data and other resources even
if presented with flawed or unauthorized commands
3) Limited privileges
 OS minimizes access to secure data/resources
 Trusted pgms must have a "need to access" and proper access rights to use resources protected by the OS
 Untrusted pgms can’t access resources protected by OS
4) Appropriate confidence level
 OS code examined and rated at appropriate trust level
Operating System Controls for Security (4)
 Similar criteria are used to establish whether s/w other than the OS can be trusted
 Ways of increasing security when untrusted pgms are present:
1) Mutual suspicion
2) Confinement
3) Access log
1) Mutual suspicion between programs
 Distrust other pgms – treat them as if they were incorrect or malicious
 A pgm protects its interface data
 With data checks, etc.
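A minimal sketch of such data checks, assuming a hypothetical record-passing interface between two mutually suspicious pgms:

```python
def receive_record(raw):
    """Accept a record from another (untrusted) program.

    Mutual suspicion: treat the caller as possibly incorrect or
    malicious, so validate every field before using it.
    """
    if not isinstance(raw, dict):
        raise ValueError("record must be a dict")
    name, age = raw.get("name"), raw.get("age")
    if not isinstance(name, str) or not 0 < len(name) <= 64:
        raise ValueError("bad name field")
    if not isinstance(age, int) or not 0 <= age <= 150:
        raise ValueError("bad age field")
    # Return a defensive copy so the caller cannot mutate our data later
    return {"name": name, "age": age}
```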
Operating System Controls for Security (5)
2) Confinement
 The OS can confine a suspected pgm's access to resources
 Example 1: strict compartmentalization
 A pgm can affect data and other pgms only within its compartment
 Example 2: sandbox for untrusted pgms
 Can limit the spread of viruses
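A toy model of strict compartmentalization, with a hypothetical `ConfinedOS` class standing in for the OS that mediates every access; a real OS enforces this with hardware memory protection, not a dictionary:

```python
class ConfinedOS:
    """Sketch of an OS that confines each pgm to its own compartment."""

    def __init__(self):
        self._owner = {}   # object name -> compartment that owns it
        self._data = {}    # object name -> stored value

    def create(self, compartment, name, value):
        """Create an object inside the given compartment."""
        self._owner[name] = compartment
        self._data[name] = value

    def read(self, compartment, name):
        """Allow access only from the owning compartment."""
        if self._owner.get(name) != compartment:
            raise PermissionError(
                f"compartment {compartment!r} may not access {name!r}")
        return self._data[name]
```

A pgm in compartment "A" can read objects it created, while a pgm in compartment "B" attempting the same read gets a PermissionError.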
Operating System Controls for Security (6)
3) Audit log / access log
 Records who/when/how (e.g., for how long) accessed/used which objects
 Events logged: logins/logouts, file accesses, pgm executions, device uses, failures, repeated unsuccessful commands (e.g., many repeated failed login attempts can indicate an attack)
 Audit frequently for unusual events and suspicious patterns
 A forensic measure, not a protective measure
 Forensics – investigation to find who broke the law, policies, or rules
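The logging and review steps above can be sketched as follows; the append-only list, the event names, and the failed-login threshold are assumptions of this sketch, not prescribed by the slide:

```python
import time

AUDIT_LOG = []                # append-only: forensic evidence, never edited
FAILED_LOGIN_THRESHOLD = 3    # assumed cutoff for "many repeated failures"

def audit(who, event, obj):
    """Record who did what, when, and to which object."""
    AUDIT_LOG.append({"when": time.time(), "who": who,
                      "event": event, "object": obj})

def suspicious_login_failures(who):
    """Review step: flag users whose repeated failed logins may indicate an attack."""
    failures = sum(1 for entry in AUDIT_LOG
                   if entry["who"] == who and entry["event"] == "login_failed")
    return failures >= FAILED_LOGIN_THRESHOLD
```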
...Much more on OS controls soon...
d. Administrative Controls for Security (1)
 They prohibit or demand certain human behavior via policies, procedures, etc.
 They include:
1) Standards of program development
2) Security audits
3) Separation of duties
Administrative Controls for Security (2)
1) Standards and guidelines for program development
 Capture experience and wisdom from previous projects
 Facilitate building higher-quality s/w (incl. more secure)
 They include:
 Design S&G – design tools, languages, methodologies
 S&G for documentation, language, and coding style
 Programming S&G - incl. reviews, audits
 Testing S&G
 Configuration mgmt S&G
2) Security audits
 Check compliance with S&G
 Scare a potentially dishonest programmer away from including illegitimate code (e.g., a trapdoor)
Administrative Controls for Security (3)
3) Separation of duties
 Break sensitive tasks into ≥ 2 pieces to be performed by different people (a practice learned from banks)
 Example 1: modularity
 Different developers for cooperating modules
 Example 2: independent testers
 Rather than developer testing her own code
...More (much) later...
e. Conclusions (for Controls for Security)
 Developmental / OS / administrative controls help produce/maintain higher-quality (also more secure) s/w
 Art and science - no "silver bullet" solutions
 "A good developer who truly understands security will incorporate security into all phases of development." [textbook, p. 172]
 Summary: [cf. B. Endicott-Popovsky]
Control           Purpose                         Benefit
Developmental     Limit mistakes;                 Produce better software
                  make malicious code difficult
Operating System  Limit access to system          Promotes safe sharing of info
Administrative    Limit actions of people         Improves usability, reusability,
                                                  and maintainability
4. Protection in General-Purpose OSs
 This section:
User’s side of protection in general-purpose OS:

Functions that directly address security

Functions that have security as a byproduct
[cf. B. Endicott-Popovsky and D. Frincke]
 Next section:
How OS design is affected by protection requirements
 Outline:
4.1. Protected Objects, Methods, and Levels of Protection
4.2. Memory and Address Protection
4.3. Control of Access to General Objects
4.4. File Protection Mechanisms
4.5. User Authentication
4.6. Summary
4.1. Protected Objects, Methods, and Levels of Protection
 Outline
a. History of protection in OSs
b. Protected objects in OSs
c. Security methods in OSs
d. Levels of protection in OSs
e. Three dimensions of protection in OSs
f. Granularity of data protection
a. History of protection in OSs (1)
 Predecessors of OSs:
1) No system s/w
 User entered pgms in binary
 Via switches or via keyboard
 A single user had full control of the computer
 Scheduled time for exclusive computer use
 Prepared before use
 Loaded assembler, compiler, shared subroutines, etc.
 Cleaned up after use
2) Executive
 Assisted a single user with preparation and cleanup
 Entirely passive:
 Waited for the user's request
 Provided service on demand
History of protection in OSs (2)
3) Monitor
 Assisted multiple users in multiprogramming systems
 Actively controlled system resources
 Provided service if consistent with system policies, denied it otherwise
 Protected one user from interference (malicious or accidental) by another
 Impact of multiprogramming on security:
 Before multiprogramming - no need to protect one user from another
 With multiprogramming - such protection becomes necessary
b. Protected objects in OSs
 Multiprogramming — sharing the computer among multiple users
 Multiprogramming necessitates protecting OS objects:
 Memory
 I/O devices
 Sharable I/O devices (e.g., disks)
 Serially reusable I/O devices (e.g., printers)
 Sharable programs and subroutines
 Networks
 Sharable data
 Since the OS controls system resources, the OS must provide such protection
 Many protection mechanisms are supported by hardware
c. Security methods in OSs (1)

Basis of security in OS: separation
= keeping one user’s objects secure from interference by other users

Kinds of separation:
1) Physical separation
 Different processes use different physical objects

E.g., different printers for different ‘confidentiality levels’ of output
2) Temporal separation
 Processes having different security req’s executed at
different times
3) Logical separation
 Illusion that OS executes processes only for single user
4) Cryptographic separation
 Processes conceal their data and computations from
other processes
5) Combinations of the above
Security methods in OSs (2)
 Strength of security via separation (least to most secure):
 Logical separation
 Temporal separation
 Physical separation
 Complexity of implementation of separation (least to most complex):
 Physical separation
 Temporal separation
 Logical separation
 Cryptographic separation
 Resource utilization in different kinds of separation:
 Poor: physical separation / temporal separation
 Good: logical separation / cryptographic separation
d. Levels of protection in OSs (1)
 Absolute separation reduces efficiency – some resources must be shared for efficiency
 Full sharing-separation spectrum = levels of protection by OS:
1) No protection
 Caveat emptor ("Let the buyer beware" in Latin)
 User can still protect self by, e.g., temporal separation
2) Isolation
 Concurrently running processes hidden from each other => unaware of each other
 Own address space, files, and other objects for each process
3) Full sharing or no sharing
 Object/resource owner declares it as:
- public (can be shared by all), or
- private (not shared)
...cont...
Levels of protection in OSs (2)
...cont...
4) Sharing via access limitation
 Access to each object by each user determined by access rights
5) Sharing by capabilities
 Extension of "sharing via access limitation" — dynamic access rights
 Can be changed by the owner, the subject, the computation context, or the object itself
6) Limited object use
 Limits not only object access but also object use
 E.g., can view a doc but can't copy it
 E.g., can view a statistical summary of data but can't view individual data records (e.g., can see the average salary but not John Smith's salary)
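The salary example can be sketched as an object that answers aggregate queries only (the `SalaryTable` class is hypothetical; in a real system this limit must be enforced by the OS or a trusted query mediator, since Python's underscore privacy is only a convention):

```python
class SalaryTable:
    """Limited object use: aggregate view allowed, record view denied."""

    def __init__(self, salaries):
        self._salaries = dict(salaries)   # name -> salary, kept internal

    def average_salary(self):
        """Permitted use: a statistical summary of the data."""
        return sum(self._salaries.values()) / len(self._salaries)

    def salary_of(self, name):
        """Denied use: viewing an individual record."""
        raise PermissionError("individual salary records may not be viewed")
```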
Levels of protection in OSs (3)
 OS can provide different levels of protection for different objects/resources
 Complexity of implementation and fineness of protection (both increasing from 1 to 6):
1) No protection
2) Isolation
3) Full sharing or no sharing
4) Sharing via access limitation
5) Sharing by capabilities
6) Limited object use
e. Three dimensions of protection in OSs
 Dimensions:
1—protected objects
2—security methods
3—protection levels
[cf. B. Endicott-Popovsky and D. Frincke]
End of Class 18