Erik Jonsson School of Engineering and Computer Science
A MODEL-CHECKING IN-LINED REFERENCE
MONITORING SYSTEM FOR ACTIONSCRIPT
Meera Sridhar and Kevin W. Hamlen
(The University of Texas at Dallas)
Supported in part by grants from AFOSR
Presentation at Adobe Systems Inc.
San Francisco, CA
November 19, 2009
BEYOND ACCESS CONTROL
1. Security Policy – A security policy defines the desired “secure” behavior of a system. More formally, consider a computer system to be a finite-state automaton with a set of transition functions that change state. Then, a security policy is a statement that partitions the states of the system into a set of authorized, or secure, states and a set of unauthorized, or nonsecure, states.
- Computer Security: Art and Science, by Matt Bishop
TRUSTWORTHY = SMALL
• 2. Trusted Computing Base (TCB) – a small amount
of software and hardware that security depends on
and that we distinguish from a much larger amount
that can misbehave without affecting security.
- Lampson et al., Authentication in Distributed Systems
• An important goal in computer security is to minimize
the TCB.
SEPARATION OF CONCERNS
• 3. code-producer and code-consumer
– we will use this terminology frequently
– code-producer – the party that wrote the code
• note that this could be different from the code-distributor; for example, in AIR, Adobe is considered the code-distributor, but the code-producer can be any person who chooses to create an AIR application and release it to the public.
– code-consumer – the end-user who will run the application
THE NEED FOR MORE SOPHISTICATED SECURITY
ENFORCEMENT TOOLS FOR FLASH AND AIR
• Existing security mechanisms for ActionScript and related
technologies (including the AVM, Flash, Flex and AIR) mainly
fall into two categories:
– Code-signing
– Sandboxing
• These suffice for certain classes of attacks (mainly access control), but:
– code-signing
• places the code-producer in the TCB
• consumer can either choose to trust the code-producer and run the program or
choose not to trust the code-producer – there is no in-between
– sandboxing
• enforces only a small class of coarse-grained access control policies
• policies are built into the AVM and runtime libraries.
THE NEED FOR MORE SOPHISTICATED SECURITY
ENFORCEMENT TOOLS FOR FLASH AND AIR, CONTD.
• Limitations:
• cannot enforce system- and application-specific policies
– e.g. prohibiting write access to files with only certain file extensions
• cannot enforce finer-grained policies
– e.g. those that constrain arguments to individual ActionScript instructions (such as arguments to eval)
– e.g. sandboxed programs either can or cannot use the file system at all – too coarse-grained
• policy is encoded in the underlying OS/VM, so
– code-producer needs to know the policy – policy cannot be
modified/specified by code-consumer after receiving the code
• e.g. the Air Force wants to enforce policies that they cannot even disclose
– changes to policy would need to be reflected in the OS/VM by modifying it –
expensive, inflexible
ENTER IRMS: WHAT ARE THEY?
• In-Lined Reference Monitors (IRMs)
– inject runtime security guards directly into untrusted binaries
– guards test whether an impending operation constitutes a
policy violation
– if so, some corrective action is taken to prevent the violation,
such as premature termination.
– result is self-monitoring code that can be safely executed
without external monitoring
IRMS CONTD.
IRMs
• keep track of history of security-relevant events observed
• enables enforcement of a powerful class of security policies
(not precisely enforceable by any purely static analysis)
• do not require any admin/VM-level privileges to enforce policy
IRMs also
• allow code consumers to specify or modify the security policy
after receiving the code
– the code-producer need not know the policy, as is required with static analyses/code-signing/sandboxing
• changes to the policy don’t require modifications to the underlying OS/VM
A VERY SIMPLE EXAMPLE
Let’s say the desired security policy is the following:
The program should not open more than 3 pop-up windows.
A VERY SIMPLE EXAMPLE, CONTD.
public class RUnsafeURL extends Sprite {
    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);
        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    }//constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true)
        {
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
        }
    }//end method clickListener
}//end class definition
A VERY SIMPLE EXAMPLE, CONTD.
public class RUnsafeURL extends Sprite {
    private var security:Number = 0;        //injected security state variable

    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);
        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    }//constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true)
        {
            if (security >= 3)    //injected guard: at most 3 pop-ups
                return;           //HALT: corrective action prevents the violation
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
            security++;           //injected security state update
        }
    }//end method clickListener
}//end class definition
BINARY-LEVEL IRM ARCHITECTURE
[Architecture diagram: an ABC extractor pulls the ABC bytecode out of the original SWF file; a parser turns the ABC file into an abstract syntax tree (AST); the rewriter instruments the AST according to the security policy; the code generator emits an instrumented ABC file; and the ABC injector recombines it with the original SWF data to produce the safe SWF file.]
For more information on what I’ve presented so far,
please refer to the following paper:
M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010. To appear.
TWO IMPORTANT CONCERNS
1. Does the IRM preserve good code?
– inserted guards have no visible side effect, unless the code is policy-violating
– depends on how strictly one defines “preserve”
• not applicable to self-modifying code, for example
– best determined through testing
– fairly long history of doing this (it may not always be called an IRM)
• the Java compiler, for example, inserts array bounds checks before array accesses
• the user doesn’t notice any difference unless there is an array out-of-bounds violation
2. More important concern – how do we know that the rewriter prevents policy violations?
– testing is not good enough, since an attacker is actively trying to find loopholes in the code
– needs rigorous proof
THE NEED FOR VERIFICATION
• ideally, we would formally verify the rewriter
• we also want to minimize the TCB, but:
– rewriters are powerful, and therefore complex
– we generally have a rewriter tailored to a particular security policy class
• so the combined size of all the rewriters can be large
– rewriters employ many optimization schemes, so any change to an optimization means a change to the TCB
• not desirable, since we want to keep the TCB fairly stable
THE NEED FOR VERIFICATION, CONTD.
• a verifier (separate, smaller program) checks the rewritten
program against the security policy, and
• either
– deems the program to be policy-adherent (accepts), or
– deems it to be policy-violating (rejects), and produces a
counterexample – a sequence of events that led up to the security
policy being violated in a certain program state
• can take the rewriter out of the TCB
• typically, a good verifier is general and less subject to change,
and the verifier code is the same across security policy classes
• therefore, much more desirable to have the verifier rather than
the rewriter in the TCB
VERIFYING IRM ARCHITECTURE
[Architecture diagram: the untrusted binary rewriters R1, R2, ..., Rn (the IRM framework) instrument the AST as before, but the resulting instrumented ABC file is now checked by a verifier consisting of an ABC parser and a model-checker with abstract interpreter, parameterized by the security policy. The verifier either accepts, and the verified ABC file is injected back into the original SWF data to yield a safe SWF file, or rejects and produces a counterexample. Only the verifier lies inside the trusted computing base; the rewriters do not.]
USING MODEL-CHECKING FOR VERIFICATION
• Model-checking is an extremely powerful software
verification paradigm that can verify properties
– more complex than those typically expressible by type-systems
• ActionScript already has some basic type-checking
• while type-checking suffices to prove policies such as ensuring that the argument to a jump is indeed a label, it does not prove policies like “do not open more than 3 pop-up windows”
– more semantically flexible and abstract than those typically
encoded by contracts
• contracts tend to expect strict syntactic adherence
• not flexible enough for various rewriting strategies
WHAT IS MODEL-CHECKING?
• Informally, given a model of a system (in this case, the
ActionScript program), and a safety property, a model-checker
explores all possible states that the system can be in, checking
whether the property is violated in any state.
• If no violations occur, then the model-checker accepts.
• If the property is violated in a particular state, then the model-checker produces the sequence of states that led up to the violating state.
• clearly well-suited for verification
• main challenge with model-checking – state space explosion – since the model-checker explores each possible state, it quickly runs out of memory, especially if there are loops
– tackling this for IRMs has been one of the main focuses of our research
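To make the paradigm concrete, here is a minimal Prolog sketch of explicit-state safety checking (Prolog because our tools are written in it). The predicates next/2 (the transition relation) and bad/1 (the property-violating states) are hypothetical placeholders supplied by the model; this illustrates the idea, not our verifier.

:- use_module(library(lists)).          % member/2

% violation(State, Path, Trace): Trace is a path from State to a
% state satisfying bad/1, avoiding states already on the current path.
violation(S, _Path, [S]) :-
    bad(S).
violation(S, Path, [S|Trace]) :-
    \+ member(S, Path),
    next(S, S1),
    violation(S1, [S|Path], Trace).

% check(Init, Result): explore every state reachable from Init.
check(Init, counterexample(Trace)) :-
    violation(Init, [], Trace), !.
check(_Init, accept).

Even in this toy version the state-space-explosion problem is visible: the search enumerates every acyclic path, which is exactly what the memoization and abstraction techniques described next are meant to tame.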
KEY ELEMENTS OF THE VERIFIER
• security policy is modeled as a security automaton (sketched below)
– policy-writer uses a high-level policy language
• can be anything from check-boxes to Linear Temporal Logic
– the high-level policy is reduced to an automaton
• this idea is drawn from the literature
• the verifier abstractly interprets untrusted programs, reasoning about possible event histories
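As a concrete illustration (a minimal sketch under assumed names, not our actual policy language or encoding), the earlier “no more than 3 pop-up windows” policy can be written as a small security automaton in Prolog; the event name open_popup is invented for the example.

% States 0..3 count pop-ups opened so far; a fourth open_popup event
% has no outgoing transition, which models a policy violation.
initial_state(0).

% transition(State, Event, NextState)
transition(S, open_popup, S1) :- S < 3, S1 is S + 1.
transition(S, Event, S)       :- Event \== open_popup.   % irrelevant events are ignored

% A finite event trace is policy-adherent iff the automaton can
% process every event without getting stuck.
adheres(_, []).
adheres(Q, [E|Es]) :- transition(Q, E, Q1), adheres(Q1, Es).

For example, adheres(0, [open_popup, open_popup, open_popup]) succeeds, while any trace containing a fourth open_popup fails.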
KEY ELEMENTS OF THE VERIFIER
• uses memoization and co-induction to reason about
fixed points
– suppose a security-relevant property holds at a program point, and no loop iteration changes whether it is true or false
– the infinite number of states can be replaced with a single,
representative state
– needed for termination of the model-checker
For more information on this aspect specifically, please
see:
B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar.
ActionScript Bytecode Verification with Co-logic Programming. In
Proc. of the ACM SIGPLAN Workshop on Prog. Languages and
Analysis for Security (PLAS), 2009.
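A rough sketch of how tabling keeps this analysis finite, reusing initial_state/1 and transition/3 from the earlier automaton sketch. The predicates entry_point/1 and edge/3 are hypothetical placeholders for a control-flow graph extracted from the bytecode, and the :- table directive is the tabling syntax of Yap and SWI Prolog; this is an illustration, not our implementation.

:- table reachable/2.

% reachable(Pc, Q): the pair (program point Pc, policy state Q) may
% occur at run time.  Tabling memoizes every pair already derived, so
% loops in the control-flow graph add no new work once their pairs
% have been recorded, and the fixed point is reached in finitely many steps.
reachable(Pc, Q) :-
    entry_point(Pc),
    initial_state(Q).
reachable(Pc1, Q1) :-
    reachable(Pc, Q),
    edge(Pc, Pc1, Event),
    transition(Q, Event, Q1).

% The program is rejected if some reachable program point performs a
% security-relevant event that the current policy state cannot accept.
rejected :-
    reachable(Pc, Q),
    edge(Pc, _, Event),
    \+ transition(Q, Event, _).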
KEY ELEMENTS OF THE VERIFIER, CONTD.
• another way to combat state space explosion is to
use abstract interpretation
– when certain properties of variable values, rather than their concrete values, suffice for reasoning about policy-adherence, concrete variable values can be replaced with abstract values from an abstract domain
– can greatly reduce the state space, since several concrete states can be mapped onto a single abstract state
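A tiny sketch of such a domain (the names and the cut-off are invented for illustration; our verifier's domains differ): concrete values of the pop-up counter are abstracted to exact(N) while N is small and to top otherwise.

% alpha(Concrete, Abstract): abstraction of an integer counter.
alpha(N, exact(N)) :- integer(N), N =< 3.
alpha(N, top)      :- integer(N), N > 3.

% Abstract increment: stays precise while the counter is small, then
% collapses to top ("could be any value above 3").
abs_inc(exact(N), exact(N1)) :- N < 3, N1 is N + 1.
abs_inc(exact(3), top).
abs_inc(top, top).

% Abstract test "counter > 3": definitely false for exact values
% (which are at most 3 by construction), unknown for top.
abs_gt3(exact(_), false).
abs_gt3(top, maybe).

The domain deliberately forgets everything about the counter except what the policy needs, which is what keeps the abstract state space small.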
KEY ELEMENTS OF THE VERIFIER, CONTD.
Abstract Interpretation, contd.
– domain is constructed based on knowledge of
properties crucial for interpretation
– of course, one has to take care that this abstraction preserves soundness, but also,
– the precision of this abstraction determines the power of our analysis
• it is possible to create an abstraction that is correct but abstracts away too much information – this results in a high conservative rejection rate – that is, no policy-violating programs are accepted, but many policy-adherent programs are conservatively rejected
KEY ELEMENTS OF THE VERIFIER, CONTD.
• model-checking rewritten code requires abstract
interpretation of
– not only program variable values, but also
– the history of security-relevant events
• uses a sophisticated abstract interpretation scheme
– by creating an abstract interpretation lattice built from the security policy automaton (sketched below)
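A minimal sketch of one natural lattice of this kind (an illustrative simplification, not the lattice defined in our paper): an abstract security state is the set of policy-automaton states that might currently be active, ordered by subset inclusion, with set union as the join at control-flow merge points. transition/3 is the automaton relation from the earlier sketch.

:- use_module(library(lists)).          % member/2, append/3

% abs_step(Qs, Event, Qs1): apply Event to every automaton state that
% might be active; Qs1 is the set of states that might come next.
abs_step(Qs, Event, Qs1) :-
    findall(Q1, (member(Q, Qs), transition(Q, Event, Q1)), L),
    sort(L, Qs1).

% join(Qs1, Qs2, Qs): least upper bound where two control-flow paths merge.
join(Qs1, Qs2, Qs) :-
    append(Qs1, Qs2, L),
    sort(L, Qs).

% Conservative violation check: the event is flagged if some state
% that might be active has no transition on it.
may_violate(Qs, Event) :-
    member(Q, Qs),
    \+ transition(Q, Event, _).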
KEY ELEMENTS OF THE VERIFIER, CONTD.
• the model-checking step is encoded in the
operational semantics of the abstract machine, where
policy-violations are modeled as stuck states
• thus, if the abstract machine reaches a stuck state, the program is rejected as policy-violating
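A schematic sketch of that encoding (instr/2, event_of/2, next_pc/3 and final/1 are placeholders, and abs_step/3 and may_violate/2 come from the previous sketch; the real operational semantics is richer): a configuration pairs a program point with an abstract security state, and a security-relevant instruction only has an outgoing step when every possibly-active policy state accepts its event.

% step(conf(Pc, Qs), conf(Pc1, Qs1)): one step of the abstract machine.
step(conf(Pc, Qs), conf(Pc1, Qs1)) :-
    instr(Pc, Instr),
    event_of(Instr, Event),
    \+ may_violate(Qs, Event),        % the policy admits this event
    abs_step(Qs, Event, Qs1),
    next_pc(Pc, Instr, Pc1).

% A non-final configuration with no outgoing step is stuck, i.e. the
% program is rejected as (potentially) policy-violating.
stuck(Conf) :- \+ final(Conf), \+ step(Conf, _).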
KEY ELEMENTS OF THE VERIFIER, CONTD.
• We have proved that our method is sound – that is,
adopting the proposed abstract interpretation scheme
does not result in any policy-violating program being
accepted by the model-checker.
• We have shown convergence – that our model-checker will always terminate in polynomial time.
• Please refer to the following paper for a
detailed treatment:
M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010. To appear.
IMPLEMENTATION AND RESULTS
• The rewriting system is a prototype, not yet a production-level system. The model-checking system is deliberately simple – it verifies exactly these properties, because we do not want a very complex model-checker in the TCB.
• ABC Extractor/Injector in C
• The rest is in Prolog (which simplifies the implementation, but it doesn’t have to be in Prolog)
– Definite Clause Grammars greatly simplify parsing (see the sketch below)
– the reversible nature of predicates facilitates code generation as reverse-parsing
– Prolog already has memoization (tabling) and coinduction
• Lines of code:
– binary rewriters ~ 400 lines of Prolog code per security policy family
– shared parser/generator ~ 900 lines
– verifier ~ 2000 lines of code
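A toy DCG sketch showing the flavor of this (the opcodes and operand format below are invented and are not the real ABC encoding): the same grammar rules parse a list of bytes into instruction terms and, run in the opposite direction, generate bytes from terms, which is how code generation becomes reverse-parsing.

% byte(B): one byte of the (toy) instruction stream.
byte(B) --> [B], { integer(B), B >= 0, B =< 255 }.

% A toy two-instruction encoding: an opcode byte followed by a
% single one-byte operand.
instr(push_const(V)) --> [16], byte(V).
instr(call(Argc))    --> [65], byte(Argc).

instrs([])     --> [].
instrs([I|Is]) --> instr(I), instrs(Is).

% Parsing:    ?- phrase(instrs(Is), [16, 7, 65, 1]).
% Generation: ?- phrase(instrs([push_const(7), call(1)]), Bytes).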
SAMPLE POLICIES
redir policy:
• prohibits malicious URL-redirections by ABC ad applets
• redirections are implemented at the bytecode level by navigateToURL system calls
• The policy requires that method check_url(s) be called to validate destination s before any redirection to s may occur. Method check_url has a trusted implementation provided by the ad distributor and/or web host, and may incorporate dynamic information such as ad hit counts or webpage context.
• Our IRM enforces this policy by injecting calls to check_url into untrusted applets. For better runtime efficiency, it positions some of these calls early in the program's execution (to pre-validate certain URLs) and injects runtime security state variables that avoid potentially expensive duplicate calls by tracking the history of past calls.
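For concreteness, a minimal sketch of redir as a parameterized security automaton in Prolog (the event names and the list-based state are illustrative; the real policy and injected IRM state are more involved): the automaton state is the set of URLs validated so far, and a redirection only has a transition if its destination was previously checked.

:- use_module(library(lists)).          % member/2

redir_initial([]).

% redir_step(Checked, Event, Checked1)
redir_step(Checked, check_url(Url), [Url|Checked]).
redir_step(Checked, navigate_to_url(Url), Checked) :-
    member(Url, Checked).               % unvalidated redirections get stuck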
SAMPLE POLICIES, CONTD.
postok policy:
• sanitizes strings entered into message box widgets
• This can be helpful in preventing cross-site scripting attacks, privacy violations, and buffer-overflow exploits that affect older versions of the ActionScript VM.
• We enforced the policy on the Posty AIR application, which allows users to post messages to social networking sites such as Twitter, Jaiku, Tumblr, and FriendFeed.
SAMPLE POLICIES, CONTD.
flimit policy:
• enforces a resource bound that disallows the creation of more than n files on the user's machine
• enforced this policy on the FedEx Desktop AIR application, which continuously monitors a user's shipment status and sends tracking information directly to his or her desktop
• the IRM implements the policy by injecting a counter into the untrusted code that tracks file creations
EXPERIMENTAL RESULTS
PROGRAM TESTED   POLICY ENFORCED   SIZE BEFORE   SIZE AFTER   REWRITING TIME   VERIFICATION TIME
countdownBadge   redir             1.80 KB       1.95 KB      1.429s           0.532s
navToURL         redir             0.93 KB       1.03 KB      0.863s           0.233s
fiona            redir             58.9 KB       59.3 KB      15.876s          0.891s
calder           redir             58.2 KB       58.6 KB      16.328s          0.880s
posty            postok            112.0 KB      113.0 KB     54.170s          2.443s
fedex            flimit            77.3 KB       78.0 KB      39.648s          1.729s
All tests were performed on an Intel Pentium Core 2 Duo machine running Yap Prolog v5.1.4.
Our papers are available at:
www.utdallas.edu/~meera.sridhar
BIBLIOGRAPHY
1. B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar. ActionScript Bytecode Verification with Co-logic Programming. In Proc. of the ACM SIGPLAN Workshop on Prog. Languages and Analysis for Security (PLAS), 2009.
2. K. W. Hamlen, G. Morrisett, and F. B. Schneider. Computability Classes for Enforcement Mechanisms. ACM Trans. Prog. Languages and Systems, 2006.
3. F. B. Schneider. Enforceable Security Policies. ACM Trans. Information and System Security, 3:30–50, 2000.
4. M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010. To appear.
5. M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010. To appear.
ACKNOWLEDGEMENTS
• We thank the Air Force Office of Scientific Research (AFOSR) for funding part of this research.
• We thank Dr. Gopal Gupta, Dr. Feliks Kluzniak, and Dr. R.
Chandrasekaran from The University of Texas at Dallas for
many helpful discussions and feedback.
• Finally, many thanks to Peleus Uhley here for giving us this
opportunity to share our work with you.