Friday, April 25, 2008

Patch obfuscation etc.

So it seems the APEG paper is getting a lot of attention these days, and some of the conclusions that are (IMO falsely) drawn from it are:
  • patch-to-exploit time is approaching zero
  • patches should be obfuscated
Before I go into details, a short summary of the paper:
  1. BinDiff-style algorithms are used to find the changes between the patched and unpatched versions.
  2. The vulnerable locations are identified.
  3. Constraint formulas are generated from the code via three different methods:
    1. Static: A graph of all basic blocks on code paths between the vulnerability and the data input into the application is generated, and a constraint formula is generated from this graph.
    2. Dynamic: An execution trace is taken; if the vulnerability occurs on a program path that one can already execute, constraints are generated from this path.
    3. Dynamic/Static: Instead of going from data input to target vulnerability (as in the static approach), one can use an existing path that comes "close" to the vulnerability as starting point from which to proceed with the static approach.
  4. The (very powerful) solver STP is used for solving these constraint systems, generating inputs that exercise a particular code path that triggers the vulnerability (a minimal sketch of this step follows the list).
  5. A number of vulnerabilities are discussed which were successfully triggered using the methods described in the paper.
  6. The conclusion is drawn that within minutes of receiving a patch, attackers can use automatically generated exploits to compromise systems.
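To make the "generate a constraint formula, hand it to a solver, get a trigger input back" step concrete, here is a minimal sketch. It is not from the paper: the paper extracts bit-vector formulas from x86 code and solves them with STP, whereas the sketch below hand-writes a made-up path condition and uses the Z3 SMT solver's Python bindings as a stand-in.

    # Minimal sketch of "path constraint in, candidate input out".
    # Assumed (made-up) scenario: the patch added a check that rejects
    # inputs with length > 512, so the unpatched code is vulnerable on any
    # input that passes the earlier magic-value check and has length > 512.
    from z3 import BitVec, Solver, sat

    magic  = BitVec('magic', 16)    # first two bytes of the input
    length = BitVec('length', 16)   # length field taken from the input

    s = Solver()
    s.add(magic == 0x4D5A)          # constraint from the path leading to the bug
    s.add(length > 512)             # the condition the patch now rejects

    if s.check() == sat:
        m = s.model()
        print("trigger input: magic=0x%04x, length=%d"
              % (m[magic].as_long(), m[length].as_long()))

The real tools operate on formulas lifted from the binary rather than hand-written ones, but the division of labour is the same: the analysis produces the formula, the solver produces the candidate input.
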
In essence, the paper implements automated input crafting. The desire to do this has been described before -- Sherri Sparks' talk on "Sidewinder" (using genetic algorithms to generate inputs that exercise a particular path) comes to mind, as do many discussions about generating a SAT problem from a particular program path and feeding it into a SAT solver (or any other solver, for that matter).

What the APEG paper describes is impressive -- using STP is definitely a step forward, as STP appears to be a much better solver than pretty much everything else that is publicly available.

It is equally important to keep the limitations of this approach in mind -- people are reacting in a panicked manner without necessarily understanding what this can and cannot do.
  1. Possible NP-hardness of the problem. Solving for a particular path is essentially a SAT instance, and SAT is NP-complete, so these formulas can be intractably hard to solve. They don't have to be, but the paper indicates that there are many formulas STP cannot solve in reasonable time. While this doesn't imply that these formulas are in fact hard, it shows how much the approach depends on the quality of the solver and the complexity of the formulas that are generated.
  2. The method described in the paper does not generate exploits. It triggers vulnerabilities. Anyone who has worked on even a moderately complex issue in the past knows that there is often a long and painful path between triggering an overflow and making use of it. The paper implies that the results of APEG are immediately available to compromise systems. This is, plainly, not correct. If APEG is successful, the results can be used to cause a crash of a process, and I refuse to call this a "compromise". Shooting a foreign politician is not equal to having your intelligence agency compromise him.
  3. Semantic issues. All vulnerabilities for which this method worked were extremely simple. The actually interesting IGMP overflow that Alex Wheeler had discovered, for example, would not be easily dealt with by these methods, because program state has to be modified in a non-trivial way for that exploit. In essence, a patch can tell you that "this value YY must not exceed XX", but if YY is not direct user data and is instead calculated indirectly from other program events, it is not (yet) possible to automatically set YY (a small illustration follows this list).
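To illustrate that last point, here is a hypothetical (entirely made-up) parser in which the value the patch constrains is accumulated through program state rather than read directly from the input. Knowing from the patch that "total must not exceed 4096" does not immediately tell an attacker which input bytes to set:

    # Hypothetical example, not from the paper: "total" (the YY above) is
    # only reachable through state built up across several records, so a
    # constraint on "total" alone does not directly translate into input bytes.
    from collections import namedtuple

    Record = namedtuple("Record", "kind name value")

    def parse(records):
        table = {}                               # state from earlier records
        total = 0
        for rec in records:
            if rec.kind == "DEFINE":
                table[rec.name] = rec.value * 8  # indirect transformation
            elif rec.kind == "USE":
                total += table.get(rec.name, 0)  # YY is derived, not read
        if total > 4096:                         # the value the patch bounds
            print("vulnerable path reached, total =", total)

    # Reaching the bug needs coordinated DEFINE/USE records, i.e. non-trivial
    # manipulation of program state, not just one oversized field:
    parse([Record("DEFINE", "a", 300), Record("USE", "a", 0),
           Record("USE", "a", 0)])
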
In short, one could say that APEG will succeed in triggering a vulnerability if the following conditions are met:
  1. The program path between the vulnerability and code that one already knows how to execute is comparatively simple
  2. The generated equation systems are not too complex for the solver
  3. The bug is "linear" in the sense that no complicated manipulation of program state is required to trigger the vulnerability
This is still very impressive stuff, but it sounds a lot less dramatic than "one can generate an exploit automatically from an arbitrary patch". All in all, great work, and I do not cease to be amazed by the results that STP has brought to code analysis in general. It confirms that better solvers ==> better code analysis.

What the paper gets wrong IMO are the conclusions about what should be done in the patching process. It argues that because "exploits can be generated automatically, the patching process needs fixing". This is a flawed argument, as ... uhm ... useful exploits can't (yet) be generated automatically. Triggering a vulnerability is not the same as exploiting it, especially under modern operating systems (due to ASLR/DEP/PaX/grsecurity).

The paper proposes a number of ways of fixing the problems with the current patching process:

1. Patch obfuscation. The proposal that comes back, zombie-like, every few years: let's obfuscate security patches, and all will be good. The problems with this are manifold, and quite scary:
    1. Obfuscated executables make debugging for MS ... uhm ... horrible, unless they can undo it themselves
    2. Obfuscated patches remove an essential liberty for the user: The liberty to have a look at a patch and make sure that the patch isn't in fact a malicious backdoor.
    3. We don't have good obfuscation methods that do not carry a horrible performance impact.
    4. Obfuscation methods have the property that they need to be modified whenever attackers break them automatically. The trouble is: Nobody would know if the attackers have broken them. It is thus safe to assume that after a while, the obfuscation would be broken, but nobody would be aware of it.
    5. Summary: Obfuscation would probably a) impact the user by making his code slower, b) impact the user by preventing him from verifying that a patch is not malicious, and c) create support nightmares for MS, because they would have to debug obfuscated code. At the same time, it would not provide long-term security. (A toy sketch of the sort of transformation involved follows this list.)
2. Patch encryption: Distributing encrypted patches, and then finally distributing the encryption key so all systems update at once. This proposal seems to assume that bandwidth is the limiting factor in patch installation, which, as far as I can tell, it is not. This proposal does less damage than obfuscation though -- instead of creating certain disaster with questionable benefit, this proposal just "does nothing" with questionable benefit.

3. Faster patch distribution. A laudable goal, nothing wrong with this.
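
As an aside on point 1: the following toy example (my own, not anything proposed or evaluated in the paper) shows the kind of transformation patch obfuscation implies, and why it is neither free nor durable. An opaque predicate and a decoy branch add work on every execution of the patched check, yet once an attacker recognises that the predicate is always false, the whole disguise can be stripped mechanically.

    # Toy illustration of an "obfuscated patch" via an opaque predicate.
    def check_plain(length):
        return length <= 512               # the real, patched bounds check

    def check_obfuscated(length, salt):
        # (salt * salt) % 4 is never 2, but proving that (and discarding
        # the dead branch) is the work pushed onto whoever diffs the patch.
        if (salt * salt) % 4 == 2:
            return length <= 1024          # decoy check, never executed
        return length <= 512               # extra arithmetic and a branch
                                           # on every call

    # Same behaviour, more runtime work -- and once the always-false
    # predicate is spotted, the obfuscation can be removed automatically.
    assert check_plain(700) == check_obfuscated(700, 1234567)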

Anyhow, long post, short summary: The APEG paper is really good, but it uses confusing terminology (exploit ~= vulnerability trigger), which leads to its impact on patch distribution being significantly overstated. It's good work, but the sky isn't falling, and we are far away from automatically generating reliable exploits from arbitrary patches. APEG does generate usable vulnerability triggers for vulnerabilities of a certain form. And STP-style solvers are important.

2 comments:

Mark said...

I agree that the terminology in the paper is a bit misleading (they say "exploit" but mostly mean "vulnerability trigger").

However, in section 4.1 they do mention that they can (manually) specify specific memory addresses to be overwritten, and the solver will generate input that meets this condition. So this could be used to generate a partial exploit that overwrites some program-control structure with a user-controlled value. Not a fully working exploit, but a bit further along than just a crash :)

One thing they also down-play a bit is that the "safety policy" needs to be tailored to the vulnerability, and they do not have policies for all classes of vulns (e.g. the IGMP CPU DoS or the ASP.NET issue).

So, while their paper definitely is a step forward towards automatic exploit generation, there is still a lot of know-how and manual intervention required.

Raj said...

The vulnerability described in MS08-067 was discovered a couple of weeks before the security bulletin was released, as part of research into possible malware exploitation of Windows XP. Once Microsoft felt that the vulnerability in the Server service was "wormable", they released Security Bulletin MS08-067 for it on October 23, 2008. Even within this short time span we have already seen three pieces of malware exploiting this critical vulnerability.

Trojan.Gimmiv.A was discovered on October 24, 2008
http://www.symantec.com/en/ph/enterprise/security_response/writeup.jsp?docid=2008-102320-3122-99

W32.Wecorl was discovered on November 2, 2008
http://www.symantec.com/security_response/writeup.jsp?docid=2008-110306-2212-99

W32.Downadup was discovered on November 24, 2008
http://www.symantec.com/security_response/writeup.jsp?docid=2008-112203-2408-99

Moreover, look at the dates and note how many pieces of malware came out in the wild as soon as the vulnerability was made public...

In short... is it at all possible that "patch-based exploit generation" has started? It may seem very unlikely, but what if it really did start?