
Buggy Security Guidance from Apple

In February 2014 Apple published their Secure Coding Guide. I glanced through it and noticed that their sample code for detecting integer overflow was buggy – it triggered undefined behavior, could be optimized away, and was thus unsafe to use.

I tweeted this, then Jon Kalb blogged it, and then Apple quietly fixed their guide.

But their code is still broken, still triggers undefined behavior, and fails completely on 64-bit builds.

Update, June 17, 2014: No change. Apple’s security guidance is still entirely broken for 64-bit builds.

Update, August 6, 2014: No change. Apple’s security guidance is still entirely broken for 64-bit builds. The document was updated 7/22/2014 but the revision history gives no indication of what changed, and page 28 is still completely broken for 64-bit builds. And yes, this does matter.

Apple’s Secure Coding Guide is available here. And I think the guide is mostly correct. But they sure are having trouble with detecting integer overflow. To be fair, it is tricky, but still. They should be able to do better.

The problem is on page 28, in their “Avoiding Integer Overflow and Underflows” section. This page contains two code snippets. The first is an example of a common mistake, and the second is Apple’s recommended guidance on how to avoid integer overflow when multiplying.

The original problem

Their recommended code originally looked like this:

size_t bytes = n * m;
if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
    … /* allocate "bytes" space */
}

The types of ‘n’ and ‘m’ are never mentioned, which is unforgivably ambiguous for a security document. If you carefully read the surrounding text, in particular its discussion of undefined behavior on overflow and its references to CWE-733 and CERT VU#162289, it is clear that ‘n’ and ‘m’ are both signed integers. If they were unsigned then the previous discussion about the C language specification allowing compilers to optimize out the tests would make no sense.

Unsigned overflow is defined to “obey the laws of arithmetic modulo 2^n”
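For contrast, here is a quick illustration of that guarantee (my example, not Apple’s, assuming the usual 32-bit unsigned int):

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int u = UINT_MAX;
    u = u + 1;          /* well defined: wraps around to 0, arithmetic modulo 2^32 */
    printf("%u\n", u);  /* prints 0 */
    return 0;
}

No such guarantee exists for signed types, which is where the trouble starts.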

The problem with their initial recommended code is that signed overflow gives undefined behavior in C/C++. It doesn’t matter if the results of the signed overflow are never used – the entire execution of the program is undefined as soon as signed integer overflow happens. Since Apple did the check for overflow after doing the multiply they have closed the undefined door after the metaphorical horse has left. They checked too late. If the multiplication of ‘n’ and ‘m’ overflows then the compiler is permitted to launch nethack, format your drive, or sing sea shanties in Spanish.

(Singing sea shanties in Spanish requires the optional voice module, or an embedded web browser for multimedia functionality)

More practically speaking (since most compilers are not actively evil), an optimizing compiler could detect that the checks are unnecessary, and omit them, thus creating a perfect example of security theater.

(Examples of deleted security checks due to undefined behavior can be found here)

Some optimizing compilers exploit undefined behavior by using it to prune the state tree. Since signed integer overflow is undefined the compiler is allowed to assume that it cannot happen. Therefore, at the time of the ‘if’ statement the compiler is allowed to assume that n*m does not overflow, and the compiler may therefore determine that the overflow checks will always pass, and can therefore be removed. We loves the optimizations.
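Here is that reasoning applied to Apple’s original snippet, a sketch of mine with the compiler’s permitted inferences written out as comments (assuming ‘n’ and ‘m’ are ints and a 32-bit build where SIZE_MAX is 2^32-1):

size_t bytes = n * m;  /* signed multiply: undefined behavior if it overflows */
/* The compiler may now assume that no overflow happened, i.e. that for
   positive n and m the product n * m fit in an int (<= INT_MAX). */
if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
    /* Given n * m <= INT_MAX < SIZE_MAX, the test SIZE_MAX/n >= m is
       always true for positive n and m, so the compiler may legally fold
       the whole condition down to (n > 0 && m > 0). The overflow check
       evaporates. */
}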

This is not a theoretical concern. Serious security bugs have happened because compilers removed checks that were provably unnecessary due to undefined behavior.

In short, you have to check for overflow before you overflow. Doing the signed integer overflow and then checking is not legal.

The new problem

At some point Apple quietly fixed their document. They didn’t change the date (the footer still says February 11 but the properties now say March 10) or acknowledge the error (the revision history is silent), but they fixed the code. This is now their recommended code for avoiding integer overflow when multiplying:

if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
    size_t bytes = n * m;
    … /* allocate "bytes" space */
}

Much better! Except not really.

They now avoid doing the multiplication until they have done the check, but their code is still wrong.

SIZE_MAX is the maximum value of a size_t typed variable. We’ve already established that ‘n’ and ‘m’ are not size_t types – they appear to be signed integers. The maximum value for a signed integer is INT_MAX. Apple is using the wrong constant!

In order to avoid integer overflow (undefined behavior, dragons be here) they either need to check against INT_MAX instead of SIZE_MAX, or they need to cast ‘n’ and ‘m’ to size_t before doing the multiplication.
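The first option would look something like this (a sketch of mine; INT_MAX comes from limits.h, and ‘n’ and ‘m’ are assumed to be ints):

if (n > 0 && m > 0 && INT_MAX/n >= m) {
    size_t bytes = n * m;  /* the check guarantees n * m <= INT_MAX, so the
                              signed multiply cannot overflow */
    … /* allocate "bytes" space */
}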

The fact that the result of the multiplication is assigned to a size_t is irrelevant. The multiplication is a signed integer multiplication, it produces a signed integer result, and this result is then converted to a size_t during the assignment.

When compiling 64-bit code this is a disaster. For 64-bit code the value of SIZE_MAX is 2^64-1 and (assuming 32-bit integers) there are no values of ‘n’ and ‘m’ that could possibly fail the test, so all positive values will pass it. For a 64-bit build Apple’s code offers precisely zero protection. goto fail. Do not pass go. Do not collect $200.
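You can see the failure directly by plugging in the worst case. In this sketch of mine (64-bit build, 32-bit ints) even the largest possible positive inputs sail straight through Apple’s check:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int n = INT_MAX, m = INT_MAX;  /* worst possible positive inputs */
    /* SIZE_MAX/n is about 2^33, which is >= m, so the guard passes even
       though n * m is about 2^62 and would overflow int spectacularly. */
    if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
        printf("check passed, but n * m would overflow\n");
    }
    return 0;
}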

When compiling 32-bit code the outcome is not so clear. On every CPU I have ever used a signed multiply gives the same results in the bottom word of the result as an unsigned multiply, so it is highly likely that the result will always be correct. But the behavior is still undefined. And anytime you find yourself saying “I’m sure that this undefined behavior won’t matter” you should make sure you aren’t working on software where security matters. And you certainly shouldn’t be writing security guides…

Code that works

The code below should work – and I hereby license this fixed version to Apple with the only requirement being that they acknowledge the original errors so that developers who followed their advice might hear about the problems:

if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
    size_t bytes = (size_t)n * (size_t)m;
    … /* allocate "bytes" space */
}

If using C++ then you could also use numeric_limits<>::max() and a typedef to ensure that your MAX values match your types.
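For example, something along these lines (a sketch; the typedef and function name are mine):

#include <cstddef>
#include <limits>

typedef size_t count_t;  // change the typedef and the limit follows automatically

bool CheckedMultiply(count_t n, count_t m, count_t* result) {
    // The maximum is queried from count_t itself, so the test can never
    // use the wrong constant for the type actually being multiplied.
    if (n > 0 && m > 0 && std::numeric_limits<count_t>::max() / n >= m) {
        *result = n * m;
        return true;
    }
    return false;
}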

This stuff is crazy tricky. If you want to get serious about this you should check out SafeInt, for code or for ideas. Or check out this guide (mentioned in the reddit discussion):

https://www.securecoding.cert.org/confluence/display/seccode/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow

Maybe they meant size_t

One could argue that Apple’s code is fine as long as ‘m’ and ‘n’ have type size_t. That is true, but that supposition is just not supported by the surrounding text. And, if ‘n’ and ‘m’ are intended to be of type size_t then that needs to be explicitly stated. Failing to list such a crucial requirement is absolutely unforgivable.
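If that is the intent then the fix to the document is trivial, namely showing the declarations. A sketch (GetCount and GetElementSize are hypothetical placeholders):

size_t n = GetCount();        /* both operands must actually be size_t */
size_t m = GetElementSize();
if (n > 0 && m > 0 && SIZE_MAX/n >= m) {
    size_t bytes = n * m;     /* unsigned size_t multiply: defined behavior,
                                 and the check guarantees it cannot wrap */
    … /* allocate "bytes" space */
}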

Perspective

It’s nice to take a break from critiquing Microsoft…

