GrammaTalk

Another Round in the Arms Race 

Programmers maintain a perverse commitment to writing code in memory-unsafe languages, and as long as they continue to do so, they will perpetually be plagued by memory corruption bugs. As soon as you utter the words "memory corruption," exploit writers start salivating uncontrollably, and with good reason: after decades of measures and countermeasures, the "classic buffer overflow" still features prominently on the CWE/SANS list of Top 25 Most Dangerous Software Errors. Here's a brief summary of those decades (recounted in efficient detail by a "systematization of knowledge" paper from IEEE Security & Privacy):

Attacker: What were you thinking, allowing me to write into a buffer on the stack? I’ll write out my desired payload, and then rewrite the return address to target my payload!
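For anyone who needs a refresher, the pattern the attacker is gloating over looks something like the following minimal sketch (read_name is a hypothetical function, not from any particular codebase):

```c
#include <stdio.h>

/* Hypothetical vulnerable routine. gets() has no idea that buf holds
 * only 64 bytes, so a longer input line keeps writing past the end of
 * the buffer, up through the stack frame, eventually clobbering the
 * saved return address with attacker-chosen bytes. */
void read_name(void)
{
    char buf[64];
    gets(buf);   /* never bounds-checked; removed from the C11 standard */
    printf("Hello, %s\n", buf);
}
```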

Defender: Blast that wretched gets()! I’ll protect my memory so that pages can be written to or executed from, but not both!
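On POSIX systems, the defender's new discipline looks roughly like this sketch (install_code is a hypothetical helper; the point is that a page is writable or executable, never both at once):

```c
#include <string.h>
#include <sys/mman.h>

/* Map a page writable (not executable), copy code in, then flip it to
 * executable (not writable) before anything runs from it. An attacker
 * who overflows a buffer can no longer jump into bytes they wrote,
 * because writable pages are never executable. */
void *install_code(const unsigned char *code, size_t len)
{
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED || len > 4096)
        return NULL;
    memcpy(page, code, len);
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
        return NULL;
    return page;
}
```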

Attacker: Ah, so I can no longer write my own payloads? Fortunately, you've left me plenty to work with in your own code pages! I can find "gadgets", small snippets of code that perform a specific operation, each followed by a "RET" instruction, and construct my payload out of them. This turns out to be surprisingly easy; your code is likely filled with gadgets. Then I can set up a sequence of return addresses on the stack, so that each gadget in my payload executes and then returns into the next one! (Less concise, more accurate description? See Wikipedia.)
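Concretely, a ROP payload is nothing but a carefully ordered list of addresses dropped where the saved return addresses live. Here is a heavily simplified sketch; every address below is made up, and a real execve chain would also have to set up rsi and rdx:

```c
#include <stdint.h>

/* Conceptual ROP chain for x86-64 Linux. When the vulnerable function
 * returns, the CPU pops the first gadget address; that gadget's
 * trailing RET pops the next, and so on: the overwritten stack itself
 * becomes the program counter of the borrowed code. */
uint64_t rop_chain[] = {
    0x400a13,   /* gadget 1: pop rdi; ret   (load first argument)      */
    0x601060,   /* value popped into rdi (e.g., address of "/bin/sh")  */
    0x400b27,   /* gadget 2: pop rax; ret   (load the syscall number)  */
    59,         /* value popped into rax (execve on x86-64 Linux)      */
    0x400c55,   /* gadget 3: syscall        (hand control to the kernel) */
};
```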

Defender: Dang, that’s actually pretty clever. Fortunately, your attack relies on having knowledge of the location of my instructions. I’ll shift the location of my sections by some random offset every time I run, so you won’t be able to find your gadgets at runtime. (Again, see Wikipedia for more.)
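You can watch this defense at work with a two-line experiment. Compile the following as a position-independent executable and run it a few times; under ASLR, every run prints different addresses:

```c
#include <stdio.h>

/* Under ASLR, the load addresses of the code and the stack are
 * randomized on every execution, so both pointers change per run. */
int main(void)
{
    int local;
    printf("code  address: %p\n", (void *)main);
    printf("stack address: %p\n", (void *)&local);
    return 0;
}
```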

Attacker: Well played, Defender – now I need an information disclosure vulnerability in addition to a memory corruption vulnerability to find my gadgets. Fortunately, programmers are really bad at not leaking code addresses, so this isn’t that high of a bar.
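The canonical such leak is a format-string bug, sketched below with a hypothetical log_request function: user input is passed to printf as the format string, so a request full of %p specifiers happily prints pointers straight off the stack.

```c
#include <stdio.h>

/* Hypothetical logging routine with a classic information disclosure.
 * Input like "%p %p %p %p" walks up the stack and prints whatever
 * pointers live there, often including a return address into the
 * program's code, which is exactly the foothold the attacker needs. */
void log_request(const char *user_input)
{
    printf(user_input);      /* should be printf("%s", user_input) */
}
```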

Defender: All right, that’s enough. I’ll randomize my code at a much finer granularity – individual functions, or even individual instructions! Now a single information disclosure won’t tell you anything.

Attacker: Oh, that’s very good. I suppose I’ll just give up.

Bit of wishful thinking on that last part. The defender’s latest technique is known as “fine-grained address space layout randomization” – and of course attackers are going to make every effort to bypass it.

I See What You Did There

One such attempt was recently presented at the IEEE Symposium on Security and Privacy. In the paper, Just-In-Time Code Reuse: On the Effectiveness of Fine-Grained Address Space Layout Randomization, the authors present an exploit framework that dynamically maps the entirety of a program’s memory layout, identifies gadgets, and compiles a payload into an exploit using those gadgets, all in a way that defeats fine-grained address space layout randomization.

Here’s a brief summary of the attack, which relies on a single memory disclosure and a single memory corruption vulnerability:

  • Given a single valid code pointer, the attacker knows that the target address lies on a page that is mapped — that is, every address on that page can be accessed without crashing the program.
  • The attacker scans this known-good page for instructions and disassembles them. Any direct CALL or JMP instructions found are assumed to target valid, mapped addresses, and the discovery process is repeated for each new address. A single information disclosure is thus leveraged into a (more or less) complete picture of the program's code pages (see the sketch after this list).
  • The attacker harvests gadgets from the code pages they’ve discovered, and uses just-in-time compilation to construct the desired exploit out of these gadgets.
  • The attacker runs the exploit, obtains a root shell, and goes to town.
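Step two, the heart of the attack, is a simple worklist algorithm. Here is a sketch of that discovery loop; disassemble_page is a hypothetical helper standing in for the framework's disassembler, which reads memory through the disclosure vulnerability and returns the targets of any direct CALL or JMP instructions it finds:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_MASK  (~(uintptr_t)0xfff)
#define MAX_PAGES  4096

/* Hypothetical helper: scans one known-mapped code page through the
 * memory disclosure, disassembles it, and writes the targets of any
 * direct CALL/JMP instructions into targets[], returning the count. */
extern size_t disassemble_page(uintptr_t page, uintptr_t targets[]);

static uintptr_t known_pages[MAX_PAGES];
static size_t    num_known;

/* Record the page containing addr; returns true if it was new. */
static bool add_page(uintptr_t addr)
{
    uintptr_t page = addr & PAGE_MASK;
    for (size_t i = 0; i < num_known; i++)
        if (known_pages[i] == page)
            return false;
    if (num_known < MAX_PAGES)
        known_pages[num_known++] = page;
    return true;
}

/* From one leaked code pointer, chase direct CALL/JMP targets until
 * no new pages appear: a breadth-first map of the code layout. */
void map_code_pages(uintptr_t leaked_code_ptr)
{
    uintptr_t targets[1024];
    add_page(leaked_code_ptr);
    for (size_t i = 0; i < num_known; i++) {
        size_t n = disassemble_page(known_pages[i], targets);
        for (size_t j = 0; j < n; j++)
            add_page(targets[j]);        /* may extend the worklist */
    }
}
```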

If reading that paper makes you nervous about the software you rely on, well, good. Fortunately, just as no defense is invulnerable, no attack is omnipotent, and one possible solution (dismissed by the aforementioned paper as “impractical” due to “high performance penalties”) has already been implemented.

Bring on the Next Remove

The technique is called instruction layout randomization, or ILR, described in a 2012 paper, ILR: Where'd My Gadgets Go?. A program protected by ILR executes under a "process-level virtual machine", or PVM, which randomizes the location of individual instructions and maintains a fall-through map that supplies each instruction's successor. The fall-through map is hidden from the program itself by read- and write-protecting the pages that hold it, and while application code executes, the PVM's own memory is protected as well. The attacker is thus denied their precious ROP gadgets: from the layout of instructions in memory, they cannot discern the actual sequence of instructions as interpreted and executed by the PVM.
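A toy model makes the idea concrete. In the sketch below (all names hypothetical, and everything grossly simplified), "instructions" sit at scrambled slots in memory, while a separate fall-through table, which the real PVM keeps in pages the application can neither read nor write, records each slot's successor. Leak the instruction array all you like: without the table, adjacency in memory says nothing about execution order, and any harvested "gadget sequences" are gibberish.

```c
#include <stdio.h>

/* Toy model of ILR's fall-through map. The instructions execute in
 * the order 7, 5, 2, 1, 4, 3, 6, 0, but nothing about their layout
 * in the instr[] array reveals that: only fallthrough[] knows, and
 * in a real PVM those pages are hidden from the application. */
enum { SLOTS = 8, END = -1 };

static const char *instr[SLOTS] = { "ret", "add", "load", "store",
                                    "mul", "cmp", "jmp",  "nop" };
static const int fallthrough[SLOTS] = { END, 4, 1, 6, 3, 2, 0, 5 };

static void run(int entry)
{
    for (int pc = entry; pc != END; pc = fallthrough[pc])
        printf("executing: %s\n", instr[pc]);
}

int main(void) { run(7); return 0; }
```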

The implementation of ILR described in the paper has an average run-time overhead of 13%, which suggests the performance penalty is modest and the concept practical.

And so the eternal war in memory continues. It will probably take a long time for both these attacks and the defenses against them to become commonplace. On the attacker side, there exist simpler attacks that still work most of the time. On the defender side, there are mountains of legacy software that would break if subjected to a harsh glance, much less a formal hardening technique. But just as W^X pages and ASLR have made their way into modern, mainstream operating systems, we will no doubt see the advent of fine-grained ASLR (and attacks against it) in the years to come.
