Most of today's computer platforms operate under a fundamental assumption that has proven utterly false: that if a user executes a program, the user completely trusts the program. This assumption has been made by just about every operating system since Unix and is made by all popular operating systems in use today. This one assumption is arguably responsible for the majority of end-user security problems. It is the reason malware -- adware, spyware, and viruses -- is even possible to write, and it is the reason even big-name software like certain web browsers is so hard to keep secure. We need to stop making this assumption.
Under the classical security model, when a user runs a piece of software, the user is granting that software the capability to do anything that the user can do. Marc Stiegler et al. compare this to giving your janitor the master key to your building. The master key opens not only the rooms the janitor needs to clean but also the vault where you keep your gold. Why would you do this? The smart thing to do would be to give the janitor a key only to the rooms he needs to clean, not to the vault.
Capability-based security solves this problem. Under capability-based security, when a user runs a piece of software, the software starts out with no capabilities whatsoever. The user may then grant specific capabilities to the program. For example, the user might grant the program permission to read its data files. However, the user would not give the program permission to read other parts of the hard drive, such as the user's private documents or the operating system files. The user could also control whether or not the program may access the network, play sounds, etc. Most importantly, all of these abilities can be controlled independently on a per-program basis.
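This granting step can be sketched in ordinary Java. All of the names below -- DataFiles, Network, ImageViewer, the launcher -- are illustrative assumptions, not an API the text prescribes; the point is only that a program receives exactly the capabilities the user chose to grant, and nothing else:

```java
// Hypothetical capability interfaces (assumptions for illustration).
interface DataFiles { String read(String name); }
interface Network { void send(String host, String payload); }

// A program declares exactly which capabilities it needs in order to run.
class ImageViewer {
    private final DataFiles data; // granted: access to its own data files

    ImageViewer(DataFiles data) {
        // Note what is absent: no Network capability, no access to any
        // other directory. The viewer simply cannot reach those things.
        this.data = data;
    }

    String show(String name) {
        return "rendering " + data.read(name);
    }
}

public class Launcher {
    public static void main(String[] args) {
        // The user (via the launcher) decides, per program, what to grant.
        DataFiles photos = name -> "<pixels of " + name + ">";
        ImageViewer viewer = new ImageViewer(photos);
        System.out.println(viewer.show("cat.png"));
    }
}
```

Because each program is constructed with its own grants, the "per-program, independently controlled" property falls out of ordinary parameter passing.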
Now, try to imagine writing a virus under such a system. Say your virus works by sending itself as an attachment to an email and hoping the user runs it. Under classical security systems, if the receiver runs the program, your virus can now take over their system, read their address book, and begin sending out copies of itself. Under capability-based security, the virus would first have to ask the user for an address book to read -- since it has no capability to search the hard drive for one -- and then ask the user for access to a network to send copies of itself. At this point, the user could clearly see that something fishy was going on and would deny your virus the ability to replicate. On the other hand, if you just want to send your friend a little program that draws a neat picture, you can do that, and your friend can run it, without any security concerns whatsoever.
Even better than implementing capability-based security in an operating system, though, is implementing capability-based security in a programming language. By implementing it at the language level, developers are able to control the capabilities available to each piece of code independently. Good practice, then, would be to give each component of your software only the bare minimum of capabilities it needs to perform its desired operation. And, in fact, it is easier to give fewer capabilities, so this is what programmers will usually do. The result is that if a security hole exists in some part of your program, the worst an attacker can do is gain access to the capabilities that were given to that component. On the other hand, under the classical security model, a security hole in any one component of your program could allow an attacker to gain complete control over your entire system.
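One common way to hand a component the bare minimum is attenuation: deriving a narrow capability from a broader one. The following is a minimal sketch under assumed interfaces (FileSystem and LogSink are not from the text); the logging component can append to exactly one file, so compromising it gains an attacker nothing more:

```java
// Hypothetical broad capability (an assumption for illustration).
interface FileSystem { void write(String path, String data); }

// The narrow capability a logging component actually needs.
interface LogSink { void log(String line); }

public class LeastPrivilege {
    // Attenuation: wrap the broad capability so that only one fixed path
    // is reachable through the object we hand out.
    static LogSink logSinkFor(FileSystem fs, String path) {
        return line -> fs.write(path, line);
    }

    public static void main(String[] args) {
        // Stand-in for a real disk, so the sketch is self-contained.
        StringBuilder fakeDisk = new StringBuilder();
        FileSystem fs = (path, data) ->
                fakeDisk.append(path).append(": ").append(data).append('\n');

        LogSink appLog = logSinkFor(fs, "/var/log/app.log");
        appLog.log("started");
        // appLog exposes no way to read, and no way to name another path,
        // so a hole in the logger is confined to this one file.
        System.out.print(fakeDisk);
    }
}
```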
It is, in fact, quite possible to prove that a program written in a capability-based language is secure. Often, all "dangerous" capabilities can be restricted to one very small piece of code within your program. Then, all you have to do is make sure that that one piece of code is secure -- and when you are dealing with only a small amount of code, proving this is often feasible.
Having read all this, you might think that capability-based security sounds incredibly complicated and difficult to maintain. As it turns out, this is not the case at all. If you are familiar with object-oriented programming, you already know how to write capability-based code. Capabilities are simply represented by objects which implement some abstract interface. For example, the capability to access a directory on the hard drive might be represented by an object implementing the Directory interface, which might contain methods like openFile() and listContents(). The key is that the only way to obtain a Directory object representing a directory on disk is to receive a reference to one from some higher authority. You cannot simply say 'new Directory("/important_files");' to get access to the important_files directory; someone who already has a reference to the directory must pass that reference to you.
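The Directory interface described above might look like this in Java. The method names openFile() and listContents() come from the text; the in-memory MapDirectory and the simplified String-returning openFile() are illustrative assumptions (a real system's trusted implementation would be backed by the disk):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;

// A capability to read a directory: holding a reference IS the permission.
interface Directory {
    List<String> listContents();
    String openFile(String name); // simplified: returns the file's contents
}

// Illustrative in-memory implementation, standing in for the real thing.
class MapDirectory implements Directory {
    private final Map<String, String> files;

    MapDirectory(Map<String, String> files) { this.files = files; }

    public List<String> listContents() {
        return new ArrayList<>(files.keySet());
    }

    public String openFile(String name) {
        String contents = files.get(name);
        if (contents == null) throw new NoSuchElementException(name);
        return contents;
    }
}

public class CapabilityDemo {
    // A component can use whatever Directory it is handed, but it has no
    // way to conjure up a Directory for "/important_files" on its own.
    static int countFiles(Directory dir) {
        return dir.listContents().size();
    }

    public static void main(String[] args) {
        Directory granted = new MapDirectory(Map.of("notes.txt", "hello"));
        System.out.println(countFiles(granted)); // prints 1
    }
}
```

Notice that there is no public constructor taking a path string: the only way countFiles() ever touches a directory is by being passed one.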
Indeed, some people prefer to look at capability-based security as an extension of object-oriented design. Since all operating system functions must necessarily be accessed via objects implementing abstract interfaces, it is entirely possible to implement your own virtual OS by implementing these interfaces yourself, then run programs within this virtual OS. Neat, huh?
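A sketch of that idea, assuming a Directory-style interface like the one discussed above (AuditedDirectory and its audit log are illustrative assumptions): an ordinary object implements the OS-facing interface itself, and untrusted code handed this object cannot tell it apart from the real thing.

```java
import java.util.ArrayList;
import java.util.List;

// Same shape of interface as in the text: openFile() and listContents().
interface Directory {
    List<String> listContents();
    String openFile(String name);
}

// A tiny "virtual OS" layer: wraps a real directory and records every access.
class AuditedDirectory implements Directory {
    private final Directory real;
    final List<String> auditLog = new ArrayList<>();

    AuditedDirectory(Directory real) { this.real = real; }

    public List<String> listContents() {
        auditLog.add("list");
        return real.listContents();
    }

    public String openFile(String name) {
        auditLog.add("open " + name);
        return real.openFile(name);
    }
}

public class VirtualOsDemo {
    public static void main(String[] args) {
        Directory real = new Directory() {
            public List<String> listContents() { return List.of("a.txt"); }
            public String openFile(String name) { return "contents"; }
        };
        AuditedDirectory sandbox = new AuditedDirectory(real);
        // A program handed `sandbox` sees a perfectly normal Directory,
        // but every access it makes is now observable.
        sandbox.openFile("a.txt");
        System.out.println(sandbox.auditLog); // prints [open a.txt]
    }
}
```

The same trick gives you read-only views, fake filesystems for testing, quota enforcement, and so on, all without any cooperation from the wrapped code.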
Implementation Status: Implemented in Evlan prototype version 0.3.