At a recent security steering committee meeting we were reviewing an update of our Password Policy which I had drafted. We got to a particular requirement which stated “Passwords should not include ‘guessable’ data such as personal information about yourself, your spouse, your pet, your children, birthdays, addresses, phone numbers, locations, etc.” One of the committee members said “that’s not enforceable.”
The comment made me stop and think for a second because at first it didn’t make sense. Of course it’s enforceable, I thought. If a user ignores that requirement, they’re violating the policy. When we detect the violation (in this case, when a password cracker reveals guessable passwords, or when someone’s easily guessed password leads to a breach), we’ll enforce the policy at that point. Sure, it’s after the fact, but the policy still gets enforced.
But what the committee member was referring to was the idea that the policy wasn’t enforceable by technical means before the user attempts to violate it. In other words, there was no way we could proactively prevent users from performing that action.
While this is true, it did not equate to unenforceability in my mind. I explained to the committee that, by this logic, speed limits are unenforceable too. After all, there is nothing that physically prevents drivers from violating the speed limit, such as a mechanical inhibitor that keeps vehicles from exceeding the posted limit. The only way a speed limit can be enforced is after the violation. Somebody (a police officer) has to actively monitor traffic for speeding, and only when a violation is detected (with a radar gun, for instance) is the policy (in this case, a law) enforced (usually with a ticket).
After some discussion, both that requirement and the policy as a whole were approved. But I was unsatisfied. It seemed to me that there must be some formal way to describe the difference between policy elements that can be enforced proactively, before the fact, and those that can only be enforced reactively, after the fact.
It took a bit of research, but I found a couple of technical security papers discussing exactly this point. The first one jumped right out at me: “Enforceability vs. Accountability in Electronic Policies” by Travis D. Breaux and Annie I. Antón. This paper does a good job of defining the relevant terms:
- “An enforceable policy requires a pre-emptive mechanism to irrefutably constrain or compel a principal’s actions.”
- “An accountable policy, on the other hand, only requires a reactive mechanism to determine if a principal is compliant.”
Aha! Here was scholarly support for my assertion that there were two different types of security policies, both of which were perfectly valid.
Another paper, “Automated Counterexample-Driven Audits of Authentic System Records” by Rafael Accorsi, reinforced the point:
“A policy is thus enforceable if a mechanism capable of ensuring policy adherence is available. A policy is accountable if a non-compliant state can only be detected and reacted upon.”
Like Breaux and Antón, Accorsi is clearly referring to two types of policies: those that are enforceable and those that are accountable. These are two sides of the same policy coin.
And as a bonus, Accorsi introduced me to a much fancier way to say “reactive policy enforcement”: “a posteriori policy enforcement.” “A posteriori” literally means “from the latter,” as in knowledge that depends on experience or empirical evidence. Accorsi describes a posteriori policy enforcement as “the after-the-fact enforcement of sanctions or penalties incurred by the non-adherence to corresponding obligations.”
What this means is that it’s perfectly acceptable to write a security policy that isn’t enforceable by technical means, as long as your intent is to enforce it through auditing. Neither type of security policy, enforceable or accountable, is better than the other.
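To make the distinction concrete, here is a minimal sketch in Python of what the two approaches might look like for the password requirement above. The function and field names are hypothetical, and it uses plaintext passwords purely for illustration; the enforceable control rejects a guessable password before it is ever set, while the accountable control only flags violations after the fact during an audit.

```python
# Sketch of enforceable vs. accountable controls for the
# "no guessable personal data in passwords" requirement.
# Names are illustrative only, not from any real system.

from dataclasses import dataclass, field
from typing import List


@dataclass
class User:
    username: str
    personal_terms: List[str]          # e.g. spouse, pet, birthday, street
    password: str = ""                 # plaintext here only to keep the sketch short
    audit_findings: List[str] = field(default_factory=list)


def contains_personal_data(password: str, user: User) -> bool:
    """Return True if the password contains any 'guessable' personal term."""
    lowered = password.lower()
    terms = [user.username] + user.personal_terms
    return any(term.lower() in lowered for term in terms if term)


# --- Enforceable (pre-emptive): block the violation before it happens ---
def set_password(user: User, new_password: str) -> None:
    if contains_personal_data(new_password, user):
        raise ValueError("Password contains guessable personal data; rejected.")
    user.password = new_password


# --- Accountable (a posteriori): detect and react after the fact ---
def audit_passwords(users: List[User]) -> List[User]:
    """Flag users whose current passwords violate the policy."""
    violators = []
    for user in users:
        if contains_personal_data(user.password, user):
            user.audit_findings.append("guessable personal data in password")
            violators.append(user)
    return violators


if __name__ == "__main__":
    alice = User("alice", personal_terms=["rex", "1985-07-04", "elm street"])
    try:
        set_password(alice, "rex1985!")        # blocked up front
    except ValueError as err:
        print("Enforceable control:", err)

    alice.password = "rex1985!"                # imagine it slipped through anyway
    for violator in audit_passwords([alice]):  # caught later by the audit
        print("Accountable control flagged:", violator.username)
```

In practice the reactive side would run against stored password hashes (a periodic cracking job, for example, as mentioned earlier), but the shape is the same: one mechanism constrains the action up front, the other detects non-compliance and reacts afterward.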
Photo credit: coreforce