Being knowledgeable in the field of information security is useful and beneficial. However, it’s not sufficient, and while it’s (somewhat) easy to argue that it’s necessary, there’s a big gap between being a security expert and making software better, or even making software more secure.
The security interaction on many projects goes something like this:
- Develop software
- Get a penetration tester in
- Oh, shit
- Fix anything that won’t take more than two days
- Get remaining risk signed off by senior management
- Ship
- Observe that most of the time, this doesn’t cause much trouble
Now whether or not a company can afford to rely on that last bullet point being correct is a matter for the executives to decide, but let’s assume that they don’t want to depend on it. The problem they’ll have is that they must depend on it anyway, because the preceding software project was done wrong.
Security people love to think that they’re important and clever (and they are, just not any more than other software people). Throughout the industry you hear talk of “fail” or even “epic fail”. This is not jargon; it’s an example of the mentality that promotes calling developers idiots.
Did the developer get the security wrong because he’s an idiot, or was it because you didn’t tell him it was wrong until after he had finished?
“But we’re penetration testers; we weren’t engaged until after the developers had written the software.” Whose fault is that? Did you tell anyone you had advice to give in the earlier stages of development? Did you offer to help with the system architecture, or with the requirements, or with tool selection?
You may think at this point that I shouldn’t rock the boat; that if we carry on allowing people to write insecure software, there’ll be more money to be made in testing it and writing reports about how many high-severity issues there are that need fixing. That may be true, though it won’t actually lead to software becoming more secure.
Take another look at the list of actions above. Once the project manager knows that the software has a number of high-priority issues, the decision that the project manager has to take looks like this:
If I leave these problems in the software, will that cause more work in the project, or in maintenance? Do I look like my bonus depends on what happens in maintenance?
So, as intimated in the process at the top of the post, you’ll see the quick fixes done – anything that doesn’t affect the ship date – but more fundamental problems will be left alone, or perhaps documented as “nice to haves” for a future version. Anything that requires huge changes, like architectural modification or component rewrites, isn’t going to happen.
If we actually want to get security problems fixed, we have to distribute the importance assigned to security more evenly. It’s no good having security people who think that security is the most important thing ever if they’re not going to be the people making the stuff; conversely, it’s no good having the people who make the thing unaware of security if it really does have some importance associated with it.
Here’s my proposal: it should be the responsibility of the software architect to know security, or to know someone who knows security. Security is a requirement of a software system, and it’s the architect’s job to understand what the requirements are, how the software is to implement them, and how to make any trade-offs needed when the requirements come into conflict. It’s the architect’s job to make those decisions, to justify them, and to see them followed throughout development.
That makes the software architect the perfect person to ensure that the relative importance of security versus performance, correctness, responsiveness, user experience and other aspects of the product is both understood and correctly executed in building the software. It promotes (or demotes, depending on your position) software security to its correct position in the firmament: as an aspect of constructing software.