There’s a post by clever security guy Jim Bird on Appsec’s Agile Problem: how can security experts participate in fast-moving agile (or Agile™) projects without either falling behind or dragging the work to a halt?
I’ve been the Appsec person on such projects, so hopefully I’m in a position to provide at least a slight answer :-).
On the team where I did this work, projects began with the elevator pitch, application definition statement, or whatever you want to call it. “We want to build X to let Ys do Z”. That, often with a straw-man box-and-line system diagram, is enough to begin a conversation between the developers and other stakeholders (including deployment people, marketing people, legal people) about the security posture.
How will people interact with this system? How will it interact with our other systems? What data will be touched or created? How will that be used? What regulations, policies or other constraints are relevant? How will customers be made aware of relevant protections? How can they communicate with us to identify problems or to raise their concerns? How will we answer them?
Even this one conversation has achieved a lot: everybody is aware of the project and of its security risks. People who will make and support the system once it’s live know the concerns of all involved, and that’s enough to remove a lot of anxiety over the security of the system. It also means that, while we’re working, we know what we should be watching out for. A lot of the suggestions made at this point will, for better or worse, be boilerplate: the system must make no more use of personal information than existing systems do; there must be an issue tracker that customers can confidentially participate in.
But all talk and no trouser will not make a secure system. As we go through the iterations, acceptance tests (whether run manually, automated, or derived from analysis tools) determine whether the agreed risk profile is being satisfied.
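To make that concrete, here’s a minimal sketch of what one of those automated acceptance tests might look like, in Python with pytest and requests. The staging URL, the endpoint and the particular headers are my own invented examples, standing in for whatever constraints your kickoff conversation actually produced:

```python
# A minimal sketch of automated security acceptance tests, assuming a
# hypothetical web service reachable at STAGING_URL. The endpoint names and
# header requirements below are illustrative placeholders, not prescriptions.
import os

import requests  # third-party HTTP client: pip install requests

STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.com")


def test_api_rejects_unauthenticated_requests():
    # The agreed risk profile says customer data must never be readable
    # without credentials, so an anonymous request should be refused.
    response = requests.get(f"{STAGING_URL}/api/customers", timeout=10)
    assert response.status_code in (401, 403)


def test_responses_carry_agreed_security_headers():
    # Boilerplate controls from the kickoff conversation, expressed as
    # executable acceptance criteria rather than a document nobody reads.
    response = requests.get(STAGING_URL, timeout=10)
    assert "Strict-Transport-Security" in response.headers
    assert response.headers.get("X-Content-Type-Options") == "nosniff"
```

The point isn’t these particular assertions; it’s that the constraints agreed in that first conversation become checks that run on every iteration, so drift from the agreed posture shows up as a failing build rather than as an audit finding months later.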
Should there be any large deviations from the straw-man design, the external stakeholders are notified and we track any changes to the risk/threat model arising from the new information. Regular informal lunch sessions give them the opportunity to tell our team about changes in the rest of the company, the legal landscape, the risk appetite, and so on.
Ultimately this is all about culture. The developers need to trust the security experts to make their expertise available, and to help make it relevant to the problems at hand. The security people need to trust the developers to be trying to do the right thing, and to be humble enough to seek help where needed.
This cultural outlook enables quick reaction to security problems detected in the field. Where the implementors are trusted, the team can operate a “break glass in emergency” mode where solving problems and escalation can occur simultaneously. Yes, it’s appropriate to do some root cause analysis and design issues out of the system so they won’t recur. But it’s also appropriate to address problems in the field quickly and professionally. There’s a time to write a memo to the shipyard suggesting they use thicker steel next time, and there’s a time to put your finger over the hole.
If there’s a problem with agile application security, then, it’s a problem of trust: security professionals, testers, developers and other interested parties[*] must be able to work together on a self-organising team, and that means they must exercise knowledge where they have it and humility where they need it.
[*] This usually includes lawyers. You may scoff at the idea of agile lawyers, but I have worked with some very pragmatic, responsive, kind and trustworthy legal experts.