Detecting agile BS
A recent post by an organization called Agile Government Leaders, which describes itself as “a nonprofit association serving the government innovation community,” caught my eye with its warnings about agile.
“Last month at an innovation conference in Sacramento,” the post began, “a government executive sat on stage and qualified a reference to ‘agile’ with ‘dare I say it,’ implying the term has sometimes been overused, confusing or misapplied in the public sector. There is growing evidence the term ‘agile’ is facing backlash. … Semantics can make it difficult to embrace a word that has acquired so much baggage.”
The problem is that these days everyone, especially vendors, describes what they do as agile, even when, as is often the case, it actually resembles traditional waterfall practices, a hybrid often referred to as “agilefall.” If only a tiny proportion of the activities characterized as agile actually have the characteristics of agile, the movement will fall well short of realizing its potential for government. That would be a shame.
The Agile Government Leaders post called readers’ attention to a DOD document from last October, which I hadn’t seen before, called Detecting Agile BS.
The first interesting thing about this document is that something with the word “BS” in the title was released by the Defense Department at all. Yes, it came out of the Defense Innovation Board, which is not made up of DOD civil servants, but even so, the link has a defense.gov domain name. In my recent blog about middle-aged Air Force civil servants promoting innovative IT practices, I noted that they were working with the Defense Digital Service, AKA “the kids in hoodies.” The military is changing, and to my mind new here is good.
But more important, of course, is the content of the document. Much of it is in the form of questions. For example:
How do you know a project is not agile?
- Nobody on the software development team is talking with and observing the users of the software in action; we mean the actual users of the actual code. (The Program Executive Office does not count as an actual user, nor does the commanding officer, unless she uses the code.)
- Continuous feedback from users to the development team (bug reports, user assessments) is not available. Talking once at the beginning of a program to verify requirements doesn’t count!
- End users of the software are missing-in-action throughout development; at a minimum they should be present during Release Planning and User Acceptance Testing.
- Manual processes are tolerated when such processes can and should be automated (e.g. automated testing, continuous integration, continuous delivery).
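To make that last point concrete: below is a minimal sketch, in Python’s built-in unittest style, of the kind of automated check a development team writes and runs itself; the pay-calculation function and the test names are hypothetical, invented purely for illustration. A continuous integration server would run a suite like this automatically on every code change, rather than deferring all testing to a separate organization at the end of the program.

```python
import unittest

# Hypothetical application code, standing in for whatever the team actually ships.
def compute_drill_pay(base_rate, drills):
    """Toy pay calculation used only to illustrate the automated-testing pattern."""
    if drills < 0:
        raise ValueError("drills must be non-negative")
    return round(base_rate * drills, 2)

class ComputeDrillPayTests(unittest.TestCase):
    """Checks the team runs on every change, typically triggered by a CI server."""

    def test_multiplies_rate_by_number_of_drills(self):
        self.assertEqual(compute_drill_pay(100.0, 4), 400.0)

    def test_rejects_negative_drill_counts(self):
        with self.assertRaises(ValueError):
            compute_drill_pay(50.0, -1)

if __name__ == "__main__":
    unittest.main()
```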
The document also provides questions for different participants in the process. For the programming team, one example is, “how do you test your code?” (They then note that “wrong answers” include “we have a testing organization” and “OT&E is responsible for testing.”)
One basic difference between waterfall and agile is that in waterfall, the software developers talk with users, if at all, only at the beginning of “requirements development,” when most users provide less-than-useful hypothetical information about features they might want. Similarly, in waterfall, testing is held off until the last stage of the development process, by which time earlier mistakes have already been baked into the system, creating a do-loop of rework, delay, and extra cost. With agile, both the gathering of data about what users want and the testing of the software continue from beginning to end.
Two of the document’s questions for program managers are “what have you learned in your past three sprint cycles and what did you do about it?” (Among the wrong answers are “what’s a sprint cycle?” and “we are waiting to get approval from management.”) Another question for program managers is “who are the users that you deliver value to each sprint cycle? Can we talk to them?” (Wrong answer: “we don’t directly deploy our code to users.”) For customers and users, questions include “how do you send in suggestions for new features or report issues or bugs in the code? What type of feedback do you get to your requests/reports? Are you ever asked to try prototypes of new software features and observed using them?” And “what is the time it takes for a requested feature to show up in the application?”
Finally, for program leadership: “Are teams delivering working software to at least some subset of real users every iteration (such as every two weeks)?” And: “Are teams empowered to change the requirements based on user feedback?”
This is a short document, only about five pages, with zero bureaucratese. The entire document is very useful and practical, and IT managers working with software development contractors (or in-house teams, if they have them) should commit the material here to working memory for constant use. If government actually moves practice in the direction of the answers to these questions, agile should move forward, and our software development should improve.
Published with permission from FCW.com. This originally appeared on fcw.com: https://fcw.com/blogs/lectern/2019/02/kelman-agile-practices.aspx.