
Good project pace results from two conflicting forces: market or financial pressure to go fast (typically relayed by management) and technical pressure to do things right (typically relayed by architects and developers).
This conflict is not symmetrical, for several reasons.
- Management has organizational power and excellent communication skills – compared to developers who tend to emerge from the ideal world of their IDE after hours lost in abstraction in a kind of semi-conscious, almost hungover state, barely able to talk to human beings 🙂 And management is always interested in having more bang for the buck and in shipping earlier to build or strengthen a market position.
- Technical issues make your product explode in the long run, not today, and as such are easy to sacrifice. By technical issues I mean not only rotting architecture, but also documentation and regulatory issues: taking them too lightly will not cause an earthquake today, but years later. Examples of disasters: technical bankruptcy (throwing away the entire unmaintainable code base and starting again from scratch), denial of authorization to sell your medical device by the regulator of a certain market, a patient killed while using your product. And I have seen managers who, either out of incompetence or sheer cynicism, are perfectly able to make decisions with catastrophic long-term consequences for the sake of short-term political advantage – some people are amazingly able to lie their way out of any situation.
This conflict is asymmetrical, but that doesn’t mean technical people are always right. A friend of mine worked in a startup without adult supervision: developers happily spent three years refactoring the code without adding any new feature. No joking. Three years of getting high on code. Gold plating can go very far. Another story from the trenches: I knew a software architect who convinced his management that building a tool to refactor the code was required (ugly Borland Delphi 6 was really unstable and unproductive), and he spent two years writing this tool alone instead of taking care of the codebase he was responsible for, in particular the database concurrency issues that caused much trouble at the end of the project – he clearly worked for his own pleasure, not for the interests of the company. The thing is that technical people usually don’t care much about their organization making money: they just want to enjoy coding well-done stuff and avoid becoming obsolete. If the company goes bankrupt or the project fails, they just move to another company where they shine with the new skills they honed instead of working on what was required to get the project done. Don’t get me wrong: I’m not saying that all developers are selfish and uninterested in moving projects forward, but some of them are, and there is a natural tendency to privilege thrill over duty that must be contained.
A simple model to help find the balance
So how do we find the appropriate balance between these conflicting forces? It won’t happen by itself; it has to be managed. Over the years, I have come up with a model designed to understand and manage each force, and it has proved useful.
Real quality won’t happen by chance, or only thanks to the sacrifice of teammates spontaneously working nights and weekends; it happens because time has been allocated for it (assuming the right processes and mindset are already in place). At the other end of the spectrum, there has to be a clear focus on delivering story points to avoid getting lost in gold plating. My preferred approach is to set up an iteration model where I allocate time for quality-related activities (stabilization phase length, refactoring proportion during the construction phase) and set a story point goal for each iteration to keep everybody focused on delivering customer value.
- Model for quality forces:
  - Duration of the construction and stabilization phases. They might not be constant throughout the project: as the maintenance burden increases, stabilization phases may get longer. Remember, stabilization is time devoted to bug fixing and documentation. It’s quality time.
  - Proportion of construction time allocated to technical tasks. I’m not alluding to the mandatory technical tasks that deserve user stories of their own (such as sending error reports, writing an installer, making that load test pass), but to unpredictable refactorings. Be careful to think it through. Without dedicated time, refactoring won’t happen at the magnitude required for quality projects. This technical time is also the oxygen skilled technical people breathe: it helps you hire and retain them. Set that value to zero, and you will likely accumulate technical debt and scare away gifted technicians. On the other hand, set it to 100% and the project will stop moving forward. And once again, this value should not be constant: high at project start (say 50%) when frameworks and practices are not established, medium in the middle of the project (say 25%), low at project end when everybody struggles to finish that version (say 15%).
- Model for production:
  - I find it necessary to have a target of accepted story points for each iteration. This is the sum of user stories and technical stories (refactoring and anything related to internal quality that no user will see).
  - Velocity is best measured as the average total story points of accepted user stories (accepted by testers, with a little allowance for a few minor bugs) over the last few iterations. This measurement is essential to feed the model with reality: total team velocity captures a great deal of variables that are impossible to model (estimation errors, organizational overhead, tooling problems, motivation, quality of the personnel, maintenance burden, architectural issues…).
  - Aligning the target with the measurement is a delicate choice when the team produces less than expected: it is invaluable for predicting an accurate project end date, but maintaining expectations helps fight gold-plating tendencies and keeps commitments firm. In my experience, the target should be kept just a little above the average measured velocity – insufficient production must be fought by the team, not too easily accepted.
  - This target and the proportion of construction time devoted to refactoring make it easy to calculate the user story point target and the budget for technical stories; see the sketch below.
Here’s a sample spreadsheet to help clarify my intent: Simple iteration management model. I’m not saying it’s perfect: you should probably design yours; just consider mine a starting point.
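To make the arithmetic concrete, here is a minimal Python sketch of the same iteration model. Treat every name and number in it as an assumption of mine (the 5% stretch above measured velocity, the sample phase durations), not as a prescription; the spreadsheet does the same job.

```python
# Minimal sketch of the iteration management model described above.
# All constants and sample figures are illustrative assumptions.

def iteration_budget(target_points: float,
                     construction_days: float,
                     stabilization_days: float,
                     refactoring_share: float) -> dict:
    """Split an iteration's story point target between user stories and
    technical stories, given phase durations and the share of
    construction time reserved for refactoring."""
    technical_points = target_points * refactoring_share
    return {
        "iteration_days": construction_days + stabilization_days,
        "user_story_points": target_points - technical_points,
        "technical_story_points": technical_points,
    }

def next_target(measured_velocities, stretch=1.05):
    """Keep the target a little above the average measured velocity of
    the last few iterations (here, 5% above)."""
    return stretch * sum(measured_velocities) / len(measured_velocities)

# Example: mid-project settings, 15 construction days + 5 stabilization
# days per iteration, 25% of construction time reserved for refactoring.
velocity_history = [42, 38, 45]          # accepted points, last 3 iterations
target = next_target(velocity_history)   # ~43.8 points
print(iteration_budget(target, construction_days=15,
                       stabilization_days=5, refactoring_share=0.25))
```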
A model for unknown team velocity
Measuring actual team velocity and feeding it into the model is very powerful. But sometimes it’s not practical:
- When the team is just starting and has no history
- When projects are long (several years in medical device development). In particular, the maintenance burden will likely get heavier and turnover will happen.
- When there are lots of variations in team size. This is especially the case when management asks: “I want this project to be ready by date X, what do you need to make it happen?” Tricky question: team dynamics are way more complex than a simple multiplication, such as assuming that doubling the headcount will double the throughput. Choose your answers wisely. This is a very strategic issue: if you can predict very early that a project is late and add the necessary staff soon enough for it to pay off (people should stay at least one year to offset training costs), you will save the deadline. Do it too late, and haphazard late staffing efforts will wreck the team: “adding manpower to a late software project makes it later“.
- When projects have not started yet. To approve a project, management needs to know its scope, its duration and its cost. The best way to do this with an agile mindset is to build a product map to get a backlog, estimate its user stories to get a project size, then estimate the team’s velocity to deduce duration and cost.
So I have designed another model to estimate team velocity when there is no empirical data. Here are the main variables:
- Real time:
  - Average daily velocity per developer. The number of ideal days in a real day. It could be 100% if you worked alone in a monastery with absolute concentration, no task switching, and perfect estimation skills. In reality, there are useful and useless meetings, coffee breaks, errors of all kinds. I have made many measurements and find that people are often around 70% in this respect.
  - Working day factor. You have to take into account that a week day is not a working day: people are sick, take holidays, get trained. In my current environment (France, where holidays are plentiful and sacred), people work around 220 days a year. That’s not 52*5=260.
- Real workforce. Don’t just count people. Take into account:
  - Turnover. I usually count that one person out of ten leaves every year, and that it takes six months to replace them (so I lose 5% of the workforce). In other environments, you might have a higher attrition rate or shorter recruitment delays.
  - Training. Newbies are not as productive as long-standing team members, and beginners are often not as productive as principal engineers. There has to be some ramp-up in the workforce when someone arrives (50% productivity the first month, full productivity after 3 to 6 months depending on experience and the complexity of the work environment).
  - Communication and management overhead. My rule of thumb: every new person in the team eats 20% of the time of the equivalent of one person. One person is as productive as one. Two people are as productive as 1.8. Six people are as productive as five. This factor is very important when you start computing the effect of various staffing scenarios on the deadline. For very big teams, this factor might be higher.
Here’s the sample spreadsheet: Team velocity estimation model
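And, in the same spirit, a minimal Python sketch of the velocity estimation model. The constants are just the ballpark figures quoted above, the ramp-up of newcomers is left out for brevity, and everything should be recalibrated against your own measurements.

```python
# Minimal sketch of the team velocity estimation model.
# Every constant below is an assumption; replace it with your own data.

DAILY_VELOCITY = 0.70             # ideal days produced per real working day
WORKING_DAYS_PER_YEAR = 220       # not 52 * 5 = 260
TURNOVER_LOSS = 0.05              # ~1 in 10 leaves, ~6 months to replace
OVERHEAD_PER_EXTRA_PERSON = 0.2   # communication/management cost

def effective_headcount(people: int) -> float:
    """Apply communication overhead (1 -> 1.0, 2 -> 1.8, 6 -> 5.0)
    and then the average turnover loss."""
    raw = people - OVERHEAD_PER_EXTRA_PERSON * (people - 1)
    return raw * (1 - TURNOVER_LOSS)

def yearly_capacity(people: int) -> float:
    """Ideal days of work the team can deliver per year."""
    return effective_headcount(people) * WORKING_DAYS_PER_YEAR * DAILY_VELOCITY

# Compare staffing scenarios: doubling the team does not double the output,
# even before accounting for the ramp-up of the newcomers.
for team_size in (3, 6, 12):
    print(team_size, round(yearly_capacity(team_size)), "ideal days/year")
```

Running it for a few team sizes makes the staffing discussion concrete: with these figures, going from 6 to 12 people buys a bit less than twice the capacity, before even counting the ramp-up of the new hires.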
This might sound a little complicated and over-engineered (don’t complain, I spared you the spreadsheet where I mix both models with release burnup graphs and macros to generate user story cards 🙂 – you won’t need it if you have decent agile tooling, which was not my case). But having sound predictions of what will happen in the future is a prerequisite to acting upon that future. When budgets are scary and the deadline is years away, a spreadsheet with many parameters and experimental data will prove a good way to negotiate with top management. And once people realize that what you predicted one year ago proved true, they will listen very carefully to what you say will happen in two years, and maybe grant those two additional developers you need. This might also help you slow down if quality gets out of hand: the automatic adjustment feature of the two-phase iteration (the stabilization phase extends automatically, which decreases team velocity in the long run) will help justify why team throughput decreases – quality is simply a priority.