In scientific or research and development (R&D) projects, the full experimental design cannot be established in advance, since most experiments depend on assumptions that still need to be validated. You can only budget the initial experiments that validate those assumptions.
After failing to stay within budget on several projects, last year we decided to create a process to properly estimate the cost of all our future activities. We hired a consultant and allocated 3 experienced researchers to investigate why we were failing and how to fix it. We took previous project plans and broke them down, trying to extract the basic mechanisms of estimation and execution, in order to propose a model that could be applied across distinct situations. We spent hours and hours modelling in spreadsheets and workflow software, but our ‘brick-block’ budgeting system (depicted above), although theoretically correct, was too complicated and too far from the reality of a laboratory to work.
Why did we fail to understand the failure?
It took me another year, and a deep dive into Agile management methodologies, to understand the fundamental problem behind budgeting R&D projects.
When a researcher envisions a project, he/she usually makes several assumptions. It cannot be otherwise, since uncertainty is the rule, not the exception, in science. It is also the rule that not all of these assumptions will be validated, and the researcher is expected to ‘navigate’ this uncertainty. A good researcher is committed to concluding whatever the data allows him/her to conclude, which, most of the time, is not what he/she anticipated. Failing to validate an assumption is not a failure: it is part of the scientific method, and that is how we produce trustworthy, replicable, scientific knowledge.
Open parenthesis: Managers, on the other hand, are committed to their estimates. If a project deviates from the estimated deadline or budget, there was a failure either in the planning or in the execution. And that failure is, indeed, a failure. Close parenthesis.
When I started budgeting projects, I was assuming that all my assumptions would be validated. Moreover, I was assuming that they would be validated on the first try. Both are quite unrealistic assumptions. As a researcher, I needed the project’s limits to be… not really open, but… let’s say… blurry, to account for that uncertainty. However, in the real world, time and money are limited resources with high opportunity costs. The real world could not accept researchers’ loose commitment to boundaries.
The ‘brick-block’ budgeting approach was an attempt to reconcile the uncertainty of R&D projects with the project manager’s need for control. It estimated the average cost of each ‘assumption validation’ and of each ‘experiment/execution’. It established ‘decision gates’ and limited the number of attempts to validate each assumption and to replicate each experiment. To be as ‘realistic’ as we could, we divided assumptions and experiments into categories: literature review, in silico, in vitro and in vivo experiments, each with different CAPEX and OPEX.
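To make that concrete, here is a minimal sketch of such a ‘brick-block’ estimate in code, with made-up category costs and attempt limits (illustrative numbers only, not our actual figures): each ‘brick’ is an assumption or experiment with a category, a per-attempt cost and a capped number of attempts, and the budget is just the sum of the worst cases.

```python
# Hypothetical 'brick-block' estimate: all figures are illustrative, not real lab costs.

# One-off setup cost (CAPEX) and per-attempt cost (OPEX) for each category.
CATEGORY_COSTS = {
    "literature_review": {"capex": 0,      "opex": 500},
    "in_silico":         {"capex": 2_000,  "opex": 800},
    "in_vitro":          {"capex": 10_000, "opex": 3_000},
    "in_vivo":           {"capex": 50_000, "opex": 12_000},
}

# Each 'brick': an assumption to validate (or an experiment to replicate),
# its category, and the maximum attempts allowed before its decision gate.
plan = [
    {"name": "target is druggable",   "category": "literature_review", "max_attempts": 1},
    {"name": "binding model holds",   "category": "in_silico",         "max_attempts": 3},
    {"name": "activity in cell line", "category": "in_vitro",          "max_attempts": 3},
]

def worst_case_budget(plan):
    """Count CAPEX once per brick, plus OPEX for every allowed attempt."""
    total = 0
    for brick in plan:
        costs = CATEGORY_COSTS[brick["category"]]
        total += costs["capex"] + costs["opex"] * brick["max_attempts"]
    return total

print(f"Worst-case budget: ${worst_case_budget(plan):,}")  # Worst-case budget: $23,900
```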
But when I tried to apply the model to my next project, the cost of estimating the cost of all the possible alternative scenarios (even when limited to 3 assumptions and 3 subsequent experiments with 3 replicates each) became too high (at least for me and my team) and, truly, brought no overall increase in confidence in the budget.
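The pain was combinatorial. Under one simple reading of the model (my framing here, not a formal part of it), each of those bricks can succeed on attempt 1, 2 or 3, or fail at its decision gate, so every brick multiplies the number of alternative paths to be costed by four:

```python
# Toy count of alternative scenarios under the reading described above.
outcomes_per_brick = 4   # pass on attempt 1, 2 or 3, or fail at the gate
bricks = 3 + 3           # 3 assumptions + 3 subsequent experiments

scenarios = outcomes_per_brick ** bricks
print(scenarios)         # 4096 distinct paths to estimate and keep consistent
```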
At the time, I felt I had hit a dead end.
Fortunately, it was not only researchers who were failing to execute projects as estimated: civil engineers, software developers, teachers… no one seemed to be able to stick to the plan.
It turned out that the human brain is simply very, very, very bad at estimating things. Studies show that, on average, the error in estimates of execution time is around 400%! Maybe that is why Niels Bohr (or someone else) said that “it is hard to make predictions, especially about the future”.
And that added a ‘bad estimating brain’ variable to a scenario of ‘uncertainty in R&D’ and ‘limited resources in the real world’: how do we reconcile all of that?
The proposal of Agile management was to set the budget ‘top-down’ and to prioritize activities within a defined time box.
With Agile, you have the product in mind based on your estimation of the user experience (and the human brain is much better at creating stories than at estimating the number of hours required to do something), but you only plan your next step, the deliverable. The client tells you the budget: how much he/she wants to spend on that deliverable. And gives you a deadline. The deliverable is determined after you have all the other variables (if you give me $40 and a week, the best I can do is a plaster model of a bridge or the wireframes of a new piece of software).
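As a toy illustration of that inversion (a sketch of the idea, not of any real Scrum tool; the backlog items and numbers are hypothetical): budget and deadline are fixed inputs, and the deliverable is simply whatever highest-priority work fits inside them.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    priority: int   # 1 = most valuable to the client
    cost: float     # estimated spend in $
    days: float     # estimated effort in days

# Hypothetical backlog for the "$40 and a week" example above.
backlog = [
    BacklogItem("plaster model of the bridge", priority=1, cost=35,    days=4),
    BacklogItem("load simulation",             priority=2, cost=300,   days=10),
    BacklogItem("full structural design",      priority=3, cost=5_000, days=60),
]

def plan_deliverable(backlog, budget, deadline_days):
    """Classic planning fixes the scope and lets cost and time float;
    here cost and time are fixed and the scope is the output."""
    chosen, spent, used = [], 0.0, 0.0
    for item in sorted(backlog, key=lambda i: i.priority):
        if spent + item.cost <= budget and used + item.days <= deadline_days:
            chosen.append(item.name)
            spent += item.cost
            used += item.days
    return chosen

print(plan_deliverable(backlog, budget=40, deadline_days=7))
# ['plaster model of the bridge']
```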
Agile came out of the Toyota factories of the 1950s and spread through the software world in the early days of the 21st century. Now, everyone that matters is using Agile methodology (or is in the process of moving to it). Read this excerpt from Jeff Sutherland’s ‘Scrum: The Art of Doing Twice the Work in Half the Time’:
“In a revamped Scrum-driven world, instead of approving a plan to build a bridge across a river, a legislative body would say to the highway department: ‘We want X number of people to be able to travel over this waterway within Y amount of time and at Z cost. How you do that is up to you.’ That would allow for discovery and innovation.”
A major change in how the project evolves under Agile methodology is that you need to increase your interactions with the client (at the end of every step). This may also be time-consuming, but the result is… the product! A happy client! The result is a reduction in the overall cost and duration of the project, not to mention a reduction in planning time and in the execution team’s frustration.
In the end, it’s a win-win situation.
So why do some people, smart people, still ask for, even demand, Gantt charts and detailed project costs?
Michael Shermer says in his book ‘Why People Believe Weird Things’ that our choices are the result not of our intelligence, but rather of our alternatives, our incentives and the opinions of our peers, and that only after you make a choice, based on some of these factors, do you use all your intelligence to justify it.
After a while, you don’t even remember why you want what you want. You just do. It has become a habit. And habits, like prejudices, are hard to change.
John Seely Brown, who led Xerox’s PARC research lab, argues in his book ‘A New Culture of Learning’ that the half-life of knowledge, which used to be 30 years, is now only 5 years. Our biggest challenge in the years to come is not to learn new things: it is to unlearn old, useless stuff in order to be able to learn new things.
We could start by giving up unrealistic, extensive and exhausting detailed budget estimates.