International development cooperation has always been concerned with results. Whether attention to results has actually increased in recent years can be questioned; those with good memories will recall successive waves of results focus. The new EBA report by Cathy Shutt, of the University of Sussex, reviews the critical debate about results-based management. She shows that the debate is not only about obsessive measurement and the reporting of meaningless numbers for political accountability, but also about problematic assumptions and how we think about development, evidence and learning. In the report, Shutt explores what could be learned from those who propose alternative ways of thinking about results work for sustainable development. Are these alternatives an answer to the criticism? At the EBA seminar on August 31, the panelists representing the Government, Sida and civil society shared this concern, although some panel members argued that there is, and always has been, a focus on long-term goals in Swedish aid. The inputs from the panel members and from the study itself raised several issues, but I would like to take note of two questions.
Can we pinpoint the activities that make up results management?
First, the actual system of results-based management may itself be difficult to pinpoint. The term suggests that we are dealing with the processes whereby results are selected and formulated, communicated and put into practice in planning, subsequently monitored and evaluated, and, with such information in mind, new decisions are taken and lessons learned. It is thus a very large part of aid administration, and one that can hardly be separated from other aspects of management. Nevertheless, even though the system's boundaries are fluid, it can be described. And it is a designed system.
How do we deal with the practical dilemmas?
How is such a system designed? To answer that question it is useful to think in terms of design variables, that is, specific structures and processes that can take different shapes. Shutt's report points to several such variables. For a start, results can be defined in the short term or the long term. Results can be defined at global or local levels. The system can be geared more towards accountability or more towards learning. The report also suggests other important design features, for example assumptions about change and power. A 'good' system needs to be balanced; it must, for example, provide information on both short-term and long-term results. Currently the system appears to be tilted towards the short term, towards the local and towards accountability. That balance needs to be redressed. The day the system becomes too focused on the long term, on global effects and on learning, someone will need to address that problem. But that is not where we are at present.
Getting Value for Money from results management?
Second, discussions of results often end up with a summary of value for money. But what about the results management system itself? Does it produce value for money? To answer that, we first need to understand its costs. As in all areas, there are direct costs, such as budgets for monitoring and evaluation. But there are also indirect costs, such as staff time in reference groups, coordination of goals and indicators, and deliberation on evidence. These are hopefully useful things to do, but they do entail costs. Finally, there are hidden costs, for example displacement effects. We know relatively little about the costs of results-based management, whichever shape it takes. In fact, we also know little about the value side. If we assume that value in the 'value for money' equation consists of achieving the stated purposes of accountability, learning and decision support, then the question (do we get value for money?) remains on the table.
In sum, the report and the seminar raise urgent questions and provide a good foundation for further debates on managing for results. What I take with me is twofold: (1) the rather massive and well-documented criticism of simplified approaches to results, approaches that do not recognize long-term effects, partnerships and stakeholder objectives, non-linear change and complex political and social relationships; and (2) an introduction to alternative approaches, other ways of striking a balance in the system. And, over it all, the overriding concern of Value for Money.
Kim Forss, member of the Expert Group for Aid Studies