At the end of a long, dusty red dirt road bordered by fields of maize sit several huts owned by Luzi Orphan Care. This community organization in rural Malawi receives around $500 a year to care for children orphaned by AIDS. Five hundred dollars goes a long way here. We know this because we saw the organization’s meticulously kept records: binders of crisp white pages, filed neatly in boxes, recording how those $500 were spent and how many orphans received care.
We asked the group if they needed anything. They didn’t request more money. Instead, they asked: “How are we doing? How does our work compare to others? Are there things we could do better?”
Their questions were instructive. For too long, foreign aid programs measured their worth based on the size of their budget, not on the results achieved. Fortunately, that is beginning to change. Instead of using last year’s funding amounts to justify budget requests, organizations as large as the U.S. government and as small as Luzi Orphan Care are looking to accomplish more with existing resources.
This interest in effectiveness and efficiency is being demonstrated across the foreign aid world. Increasingly, organizations are focusing on using data, evaluations (such as randomized controlled trials), and advanced analytics to measure results and demonstrate value for money. And they are deploying the power of technology to pinpoint where development interventions are needed.
This revolution arrives just in time. With the Trump Administration’s proposed budget portending fewer resources for foreign assistance, providers must accomplish as much as possible with available resources. In addition, better data about what well-run programs can accomplish will help persuade the public that foreign assistance is a wise investment.
Donor governments and aid organizations should take three basic steps to generate that data:
First, every aid program should be accompanied by a rigorous framework to assess whether it achieves its goals. Backward-looking evaluations after a project’s conclusion are not enough; at that point, the donor has a powerful vested interest in ensuring that the project is deemed a success. Under those circumstances, without a predetermined baseline against which to measure the results, a passing grade is virtually guaranteed. That kind of evaluation tells us little about whether the project achieved its intended goals or delivered good value for money.
There is a better way: before the project begins, donors and implementers should agree on the metrics, or “indicators,” they will use to measure the project’s effects. The next step is to gather baseline data for those indicators and set clear targets for project success. With baseline data and predetermined indicators and targets, donors and implementers can rigorously measure what a project achieved.
Second, development professionals need to collect the right data—and sometimes that means collecting fewer indicators. One of the downsides of the drive to show aid effectiveness is that both donors and implementers have become overwhelmed with indicators.
For example, in a recent study in Malawi, we found that health workers collect more than 3,500 different HIV-related data elements. This means that staff at already-overburdened health facilities spend, on average, several days each month filling out reports. This takes time away from patient care and leaves little time to use the data to improve the clinic’s services.
One way to reduce this burden is for donors to better coordinate the indicators they ask governments and organizations to collect. An African health system supported by five separate donors should not have to collect five slightly different datasets to respond to each donor’s requirements.
Third, donors must analyze and use these data to better tie resources to results. Some implementing partners provide great value for money; others do not. For example, in countries receiving support from the President’s Emergency Plan for AIDS Relief (PEPFAR), we found that the unit expenditure to deliver HIV treatment varied substantially. Fortunately, a relatively new technique called Expenditure Analysis allows PEPFAR to explore why it costs more or less for different organizations to deliver similar results. Sometimes there is a straightforward reason for the higher cost: the work takes place in a remote area, labor costs are higher, and so forth. Sometimes it is a warning sign that something is amiss. This information allows planners to identify best practices that can be replicated elsewhere.
Today, the political winds are blowing against foreign aid. In this challenging climate, donors, implementers, and advocates should continue to mount a principled defense of foreign assistance. But pushing back against budget cuts is not enough: like Luzi Orphan Care, we should strive to do better with the resources we already have.
Hannah Cooper and Tyler Smith are co-founders of Cooper/Smith, a DC-based startup that uses data to increase the effectiveness and efficiency of development programs. They previously served in the State Department’s Office of the Global AIDS Coordinator.