How to Launch a High-Impact Nonprofit

@tags:: #lit✍/📚book/highlights
@links::
@ref:: How to Launch a High-Impact Nonprofit
@author:: Joey Savoie

=this.file.name

Book cover of "How to Launch a High-Impact Nonprofit"

Reference

Notes

13. Long-term planning

13.1. Theory of change
Quote

A “theory of change” explicitly articulates the assumptions that underlie your plan to achieve a specific goal, and lays out a method to test them.
- View Highlight
-

Quote

A well-designed theory of change allows you to communicate clearly what your activities are and why they lead to the outcomes that you and your supporters want.
- View Highlight
-

Quote

(highlight:: Do reviews of literature → Find insights that seem useful → Publish them
This is not a good theory of change. It doesn’t properly outline your goal, it doesn’t explain how your actions will lead to that goal, and it doesn’t explain how you will measure what you are doing. Presumably, your final goal is to impact living beings positively, not just publish papers.)
- View Highlight
-

Quote

(highlight:: A good theory of change draws the full causal chain from your actions to the final impact, which is your end goal.
)
- View Highlight
-

Quote

(highlight:: After going through Charity Entrepreneurship’s curriculum, the Happier Lives Institute published a theory of change that follows the above principles.2
)
- View Highlight
-

Quote

(highlight:: Below is a theory of change for an organization that uses cash transfers to increase immunizations.
)
- View Highlight
-

Quote

When designing measurement systems, keep in mind how much they cost. If a measurement is expensive but isn’t meaningfully informing your decision on whether it’s worth investing more resources in this intervention, cut it. Do try to measure things in more than one way, however, especially if there’s scope for a particular measurement to be misleading.
- View Highlight
-

Quote

Common errors to watch out for
- View Highlight
- h4,

Quote

(highlight:: Don’t forget the endline metric. The above example is a good theory of change, but can it be improved? Immunization isn’t the end goal – saving lives and improving quality of life is. This may seem obvious, but it is very important to note it explicitly so that you remember to model how much disease is actually prevented and how many lives are actually saved by increasing immunization in a region.
If you skip this step, you might (for instance) end up choosing a location that maximizes the number of people you can immunize, rather than one that maximizes the number of lives you can save (e.g., by picking a location with a high child mortality rate).)
- View Highlight
-

Quote

(highlight:: Beware of mission creep: This theory of change actually has two interventions: opening camps and incentivizing people to attend them. Sometimes, this is appropriate. But suppose it turns out that building camps provides most of the benefit, even without the incentive? In that scenario, you might divert the money that goes into incentives and use it to build more camps, and vaccinate more people and save more lives. Or, suppose it turns out that incentivizing preexisting camps would provide most of the benefit of creating new camps at a fraction of the price? In that scenario, you could divert resources you might spend building camps toward just handing out conditional cash transfers, and therefore vaccinate a lot more people and save more lives.
Suppose as this hypothetical nonprofit grows larger, they begin to think: “If we have the camps, why not also provide vitamin supplements, or distribute essential medicines, or have the workers build latrines to prevent the spread of disease?”
Sometimes adding more programs is efficient, squeezes more impact out of fewer resources, and does make sense! It also often looks good on paper and impresses some donors to have a more comprehensive and holistic program. However, be sure to do an explicit cost-effectiveness analysis first. There are exceptions, but generally speaking, if you are considering multiple interventions, one of those interventions will be more efficient than all the others and you shouldn’t divert any resources from it.)
- View Highlight
-

Quote

(highlight:: Stay specific and skeptical: Don’t allow any hand-wavy4 steps in your theory of change. Some examples of hand-wavy steps are:
Publishing research → lives are saved. As covered above, you need to add steps where you explicitly model who is going to use it and what actions they will take, and you need to actually talk to these decision-makers. And, you need to quantify the impact of these decisions on living beings.
Raising awareness → less suffering. You should explicitly model and measure, or at least estimate, the rate at which this translates to behavioral changes, voting changes, policy changes, etc. And, you need to quantify the impact of these changes on living beings.
Changing people’s views → impact. You should outline the mechanism of how changing attitudes influences living beings, and estimate or measure the degree of this impact.
Don’t just assume that one step will lead to another. Don’t overestimate the probability that one step will lead to another. Find an objective way to verify it. Get feedback from someone you know to be skeptical.)
- View Highlight
-

13.2. Three levels of long-term planning
Quote

We recommend three levels of planning: an aspirational five-year plan, a brief but precise one-year plan, and a more detailed month-to-month plan.
- View Highlight
-

Quote

Five-year plan
- View Highlight
- h4,

Quote

Stating your long-term goals, and criteria for success and failure, beforehand will keep you honest about whether or not you are meeting them. In this way, you can avoid moving the goalposts – changing the criteria for “success” to fit whatever you did achieve. For instance, say you’ve completed a counterfactual impact evaluation, which specified that you had to achieve a certain goal in order to consider your activities sufficiently high impact to be worth doing given the alternatives. Five years later, you have not achieved it. Though tough, it will be easier for you to acknowledge this failure to yourself if you’ve set non-negotiable criteria beforehand.
- View Highlight
-

Quote

In addition to its direct importance in realizing long-term impact, having a clear vision for the longer-term future is essential when communicating with donors and grantmakers. Many will want to know whether the project will be sustainable (Will it eventually be able to continue without their support?) and scalable (Does it have the structure to grow cost-effectively?).
- View Highlight
-

Quote

One-year plan
- View Highlight
- h4,

Quote

(highlight:: A one-year plan tends to include the following elements:
• A set of very specific, measurable goals and clear timelines for achieving them. It also includes a plan for how any progress on the goals will be measured.
• A budget. Funding is typically given out on a yearly basis, so one-year plans are often part of a fundraising ask. It may be wise to include a rough outline of who you are hoping will fund this budget.
• An explanation of how these goals tie into past activities and connect with longer-term future goals)
- View Highlight
-

Quote

Month-to-month timeline
- View Highlight
- h4,

Quote

Tracking progress and evaluating next steps
- View Highlight
- h4,

Quote

Every month, it’s worth going through your monthly and one-year goals. Try highlighting things that are “on track,” “not on track,” and “seem unlikely to happen” in different colors. After seeing how the month went, you can refocus.
- View Highlight
-

Quote

(highlight:: You should also build longer, more in-depth reevaluation points where you deeply question all your core assumptions.
• Which goals did you meet this past year?
• Do you still think this is the most impactful project you can work on, and if not, how can you pivot or scale down?
• Which aspects of your process worked well, and which need improvement?
• Did you learn any new information, and how should you change what you are doing based on that info?
Reevaluations should also be done when a major organizational change occurs or a new key piece of information is discovered.)
- View Highlight
-

13.3. To scale, or not to scale: that is the question
Quote

In their article for the Stanford Social Innovation Review, Alice Gugelev and Andrew Stern shift the conversation away from scale-up and toward the concept of endgame. They observe that, while scaling up can certainly increase your organization’s impact, scale and impact are imperfectly aligned. Essentially, through thinking consciously about our endgame, we can better map action onto impact and do good more effectively.6
- View Highlight
-

Quote

(highlight:: For a nonprofit, the leap from early stages (where budget is capped at roughly $5 million) to breakout ($5-$10 million) or full scale (upward of $10 million) presents an enormous challenge. The authors describe this gap as the social capital chasm, and highlight four aspects of the nonprofit sector that create it:
• Incentive structures (e.g., lack of equity/stock options) make attracting managerial talent difficult.
• There is usually no overlap between the funders and direct beneficiaries, so charities must “win two games.”7
• An emphasis on minimizing overhead can undermine operational capacities.
• Funding is erratic, since grants are normally allocated to specific programs rather than to broad missions.)
- View Highlight
-

Quote

Gugelev and Stern sketch out six possible endgames.
- View Highlight
- h4,

Quote

(highlight:: Open source. An organization cultivates new ideas and interventions through research. Knowledge and resources can then be shared with other organizations.
For example, Charity Entrepreneurship extensively researches interventions and supports co-founders to start the most effective ones through our Incubation Program. CE’s research and handbook for entrepreneurs are both publicly available to extend impact beyond program participants.)
- View Highlight
-

Quote

(highlight:: Replication. An organization creates a model or product that can be easily reproduced. The original organization can offer certification and training, and act as a center of excellence.
Charter school networks in the US use replication centers to teach their model to other educators, whose preexisting infrastructure and embeddedness within a community mean that they may be better positioned to implement the model.)
- View Highlight
-

Quote

(highlight:: Government adoption. This endgame is appropriate for an intervention that can be delivered at scale and requires lobbying to influence policy and budget. After the intervention is adopted, the organization may continue in an advisory role or as a service provider. Yet the ultimate goal is for the government to be in charge of financing and decision-making.
Suvita partners with state governments (as well as with other NGOs) to ensure the sustainability of their intervention, which involves sending SMS reminders for vaccinations. At some point, the government might be able to adopt the intervention itself, run it through the Ministry of Health, and fund it with tax income. While the Suvita model is very lightweight, government adoption is even more important for harder-to-scale programs involving the distribution of cash or in-kind aid.)
- View Highlight
-

Quote

(highlight:: Commercial adoption. An organization explores a potentially profitable product or service, which commercial organizations can then adopt and expand.
The Good Food Institute (GFI) works to expand the market for plant-based and clean meat, supporting companies and innovation by connecting experts to opportunities. A project supported by GFI, Counterfactual Ventures, aims at creating for-profit start-ups in the field of clean meat.)
- View Highlight
-

Quote

(highlight:: Mission achievement. Once an organization has reached a clearly defined, achievable goal, it then winds down its activities. The organization may also pivot if there’s another problem it can effectively tackle with its resources and knowledge.
Recognizing that fundraising work was no longer neglected within the EA community, the team behind Charity Science Outreach wound down the project8 and shifted their focus toward Charity Science Health and, ultimately, Charity Entrepreneurship.)
- View Highlight
-

Quote

(highlight:: Sustained service. Although this tends to be the default, sustained service is only appropriate if the public or private sectors cannot meet a need. In this case, a nonprofit organization fills the gap, and must constantly build on the efficiency of its program.
The Nigerian Government is currently not able to fund widespread cash transfers for vaccinations, and the private sector cannot operate a sustainable business model in this field. GiveWell top charity New Incentives will continue to serve as many beneficiaries as possible.)
- View Highlight
-

Quote

Gugelev and Stern outline three basic imperatives:
- View Highlight
- h4,

Quote

Define your endgame early. Having a clear path forward will keep your organization on track for impact. Working on your endgame will also help refine your theory of change.
- View Highlight
-

Quote

Focus on your core goals. Ensure that your organization’s activities move you toward your endgame.
- View Highlight
-

Quote

Prepare your team. As a nonprofit, your responsibilities are first and foremost to your beneficiaries. But you’re also responsible for your employees. Unless your organization’s endgame is sustained service, its budget should level off or shrink – this has implications for your staff.
- View Highlight
-

Quote

The ultimate goal of nonprofit work is our own obsolescence. We dream that the disease we’re fighting will be eradicated; that no animals will be born into brief and pain-filled lives on factory farms. Such goals are our lodestar. Reflecting on our endgame brings them back into focus.
- View Highlight
-

Putting it all together in the charity world
Quote

To project our mill outreach for the following year, we put together a miller scale-up plan. Within this, we asked each member of our team to provide their own scale-up projections for the next year across differently sized mills. Each team member was given a total of twelve mills to distribute. With these twelve mills, we invited each team member to add their best assumption for: a) how many mills in each production band we could partner with, and b) what proportion of production we could extend fortification to within those mill bands. We then took the average of the team members’ individual projections.
- View Highlight
-

14. Cost-effectiveness analysis

Quote

Cost-effectiveness analysis is commonly used in economics, health economics, and charity evaluation. It calculates a ratio of the cost of a given action or intervention relative to its impact. Cost is usually measured in dollars, with impact often measured in something like DALYs or lives saved.
- View Highlight
-
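
The cost-per-impact ratio this highlight describes can be sketched in a couple of lines; the program figures below are hypothetical, not taken from the book.

```python
# Minimal sketch of a cost-effectiveness ratio: cost per unit of impact.
def cost_effectiveness(total_cost_usd: float, impact_units: float) -> float:
    """Return cost per unit of impact (e.g., USD per DALY averted)."""
    return total_cost_usd / impact_units

# Hypothetical program: $100,000 spent, 2,000 DALYs averted.
print(cost_effectiveness(100_000, 2_000))  # 50.0 USD per DALY
```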

Quote

14.1. Strengths & weaknesses
- View Highlight
-

Quote

Modeled vs. true cost-effectiveness
- View Highlight
- h4,

Quote

It is important to distinguish between the true cost-effectiveness of an action and the modeled cost-effectiveness. The true cost-effectiveness of an action – if known – would be a highly relevant metric and could be weighted very heavily when making a decision. However, we often lack important data about the world, or a sufficient amount of it. The closest we can usually get to the true cost-effectiveness of an intervention is through constructing a model – an imperfect estimate.
- View Highlight
-

Quote

Why is cost-effectiveness analysis useful?
- View Highlight
- h4,

Quote

Benefits of CEAs (from strongest to weakest):
- View Highlight
-

Quote

Allow formal sensitivity analysis: A sensitivity analysis can locate the most important assumptions, variables, and considerations affecting the endline conclusion – the factors that could most radically change the amount of good achieved.2 Formal sensitivity analysis can be done quickly and easily on a CEA, showing the key parameters that are the most important to get right.
- View Highlight
-
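
To make the idea concrete, here is a minimal one-at-a-time sensitivity check on a toy CEA model; the model and every parameter value are invented for illustration.

```python
# Toy CEA model: impact per dollar as a product of assumed parameters.
def lives_saved_per_dollar(params):
    return params["uptake"] * params["effect"] / params["cost_per_person"]

base = {"uptake": 0.6, "effect": 0.02, "cost_per_person": 5.0}

def sensitivity(model, params, swing=0.2):
    """Vary each parameter by +/- swing and record the output range."""
    ranges = {}
    for name in params:
        lo, hi = dict(params), dict(params)
        lo[name] *= 1 - swing
        hi[name] *= 1 + swing
        outs = sorted([model(lo), model(hi)])
        ranges[name] = outs[1] - outs[0]
    return ranges

spans = sensitivity(lives_saved_per_dollar, base)
# The parameter with the widest swing is the one most worth pinning down.
print(max(spans, key=spans.get))
```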

Quote

Give a transparent picture of the evaluator’s rationale: Cost-effectiveness models provide a high level of transparency. Since each input is identified and clearly quantified, an outsider can see where assumptions are being made and more easily assess the validity of the conclusions.
- View Highlight
-

Quote

Reduce some biases: CEAs are less susceptible to certain human biases that affect other analyses. For example, a well-used CEA can reduce the base rate fallacy,5 conjunction fallacy,6 and hyperbolic discounting.7
- View Highlight
-

Quote

Why we shouldn’t rely solely on cost-effectiveness analysis
- View Highlight
- h4,

Quote

Subject to the “optimizer’s curse”: All estimates are prone to error, and these errors compound.10 An intervention whose CEA yields a high cost-effectiveness is more likely to have had errors in its favor. This means that the most and least cost-effective interventions are likely to regress closer to the average upon further examination. Overweighting CEAs in your decision-making could lead you to neglect good opportunities that did not have as many favorable errors.
- View Highlight
-
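
A tiny simulation makes the optimizer's curse visible: give 100 equally good interventions noisy estimates, and the top-ranked one almost always owes its rank to favorable noise. The distributions here are assumptions for the demonstration, not anything from the text.

```python
# Sketch: every intervention has the same true value, estimates are noisy.
import random

random.seed(0)
true_values = [1.0] * 100                      # all equally good in truth
estimates = [v + random.gauss(0, 0.5) for v in true_values]

best = max(range(100), key=lambda i: estimates[i])
# The intervention that *looks* best almost surely benefited from noise:
print(f"top estimate {estimates[best]:.2f} vs true value {true_values[best]}")
```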

Quote

Necessarily involve value judgments: It is surprising how much value judgments can differ. For example, GiveWell assumes that the “value of averting the death of an individual under 5 [years of age]” is 50 times larger than the value of “doubling consumption for one person for one year.”11 Reasonable estimates could be as large as six times this amount, using life-satisfaction years. If all value judgments are subjective preferences that vary among individuals, then CEAs are only generalizable insofar as the researcher’s values align with the reader’s.
- View Highlight
-

Quote

Model uncertainty: Cost-effectiveness models are necessarily simplifications of reality. This is both a strength and a weakness. Although simplification lets us reach a clearer understanding faster, it also means that models do not capture reality accurately. Adjustments in the variables used will change the final value of the CEA. One way to combat this is to create several models and see if they converge.
- View Highlight
-

Quote

Prone to mistakes: Mistakes are inevitable, due to human error and/or poor information quality. Although small mistakes usually only translate to small problems on their own, these mistakes compound in a multivariate model, thus exaggerating the consequences. For example, GiveWell once found five separate errors in a DCP2 DALY figure for deworming that contributed to an overestimation of the intervention’s cost-effectiveness by one hundred times.12
- View Highlight
-

Quote

May not be generalizable to other contexts: Some CEAs rely heavily on randomized controlled trials (RCTs) for their data, and in some cases, this can be problematic. If an RCT was conducted in one particular region or with one particular method, the effect size may change dramatically in different regions or with other methods.
- View Highlight
-

Quote

Make it hard to model flow-through effects: It is difficult to model flow-through effects in CEAs properly. Indeed, a common tactic is to ignore them entirely. The currently proposed ways to incorporate flow-through effects take vast amounts of time or are prone to error.
- View Highlight
-

Quote

Can be misleading in many ways: If researchers fail to consider important factors or are not transparent in their reasoning, CEAs can yield misleading results. For example, a CEA looking at expected value needs to incorporate the probability of success. A CEA that only reports expected value makes no distinction between a 50% chance of saving ten children and a 100% chance of saving five children. This fails to account for any level of risk aversion.
- View Highlight
-
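
The ten-children-vs-five-children example reduces to this arithmetic, which shows why an EV-only CEA is blind to risk:

```python
# Two programs with equal expected value but different risk profiles.
p_a, lives_a = 0.5, 10    # 50% chance of saving ten children
p_b, lives_b = 1.0, 5     # certainty of saving five children

ev_a = p_a * lives_a
ev_b = p_b * lives_b
print(ev_a, ev_b)  # both 5.0: expected value alone cannot tell them apart
```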

Quote

Subject to researcher bias: CEAs are resistant to certain biases but susceptible to others. The researcher conducting a particular CEA may (consciously or unconsciously) bias the results toward their own views of its strengths. A researcher’s desire to find novel, cost-effective interventions may also have this result.
- View Highlight
-

Quote

May bias you toward interventions with more measurable results: Effects that are difficult to measure may increase the error rate or be neglected. This can lead us to underestimate the effectiveness of interventions with hard-to-measure outcomes.
- View Highlight
-

Quote

Ninety-percent confidence intervals can be misleading: Depending on how well calibrated researchers are, the worst-case scenario, the best case, and the 90% confidence interval (CI) may be incorrect. CIs are particularly susceptible, as we are likely to underestimate the range of uncertainty. Worst case and best case are no better, as these may rely on many unlikely events happening, meaning the probability of either occurring is minimal.
- View Highlight
-

Quote

How often is cost-effectiveness analysis the right tool?
- View Highlight
- h4,

Quote

Given that CEAs have many benefits and flaws, it is important to use them only in conjunction with other methodologies. CEAs are great as one of three or four components used. It’s worth considering the convergence of these components, i.e., in which direction the different models point overall. We expect CEAs to be more useful in areas where quantitative differences can be very large and where analysis based on our other evaluative criteria is less reliable.
- View Highlight
-

14.2. How to create a cost-effectiveness analysis
Software
Quote

Some tools can be great for certain more specific use cases (e.g., we love Causal13 for time series models). For general CEAs, Google Sheets and Guesstimate seem to be the best tools.
- View Highlight
-

Quote

Guesstimate is a less commonly used system but offers advanced Monte Carlo and sensitivity analysis features. It is too slow to use for very quick CEAs, but can be handy for models with high levels of uncertainty.
- View Highlight
-

Quote

For important models, we recommend using Google Sheets and Guesstimate in combination. Create an initial CEA in a GSheets spreadsheet, and then remodel the data using Guesstimate for sensitivity analysis and simulated endline point estimates. Using two models decreases the odds that an error in one model will have a very significant effect on the overall outcome, particularly since the software packages require somewhat different formatting.
- View Highlight
-
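
For readers without Guesstimate, a rough Python stand-in for its Monte Carlo approach might look like the sketch below; the distributions and parameter ranges are illustrative assumptions, not Guesstimate's actual engine.

```python
# Monte Carlo sketch: propagate uncertain inputs to a cost-per-outcome
# estimate and report an empirical 90% interval.
import random

random.seed(42)

def simulate_cost_per_outcome(n=10_000):
    draws = []
    for _ in range(n):
        cost = random.uniform(80_000, 120_000)    # uncertain budget
        reach = random.uniform(8_000, 12_000)     # people reached
        effect = random.uniform(0.05, 0.15)       # outcomes per person
        draws.append(cost / (reach * effect))
    draws.sort()
    return draws[n // 2], draws[int(n * 0.05)], draws[int(n * 0.95)]

median, low, high = simulate_cost_per_outcome()
print(f"median ~{median:.0f}; 90% interval ~[{low:.0f}, {high:.0f}]")
```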

Formatting
Quote

Color coding
- View Highlight
- h4,

Quote

(highlight:: Yellow: Value and ethical judgments
These numbers could change if the reader has different values from the researcher. For example, reasonable people could disagree about the answer to the question, “How many years of happiness is losing the life of one child under five worth?” When making these judgments, we generally consult the available literature, but there is often no clear answer.)
- View Highlight
-

Quote

(highlight:: Green: Citation-based numbers
These numbers are based on a specific citation. If we found and considered multiple citations, the best will be hyperlinked to the number, and the others will be included in the reference section. If a number is an average of two other numbers, both numbers will be entered into the sheet, and the average will become a calculated number with a different color format.)
- View Highlight
-

Quote

(highlight:: Blue: Calculated number
These numbers are calculations generated from others within the sheet. Calculated numbers involve no more than five variables, both for readability and to allow for sanity checking. Generally, you are less likely to err when building up several smaller subtotals than when making a single very large, multi-variable calculation.)
- View Highlight
-

Quote

(highlight:: Orange: Estimated numbers
Sometimes, no specific numbers can be found for a parameter. In this case, the number is estimated by one or more staff members. These estimates will often be the numbers within a CEA that we have the lowest confidence in.)
- View Highlight
-

Quote

Discounting
- View Highlight
- h4,

Quote

(highlight:: On paper, two interventions might show a similar number of QALYs,15 welfare points, lives saved, etc. In practice, they might be supported by different levels of evidence, occur over different time frames, or have other extenuating factors that change your view of their true cost-effectiveness.
Applying discounting factors is one way to address these issues. We try to keep our discounting clear and separate from the original number in the CEA, as these discounts are generally subjective.)
- View Highlight
-

Quote

Certainty discounting: If a source of evidence suggests one number but the source is extremely weak, we might apply a certainty discount to it. This is based on the assumption that, in general, numbers regress as they get more certain. Thus, using a very weakly evidenced number in one estimate and a strongly evidenced number in another will systematically favor the areas with weaker evidence, as these numbers will be more positive.
- View Highlight
-
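
Kept separate from the raw number, a certainty discount is just a multiplier; both figures below are assumed for illustration.

```python
# Sketch of a certainty discount applied to a weakly evidenced number.
raw_effect = 0.30      # effect size reported by a weak source (assumed)
certainty = 0.4        # subjective confidence the result would replicate
discounted_effect = raw_effect * certainty
print(discounted_effect)  # the value that actually enters the CEA
```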

New highlights added 2024-07-16 at 5:40 PM

Quote

Talking things over with a focus group is a great way to get key qualitative background information about how they experience the intervention. It’s often helpful to get this sort of data before investing resources in more rigid and quantitative metrics, as it gives you a basic sense of the relevant and applicable considerations. For instance, without checking first in a focus group, the metric or question types used in a randomized controlled trial might be biased in ways you would not know about. The general rule here is: talk to many beneficiaries before you run sophisticated studies. To get started, see the Citizens Advice guide “How to run focus groups.”7
- View Highlight
-

Quote

Survey data
- View Highlight
- h4,

Quote

Some common methods of survey data collection include:
- View Highlight
- h4,

Quote

Phone surveys: Phone surveys are cheaper and less geographically constrained. It’s also easier to monitor surveyors and prevent scams. However, it can be harder to get ahold of people, because cell phones are often switched off in settings where people can’t count on electricity or data is expensive. Additionally, phones may be shared among many people, so the wrong person often picks up. Some objective metrics (e.g., does the person live on a dirt floor) are harder to get over the phone.
- View Highlight
-

Quote

In-person surveys: In-person surveys are more expensive. It’s hard to locate people – many regions do not have signs or street addresses, so you have to ask around to find a specific person. It’s also harder to monitor surveyors and prevent scams. However, you can often give longer surveys in person, and get objective metrics.
- View Highlight
-

Quote

Paper surveys: Paper surveys typically have low response rates, and require participants to have high literacy. However, on paper, you have the advantage that questions are always asked in the same way, whereas in person or on the phone, surveyors often paraphrase questions in a way that might change the responses.
- View Highlight
-

Quote

Online surveys: Online surveys are much like paper surveys, but with the added advantage that you can randomize question order, which removes some sources of bias. They’re also quicker to pull data out of. Of course, they are limited by respondents having internet access.
- View Highlight
-
- [note::Hadn't considered question order as a source of bias before but makes sense - I wonder if Google Forms or Airtable has the ability to randomize question order?]
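
Question-order randomization is straightforward to implement yourself if a survey tool lacks it. A minimal sketch, with placeholder question texts:

```python
# Sketch: shuffle question order per respondent to reduce order effects.
import random

questions = ["Q1: household size", "Q2: income source",
             "Q3: program satisfaction", "Q4: service usage"]

def order_for_respondent(respondent_id: int) -> list:
    rng = random.Random(respondent_id)  # deterministic per respondent,
    shuffled = questions[:]             # so orders are reproducible
    rng.shuffle(shuffled)
    return shuffled

print(order_for_respondent(1))
print(order_for_respondent(2))  # usually a different order
```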

Quote

Miscellaneous tips for surveys in the developing world:
- View Highlight
- h4,

Quote

• Be mindful of scams. Build accountability into your system (e.g., if you have surveyors, deploy them in teams and give each team a trusted supervisor). You can also use tablets to record conversations and see how long was spent on each question. Err on the paranoid side here, and implement back checks for as many submissions as possible (e.g., 10%). Frauds you would never expect, such as surveyors filling out fake forms in their hotel rooms instead of in the field, are common.
- View Highlight
-

Quote

Factor into your budget that tablets often break, run out of charge, and get stolen.
- View Highlight
-

Quote

Respondents may not be accustomed to taking surveys and may feel nervous. Place easier and more impersonal questions near the beginning to make respondents more comfortable before delving into potentially more sensitive content.
- View Highlight
-

Quote

Aim for your surveys to take between five and ten minutes. Following this rule will limit the questions you ask to what really matters and avoid combining too many topics in one survey.
- View Highlight
-

Quote

General tips for all surveys
- View Highlight
- h4,

Quote

Role-playing the questions with surveyors beforehand and directly observing the first few responses can help you work out kinks in your process.
- View Highlight
-

Quote

Some people don’t take surveys seriously, don’t pay attention, respond randomly, or make jokes. To prevent this, include some questions that are designed to weed out unreliable respondents.
- View Highlight
-

Quote

Don’t fiddle with analysis methods to make your data look good. Pre-commit to your analysis methods beforehand (sometimes called “preregistration” of your study).
- View Highlight
-

Quote

Running large-scale studies
- View Highlight
- h4,

Experimental and quasi-experimental design
Quote

a few examples of experimental designs
- View Highlight
- h5,

Quote

(highlight:: Randomized controlled trials are generally the gold standard for evaluating impact. The design is straightforward: you randomly8 assign one portion of the population to receive the intervention, and another portion not to. If performed correctly, confounding differences between the two groups disappear, and any observed differences might now be attributable to your intervention.
RCTs usually generate the highest-quality data, because they allow you to understand the counterfactual impacts clearly. If there’s an economic downturn and your beneficiaries were worse off after your intervention, but people who didn’t get the intervention were even worse off, an RCT would still capture your positive impact. If your beneficiaries were better off after your intervention, but people who didn’t get it were also better off and you had nothing to do with it, an RCT will capture that, too. Without a randomized controlled trial, it can be quite a bit more complex to figure out the true effect of your actions.)
- View Highlight
-
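
A minimal sketch of the random-assignment logic, using simulated outcome data with an assumed true effect of +0.5:

```python
# Sketch: randomize participants into treatment/control, compare means.
import random

random.seed(7)
participants = list(range(200))
random.shuffle(participants)
treatment = set(participants[:100])   # half randomly assigned to treatment

def simulated_outcome(pid: int) -> float:
    # Made-up data: noise plus a true +0.5 effect for the treatment group.
    return random.gauss(0, 1) + (0.5 if pid in treatment else 0.0)

outcomes = {pid: simulated_outcome(pid) for pid in participants}
t_mean = sum(outcomes[p] for p in participants if p in treatment) / 100
c_mean = sum(outcomes[p] for p in participants if p not in treatment) / 100
print(f"estimated effect: {t_mean - c_mean:.2f}")  # recovers roughly +0.5
```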

Quote

Discontinuity regressions take an intervention applied at some arbitrary cutoff and compare data points on either side of the threshold to determine the effect. For example, if you were to tutor every student who scored less than 70% on a given exam, you could look at the differences in outcomes for those who scored 69% vs. those who scored 71% to get a sense of the effect of the tutoring. This method is a type of “natural experiment” – a situation where two similar groups experience different things, and you quantify the effects of those differences. This method circumvents some of the expenses and ethical dilemmas surrounding random assignment but is vulnerable to confounding factors.
- View Highlight
-
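
With made-up scores around the 70% cutoff, the tutoring comparison looks like this:

```python
# Sketch of a regression-discontinuity comparison at a 70% score cutoff.
# All scores and later outcomes are invented illustrative data.
students = [
    {"score": 68, "tutored": True,  "later_outcome": 74},
    {"score": 69, "tutored": True,  "later_outcome": 75},
    {"score": 71, "tutored": False, "later_outcome": 72},
    {"score": 72, "tutored": False, "later_outcome": 73},
]

# Near the cutoff, the groups are near-identical except for tutoring.
tutored = [s["later_outcome"] for s in students if s["tutored"]]
untutored = [s["later_outcome"] for s in students if not s["tutored"]]
effect = sum(tutored) / len(tutored) - sum(untutored) / len(untutored)
print(effect)  # 2.0 in this toy data
```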

Quote

(highlight:: Difference-in-differences is a statistical technique that can be used to compare two groups that differ slightly from each other but are generally going through the same changes over time.
Suppose you have child mortality data from two villages but are running your intervention in just one of them. The two villages may not be the same – for instance, one may have a higher child mortality rate than the other. However, they may both undergo similar changes over time. They’re likely to be similarly affected by variations in the national economy, changes in the weather, and natural disasters. If the two groups change over time in similar ways, the method returns a null result, meaning that your intervention had no detectable effect.
The weakness of this method is that the groups you are comparing may change at different rates for reasons unrelated to the intervention. Remember that a positive result isn’t necessarily attributable to the intervention or event that you’re interested in; it only means that something changed.)
- View Highlight
-
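The two-village example reduces to a single subtraction of subtractions. Here is a minimal sketch with hypothetical mortality rates (per 1,000 live births), invented for illustration:

```python
# Difference-in-differences sketch with hypothetical child mortality rates
# (per 1,000) in two villages, before and after the intervention.
village_a = {"before": 50, "after": 40}  # received the intervention
village_b = {"before": 60, "after": 55}  # did not

change_a = village_a["after"] - village_a["before"]  # -10
change_b = village_b["after"] - village_b["before"]  # -5

# The estimated effect is the difference between the two changes:
# how much more mortality fell in the treated village than in the control.
effect = change_a - change_b
print(effect)  # -5: mortality fell 5 points more where the intervention ran
```

Note that the villages' different starting levels (50 vs. 60) drop out of the calculation entirely; only the *changes* are compared, which is what makes the method robust to fixed baseline differences.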

Quote

Interrupted time series analyses compare the same population for a long period of time before and after an intervention. For example, because global GDP generally rises at a somewhat predictable rate, you might try using an interrupted time series to check whether a global pandemic had a lasting effect on it a decade later. This is often a good method to use when there is no possibility of controlled randomization and no reasonable group that might serve as a natural control. However, there is a risk of unrelated events occurring at the cutoff point and contaminating the data.
- View Highlight
-

Quote

Terms to know that might influence or confound your results
- View Highlight
- h5,

Quote

“Significance” vs. “effect size”: If a result is statistically significant, it means that what you’re seeing is unlikely to be due to random chance. “Effect size” reports how big the effect actually is, and for your purposes, this will generally be what actually matters. Two programs that increase vaccination rates by 1% and 10% might both be statistically significant, but have very different effect sizes.
- View Highlight
-
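The vaccination example can be made concrete with a simplified two-proportion z-test. The numbers and sample sizes below are hypothetical; the point is that a tiny effect on a huge sample clears the significance bar just as easily as a large effect on a modest sample.

```python
# Sketch: two hypothetical vaccination programs can both be statistically
# significant while having very different effect sizes.
from math import sqrt

def z_score(p_treat, p_ctrl, n):
    """Simplified two-proportion z statistic with equal group sizes n."""
    p_pool = (p_treat + p_ctrl) / 2
    se = sqrt(2 * p_pool * (1 - p_pool) / n)
    return (p_treat - p_ctrl) / se

# Program A: +1 percentage point on a huge sample.
# Program B: +10 percentage points on a modest sample.
z_a = z_score(0.51, 0.50, n=100_000)
z_b = z_score(0.60, 0.50, n=1_000)

print(z_a > 1.96, z_b > 1.96)            # both "significant" at the 5% level...
print(round(0.51 - 0.50, 2), round(0.60 - 0.50, 2))  # ...but effects differ tenfold
```

If you were choosing between the two programs, the p-values are nearly useless; the 1% vs. 10% effect sizes carry all the decision-relevant information.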

Quote

Correct for multiple comparisons: If you run 100 statistical tests with a 5% chance of a false positive, you’ll get, on average, five false positives. Sadly, running into a false positive in the scientific literature is common, because positive results are often more exciting than negative ones. You can combat this by preregistering your study before seeing the results, or by statistically correcting for multiple comparisons.
- View Highlight
-
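The "100 tests, 5 false positives" arithmetic is easy to simulate. This sketch draws p-values under the null hypothesis (no real effects anywhere) and applies a Bonferroni correction, one of the simplest multiple-comparison corrections: divide the significance threshold by the number of tests.

```python
# Sketch: with 100 tests at alpha = 0.05, chance alone produces "hits".
# A Bonferroni correction divides alpha by the number of tests run.
import random

random.seed(0)
n_tests, alpha = 100, 0.05

# Under the null hypothesis, p-values are uniform on [0, 1].
p_values = [random.random() for _ in range(n_tests)]

naive_hits = sum(p < alpha for p in p_values)
corrected_hits = sum(p < alpha / n_tests for p in p_values)

print(naive_hits)      # several false positives by chance alone
print(corrected_hits)  # typically zero after correction
```

Bonferroni is conservative; it controls false positives at the cost of statistical power, which is part of why preregistration is often the preferable fix.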

Quote

Generalizability: Sometimes, results taken in one context will generalize to another. Other times, they won’t. While a randomized controlled trial can demonstrate that an intervention works within a specific context (internal validity), you can’t be confident that the results will apply in a different region (external validity).
- View Highlight
-

Quote

Organizations to know
- View Highlight
- h4,

Quote

In the effective altruism community, GiveWell often partners with IDinsight. In the broader development community, J-PAL, IPA, and CEGA have a network of academics who conduct high-quality impact evaluations. Having your work evaluated by a third-party organization well respected in the field can provide useful data for you and your organization and demonstrate your impact to donors.
- View Highlight
-

Quote

What if a well-designed study says our intervention is low-impact?
- View Highlight
- h4,

Quote

When you have clear evidence that a different use of resources would improve the world more effectively, stop what you are doing and pursue that instead. Starting an organization is a high-risk, high-reward activity, so plan for both success and failure.
- View Highlight
-

Putting it all together in the charity world
Quote

Douglas W. Hubbard, How to Measure Anything (Hoboken, New Jersey: John Wiley & Sons, Inc., 2010).
- View Highlight
-

16.1. What does good decision-making look like?
Making predictions
Quote

External review
- View Highlight
- h4,

Quote

Results
- View Highlight
- h4,

16.2. Key principles
Quote

(highlight:: …help you recall the most important lessons of decision-making applied to charity entrepreneurship.
Frame your decision well before working on it, and sharpen your tools:
• Determine upfront how much time to put into activities and time cap based on importance.
• Set up reevaluation points at specific dates to reconsider your activities.
• If you are narrowing down from many options, set up rounds of iterative depth, time capping each one.
• Spend the most time on the key choices: co-founder, country, and intervention selection.
• Be ready for many of your decisions to be made with far less than 100% confidence.)
- View Highlight
-

have a possible endgame.

• Monitoring and evaluation – The tape measure

Pick your metrics carefully, and set up systems to measure them often.

16.3. Twelve months in: Are you on the right track?

  1. Are you focusing on a single very impactful thing? Or have you spread out between many areas, some of which are less effective?

  2. Have you progressed fast enough to have started taking action in your implementation country? Have you spent at least a few months there?

  3. Have you established a group of trusted advisors, so that you’re getting good feedback at least once a month?

  4. Have you progressed on your one-year plan, and completed at least half of it?

  5. Have you spent less than 30% of your time on things that are not your primary objective?

  6. Do you have an established system to track an important endline metric?

  7. Are the co-founders of the project and the leadership team stable?

  8. Are you making better decisions for your project than you were a year ago?

  9. Do you have a long document of lessons learned?

  10. Are you fast and effective at using three or more tools during decision-making?

Part III. Key decisions