

Giving Green's Research Overview

This report was last updated in July 2023.


Contents

Executive Summary

Purpose of This Overview

High-level Process

Evidence Sources

Data

Literature

External input

Step 1: Identify Impact Strategies

Step 2: Assess Impact Strategies

Scale: How big a problem is it?

Feasibility: How hard is the problem to address?

Funding need: How much would more donations help?

Shallow dives and deep dives

Theory of change

Cost-effectiveness analysis

Step 3: Longlist Organizations

Step 4: Evaluate Funding Opportunities

Step 5: Publish Recommendations

Recurring Step: Reassess Existing Recommendations

Key Uncertainties


Executive Summary


This page is an overview of Giving Green’s current research process, which we expect to continue to update over time. We hope this increased transparency helps donors make more informed decisions, and also opens us up to additional scrutiny to improve our work. This page may be especially useful for those interested in digging into the details of our work. If you have any questions or feedback, we invite you to contact us.


High-level process

Giving Green’s mission is to reduce human suffering due to climate change. Our theory of change involves directing more funding to our recommended high-impact climate strategies to reduce climate change. We follow a five-step research process: identify impact strategies, assess impact strategies, longlist potential organizations, evaluate specific funding opportunities, and publish recommendations. 


Evidence sources

We rely on three broad types of evidence: data (e.g., emissions, philanthropic funding), literature (e.g., academic journals, industry reports), and external input (e.g., climate researchers, policymakers). The type of evidence we use, as well as how we use it, depends on its availability and the research stage.


Step 1: Identify impact strategies

As a first step, we want to answer the question: What are potentially promising impact strategies? Regardless of geography or approach, we look for very rough indications that a strategy may be a promising fit for Giving Green (e.g., expected high impact of the marginal dollar). The output of this step is additions to our research prioritization dashboard.


Step 2: Assess impact strategies

At this stage, we move from identification to evaluation. We seek to answer the question: What is the scale, feasibility, and funding need of an impact strategy? For scale, we want to know how much a specific problem is contributing to human suffering due to climate change, or how much an impact strategy could reduce it. For feasibility, we want to determine an impact strategy’s likelihood of achieving success given additional philanthropic funding. And for funding need, we want to understand (a) whether climate philanthropy funding opportunities exist and (b) how much an impact strategy is constrained by philanthropic funding. At this early stage in our analysis, we use a combination of metrics and heuristics to qualitatively rank these criteria as low, medium, or high, and use these rankings to decide which impact strategies to prioritize for additional research.


Shallow dives and deep dives

We subsequently evaluate prioritized impact strategies at two depths: shallow dives and deep dives. These dives move beyond qualitative rankings, and also consider major co-benefits and adverse effects. At this stage in our analysis, we also introduce two analytical tools: theories of change and cost-effectiveness analyses (CEAs). Theories of change help us map out and assess an impact strategy’s pathway. We use CEAs as an input into our comparison of the cost-effectiveness of different strategies and organizations. Depending on the certainty of our inputs, we may use CEAs to identify or confirm important parameters, assess whether it is plausible that a donation could be highly cost-effective, and/or estimate the actual cost-effectiveness of a strategy or organization.


Step 3: Longlist organizations

As we focus our analysis on a specific impact strategy, we seek to answer the question: Are there promising organizations that may have funding needs? We longlist organizations to map out the universe of funding opportunities for a given impact strategy, and roughly assess organizations based on their focus (how much it aligns with a strategy), effectiveness (whether an organization could be highly effective), and size (operational scale and potential funding need).


Step 4: Evaluate funding opportunities

At this stage, we seek to answer the question: Does an organization have a cost-effective theory of change with specific funding needs? For a given impact strategy, we generally evaluate three to five organizations using the same shallow and deep dive formats outlined above. After completing a deep dive, we decide whether an organization should receive a top recommendation status.


Step 5: Publish recommendations

At this final stage, we publish summaries of our deep dives that seek to answer the question: Why is this likely to be among the most cost-effective funding opportunities?


Recurring step: Reassess existing recommendations

As an ongoing step, we update our existing recommendations annually. We seek to answer the question: Has anything changed about an organization or its context that would cause us to no longer list it as a top recommendation? We assess implications for an organization’s scale, feasibility, and funding need, and maintain or remove our recommendation accordingly.


Key uncertainties

Our research process has evolved over time, and we continue to have uncertainties about our approach. These include: whether and how we should prioritize a diversity of recommendations, how to best define and estimate human suffering due to climate change, an appropriate balance of research breadth versus depth, how to avoid false precision, and how to balance transparency with other considerations.


Purpose of This Overview


This page is an overview of our current research process, which we expect to continue to update over time. We hope this increased transparency helps donors make more informed decisions, and also opens us up to additional scrutiny to improve our work. This page may be especially useful for those interested in digging into the details of our work.


This page provides a high-level overview of our current (2023) research process. Our research has continued to evolve over time, from its initial exclusive focus on carbon offsets (see Our Mistakes) to our increased focus on reasoning transparency in 2022. Not only does this page reflect our latest thinking, but it also represents a new level of transparency (one of our organizational values) to help supporters understand our overall research-to-recommendation pipeline.


We hope this overview accomplishes a few goals:

  • Helps donors make more informed decisions about whether to trust Giving Green’s recommendations, which could increase donations directed by Giving Green.

  • Helps donors in their broader (non-specific to Giving Green) climate philanthropy decisions, which could increase the cost-effectiveness of general climate philanthropy efforts.

  • Opens us up to additional scrutiny of our research methodology, which could allow us to increase the quality and usefulness of our work.

  • Encourages increased transparency among the broader climate philanthropy community, to allow us to collaborate and learn more from each other.


This page is meant for those interested in digging into the details of our overall process, or for those curious to learn more about a specific aspect of our methodology (e.g., how we use external input). This page is not meant to be an exhaustive explanation of our research, nor is it meant to set in stone an unchanging process. We expect to intentionally review our process on an annual basis, but generally consider this to be a live document that we may continually update.


If you have any questions or feedback, we invite you to contact us.


High-level Process


Giving Green’s mission is to reduce human suffering due to climate change. To accomplish this goal, we prioritize high-impact research, conduct rigorous evaluations, and make recommendations for cost-effective climate funding opportunities [1]. Our theory of change (Figure 1) involves combining our recommendations with communication, leading to increased funding for high-impact climate strategies, lowered warming, and eventually reduced human suffering.



Figure 1. Giving Green organizational theory of change

At a high level, we follow a five-step research process (Figure 2): we identify impact strategies, assess impact strategies, longlist potential organizations, evaluate specific funding opportunities, and publish recommendations [2]. We explore each step in more detail below.


Figure 2. Giving Green research process

While this process primarily applies to how we identify our overall top recommendations, we use a similar approach for our other workstreams, such as our business recommendations and Australia recommendations. In these instances, we might introduce constraints on the landscape of opportunities (e.g., Australia-specific organizations) or approaches (e.g., business strategy decisions instead of philanthropically-funded interventions) [3].


For all our workstreams, we also strive to have a diversity of highly cost-effective recommendations that account for donor/business preferences. Though we think our target audiences are generally interested in high-impact philanthropy, we do not think their donation decisions are always driven solely by cost-effectiveness. We also believe there is considerable uncertainty about what the “best” philanthropic strategies are. Therefore, we think it’s important to have at least some variety in the strategies and sectors of our recommendations, even if we were to think that one strategy may be more cost-effective than others. This is in line with our organizational value of humility. Overall, even if some of our recommendations may not be as cost-effective as others, we think this approach maximizes our overall impact (see Key uncertainties).


While our research process is theoretically stepwise, it is not always linear. For example, we may rule out an impact strategy if we are unsure whether any existing organizations implement this approach. For this reason, we might attempt to identify organizations working on a particular impact strategy before deeply assessing the approach.


Since the specifics of our analyses also depend on varying factors such as available evidence and the nuances of individual impact strategies, this overview is necessarily limited in its depth and comprehensiveness. For details on our specific research process for different impact strategies, we invite you to review our research reports.


Evidence Sources


In general, we rely on three broad types of evidence to inform our research process: data, literature, and external input.


Data


Data types and sources vary substantially by research project, but there are generally three bodies of data we almost always use: emissions, progress, and funding. Some of the largest and most reliable data we use relate to past and current emissions. For example, we often use emissions by sector as a quick input into our assessment of scale. To inform our assessment of broader climate progress and feasibility, we reference data on climate targets, progress, and policies (e.g., Climate Action Tracker). [4] Since we are ultimately interested in optimal resource allocation across climate impact strategies, we also review data on public, private, and philanthropic spending. [5]


Literature


Literature reviews are a part of all our research projects, and can include both academic and non-academic evidence.


When available and relevant, we use academic journal articles as a relatively important input into our research process. We rely on academic articles because they are often (but certainly not always) written by experts with deep knowledge, thoughtfully peer-reviewed as part of the publication process, and accompanied by critiques and citations that help us efficiently assess their quality. Academic articles might provide specific analysis of an impact strategy, or might provide us with useful framing for assessing broader opportunities (e.g., Malhotra and Schmidt 2020 assesses low-carbon innovation attributes) [6]. However, academic articles are not immune to bias, and are also often less available for especially new impact strategies.


Non-academic literature, such as reports, news articles, blogs, or even tweets, can be especially useful for directly applicable analysis and evidence. This literature can complement academic evidence in a few ways: broader assessments across impact strategies (e.g., Halstead 2022’s compilation of evidence across scientific fields), faster publication timelines (e.g., a feasibility study on novel geothermal microdistricting technology), more private sector perspectives (e.g., a McKinsey assessment of cultivated meat), and generally stronger or more uncertain statements that might be useful to consider (e.g., a Carbon Brief summary of the IPCC’s sixth assessment cycle). However, non-academic literature may also be biased, less broadly accepted/vetted, and/or less deeply researched.


External input


As a small team of climate researchers, we rely heavily on the expertise of others. We speak with other climate researchers, climate philanthropists, private sector representatives, policymakers, government employees, and members of civil society to guide and critique our research.


As part of our commitment to our value of humility, we are especially focused on ensuring we receive a diversity of feedback, and seek to proactively engage with stakeholders who may have different or contrary views to our own. We intentionally seek to have at least one conversation with someone who we believe may disagree with our assessment, and generally sort potential contacts by affiliation (e.g., private sector), background (e.g., economist), geographic focus (e.g., US), and potential biases (e.g., topic-focused specialist) to assess external input diversity.


As with our other research tools, how we use expert input largely depends on the research stage and question. In earlier-stage research prioritization, we consult with external stakeholders to gather high-level perspectives on the state of climate progress, as well as to solicit feedback on Giving Green’s overall strategy. Once our research advances to organization-specific evaluations, we primarily use external stakeholders to (a) ensure we have a comprehensive view of the ecosystem of organizations working in a space and (b) assess an organization’s effectiveness.


We also have three more formal review steps. At important milestones in our overall research process (e.g., a tentative recommendation), we inform some or all of our advisors and solicit general feedback. For organizations we are considering recommending, we ask them to provide references. However, due to potential selection and response bias, we primarily use these references to learn more about specific organizational activities. And finally, for near-final drafts of especially important and/or uncertain research, we solicit a detailed review by an external researcher with expertise in the area of investigation.


We think there is more progress we could make on increasing the rigor of our external input process, and plan to consider improvement opportunities in 2023 (see Future plans).


Step 1: Identify Impact Strategies


Our first step is to create a continually-evolving list of all philanthropic strategies that are potentially promising for future research. At first pass, we are interested in all impact strategies, regardless of geography or approach. [7]


Since there are many ways to look at the same challenges or opportunities, we don’t think it’s possible to ever create a complete list of all impact strategies. For example, “avoiding deforestation” could mean saving trees, but it could also mean reducing the need for livestock grazing land. For this reason, we bring our own framing into our analysis and consider our assessment to be an ongoing process, rather than a one-time exercise to create a seemingly exhaustive list.


At this stage, we look for very rough indications that an impact strategy may be a promising fit for Giving Green. For example, is there a sector that is a large and/or growing greenhouse gas emissions source, but receives a relatively small share of philanthropic funding (e.g., industry)? [8] If the answer is yes, that could suggest there is an opportunity for impact. [9] Or is there a nascent mitigation technology that seems stuck in a “valley of death” for which philanthropic funding might help? [10]


The main output of this step is additions to our research prioritization dashboard. View the full dashboard as a Google Sheet here.


Figure 3. A preview of our research prioritization dashboard as of July 2023


Example: From speaking with stakeholders in the climate philanthropy community, we believed efforts to decarbonize industry emissions might be relatively underfunded by philanthropy due to perceptions of low feasibility. We added this impact strategy to our dashboard to assess funding need, as well as whether feasibility might be higher than generally expected.


Step 2: Assess Impact Strategies


At this stage, we move from identification to evaluation. We seek to answer the question: What is the scale, feasibility, and funding need of an impact strategy? We think this step is often the most important, since it gives us adequate granularity to more confidently compare the impact potential of one strategy over another. Subsequent steps of identifying specific organizations and funding opportunities are also essential, but may be relatively less important than generally prioritizing one impact strategy over another.


With so many different strategies to investigate, we run the risk of conducting overly shallow dives on many topics and accidentally deprioritizing an impactful opportunity. On the other hand, we also risk conducting overly deep dives on just a few topics, limiting our capacity to investigate other impactful opportunities. To balance these risks, we use three broad criteria to assess the promisingness of an approach: scale (how big a problem is it?), feasibility (how hard is it to address?), and funding need (how much would more donations help?). [11]


For each of these criteria, we initially use a combination of metrics and heuristics to assign low/medium/high ratings (see research prioritization dashboard). We then use these ratings to prioritize impact strategies for additional research, which subsequently moves beyond these qualitative rankings. See below for additional high-level commentary on each indicator.
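
To illustrate how these ratings can combine into a prioritization decision, here is a minimal sketch (hypothetical Python written for this overview, not our actual dashboard logic) of the rule described in the sections below, where a low rating on one criterion only survives if we are especially optimistic about the other two:

```python
# Illustrative sketch only: combining low/medium/high ratings into a
# prioritization decision. The names and rule below are hypothetical,
# loosely mirroring the "low rating needs high optimism elsewhere"
# logic described in the Scale, Feasibility, and Funding need sections.

from dataclasses import dataclass

@dataclass
class StrategyRatings:
    name: str
    scale: str         # "low" | "medium" | "high"
    feasibility: str   # "low" | "medium" | "high"
    funding_need: str  # "low" | "medium" | "high"

    def prioritize(self) -> bool:
        ratings = [self.scale, self.feasibility, self.funding_need]
        if "low" not in ratings:
            return True
        # A single "low" can be offset only by "high" on both other criteria.
        return ratings.count("low") == 1 and ratings.count("high") == 2

# Hypothetical examples:
print(StrategyRatings("industry decarbonization", "high", "medium", "medium").prioritize())  # True
print(StrategyRatings("example strategy", "low", "medium", "high").prioritize())             # False
```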


Scale: How big a problem is it?


What do we want to know? At a high level, we want to determine how much a specific problem is contributing to human suffering due to climate change, or how much an impact strategy could reduce it. As a rough proxy for this, we estimate how much an approach could theoretically avoid greenhouse gas emissions, remove atmospheric greenhouse gases, and/or reduce radiative forcing. Though these are intermediate effects, we use them because they are relatively easy to measure and communicate. However, we also use heuristics (see below) to prioritize strategies we think are especially likely to reduce human suffering, and sometimes consider ways to estimate human suffering more directly.


Why does scale matter? All else equal, the bigger the scale, the more we want to look into it. We can also think about scale as a hedge on the other indicators. If feasibility or funding need (see below) turn out to be lower-than-expected, bigger scale is a way to still ensure high impact relative to a smaller-scale problem with similar feasibility and funding need.


How do we assess scale?

  • Metrics: We estimate the percentage of future expected greenhouse gas emissions that could be avoided or removed by a certain strategy, or how much it would generally reduce radiative forcing through 2100. [12]

  • Heuristics: Is this problem expected to grow or shrink? Is this problem concentrated in a country or region where this problem is generally expected to grow or shrink? Do we expect this opportunity to scale rapidly enough to significantly affect climate change within the next 75 years? Is this problem or opportunity relevant across a wide range of climate scenarios, including scenarios with relatively high human suffering (roughly defined as > 4°C)? [13]


How do we use our assessment to decide whether to prioritize further research? Scale matters, but we don’t think bigger is always better. For example, we think some strategies and sectors with high scale already receive substantial funding and may benefit less from a Giving Green recommendation compared to other causes we could fund. Therefore, we think scale is primarily useful for deprioritizing research into topics that seem less promising because they are not a big or likely problem or opportunity. Based on the metrics and heuristics above, we assign a simple low/medium/high qualitative ranking. [14] For strategies with a low rating, we only prioritize further research if we are especially optimistic about feasibility and funding need.
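
As a concrete illustration of the thresholds in endnote [14], the following sketch (hypothetical Python, not our actual tooling) maps an estimated share of future expected emissions through 2100 to a qualitative scale rating:

```python
# Illustrative sketch only: the scale thresholds from endnote [14].
# Function name and structure are ours for illustration.

def rate_scale(pct_future_emissions_through_2100: float) -> str:
    """Rate scale from the % of future expected GHG emissions (through
    2100) that a strategy could plausibly avoid or remove."""
    if pct_future_emissions_through_2100 < 2.0:
        return "low"
    if pct_future_emissions_through_2100 <= 5.0:
        return "medium"
    return "high"

# Example: a strategy we believe could affect 5% or more of future
# expected emissions (as with industry, per the example below) rates high.
print(rate_scale(5.5))  # -> "high"
```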


What are key uncertainties or limitations to this approach? We are especially uncertain as to the value to place on reduced climate change over different time periods (see Key uncertainties). [15] 


Example: We estimated that industry accounts for around 29% of global greenhouse gas emissions. [16] “Industry” is a broad term that may not be fully targeted by some impact strategies. Nevertheless, we ranked its scale as high, since we believe impact strategies in this area could generally affect 5% or more of future expected greenhouse gas emissions through 2100. Some topics are also cross-cutting (e.g., clean energy) or have a highly uncertain future (e.g., specific carbon removal technologies).


Feasibility: How hard is the problem to address?


What do we want to know? At a high level, we want to determine an impact strategy’s likelihood of achieving scale relative to the counterfactual. Feasibility assesses how much a funding opportunity can actually contribute to solving a problem.


Why does feasibility matter? Focusing resources on a problem is not useful unless it is solvable. Feasibility takes our scale assessment and fact-checks it based on practical real-life constraints. All else equal, we prioritize opportunities where there is a clear place for philanthropy to add to a strategy’s success, in order to increase the odds that aspirational impact becomes a reality.


How do we assess feasibility?

  • Metrics: We think it is unlikely we can estimate actual likelihood of success with much certainty, and therefore rely primarily on heuristics for this assessment.

  • Heuristics: In general, how strong is the theory of change? Are there important and weak parts of the theory of change that suggest low feasibility? In general, how complicated or simple is the impact strategy? Are there organizations working on this, past examples of success, promising track records, or other pieces of evidence that suggest high feasibility? For topics without substantial precedent, is there any forward-looking or theoretical analysis that makes a strong case for future success? Does philanthropy have an important role to play in this strategy, or is it more easily supported by government or private sector stakeholders?


How do we use our assessment to decide whether to prioritize further research? Based on the heuristics above, we assign a low/medium/high qualitative ranking. [17] For strategies with a low rating, we only prioritize further research if we are especially optimistic about scale and funding need.


What are key uncertainties or limitations to this approach? As part of our initial assessment, we think this is the indicator we are most likely to get wrong, since feasibility assessments often require a deep understanding of specific nuances. If we misunderstand these nuances, we think there is a relatively high risk of creating false negatives. [18] The main way we try to mitigate this risk is by soliciting external input. In later stages, we also develop a more formal theory of change (see Theory of change) that allows us to more methodically estimate feasibility.


Example: We struggled to easily and confidently assess the feasibility of decarbonizing heavy industry. This was partially due to highly varying impact strategies, ranging from corporate pressure campaigns to tweaking international trade regulation. [19] However, we also heard varying opinions from reports we read and key stakeholders we spoke with. On one hand, economic incentives seem to give credence to the “hard-to-decarbonize industry” nickname. [20] On the other hand, some specific impact strategies we assessed seemed highly plausible. At this early stage in our analysis, we classified this overall strategy as having medium feasibility.


Funding need: How much would more donations help?


What do we want to know? At a high level, we want to understand (a) whether climate philanthropy funding opportunities exist and (b) how much an impact strategy is constrained by philanthropic funding.


Why does funding need matter? There are thousands of non-profit organizations with highly effective approaches. However, some of these approaches are relatively well-established and well-funded, and additional donations may be unlikely to have a major effect on future impact. On the other hand, there are also highly effective (or highly promising) impact opportunities that are not well-funded, have large growth potential, and/or could engage in higher-risk activities with more funding certainty. All else equal, we want to prioritize approaches with relatively higher funding needs.


How do we assess funding need?

  • Metric(s): Conditional on available data, we assess current philanthropic, private sector, and public sector funding to roughly understand (a) which source(s) provide funding and (b) how these funding amounts vary in size. For broader impact strategies (e.g., clean energy), we also estimate their share of total philanthropic funding to generally understand whether they have been a major focus of philanthropic funding. [21] We also consider year-over-year changes in philanthropic spending to assess whether funding has grown or shrunk over time (see the rough sketch after this list).

  • Heuristics: Is there a clear role or gap that climate philanthropy is well-placed to fill? Are there specific phases of an impact strategy (e.g., RD&D) that may benefit from philanthropic funding? [22] Is this a low-interest idea that might signal funding need? [23] On the other hand, are there governments, corporations, donors, and/or organizations already allocating substantial money/resources to this? How is philanthropic funding distributed among different geographies? [24]
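
As a rough illustration of the funding metrics above, here is a minimal sketch (hypothetical Python; the 2021 industry figures echo endnote [29], while the 2020 figures are invented placeholders):

```python
# Illustrative sketch only: funding share and year-over-year growth of the
# kind described in the metrics bullet above. The 2021 numbers echo the
# industry example in endnote [29] ($55M of $1.7B); 2020 values are
# hypothetical placeholders.

strategy_funding_usd = {2020: 48e6, 2021: 55e6}        # 2020 value is assumed
total_philanthropy_usd = {2020: 1.6e9, 2021: 1.7e9}    # 2020 value is assumed

share_2021 = strategy_funding_usd[2021] / total_philanthropy_usd[2021]
yoy_growth = strategy_funding_usd[2021] / strategy_funding_usd[2020] - 1

print(f"Share of climate philanthropy: {share_2021:.1%}")   # ~3.2%
print(f"Year-over-year funding growth: {yoy_growth:.1%}")   # ~14.6%
```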


How do we use our assessment to decide whether to prioritize further research? If we cannot identify any organizations working on an impact strategy, we deprioritize further research. (In the future we may look to help seed new organizations for promising strategies that have no one working on them.) For those remaining, we assign a qualitative low/medium/high ranking based on the metrics and heuristics above. [25] In particular, we examine relatively well-funded impact strategies to consider whether they may have relatively lower funding needs. For any topic for which we believe the funding need is low, we deprioritize further research.


What are key uncertainties or limitations to this approach? Trying to determine funding need based on funding trends can be difficult and counterintuitive. For example, ClimateWorks notes that carbon removal efforts received a “sizable” funding increase in 2021. [26] This could imply that funding need is now lower since there is more philanthropic funding. However, increased philanthropic funding could also indicate even higher funding need, perhaps because philanthropic efforts play a complementary role to increased US government support for carbon removal. [27] Assessing broader funding trends can also hide niche funding opportunities with high funding needs. For example, it is our general impression that US policy advocacy work has relatively low funding need. In 2022, we nonetheless recommended Evergreen Collaborative in part because of a timely and specific funding opportunity to help certain states take better advantage of Inflation Reduction Act funding. [28] Because of this, we rely more heavily on heuristics than metrics, and generally have a high bar for deprioritizing impact strategies based on funding need.


Example: We used ClimateWorks data to assess that industry efforts received a relatively small amount (~3%) of climate philanthropy funding. [29] However, there were also some signs that well-funded climate philanthropists (e.g., Jeff Bezos) may be ramping up funding for this impact strategy, which might reduce overall funding need. [30] In tandem with increasing focus and funding from the climate philanthropy community, we also became aware of some new early-stage funding opportunities that might have high funding need. [31] We also assessed the degree to which private sector incentives and spending might render philanthropic funding moot, but generally did not find these arguments to be overly compelling. Given all these considerations, we ranked industry decarbonization as having medium funding need.


Shallow dives and deep dives


For impact strategies we prioritize based on the criteria above, we subsequently evaluate them at two depths: shallow dives and deep dives. [32] These dives continue to evaluate opportunities based on scale, feasibility, and funding need, but move beyond the qualitative low/medium/high rankings we use to make early-stage research prioritizations. For example, instead of assessing general funding need, we attempt to determine how a marginal donation would actually be used, as well as what would happen in the absence of that donation. Additionally, we consider major co-benefits and adverse effects that may affect our prioritization decision-making. [33]


We conduct many shallow dives on different impact strategies and, for the subset we continue to prioritize, deep dives to reassess in substantially more detail. While both focus on the same fundamental criteria, we start with a quicker analysis and only dig deeper if we think a strategy has a high probability of leading to a top recommendation. [34]


At this stage of our analysis, we also formally introduce two analytical tools to help guide our decision-making: theories of change and cost-effectiveness analyses.


Theory of change

We use theories of change to help us map out and assess an impact strategy’s pathway to reducing human suffering due to climate change. For a given theory of change, we focus especially on assumptions that are important for a theory of change to become reality. For each assumption, we rank whether we have low, medium, or high certainty in our assessment.


Theories of change may not always be amenable to easy measurement or quantification, or supported by a robust evidence base. All else equal, we think strategies that have lengthier and/or less certain theories of change are less likely to be successfully implemented, since there are more and/or larger opportunities for an important node or link to fail.
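
To illustrate why chain length and certainty matter, the following sketch (hypothetical Python; the certainty-to-probability mapping is an assumed illustration, not a calibration we actually use) treats each assumption as an independent link that must hold:

```python
# Illustrative sketch only: joint plausibility of a theory of change whose
# assumptions are rated low/medium/high. The probability mapping below is
# an assumption for illustration, and links are treated as independent.

CERTAINTY_TO_PROB = {"low": 0.3, "medium": 0.6, "high": 0.9}  # assumed values

def chain_plausibility(assumption_certainties: list[str]) -> float:
    """Multiply per-assumption confidence to get a rough joint plausibility."""
    p = 1.0
    for certainty in assumption_certainties:
        p *= CERTAINTY_TO_PROB[certainty]
    return p

# A short, well-evidenced chain vs. a longer, shakier one:
print(f"{chain_plausibility(['high', 'high']):.2f}")                    # 0.81
print(f"{chain_plausibility(['medium', 'low', 'medium', 'low']):.3f}")  # 0.032
```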


Cost-effectiveness analysis

We use cost-effectiveness analyses (CEAs) as an input into our comparison of the cost-effectiveness of different strategies and organizations. CEAs often go hand-in-hand with theories of change, as we often consider CEAs to be a “quantified theory of change.” However, many of the opportunities we view as most promising also have highly uncertain inputs. [35] Because of this, our CEAs often primarily serve as a way to (a) identify or confirm especially important parameters that most directly determine how much a donation might reduce climate change and (b) assess whether it is plausible that a donation could be highly cost-effective. [36]


For example, our decarbonizing heavy industry CEA (explanation, model) estimates the cost-effectiveness of advocacy efforts to secure a federal US government commitment to switch to lower-carbon cement procurement, and its potential subsequent effects on the global cement industry. Since this CEA includes highly subjective best-guess parameters, its estimates are highly uncertain and should not be taken literally. Instead, it helped us think through what parameters and assumptions were necessary in order for nonprofit advocacy to reduce greenhouse gas emissions, and whether these efforts plausibly have high cost-effectiveness.
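
To make the structure of such a plausibility check concrete, here is a minimal sketch (hypothetical Python with invented placeholder values, not the actual model linked above) of how guess parameters combine into a cost-effectiveness estimate and compare against the benchmark in endnote [1]:

```python
# Illustrative sketch only: the skeleton of a plausibility-check CEA,
# loosely patterned on the cement-procurement example above. Every value
# below is a hypothetical placeholder, not a number from our actual model.

donation_usd = 1_000_000                    # hypothetical marginal donation
p_advocacy_succeeds = 0.05                  # subjective guess parameter
p_donation_is_pivotal = 0.10                # share of success credited to donation
tons_co2e_avoided_if_success = 50_000_000   # hypothetical emissions effect

expected_tons = p_advocacy_succeeds * p_donation_is_pivotal * tons_co2e_avoided_if_success
usd_per_ton = donation_usd / expected_tons

print(f"Expected tCO2e avoided: {expected_tons:,.0f}")   # 250,000
print(f"Cost-effectiveness: ${usd_per_ton:.2f}/tCO2e")   # $4.00/tCO2e
# Benchmark from endnote [1]: plausibly highly cost-effective if within an
# order of magnitude of $1/tCO2e (i.e., under ~$10/tCO2e).
print("Within benchmark:", usd_per_ton < 10)             # True
```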


We also sometimes use CEAs to compare roughly similar climate impact strategies to determine which to prioritize. By constructing CEAs for each strategy, we can identify (a) which parameters vary and (b) whether one opportunity appears substantially more cost-effective than the other. Though we may have low confidence in our specific cost-effectiveness estimates, we may have high confidence in our assessment of the relative cost-effectiveness of one opportunity over another, and prioritize accordingly. For instance, we found this approach useful when comparing advocacy organizations in the US in the lead-up to climate legislation in 2022. [37]


Example: As we initially refined our investigation, we roughly mapped out nine different strategies to decarbonize heavy industry. [38] Though there are many different industry types and decarbonization impact strategies, our general impression was that heavy industry can be decarbonized if there is (1) adequate demand for low-carbon products, (2) a supportive regulatory framework, and (3) transition assistance that facilitates a switch to low-carbon production. To further refine our thinking, we built a simple theory of change and constructed a CEA as a plausibility check of this strategy’s cost-effectiveness. [39]


Step 3: Longlist Organizations


Once we have identified a promising impact strategy, our investigation turns practical. We longlist organizations to map out the existing universe of funding opportunities under a given impact strategy.


The specific criteria we use to add and roughly evaluate organizations vary based on what we are assessing. At this point, they largely relate to the organization’s focus. For example, we examine how closely the organization’s work aligns with the previously identified impact strategy. In some cases, an organization’s mission and operations are highly aligned with a strategy. In other cases, an organization may only be partially focused on the identified strategy, or may pursue a variation on the strategy we have identified. At this stage, we also mark which organizations on our longlist can both accept small-dollar donations and be legally recommended by Giving Green. [40]


Though it is not in line with our value of transparency, we do not publish organization longlists. We seek to be positive and collaborative members of the climate philanthropy community, and do not believe that we would substantially increase our impact by listing organizations that do not receive a top recommendation status. See Key Uncertainties for more.


Example: Based primarily on desk research and external input, we developed an initial longlist of 17 organizations focused exclusively or partially on decarbonizing heavy industry. [41] Of these, we prioritized four organizations for additional investigation. [42]


Step 4: Evaluate Funding Opportunities


From our longlist, we select a subset of organizations for which to evaluate specific funding opportunities. Though we do not think it is useful to have a pre-specified number of opportunities we assess for a given impact strategy, we generally evaluate (at most) three to five organizations. This allows us to balance research efficiency with the desire to make comparison-based assessments within a given impact strategy. For these organizations, we follow the same shallow dive and deep dive format and analysis outlined above.


We assess an organization’s theory of change to determine whether we believe it is effective in implementing a broader climate impact strategy. For example, we may believe that an organization is not well-placed to successfully advocate to industrial corporations to decarbonize, but that it will likely be effective in its advocacy to governments to procure low-carbon industrial goods. While it may fail via one theory of change pathway, its success via another suggests we may still have some confidence in its overall theory of change to decarbonize industry.


In some instances, there may be organization-specific details that also allow us to construct an organization-specific CEA. For example, there may be past instances of an organization’s specific policy advocacy efforts for which we have some cost data and effect evidence. In these cases, we may attempt to construct organization-specific CEAs, though this depends on the degree to which we believe past precedent should inform our understanding of future cost-effectiveness. In cases for which this is not possible, we rely on other assessment tools, which can sometimes include higher-level CEAs of impact strategies.


As noted above, we think this stage of our analysis may be less important for two reasons: (1) the majority of our impact comes from selecting highly cost-effective impact strategies and (2) it is relatively more difficult to assess specific organizations’ effectiveness. When assessing impact strategies, there may be journal articles on the topic, experts who have devoted years of thought to the question, and/or robust cost comparison data. While it is almost certainly the case that specific funding opportunities vary in cost-effectiveness, we generally think the differences between shortlisted organizations are often relatively small and/or difficult to assess compared to differences in impact strategies’ cost-effectiveness. This is because shortlisted organizations may be relatively similar in approach, and because our final recommendation decisions often rely on more qualitative measures that do not always enable high-confidence quantitative comparisons (e.g., key informant interviews assessing leadership effectiveness).


After completing a deep dive, we decide whether an organization should receive a top recommendation status. 


Example: From our shortlist of four organizations focused on industry decarbonization, we removed one organization due to a lack of response to our communications. [43] For the remaining three, we conducted shallow dives. We assessed that two of these organizations had relatively low funding need, and that one of these organizations’ specific impact strategies may also have had low effectiveness. [44] This caused us to conclude that the cost-effectiveness of a marginal donation to either of these organizations would likely not be within the range we would consider for a top recommendation. [45] Our initial shallow dive of Industrious Labs suggested that it could be highly promising, and our deep dive analysis confirmed our prior thinking. Our deep dive also helped us identify some key uncertainties in our assessment. Ultimately, we decided to select Industrious Labs as a top recommendation.


Step 5: Publish Recommendations

At this final stage, we publish summaries of our deep dives that seek to answer the question: Why is this likely to be among the most cost-effective funding opportunities?


Recommendations do not contain additional information beyond what we’ve included in our funding opportunity deep dives. Instead, they serve as summaries that explain the fundamental reasons why we believe a specific funding opportunity is highly cost-effective, and why we are especially excited to classify it as a top recommendation.


Example: We published our Industrious Labs recommendation in November 2022.


Recurring Step: Reassess Existing Recommendations


As an ongoing step, we update our existing recommendations annually. We seek to answer the question: Has anything changed about an organization or its context that would cause us to no longer list it as a top recommendation? We assess implications for an organization’s scale, feasibility, and funding need, and maintain or remove our recommendation accordingly.


As a heuristic, more recent recommendations receive a light-touch update, whereas older recommendations (two or more years since our last major update) receive more in-depth updates. In our reassessments, we explore what’s changed about the organization and its impact strategy context since our last review, and we consider perspectives and resources we may have overlooked. Questions we consider include:


  • Have there been organizational or contextual changes that cause us to be less (or more) confident of an organization’s theory of change?

  • How effectively has the organization used Giving Green-directed funds so far? With this information, how well do we think the organization can use Giving Green-directed funds in the future?

  • Does it seem harder or easier for this impact strategy to attract funding without a Giving Green recommendation?


We decide whether to maintain a recommendation based on our updated assessments of the organization’s scale, feasibility, and funding need.


Example: We have not yet reassessed our investigations of Industrious Labs or the general impact strategy of decarbonizing heavy industry. Since Industrious Labs is a relatively young and fast-growing organization, we expect our reassessment to primarily focus on funding need and evidence of effectiveness. For decarbonizing heavy industry, in general, we plan to assess whether philanthropic or government funding for this strategy has increased substantially, which may suggest lower funding need than previously.


Key Uncertainties




Our research process has evolved over time, and we expect to continue to make changes as we grow and mature in our analytical approach. We have made some mistakes, and continue to have uncertainties about our current process. See below for some of our key uncertainties.


Diversity of recommendations

Our theory of change accounts for the preferences of our target audiences, which means that the diversity and number of our research and recommendations are affected by our target audiences. For example, part of our motivation for prioritizing research into deforestation was due to our impression that forests attract a relatively large amount of philanthropic funding. [46] If our target audience has strongly held preferences for donating to forest-based mitigation efforts, it could be the case that a recommendation linked to reducing deforestation might ultimately influence more funds and have more impact than a lower-preference recommendation, even if the lower-preference recommendation is technically more cost-effective. [47] It may also be the case that having high-interest research/recommendations provides an entry-point to crowd in additional funding to initially lower-interest recommendations. [48] Finally, we also think that fewer recommendations may make sense from a behavioral perspective, in order to reduce analysis paralysis that could cause would-be donors to not give. Since our own reasoning is based primarily on theory and qualitative target audience feedback, we may conduct more research on this in the future to inform our understanding. [49] We may also consider ways in which our Giving Green fund, formally launched in 2023, allows us to adopt new funding channels (e.g., direct grantmaking, requests for proposals, new organization seed funding, etc.).


Defining and estimating human suffering due to climate change

Our mission is to reduce human suffering due to climate change. However, this is difficult to define and analyze, so we often use atmospheric greenhouse gas levels or radiative forcing over different time periods as a rough proxy. We also use heuristics to prioritize mitigation efforts that we think will be relevant in high-suffering scenarios. We think this is an imperfect process, which is especially relevant when comparing funding opportunities for (a) near-term climate pollutants (e.g., methane) versus longer-lived CO2, (b) short-permanence versus long-permanence interventions (e.g., enhanced soil carbon management versus Climeworks direct air capture), and (c) near-term reductions versus future reductions (e.g., immediate carbon removal versus policy advocacy for nuclear fusion research). [50] This uncertainty varies in importance across our research, so we plan to continue to consider ways to improve and refine this on a case-by-case basis. We may also spend more time generally trying to improve our estimate of human suffering due to climate change, but do not expect this to be a major research focus for 2023.


Balancing research breadth versus depth

Since our mission is to reduce human suffering due to climate change, we face an extraordinarily broad field of potential opportunities to investigate. At each stage of our research process, we must balance breadth and depth in an effort to assess a wide array of opportunities with a high degree of rigor. We are uncertain whether we are striking the right balance, and plan to continue assessing this based on our research capacity and the opportunity landscape.


Avoiding false precision

Truth-seeking and transparency are two of our organizational values that drive us to have high reasoning transparency in all our work. However, we have historically struggled with how to (a) share the full depth of reasoning that goes into our research process and (b) communicate the degree of confidence we place in various statements and tools. For example, our CEAs are highly quantitative tools. For some of our assessments, we ultimately decide to not place much weight or confidence in them, and state as much. [51] We think this may still be confusing for some readers, and could be interpreted as false precision. For all of our research, and especially for highly quantitative analytical techniques such as CEAs, we are uncertain how to best practice high reasoning transparency while avoiding false precision. We plan to continue to experiment with different techniques to avoid false precision. [52]


Balancing transparency with other considerations

As a heuristic and value, we aim to be highly transparent in all our work. However, we think there are three reasons we are not fully transparent: confidentiality, research capacity, and positive collaboration:

  • Confidentiality: As part of our research process, we sometimes have the opportunity to receive confidential data or speak with stakeholders who request off-the-record conversations. In these instances, our first priority is to respect confidentiality. While we wish we were always in a position to share any information we receive, we think this tradeoff is probably worth it. We have found confidential information to be quite useful, as it can provide back-end direction and insight that directly informs our research and reasoning.

  • Research capacity: We are a small team with limited capacity to conduct a large amount of research. In its early stages, our research can be tentative, confusing, or just plain wrong. Polishing this early-stage research enough to make it easily and accurately interpretable would require substantial time that we think is better spent on decision-relevant research. However, we also want to be transparent about our overall process. To balance this, we publish a high-level research prioritization dashboard (even if we do not always publish the extent of our prioritization reasoning), as well as shallow or deep dives on impact strategies we ultimately decide not to prioritize.

  • Positive collaboration: As mentioned above, we do not publish a list of organizations we have assessed and decided not to classify as a top recommendation. We do this to balance transparency with two of our other values: collaboration and humility. We seek to be positive members of the climate philanthropy community, and believe that making the case for top recommendations is more impactful and more in line with our values than highlighting organizations that, though they may be quite effective, do not ultimately receive a top recommendation status.


We are unsure whether we have always made appropriate transparency tradeoff decisions. Though increased research capacity would allow us to practice higher transparency, we think we will continue to balance concerns of confidentiality and positive collaboration.


Endnotes


[1] For more on our organizational strategy and processes, see How Giving Green Works | Our Mission. We loosely define “cost-effective” in terms of high scale, feasibility, and funding need (see below). We more formally define “cost-effective” as how much an additional dollar in philanthropic funding could avoid greenhouse gas emissions, remove atmospheric greenhouse gases, and/or reduce an equivalent amount of radiative forcing. For example, as a benchmark for our top recommendations, we consider an opportunity if its estimated cost-effectiveness is plausibly within an order of magnitude of $1/tCO2e avoided or removed (i.e., less than $10/tCO2e). See Key uncertainties for additional commentary on this proxy estimate and benchmark.


[2] For more on our research, see Climate change mitigation strategies research | Giving Green. For recommendations, see Give to high-impact climate nonprofits | Giving Green.


[3] For more on our Australia research process, see Research Process and Prioritization | Giving Green. For recommendations, see Australian Climate Policy, Our recommended organization | Giving Green. For our business strategy decisions, see How to Think Beyond Net Zero | Giving Green.


[4] Though we review climate targets and policies to roughly assess progress, we don’t necessarily place value on whether an impact strategy is “on track” according to a specific target. This is because we believe many targets are developed to serve as useful political and aspirational tools.


[5] For example, we review ClimateWorks Funding Trends 2022 as one input into our assessment of philanthropic spending.


[6] “We also draw on complexity theory, evolutionary economics, and cybernetic theory to develop a technology typology that helps explain systematic differences in technologies’ experience rates by distinguishing between technologies on the basis of (1) their design complexity and (2) the extent to which they need to be customized.” Malhotra and Schmidt 2020.


[7] Since our mission is to reduce human suffering due to climate change, we think that limiting our initial universe of options by geography or approach might exclude highly cost-effective impact strategies for arbitrary reasons. However, at later stages we may use these sorts of criteria to help us prioritize certain research opportunities over others. We also consider the comparative advantages of our staff to efficiently and effectively conduct research. For example, since the common language among our staff is English, we may find it difficult to assess opportunities in certain geographies.


[8] We roughly estimate that industry accounts for 29% of global greenhouse gas emissions and receives around 3% of climate philanthropic funding from foundations. Industry greenhouse gas emissions: ~29% (2016). See Emissions by sector - Our World in Data, “Global greenhouse gas emissions by sector” figure, industry + energy use in industry. Philanthropic funding focused on industry: ~3% (annual average, 2017-2021). See ClimateWorks Funding Trends 2022, figure 4. Calculation: $55M / $1.7B ≈ 3%.


[9] On the other hand, this could also suggest a hypothetical scenario in which philanthropic funding is less useful, perhaps because private sector incentives are well-aligned to make progress on climate change impact strategies.


[10] For example, philanthropic funding could support a non-profit to advocate for increased government spending on early-stage R&D, demonstration sites, etc.


[11] We developed these indicators by generally speaking with different climate researchers and philanthropists about their assessment/funding criteria, as well as reviewing publicly available frameworks such as Holden Karnofsky’s 2013 Importance, Tractability, and Neglectedness (ITN) framework, MacAskill et al 2022’s Significance, Persistence, and Contingency framework, and GiveWell’s research criteria. We do not adopt these specific terminologies because (1) they have formal and varied formulas that we do not use and (2) we think our terminology may be more easily interpretable by a broader audience.


[12] We think the future of climate change is difficult to predict, and it is our general impression that there is fairly large disagreement and uncertainty over a wide range of future climate scenarios. We use a 2100 timeline to very roughly balance two broad scenarios: (1) mitigation efforts are relatively successful, and human suffering due to climate change is concentrated through 2100; (2) mitigation efforts are relatively unsuccessful, and human suffering due to climate change increases over time, including beyond 2100. We are highly uncertain about the future of climate change, but broadly think scenario 1 is more likely and would cause less suffering than scenario 2. We think this based on a medium-depth review of the available literature, as well as speaking with various climate scientists and generalist forecasters. For example: “Assessment of current policies suggests that the world is on course for around 3 °C of warming above pre-industrial levels by the end of the century — still a catastrophic outcome, but a long way from 5 °C” Source: Hausfather and Peters 2020; “warming can be kept just below 2 degrees Celsius if all conditional and unconditional pledges are implemented in full and on time. Peak warming could be limited to 1.9–2.0 degrees Celsius (5%–95% range 1.4–2.8 °C) in the full implementation case—building on a probabilistic characterization of Earth system uncertainties in line with the Working Group I contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change” Source: Meinshausen et al 2022.


To compare the effects of non-CO2 greenhouse gas emissions to CO2 emissions, we use a 75-year global warming potential (GWP). This allows us to roughly estimate the amount of energy the emissions of 1 ton of a non-CO2 greenhouse gas will absorb through 2100. For additional background on GWP, see Understanding Global Warming Potentials | US EPA. We believe GWP is an imperfect approach to assessing global temperature effects, as well as human suffering due to climate change. For example, see Allen et al 2018 (“Using conventional Global Warming Potentials (GWPs) to convert [short-lived climate pollutants] to ‘CO2-equivalent’ emissions misrepresents their impact on global temperature”). We use GWP75 because it is relatively simple to calculate and communicate, and because we believe our heuristics help us address two of the main flaws of GWP: GWP does not account for (1) varying potency within a given time period and (2) any potential effects beyond the GWP duration. Our heuristic, “is this problem or opportunity relevant across a wide range of climate scenarios, including scenarios with relatively high human suffering?” is meant to downgrade mitigation efforts that are less likely to apply in longer-term (2100+) scenarios with relatively high human suffering. In practice, we think these mitigation efforts are primarily limited to (a) single-instance reductions in near-term climate pollutants (e.g., methane) and (b) CO2 removal with less than 100-year permanence.
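
As a simple illustration of how a GWP conversion works (hypothetical Python; the GWP75 value below is an invented placeholder, since published tables typically report GWP20 and GWP100 rather than GWP75):

```python
# Illustrative sketch only: expressing non-CO2 emissions as CO2-equivalents
# over a 75-year horizon. The methane GWP75 below is an assumed placeholder.

def to_co2e(tons_gas: float, gwp_75yr: float) -> float:
    """Tons of a non-CO2 gas expressed as tons CO2e over 75 years."""
    return tons_gas * gwp_75yr

HYPOTHETICAL_METHANE_GWP75 = 40  # assumed value, for illustration only
print(f"{to_co2e(1_000, HYPOTHETICAL_METHANE_GWP75):,.0f} tCO2e")  # 40,000 tCO2e
```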


[13] We think the future of climate change is difficult to predict, and it is our general impression that there is fairly large disagreement and uncertainty over a wide range of future climate scenarios. For example, the IPCC outlines five Shared Socioeconomic Pathways (SSPs), but does not make claims as to the likelihood of different scenarios occurring (Source: “...the developers of the SSPs make no claim as to the relative likelihood of any scenario coming to pass.” Explainer: How ‘Shared Socioeconomic Pathways’ explore future climate change - Carbon Brief). Another major uncertainty is the degree to which “tipping elements” (“conditions beyond which changes in a part of the climate system become self-perpetuating. These changes may lead to abrupt, irreversible, and dangerous impacts with serious implications for humanity” Source: McKay et al 2022) exist and/or are likely to occur. All else equal, we think that impact strategies that remain relevant across a wide range of scenarios have a greater likelihood of reducing the most human suffering due to climate change. Strategies that apply to worse-case scenarios may be especially impactful, since we think reducing climate change in worse-case scenarios will reduce relatively more suffering than similar efforts in better-case scenarios. For example, we think that avoiding a 0.5 °C global temperature increase from 1.5 °C to 2.0 °C will reduce relatively less suffering than avoiding a similar temperature increase from 2.5 °C to 3.0 °C. However, we do not exclusively prioritize strategies that target worse-case scenarios, given uncertainties across different climate change scenarios and what we believe to be a moderate likelihood that we do not encounter worse-case scenarios (for example, “Assessment of current policies suggests that the world is on course for around 3 °C of warming above pre-industrial levels by the end of the century — still a catastrophic outcome, but a long way from 5 °C” Source: Hausfather and Peters 2020; “warming can be kept just below 2 degrees Celsius if all conditional and unconditional pledges are implemented in full and on time. Peak warming could be limited to 1.9–2.0 degrees Celsius (5%–95% range 1.4–2.8 °C) in the full implementation case—building on a probabilistic characterization of Earth system uncertainties in line with the Working Group I contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change” Source: Meinshausen et al 2022).


[14] Though this ranking is qualitative to allow for high degrees of uncertainty in our initial assessments, we think scale is generally a criterion for which we can ultimately develop a relatively quantifiable metric. We generally consider scale to be low if we believe the problem or opportunity could affect less than 2% of expected future greenhouse gas emissions through 2100, medium for 2% to 5%, and high for greater than 5%.
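As a rough illustration of this binning (a sketch we added; the function name and structure are ours, not Giving Green’s actual tooling):

```python
# Illustrative sketch of the scale heuristic described above:
# share of expected GHG emissions through 2100 -> qualitative rank.
def scale_rank(emissions_share_pct: float) -> str:
    if emissions_share_pct < 2:
        return "low"
    if emissions_share_pct <= 5:
        return "medium"
    return "high"

print(scale_rank(29))  # heavy industry at ~29% of emissions -> "high"
```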


[15] Time frame is important when thinking about scale because (1) different greenhouse gases have different atmospheric lifetimes, (2) different interventions have varying levels of permanence, and (3) time frame affects whether we prioritize mature, ready-to-scale technologies over emerging technologies. For example, prioritizing a longer time frame may favor strategies that focus on permanently removing longer-lived climate pollutants (e.g., some direct air capture projects) over single-instance avoidance of near-term (sometimes referred to as “short-lived”) climate pollutant emissions (e.g., refrigerant destruction). See the “Key uncertainties” section, “Defining and estimating human suffering due to climate change” topic, for additional commentary.


[16] Industry greenhouse gas emissions: ~29% (2016). See Emissions by sector - Our World in Data, “Global greenhouse gas emissions by sector” figure, industry + energy use in industry.


[17] We describe our assessment as low/medium/high to increase readability and avoid false precision. Since these terms can be interpreted differently, we use rough heuristics to define them as percentage likelihoods we think a strategy can, on average, be successful. Low = 0-60%, medium = 60-80%, high = 80-100%.


[18] We are more concerned about false negatives than false positives during our initial assessment. Namely, we can catch false positives at later stages in our analysis whereas false negatives drop out of consideration, meaning we’ve lost the ability to rectify our mistake.


[19] See Decarbonizing Heavy Industry | Giving Green for additional detail.


[20] “Energy-intensive industries (EIIs) produce basic materials, such as steel, petrochemicals, aluminum, cement, and fertilizers, that are responsible for around 22 percent of global CO2 emissions (Bataille 2019).” Unlocking the “Hard to Abate” Sectors | World Resources Institute.


[21] For example, industry accounts for 29% of global greenhouse gas emissions and receives around 3% of climate philanthropic funding from foundations. Industry greenhouse gas emissions: ~29% (2016). See Emissions by sector - Our World in Data, “Global greenhouse gas emissions by sector” figure, industry + energy use in industry. Philanthropic funding focused on industry: ~3% (annual average, 2017-2021). See Climateworks Funding Trends 2022, figure 4. Calculation: $55M / $1.7B = 3%. This might suggest that industry has a relatively high philanthropic funding need, since its percentage of philanthropic funding is lower than that of its emissions. This is a very rough heuristic, as there could be many other factors that mean this is not the case (e.g., limited philanthropic funding opportunities).
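Restated as arithmetic, this heuristic compares funding share to emissions share; the sketch below is illustrative only and uses the figures cited above:

```python
# The rough funding-need heuristic from this footnote, using the
# ClimateWorks Funding Trends 2022 figures cited above.
industry_funding = 55e6   # ~$55M annual average, 2017-2021
total_funding = 1.7e9     # ~$1.7B total climate philanthropic funding
emissions_share = 0.29    # industry is ~29% of global GHG emissions

funding_share = industry_funding / total_funding  # ~0.03, i.e., ~3%
# A funding share well below the emissions share is a (very rough)
# signal of relatively high philanthropic funding need.
print(f"{funding_share:.0%} of funding vs. {emissions_share:.0%} of emissions")
```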


[22] For example, see Climate Tech’s Four Valleys of Death and Why We Must Build a Bridge.


[23] An approach might be low-interest due to relatively low scale or feasibility. It may also be low-interest due to a limited evidence base, minimal track record, political sensitivity, or lack of private sector incentives.


[24] For example, see Climateworks Funding Trends 2022, figure 4. From 2017-2021, Indonesia received an annual average of $25M in philanthropic funding, around 2% of country-specific funding (calculation: $25M / ($1.7B - $505M) = 2%). However, it is the world’s fifth-largest emitter, equivalent to ~4% of global greenhouse gas emissions (see WRI: Indonesia Climate Change Data 2019). This might suggest that Indonesia has a relatively high philanthropic funding need, since its percentage of philanthropic funding is lower than that of its emissions. This is a very rough heuristic, as there could be many other factors that mean this is not the case (e.g., limited philanthropic funding opportunities).


[25] We describe our assessment as low/medium/high to increase readability and avoid false precision. Since these terms can be interpreted differently, we use rough heuristics to define them as percentage likelihoods a climate philanthropy funding need is within the range we would consider for a top recommendation. Low = 0-60%, medium = 60-80%, high = 80-100%.


[26] “2021 saw sizable [philanthropic] funding increases for forests and carbon dioxide removal.” Climateworks Funding Trends 2022.


[27] “In the last year, the federal government has committed to spend more than $580 billion to combat climate change through the passage of the Bipartisan Infrastructure Law (BIL) and the Inflation Reduction Act (IRA). Within this $580 billion, there is significant funding supporting development and deployment of carbon dioxide removal…” WRI: Carbon Removal in the Bipartisan Infrastructure Law and Inflation Reduction Act, 2022.


[28] “Following the 2022 passage of the Inflation Reduction Act (IRA), Evergreen Collaborative is now planning to work on implementing bills and state-level policy…” Evergreen Collaborative: Deep Dive | Giving Green.


[29] Philanthropic funding focused on industry: ~3% (annual average, 2017-2021). See Climateworks Funding Trends 2022, figure 4. Calculation: $55M / $1.7B = 3%. This might suggest that industry has a relatively high philanthropic funding need, since its percentage of philanthropic funding is lower than that of its emissions. This is a very rough heuristic, as there could be many other factors that mean this is not the case (e.g., limited philanthropic funding opportunities).


[30] Bezos Earth Fund: The Bezos Earth Fund made its first grants to “Decarbonizing the Economy” in November 2020. “Bezos Earth Fund, Our Programs” n.d. Example of a Bezos Earth Fund grant: “Rocky Mountain Institute (RMI) today announced that it has received a $10 million grant from the Bezos Earth Fund to help significantly reduce greenhouse gas (GHG) emissions in both U.S. buildings and in energy-intensive industrial and transport sectors.” "RMI Awarded $10 Million from The Bezos Earth Fund to Accelerate Decarbonization of Buildings and Industry" 2020. Microsoft Climate Innovation Fund: “We’ll focus on areas such as direct carbon removal, digital optimization, advanced energy systems, industrial materials, circular economy, water technologies, sustainable agriculture, and business strategies for nature-based markets.” "Climate Innovation Fund" n.d. Giving Green note: ClimateWorks Foundation’s reported annual average spending may have increased simply because ClimateWorks learned of more foundations already donating to decarbonizing heavy industry. However, we do not think this is the case.


[31] One of these opportunities eventually received a top recommendation status. See Industrious Labs: Deep Dive | Giving Green for additional details.


[32] See Climate change mitigation strategies research | Giving Green for our shallow and deep dives.


[33] In general, we are most focused on climate benefits and our decision making does not typically place substantial weight on co-benefits (benefits not directly related to climate). However, there are instances in which a co-benefit may (a) attract additional funding (e.g., Good Food Institute’s work may also have positive animal welfare effects) and/or (b) reduce human suffering due to climate change in ways unrelated to greenhouse gases (e.g., BURN stoves allow households to reduce spending on cooking charcoal). In these cases, we think it is useful to consider and highlight co-benefits. For adverse effects, we believe there are many highly cost-effective impact strategies that do not have substantial adverse effects. Therefore, we deprioritize strategies that we think might have large and/or inequitably shared adverse effects.


[34] We describe our certainty as low/medium/high to increase readability and avoid false precision. Since these terms can be interpreted differently, we use rough heuristics to define them as percentage likelihoods the assumption is, on average, correct. Low = 0-60%, medium = 60-80%, high = 80-100%.


[35] We think this is most likely the case for two main reasons: (1) many climate funders explicitly or implicitly value certainty in their giving decisions, which means less-certain funding opportunities are relatively underfunded; and (2) we think some of the most promising pathways to scale (e.g., policy influence and technology innovation) are also inherently difficult to assess due to their long and complicated causal paths.


[36] We use rough benchmarks as a way to compare the cost-effectiveness of different giving opportunities. As a benchmark for our top recommendations, we consider an opportunity if its estimated cost-effectiveness is plausibly within an order of magnitude of $1/tCO2e avoided or removed (i.e., less than $10/tCO2e).


[37] See Activism: Cost-Effectiveness Analysis | Giving Green for additional explanation.


[38] See Decarbonizing Heavy Industry | Giving Green, Table 1. Though our initial assessment was more informal, this table encapsulates the different strategies we generally considered.


[39] Theory of change: See Decarbonizing Heavy Industry | Giving Green, “Theory of change for decarbonizing heavy industry”. CEA: See Decarbonizing Heavy Industry | Giving Green, “What is the cost-effectiveness of decarbonizing heavy industry?”.


[40] Giving Green is part of IDinsight Inc., which is itself a charitable, tax-exempt 501(c)(3) organization. Our 501(c)(3) status precludes us from supporting or opposing political campaign activities and engaging in extensive lobbying.


[41] Redacted, see “Key uncertainties” section, “Balancing transparency with other considerations” topic for additional commentary.


[42] Redacted, see “Key uncertainties” section, “Balancing transparency with other considerations” topic for additional commentary.


[43] Redacted, see “Key uncertainties” section, “Balancing transparency with other considerations” topic for additional commentary.


[44] Redacted, see “Key uncertainties” section, “Balancing transparency with other considerations” topic for additional commentary.


[45] We use rough benchmarks as a way to compare the cost-effectiveness of different giving opportunities. As a benchmark for our top recommendations, we consider an opportunity if its estimated cost-effectiveness is plausibly within an order of magnitude of $1/tCO2e avoided or removed (i.e., less than $10/tCO2e).


[46] For example, see Climateworks Funding Trends 2022, figure 4. From 2017-2021, “forests” received an annual average of $140M in philanthropic funding, around 8% of overall funding (calculation: $140M / $1.7B = 8%).


[47] As a thought experiment, consider a hypothetical scenario where a donor has a strong preference for forest-based mitigation efforts. If we make a forest recommendation that can avoid emissions for $10 per tCO2e, the donor will donate $100, resulting in 10 tCO2e avoided. If we make a non-forest recommendation that can avoid emissions for $5 per tCO2e, the donor will donate $10, resulting in 2 tCO2e avoided. Thus, making a forest recommendation results in more tCO2e avoided, even if the non-forest recommendation is technically twice as cost-effective. To guard against making recommendations that vary dramatically in cost-effectiveness, we use rough benchmarks. For our top recommendations, we consider an opportunity if its estimated cost-effectiveness is plausibly within an order of magnitude of $1/tCO2e avoided or removed (i.e., less than $10/tCO2e).
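The arithmetic of the thought experiment, spelled out (illustrative only; all figures are the hypothetical ones from the footnote above):

```python
# The donor-preference thought experiment above, as arithmetic.
forest_donation, forest_cost = 100, 10   # $100 donated at $10/tCO2e
other_donation, other_cost = 10, 5       # $10 donated at $5/tCO2e

forest_tonnes = forest_donation / forest_cost  # 10.0 tCO2e avoided
other_tonnes = other_donation / other_cost     # 2.0 tCO2e avoided
# The forest recommendation avoids more tCO2e despite being half as
# cost-effective, because it attracts 10x the funding from this donor.
print(forest_tonnes, other_tonnes)  # 10.0 2.0
```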


[48] For example, GiveDirectly (an unconditional cash transfer non-profit organization) believes that its launch of a U.S.-specific COVID-19 response also helped it draw in more funding for its international work. We have not investigated this claim in detail, but believe it is a reasonable and plausible inference. “In April 2020, we launched Project 100, a U.S. COVID-19 response…For the past decade, our core mission has been to reach people living in extreme poverty. While many millions in the United States are in poverty, they’re typically not facing extreme poverty as it is officially defined (living below $1.90/day)...We’ve driven over $70M to international programs from donors who initially gave to U.S. projects — more than our revenue for any year before 2020.” Working in the U.S. helped us get more money to international recipients | GiveDirectly.


[49] For example, we may look more closely at web traffic, donor data, and/or solicit target audience feedback.


[50] “[Near-term (also referred to as short-lived) climate pollutants] persist for a short time in the atmosphere but can be extremely potent in terms of their global warming potential compared to long-lasting greenhouse gases such as CO2.” World Bank: Short-Lived Climate Pollutants, 2014.


[51] For example: “This CEA includes highly subjective guess parameters and should not be taken literally. In particular, we estimated the change in likelihood that advanced reactors would move from a low- to a high-innovation scenario due to advocacy efforts, the change in that probability that could be attributed to nonprofits, and the number of years that advocacy moves a high-innovation scenario forward compared to the counterfactual. We have low confidence in the ability of our CEA to estimate the cost-effectiveness of NGOs’ US policy advocacy, community engagement, and technical assistance but view it as a slight positive input into our overall assessment of supporting advanced reactors.” Nuclear Power | Giving Green.


[52] For example, we may reduce the complexity of our CEAs, since complexity can sometimes be associated with false precision. We may instead publish fewer full-length CEAs and more simplified BOTECs (“back of the envelope” calculations), even if we used more complex CEAs to inform our research.
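For a sense of what a simplified BOTEC of this kind might look like, here is a hypothetical sketch; every input is a made-up placeholder, not a Giving Green estimate, and the structure (expected tonnes = tonnes if successful × probability of success) is one common BOTEC form rather than a prescribed template:

```python
# A hypothetical BOTEC of the kind described above. All inputs are
# made-up placeholders, not Giving Green estimates.
budget = 1e6                 # $1M grant
tonnes_if_successful = 5e5   # 500k tCO2e avoided if the strategy succeeds
p_success = 0.3              # subjective guess at probability of success

expected_tonnes = tonnes_if_successful * p_success   # 150,000 tCO2e
cost_per_tonne = budget / expected_tonnes            # ~$6.67/tCO2e
print(f"~${cost_per_tonne:.2f}/tCO2e")  # under the <$10/tCO2e benchmark
```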
