Growth in the use of InVEST models over time. Model use occurred in 102 different countries with 44% of all use occurring in the U.S. For non-U.S. countries, there were 14,301 model runs with 43% of use occurring in 5 countries: the UK (1554), Germany (1491), China (1209), France (1074), and Colombia (780). Credit: Posner et al.
Since NatCap’s founding, staff have worked relentlessly to create and share tools that will give people the information they need to understand exactly how nature is benefitting them, and how that could change under different future scenarios.
Ten years into the project, is NatCap’s software really being put to work? If so, where? And what factors might determine its use?
These questions are at the heart of a new study, published in the journal Ecosystem Services, which analyzes NatCap’s InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) software usage over two years. Although NatCap has tracked downloads of InVEST and other software for quite some time, digging into information on actual model runs provides a whole new level of understanding.
Capacity and capacity-building are key
The study showed that InVEST models were run more than 40,000 times between 2012 and 2014. The tool has been used most often in the United States, which accounts for nearly half of the runs (44%). But InVEST has users in 101 other countries as well, with the UK, Germany, China, France, and Colombia rounding out the top of the list.
Outside the US, a major predictor of where InVEST use is strong and rising is whether or not NatCap has conducted an on-site course to train users in NatCap’s approach and tools. Usage typically doubled for a sustained period after trainings. (The authors compared a 13-week window before a training to subsequent 13-week windows after the training; spikes during the trainings themselves were excluded from the analysis.) Use rates are also stronger in countries with higher capacity, defined by factors like average GDP and access to the internet.
“This study shows that if we want to develop tools that people use in real world conservation, it’s important to engage the people who are going to use them and to build capacity,” said lead author Stephen Posner of the University of Vermont. “Tool developers need to do more than just produce tools, make them available, and assume they will be used,” Posner said.
“I was really excited to see these results,” said Anne Guerry, NatCap’s Chief Strategy Officer, who oversees training and capacity building efforts. “I’ve always looked at download statistics and wondered if people were downloading our software, taking one look, and never going back in, or if they were becoming frequent users. And, from a training team perspective, this information is a real treasure trove that can help us to understand where and when we might be most successful at building capacity. And, of course, it is validating to see that there is a significant—and sustained—bump in usage after a training. We’re always working to further target and improve our capacity-building efforts and we can use this kind of information to learn as we go. My next dream is to understand more not only about the number of model runs, but also about user experience, the questions being asked and answered, and whether or not our software made a difference in local decisions.”
This is the first study to track ecosystem services software usage. While many other ecosystem service modeling tools exist, NatCap is, to the authors’ knowledge, the only tool provider to have tracked this kind of information.
“The inspiration for the study came from going to a Natural Capital Project meeting in 2011, before there was a symposium, and being really impressed by the kind of work they were doing, and being curious about the impact they have,” Posner said. “Once I learned they traced data on who was using these models, I saw a huge opportunity to get that information,” he said.
The take-home for software developers, he said, is that “it’s really important to consider the audience. Who’s going to use it, how will they use it, and why? And how can we learn from information about the users to improve the design of the tool and broaden its reach?”
The study’s authors include Stephen Posner, Taylor Ricketts, and Insu Koh, all of the University of Vermont’s Gund Institute for Ecological Economics and Rubenstein School of Environment and Natural Resources; and Gregory Verutes and Doug Denu, both of the Natural Capital Project at the Stanford Woods Institute for the Environment.