**This blog entry originally appeared on the website oceanspaces.org.**

We’ve reached a turning point in our citizen science blogging experiment on Facing West. Until now we’ve been exploring, here in these virtual pages, a wide range of topics relevant to the challenge of connecting citizen science with management. At the same time we’ve been venturing out into the reality of citizen science along the Central Coast, learning whatever we can about the 31 citizen science groups that investigate marine-related issues throughout this region. Now it’s time to begin presenting some of the specific results from that exploration.

Regular readers will recall that all of this–the blog, the research and engagement on the Central Coast–is part of the California Citizen Science Initiative. Our goal with this initiative is to think systematically about the ways these citizen science groups can or do play a role in marine protected area (MPA) monitoring and management. We’re collaboratively envisioning what citizen science will look like as part of cost-effective, sustainable monitoring of California’s MPAs.

In the coming weeks and months we will present a series of case studies based on our experiences getting to know some of the many citizen science programs operating on the Central Coast. These longer pieces will pull together ideas from phone interviews with program coordinators and focus groups with volunteers. For the sake of consistency, we will try to follow a common structure for each case, to aid in drawing larger conclusions across these examples. So let’s take a moment in this post to consider each of the main headings within that structure: how we define it, and why it matters to the broader goals of the Citizen Science Initiative.

Program Participation
What we mean:
We are interested in the design elements that promote or limit involvement of citizen scientists in the various activities associated with a program. We will look at the different roles that participants can play, the ways in which participation is constrained or expanded through policies and recruitment practices, and the volunteer pools targeted through partnerships or communications strategies. We will also look at what motivates volunteers to participate in these programs, and the incentives that help keep them engaged.

Why it matters:
Aspects of participation can be direct determinants of program sustainability and credibility. They also shape the temporal and geographic scalability of a program. Understanding incentives and motivation can also help us think through the potential for programs to adapt their methods to MPA monitoring needs, or in turn how a relationship with MPA monitoring might impact volunteer recruitment and retention.

Meeting the Mission by Balancing Goals
What we mean:
Citizen science groups are founded and developed to meet many different missions, sometimes multiple missions within one group. Programs often have a mix of goals under three main categories: rigorous scientific data, science education, and environmental advocacy. Here we will examine the emphasis that citizen science programs place on particular goals, as well as apparent synergies and tradeoffs among different goals.

We also want to consider what hard realities might dictate the balance of goals. Who is steering the boat in that fine balance? What criteria are used in deciding where the program perches in that balancing act? And overall, has the balancing act served the program well, or has it been a challenge? What lessons can other programs take when attempting to navigate these challenges?

Why it matters:
Engaging citizen science programs in MPA monitoring might mean asking them to implement their program or utilize their data in a new way. Talking through how they actively balance multiple goals can be a helpful exercise, both for the programs themselves, and for us as we think through how best to support the involvement of citizen science in MPA monitoring.

Data Types Good for a Citizen Science Approach
What we mean:
Sometimes species identification is too difficult even for an expert. Other times, people photographing sick sea stars off their private docks provide exactly the spatial coverage needed. Are there common characteristics among the types of data that citizen scientists collect well? Are certain subjects more likely to spark and sustain volunteer excitement?

Why it matters:
What data can reliably be collected by a given volunteer pool is the perennial question in methods development for citizen science, and answering it can take some trial and error. So, of the many potential indicators of ocean health that MPA monitoring might track, which ones could citizen scientists help with, and which ones have groups already tried and shelved?

Data Uses
What we mean:
We want to know how data produced by a program have been used, and about the details of that process: who are the users, why do they want these data, and what does it take for them to use the data effectively? Do users access raw data, or more developed products built from the program’s data? What kinds of relationships are needed for this to work? Are uses the result of deliberate program design, or are they serendipitous? We also want to know about the program’s aspirations: who are the intended audiences for the data, and what successes and challenges has the program experienced in trying to reach them? We have observed three common venues for data use (research, education, and management) and will organize our thoughts by these categories.

Why it matters:
Getting a handle on how data are used and in what form is part of figuring out the capacity for citizen science to contribute to MPA monitoring. We can learn valuable lessons from the experiences of citizen science groups who have formed relationships with other users of their data. Programs may also follow different kinds of research paths from idea to application, which can provide insights for other groups hoping to make their data more useful.

Scientific Credibility
What we mean:
In this section we will investigate how citizen science programs work to demonstrate the credibility or reliability of their methods, data and results. We’re also interested in how users themselves evaluate those issues.

There are many ways of achieving credibility, and we will pay close attention to the strategies adopted by different kinds of programs. To organize our thinking, we will address four main themes that commonly appear in the academic literature on the credibility of citizen science. These are not a checklist, but rather a set of helpful means of establishing credibility:

- verification of data quality: this may range from broad-scale comparisons with other data sets to ways of demonstrating and implementing day-to-day oversight.
- raw data transparency and access: open access may dispel suspicions of bias, but it can also enable misleading uses of data.
- clarity of communications: even with good data, clear communication of results is crucial to building credibility. This relates to the capacity needed for analysis and communication of results, as well as strategies for reaching target audiences.
- willingness and capability to adapt methods: methods tailored to a particular application are more likely to answer the questions at hand. Adapting methods slightly to address current management needs, while maintaining the integrity of the long-term dataset, demonstrates both scientific skill and attention to detail, which lends credibility to the data.

Why it matters:
Standard norms and practices for assessing the credibility of academic research may not always be appropriate for citizen science. Lessons from the case studies may point toward rules of thumb that can be applied when considering the use of citizen science in MPA monitoring.

Program Sustainability
What we mean:
Here we are interested in learning what it takes for programs to last over many years, and even decades. Issues that frequently arise when discussing program sustainability include funding sources, staff and volunteer longevity, and the various forms of physical and social capital that programs must maintain. We also want to understand what steps newer programs can take, or are taking, to plan for long-term operations.

Why it matters:
Long-term datasets, and the programs that support them, are extremely important to MPA monitoring. This is not to say that short-term citizen science efforts cannot be useful, but lessons on sustainability can help us think through the challenge of implementing MPA monitoring over decades.

Looking Toward the Future
What we mean:
Beyond understanding how individual programs are currently operating, we want to understand their future plans and goals.

Why it matters:
Whether a program’s goals relate to securing sustainable funding sources, building in new activities or partnerships, expanding geographic focus, or increasing educational impact, we want to explore how involvement in MPA monitoring might help meet those goals. Looking forward also allows a moment to be optimistic about the state of the environment being monitored, and to think ahead about the challenges of keeping a citizen science program running.