Greater adoption of mobile app reporting is also one of
the conditions for making the capture–recapture design
more effective. With this design, the problems of matching
errors and the conditional independence assumption
still need solutions. The role of noncoverage bias due to
excluding private fishing access sites in the probability
samples also requires some investigation for this
application.
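To make the matching-error problem concrete, consider the classic two-sample Lincoln–Petersen estimator (shown here with the Chapman correction) as an illustrative sketch; the numbers and function are hypothetical and are not taken from the surveys discussed in this paper. Failing to match anglers who in fact appear in both samples shrinks the overlap count and inflates the population estimate:

```python
def chapman_estimate(n1, n2, m):
    """Chapman-corrected Lincoln-Petersen estimate of population size.

    n1: units captured in the first sample (e.g., a probability survey)
    n2: units captured in the second sample (e.g., mobile app reporters)
    m:  units matched in both samples
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: a true overlap of 60 versus an overlap
# undercounted as 50 because of matching errors.
print(chapman_estimate(400, 300, 60))  # ~ 1978
print(chapman_estimate(400, 300, 50))  # ~ 2366, inflated by missed matches
```

The sketch also assumes conditional independence of the two capture events; if app reporters are more likely than other anglers to appear in the probability sample, the estimator is biased even with perfect matching.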
Some data may seem better than no data at all, especially
for those without the resources needed to collect high-quality
data from probability samples. Perhaps there are situations
in which this holds, such as some of the supplementary uses
where population inferences are not necessary. Nevertheless,
the absence of data may be better than poor-quality data in
many cases. For example,
a decision on whether to restrict fishing for a particular
species might be the outcome of the analysis of fishing sur-
vey data. If the estimates from a survey are seriously
biased, it might lead to restricting fishing when it should
be open. The opposite result, allowing fishing when it
should be restricted, could have even graver consequences.
When the policy has such consequences, it is
important to have estimates and confidence intervals for
those estimates that can be trusted. Currently, the best
general approach to providing such estimates is by using
probability samples.
Our view is that nonprobability samples should first be
studied and evaluated in situations where the effects of
wrong decisions are not serious. Baker et al. (2013) refer
to this idea as “fit for use.” Currently, applications of
nonprobability sampling to fishing effort and catch surveys
have not been shown to be fit for use.
One method that has been used in social science
research to examine the quality of nonprobability sam-
pling is to produce estimates from both probability and
nonprobability samples and compare them with each other
or with benchmark estimates from some gold standard.
Callegaro et al. (2014) provide a comprehensive review of such
studies. Yeager et al. (2011) experimented with using the
same instrument in multiple nonprobability samples and
in a probability sample. By highlighting issues that would
otherwise go undetected, such comparisons have value
even if they do not generalize directly to other applica-
tions. The existing studies have shown that the probability
samples typically produce estimates with smaller biases
and that the estimates from nonprobability samples vary
substantially from one another in average absolute
bias.
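The comparison metric used in these studies can be sketched in a few lines. The following example is purely illustrative: the function name and all estimates are hypothetical, with the benchmark standing in for a gold-standard source, and average absolute bias computed as the mean absolute deviation of a sample's estimates from the benchmark values:

```python
def average_absolute_bias(estimates, benchmark):
    """Mean absolute deviation of a sample's estimates from benchmark values."""
    return sum(abs(e - b) for e, b in zip(estimates, benchmark)) / len(benchmark)

# Hypothetical estimates of the same outcomes from three sources.
benchmark = [52.0, 18.5, 33.0, 7.2]            # gold-standard values
probability_sample = [50.5, 19.0, 34.1, 7.0]   # small deviations
nonprob_panel_a = [44.0, 22.5, 38.0, 9.1]      # larger deviations
nonprob_panel_b = [58.3, 15.0, 29.5, 5.8]      # larger, in other directions

for name, est in [("probability", probability_sample),
                  ("nonprob A", nonprob_panel_a),
                  ("nonprob B", nonprob_panel_b)]:
    print(f"{name}: {average_absolute_bias(est, benchmark):.3f}")
```

Under these made-up numbers, the probability sample tracks the benchmark closely while the two nonprobability panels not only show larger average absolute bias but differ markedly from each other, which is the pattern the studies above report.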
Jiorle et al. (2016) did a small comparison of estimates
from a mobile app to estimates from a probability sample
in a fishing survey setting. However, these comparisons
were not equivalent to those done in the social science
research noted above. They made only a few comparisons,
considered only a few species with very limited variation
in catch rates, and covered only very limited geographic
areas. Furthermore, the comparisons were made to observations
from a probability sample, not to estimates from the sample.
Thus, this method of evaluation has not been explored
much for fishing surveys. Without robust evaluation of
nonprobability samples for fishing surveys, there is no evi-
dence that they are fit for use in providing population-
level inferences for recreational fishing effort and catch.
ORCID
J. Michael Brick
https://orcid.org/0000-0003-3490-8925
REFERENCES
Ahrens, R. 2013. Assessing and refining the collection of app-based
angler information in relation to stock assessment. National Oceanic
and Atmospheric Administration, Marine Recreational Information Pro-
gram FY-2013, Washington, D.C.
Alabama. 2019. 2019 Alabama exempted fishing permit status report.
Available: https://www.outdooralabama.com/sites/default/files/2019%
20AL%20DCNR%20EFP%20Update%20No.%204.pdf. (November
2021).
Andrews, R. 2021. Evaluating nonresponse bias in the MRIP Fishing
Effort Survey. Available: https://apps-st.fisheries.noaa.gov/pims/main/
public?method=DOWNLOAD_FR_DATA&record_id=2018. (August
2021).
Andrews, R., J. M. Brick, and N. Mathiowetz. 2014. Development and
testing of recreational fishing effort surveys: testing a mail survey
design. Available: https://www.st.nmfs.noaa.gov/pims/main/public?
method=DOWNLOAD_FR_PDF&record_id=1179. (February
2021).
Baker, R., J. M. Brick, N. A. Bates, M. Battaglia, M. P. Couper, J. A.
Dever, K. J. Gile, and R. Tourangeau. 2013. Summary report of the
AAPOR task force on non-probability sampling. Journal of Survey
Statistics and Methodology 1:90–143.
Bethlehem, J. 2010. Selection bias in web surveys. International Statistical
Review 78:161–188.
Brick, J. M., W. R. Andrews, and N. A. Mathiowetz. 2016. Single-phase
mail survey design for rare population subgroups. Field Methods
28:381–395.
Callegaro, M., A. Villar, D. S. Yeager, and J. A. Krosnick. 2014. A criti-
cal review of studies investigating the quality of data obtained with
online panels based on probability and nonprobability samples. Pages
23–50 in M. Callegaro, R. Baker, J. Bethlehem, A. S. Göritz, J. A.
Krosnick, and P. J. Lavrakas, editors. Online panel research: a data
quality perspective. Wiley, Chichester, UK.
Chen, Y., P. Li, and C. Wu. 2020. Doubly robust inference with non-
probability survey samples. Journal of the American Statistical Asso-
ciation 115:2011–2021.
Coleman, J. S. 1958. Relational analysis: the study of social organiza-
tions with survey methods. Human Organization 17:28–36.
Groves, R., F. Fowler, M. Couper, J. Lepkowski, E. Singer, and R. Tour-
angeau. 2011. Survey methodology. Wiley, Hoboken, New Jersey.
Jiorle, R. P., R. N. Ahrens, and M. Allen. 2016. Assessing the utility of
a smartphone app for recreational fishery catch data. Fisheries
41:758–766.
Kalton, G. 2019. Developments in survey research over the past 60 years:
a personal perspective. International Statistical Review 87:S10–S30.
Keusch, F., and C. Zhang. 2017. A review of issues in gamified surveys.
Social Science Computer Review 35:147–166.
48 BRICK ET AL.