March 9, 2021 From rOpenSci (https://ropensci.org/blog/2021/03/09/commcall-stats/). Except where otherwise noted, content on this site is licensed under the CC-BY license.
A week ago we held a Community Call discussing rOpenSci Statistical Software Testing and Peer Review. This call included speakers Noam Ross, Mark Padgham, Anna Krystalli, Alex Hayes, and John Sakaluk.
This post provides a ready reference and description of this community call, which introduced the system being developed for peer review of explicitly statistical software, along with a couple of the automated software tools for use by developers and reviewers of statistical software.
After a welcome from Stefanie Butland, Anna Krystalli gave an overview of the context and importance of our new tools from an editorial perspective.
Noam Ross then introduced the statistical software review project, members of its advisory board, and the standards-based system which will be used to assess and review statistical software.
Mark Padgham then briefly described the two main tools intended for use by developers and reviewers: the autotest package, for automated testing of software to ensure robust responses to unexpected inputs throughout development, and the srr (software review roclets) package, for documenting within the code itself how and where it complies with both general and category-specific standards for statistical software.
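As a rough sketch of how these two tools fit into a developer's workflow (function names and standards tags follow the autotest and srr documentation at the time of the call, and may have changed since):

```r
# Hedged usage sketch, not a definitive workflow.
library (autotest)

# Run automated mutation-style tests on a package source directory.
# "/path/to/mypackage" is a placeholder; the result is a summary of
# any unexpected behaviour triggered by mutated inputs.
x <- autotest_package ("/path/to/mypackage")
summary (x)

# srr standards compliance is documented directly in roxygen2 blocks,
# with one '@srrstats' tag per standard addressed. 'G1.0' is a general
# standard; 'my_fun' is a hypothetical function used for illustration.

#' My model-fitting function
#'
#' @srrstats {G1.0} The statistical method implemented here is
#'   described in the package vignette, with literature references.
my_fun <- function (x) {
    x
}
```

The srr roclets then check, at documentation time, that every applicable standard is either addressed or explicitly marked as not applicable.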
The call then moved on to a “hands-on” demonstration of how these packages can be used in practice. John Sakaluk showed autotest's capabilities on his dySEM package. John developed dySEM for his own use, and would now like to refine and extend the package for more general use, ideally working towards submission to our peer-review system.
John described the usefulness of autotest in explicitly revealing aspects of his code which could be improved for more general usage, noting in particular that,

“one of the things that’s really useful for me here, as a self-taught and newbie developer, is I find myself adding to my package development list almost every time that I open it up in terms of wish-listing new functionality. And what’s really nice about this [autotest tool] is this can help me set some targets for priority items just for tightening up the programming of the existing functions.” – John Sakaluk
Alex Hayes then described his experiences from the initial review of his fastadi package, and the role standards can play in software improvement and assessment, noting in particular the usefulness of standards as contextual “touchpoints” for review, and how the srr package tracks these standards through the development process.
Here we’ve organized the video content by speakers and questions, including links to the specific time points in the video as well as to questions and answers in the collaborative notes document. We hope that by preparing this summary, more people will be able to benefit from this information.
Anna Krystalli - The editorial perspective (video)
Noam Ross - Project introduction (video)
Mark Padgham - Introducing autotest and srr packages (video)
John Sakaluk - Using autotest on a package-in-development (video)
Alex Hayes - Using srr while preparing a package for review (video)
Anna Krystalli - Moderates questions (video)
(Anna Krystalli) Suggestion of RStudio addin for srr (video)
(Joss Langford) We are just beginning to re-code some existing packages - so we have the advantage of starting with a blank sheet AND a good specification AND well-tested code snippets. We’re new to this community - what advice would you give on both engagement and coding? (video | document)
(Charles Sweetland) Does autotest take into account dependencies and dependency changes? (document)
Not sure how you might contribute? Contact us ([email protected]) and tell us what you’re thinking. We are particularly keen to help people from underrepresented groups find ways to get involved.