
Introducing Massively Open Online Papers (MOOPs)

Authors:

Jonathan P. Tennant, IGDORE, ID
Natalia Bielczyk, Stichting Solaris Onderzoek en Ontwikkeling, Nijmegen, NL
Bastian Greshake Tzovaras, Center for Research and Interdisciplinarity (CRI), Université de Paris, INSERM U1284, FR
Paola Masuzzo, IGDORE, ID
Tobias Steiner, Community-led Open Publication Infrastructures for Monographs (COPIM), Centre for Postdigital Cultures, Coventry University, GB

Abstract

An enormous wealth of digital tools now exists for collaborating on scholarly research projects. In particular, it is now possible to collaboratively author research articles in an openly participatory and dynamic format. Here we describe and provide recommendations for a more open process of digital collaboration, and discuss the potential issues and pitfalls that come with managing large and diverse authoring communities. We summarize our personal experiences in the form of ‘ten simple recommendations’. Typically, these collaborative, online projects lead to the production of what we here introduce as Massively Open Online Papers (MOOPs). We consider a MOOP to be distinct from a ‘traditional’ collaborative article in that it is defined by an openly participatory process, not bound within the constraints of a predefined contributors list. This is a method of organised creativity designed for the efficient generation and capture of ideas in order to produce new knowledge. Given the diversity of potential authors and projects that can be brought into this process, we do not expect that these tips will address every possible project. Rather, they are based on our own experiences and will be most useful when different groups and communities adopt different elements into their own workflows. We believe that creating inclusive, interdisciplinary, and dynamic environments is ultimately good for science, providing a way to exchange knowledge and ideas as a community. We hope that these recommendations will prove useful for others who might wish to explore this space.
How to Cite: Tennant, J.P., Bielczyk, N., Tzovaras, B.G., Masuzzo, P. and Steiner, T., 2020. Introducing Massively Open Online Papers (MOOPs). KULA: knowledge creation, dissemination, and preservation studies, 4(1), p.1. DOI: http://doi.org/10.5334/kula.63
Submitted on 22 Aug 2019; Accepted on 03 Mar 2020; Published on 20 Apr 2020

Introduction

A wealth of digital tools has emerged in recent years for collaborating on scholarly research projects (Bosman and Kramer 2015). Scholars as well as journalists, librarians, and other professionals have found an abundance of new opportunities to create and contribute to projects in a more participative manner than with traditional scholarship. This trend is reflected in the growing numbers of collaborators on research papers, and it is becoming increasingly common to see enormous author lists numbering in the hundreds, while single-authored papers are becoming rarer in many disciplines. With modern technologies, it is now possible to collaboratively author research articles in novel openly participatory and dynamic formats, leading to the production of what we here call Massively Open Online Papers (MOOPs). We define a MOOP as an article co-created through a radically open, pluralistic, and continuous participatory contributing process, not bound within the typical constraints of a predefined contributor group.

MOOPs are distinct from traditional multi-authored articles in that they are not bound by normal constraints on who can participate; hence, they are ‘open’ and have the potential to become ‘massive’ through an unconstrained number of collaborators (in our experience, the number generally ranges from 10 to around 100). A MOOP is a goal-oriented method of organised and interactive creativity designed to co-produce new knowledge through the efficient generation, synthesis, and capture of ideas. Peer refinement during this process is a constant and iterative form of evaluation where anyone has the capacity to correct mistakes and make recommendations, akin to a Wiki-style workflow. The original idea for a MOOP-like process goes back more than a decade, with one of the first being a public invitation1 by Robert Hoffmann to collaborate through the WikiGenes initiative on a paper to be published in Nature Genetics (Hoffmann 2008). Since then, a number of new technologies have emerged that help to foster increased communication and engagement within the scientific community, suggesting a great future potential for MOOPs.

There have been a number of excellent advice articles dedicated to collaborative development of research projects and manuscript writing (e.g., Frassl et al. 2018; Vicens and Bourne 2007; Weinberger, Evans, and Allesina 2015). However, collaboration at scale in an online authoring environment can easily become messy and overwhelming for individuals without careful planning, guidance, and structure. Here, we provide recommendations for a highly open and participatory interactive process of collaboration using digital tools and environments, discuss potential issues that come with working with large and diverse authoring communities, and provide possible solutions should these arise. We describe processes and issues that are similar to those encountered within free and open-source software (FOSS) communities (Katz et al. 2018), as well as Wiki-style forms of content generation, both of which aim to be highly participatory, organised, and synergistic, with the shared goal of creating a functional product.

First, we will discuss in more detail when a MOOP might be a better alternative than traditional methods of collaborative writing, and then we will provide examples of the successful MOOPs upon which we are basing our recommendations. Subsequently, we will provide the list of our ten recommendations, which we list in the order that most closely parallels the process of writing a MOOP (Figure 1). Throughout these recommendations, we also include other pieces of advice that we refer to as pro tips. We have opted to include these as separate tips because they do not quite fit under the umbrella of any of our recommendations but are nonetheless important considerations that can help to simplify a complex process. Finally, we identify problems related to each recommendation that might arise while writing the MOOP and offer possible solutions to those issues. Our ten recommendations are provided in an approximately linear order so as to reflect the MOOP process as much as possible, but we recognise that many of these steps will occur concurrently and vary in order for different projects and preferences.

Figure 1 

Suggested workflow for developing a MOOP.

When might a MOOP be the best option for writing?

Given the diversity of potential authors and projects that can be brought into this process, we do not expect that our recommendations will address every possible project. They are based on our own experiences and will be most useful if different groups and communities can adopt different elements into their own workflows.

There are several situations in which a MOOP is a more flexible approach to writing than traditional collaborative writing processes. For example, a MOOP can help to overcome the logistical complications of research projects that involve coordinating a large number of authors from different research institutes across a wide range of time zones. Many authors working in different places at different times can complicate tracking contributions and article versions, but the structure and workflow of a MOOP can help avoid confusion among contributors. Disciplines in which single-authored texts still dominate (e.g., the humanities) might not find the MOOP approach useful for shorter argumentative articles; however, authors across disciplines might find MOOPs useful for longer-form texts, such as books, book chapters, and review articles. In addition, projects focussed on interdisciplinary meta-analyses might find a MOOP suitable, as it makes it easier to synthesise information from contributors and data sources from a wide range of backgrounds, especially if the project ideas are topical and there is not much published research. MOOPs are also well suited to collaborative projects with a wide and non-specialised target audience in mind. For instance, a MOOP can be a valuable strategy for community science projects, where large groups of non-academic participants generate data and research outputs together and are often interested in publishing their findings.

Overall, we believe that creating more inclusive, interdisciplinary, and dynamic collaborative environments is ultimately good for advancing scholarship and providing a useful way to exchange knowledge and ideas online. Some of the tips we present here can be easily applied to collaborative efforts in general. We hope that these recommendations will prove useful for others who wish to explore this method further.

Examples of Successful MOOPs

There is no single successful strategy for designing and managing a MOOP. Below, we present three examples of recent successful MOOPs in which different co-authors of this article have been involved as both project leads and collaborators (Table 1). We base the following recommendations and discussion on our experiences of leading and collaborating on these projects. In our experience, the number of people involved in a MOOP can range from 10 to around 100. Variations in scale dictate that the recommendations we offer be flexible so that teams can adapt them as the number of contributors increases.

Table 1

Overview of several MOOPs that the authors of this article have been involved in and critical elements of their development process.

Bielczyk et al., 2019
Title: Effective Self-Management for Early Career Researchers in Natural Sciences
Idea fostered through: Online discussion within the Organization for Human Brain Mapping Student and Postdoc Special Interest Group
Initial stage of development: The lead author (Natalia Bielczyk) wrote the initial draft of the manuscript in Google Docs and assembled a core working group of 13 members, who reviewed and edited the draft in multiple rounds of contributions.
Open contribution phase: The lead author created a Google group and announced through Twitter that it was open for all forms of contributions in spring 2019. Additionally, the lead author invited four researchers to edit the final version of the manuscript in Google Docs.
Duration of the open contribution phase: 6 weeks, plus an extension of 1 week
The decision on the authorship: All the members of the working group (besides the lead author) were invited to be the joint second authors of the work and all the active members of the Google group were invited to co-author this work.
The decision on the target journal: The lead author approached various editors through email and got interest from one of them.

Tennant et al., 2019
Title: Ten Hot Topics around Scholarly Publishing
Idea fostered through: A public Twitter discussion
Initial stage of development: The lead author (Jonathan Tennant) distilled the ten topics from the Twitter discussion and translated them into an article structure in Google Docs, then invited those who had engaged in the Twitter discussion to contribute.
Open contribution phase: The lead author shared the Google Doc on social media once the core team and outline had been established.
Duration of the open contribution phase: About 2 months
The decision on the authorship: Anyone who contributed to the article was permitted to add themselves to the author list.
The decision on the target journal: The lead author made the decision to submit the article as part of a relevant special issue, which was in mind and communicated from the beginning.

Greshake Tzovaras et al., 2019
Title: Open Humans: A Platform for Participant-Centered Research and Personal Data Exploration
Idea fostered through: Slack conversations between community science contributors
Initial stage of development: The lead author (Bastian Greshake Tzovaras) drafted an outline on Overleaf.
Open contribution phase: The Overleaf document in which the paper was written was shared by the lead author on Twitter with a larger audience, and the ~500 members of the community Slack were directly invited to contribute.
Duration of the open contribution phase: About 6 months
The decision on the authorship: Anyone who contributed to the article and as a developer of the community science platform was invited to be an author.
The decision on the target journal: The lead authors suggested target journals and discussed with the larger contributor group. The decision was ultimately made democratically by consensus.

Recommendations

1. Choose diverse members for a core team of experts

Unlike a formally planned project, building an openly collaborative team online is more dynamic. A critical first step to simplify the process involves developing what we call a core team of experts, or a working group, usually no more than five individuals. The project leader, who instigates and oversees the MOOP, should consider reaching out explicitly to relevant experts first. This recruitment method is useful for several reasons. First, it shows that you acknowledge their expertise on a topic, and second, it allows you to make expected contributions and outcomes clear from the beginning. This acknowledgement and transparency will hopefully encourage them to engage with the MOOP in some capacity, whether they agree to be a core member or not, and your project will benefit from their expertise.

As with a more traditional collaboration, at this stage you want as diverse and expert a team as possible, comprised of members who can function well together in a group. Consider whether there is equal representation of gender and geographical origin in your core group (as much as possible) and assess whether the team lacks a member with expertise on a certain aspect of the topic. If there are gaps in representation or expertise, consider inviting another member to the team. Creating broader and more heterogeneous professional circles is becoming increasingly popular via social media, and those networks can be used to recruit colleagues beyond your local research environments and ‘inner circle’ for any number of collaborations.

At this stage, care should be taken by the core team to consider the power dynamics within the MOOP. The core team might need to make decisions around authorship, venue of publication, and conflict resolution (e.g., whether or not someone needs to be dismissed from contributing). The core team members might choose to vote only among themselves to attain a majority or consensus on these matters—thus retaining a greater level of authority over the project than other participants—or it might be the case that these things are more democratically decided by all participants (e.g., all participants have equal voting rights). In either case, to ensure fairness within a MOOP, establish effective communication channels and arbitration processes from the outset so that all participants understand their level of decision-making power and the steps they can take to resolve potential disputes. Ideally, someone with previous experience in these processes, either in a MOOP or otherwise, would be valuable to have as a member of the core team. Building trust and conveying expectations prior to the MOOP writing process is key.

Pro tip: Make sure you know who you are inviting to the core team

Are you inviting someone you know professionally or socially or someone whose name you know from previously published works? If you have not engaged much with members of your core group before, it is good to have a preliminary call to set the agenda for how the MOOP will develop and get a clear picture of everyone’s preferred work style, area and level of expertise, and preferred mode and tool of engagement; for example, whether they prefer to use comments, social media, email, or other primary communication methods. More famous researchers are often the most experienced but, at the same time, also the busiest of people in their respective fields. You need to make sure that all the core team members can allocate enough time for the project; otherwise, decision-making and progress will be hampered.

2. Identify contributor types and who has editorial control

In a typical research article, one or several authors will take on the key role of being the lead author(s) and editor(s) of the document. In a MOOP scenario, the lines between editors, commenters, and authors can be blurry. Setting the core tasks, delegating workload, and contributing different content types become much easier once contributor roles and the project structure have been well defined by the core group.

Ideally, there should be at least one person who has editorial control, in order to settle any potential content-related disagreements (see Recommendation 8) and to provide authoritative oversight on comments and suggested edits made by other collaborators. Usually, this will be the project leader, but the role can be partially delegated to several co-authors if required (typically the core working group). Note that the lead author might not always be the one who leads or instigates the project. The lead author(s) demonstrate leadership by facilitating the dynamic evolution of the content, encouraging and empowering individuals to contribute, and providing considerate and empathetic support where needed. Other participant types can include original text contributor, copyeditor, image designer, data analyst, and any other roles typical of research articles.

Potential hurdle: It can sometimes be difficult for contributors to decide if they want to be named in the MOOP as a co-author or just listed in the acknowledgements for their feedback and input on the manuscript. They might not consider their contribution substantial enough to warrant co-authorship, or they might only agree with some aspects of the MOOP and therefore hesitate to be a co-author.

Solution: There are several ways to approach individuals’ indecision about their authorship. The first is to make clear from the beginning what the minimal criteria are for a contribution to constitute authorship (see Recommendation 3). The second is for the project lead to discuss, on a case-by-case basis, the source of the hesitation and whether there is a way to reframe the storyline of the manuscript so that the person in question identifies with most of it and agrees to be a co-author. Ultimately, as we expand upon below, if contributors do not want to be named as authors, they can be recognised in the acknowledgements.

3. Make authorship agreements clear

As early as possible in the process, make it clear to contributors what they are agreeing to as participants in a MOOP project, such as whether there is an intention to submit to a journal and what target journal is in mind. Defining authorship rules is a delicate task that should be handled with care due to the impact that authorship and author position can have on the career trajectories of individuals. There is a delicate balance to strike between setting a low enough entry level to attract potential contributors and making sure that relative contributions are acknowledged appropriately. Formal authorship criteria and protocols, such as the CRediT scheme (Brand et al. 2015), can be adapted for this sort of collaborative workflow. What a MOOP essentially entails is a number of dynamic (micro)contributions that, after a continuous process of refinement, eventually become a coherent narrative. Due to the varied nature of contributions, it can be difficult to identify specific criteria in advance that define co-authorship, but outlining those criteria also has the benefit of encouraging and rewarding different forms of participation (Holcombe 2019).

The core group should also identify, in advance, rules for verifying the identities of contributors. This precaution can help to avoid issues of ghost-writing, malevolent behaviour such as inserting inappropriate comments or signing under a different name, or possible attempts to sabotage the paper. There is a risk that individuals without sufficient expertise could attempt to hijack papers for their own personal gain (e.g., theft of intellectual property), and due to its inherently open nature, a MOOP is more vulnerable to this than a traditional article. A simple way to address potential issues with authorship is to have a linked database or table where contributors can add and edit their contributions over time as the project evolves. Their entries in the database or table can be associated with, for example, their ORCID for verification and aligned with the CRediT scheme.
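
As an illustration only, the following minimal sketch shows one way such a contributor table could be kept alongside the manuscript. It assumes a simple CSV file (contributors.csv is a hypothetical filename) with one row per contributor, an ORCID for verification, and free-text CRediT roles; any real project would adapt the fields to its own needs.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class Contribution:
    name: str
    orcid: str         # used to verify the contributor's identity
    credit_roles: str  # CRediT roles, e.g. "Writing - review & editing"
    sections: str      # parts of the MOOP the person worked on

# Example entry only; the ORCID below is the public example identifier
# from the ORCID documentation, not a real contributor.
contributions = [
    Contribution("Ada Example", "0000-0002-1825-0097",
                 "Writing - review & editing", "Recommendation 3"),
]

# Write the current state of the contributor table to a CSV file kept
# next to the manuscript, so the record can evolve with the project.
with open("contributors.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "orcid", "credit_roles", "sections"])
    writer.writeheader()
    for entry in contributions:
        writer.writerow(asdict(entry))
```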

As early as possible after establishing criteria for authorship and rules for identity verification, the core team should define the structure of the paper. This step will help simplify assigning participants to different sections and limit the number of people working on the same parts of the text. Part of this can involve assembling small working groups responsible for smaller modules of the project. This common understanding of the responsibility of each contributor can be effective in maintaining ‘high-performing collaborative research teams’ (Cheruvelil et al. 2014). Another solution is to have an online group or forum where each section of the project can be discussed in a separate thread. This solution was successfully implemented in group research projects by Bielczyk et al. (2019) and Lurie et al. (2018). With this approach, everyone has the option to subscribe or unsubscribe to specific topics; a main author might want to be notified about all discussions, while a contributor to a specific section might only be interested in follow-up replies to their own suggestions.

MOOP projects raise issues around managing expectations of workload capacity and existing time constraints, especially when a MOOP attracts contributors from diverse backgrounds, such as non-academic contributors. While the accomplishment of being an author on a published, peer-reviewed paper is a strong incentive in modern academic culture, some people—both within and outside academia—might choose to contribute as part of a culture of sharing and ‘gift-giving.’ Consequently, there are steps that need to be taken early on to help manage expectations. These include, where possible, having a public project timeline or roadmap, which also contains internal deadlines for different deliverables necessary to complete the project. The precise deliverables (e.g., total page length, number of references or words) as determined by the core team should be communicated at the start of the project so that participants are aware of the size and scope of the MOOP, with amendments and adaptations along the way as necessary. Having a firm timeline will also help to keep the project from becoming too large as the number of participants increases.

The author order is still important because of the linear nature in which authorship is assigned in the seventeenth-century format of most journals. Due to the dynamic nature of contribution in a MOOP, determining author order based on intellectual contribution (or other criteria) in these situations is a fairly daunting—and often pointless—task. Author order can be determined in any way as agreed upon by the group—randomly, alphabetically, through a chess tournament, according to the order in which authors joined the project, or by mathematical equations (see e.g., Lakens, Scheel, and Isager 2018), etc.—as long as there is consensus and an explicit note of author contributorship, which can include information on the decision for author order. While some participants might only contribute to the project informally and not wish to receive recognition, others might treat it as a part of their daily professional activities and pay closer attention to authorship. The author order of the MOOP will likely require refinement throughout the project as new collaborators contribute.
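
If a group settles on a randomised order, for example, a small sketch along the following lines (a hypothetical helper, not part of any established tool) keeps the ordering reproducible, since the recorded seed can be noted in the author contributorship statement.

```python
import random

def randomised_author_order(authors, seed):
    """Return a reproducible random author order.

    Record the seed in the author contribution statement so that
    the ordering can be re-derived and verified later.
    """
    ordered = sorted(authors)  # start from a neutral, alphabetical base
    random.Random(seed).shuffle(ordered)
    return ordered

# Hypothetical example using the surnames of this article's authors.
authors = ["Bielczyk", "Greshake Tzovaras", "Masuzzo", "Steiner", "Tennant"]
print(randomised_author_order(authors, seed=2020))
```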

Potential hurdle: Differentiating between contributorship and authorship. For example, some people might only offer comments rather than direct edits on the text and yet still want to be considered an author.

Solution: Make it clear from the beginning what constitutes both authorship and contributorship, and assess participants’ work as the MOOP develops. Some comments might trigger a change in the dynamics of a manuscript and therefore leave a lasting impact on the content, which may justify listing the contributor as a co-author. Make it explicit that if people contribute but do not want to be named as an author, they can be recognised in the acknowledgements, or even not included at all should they prefer. Make sure to keep track of who is commenting on your MOOP. Disagreements over co-authorship can be referred to the project leader or core team.

4. Reflect on your choice of platforms

Collaborative authoring

A number of freely available authoring platforms and services facilitate real-time open and online collaboration on research articles and have a wide variety of functions and features (Table 2). The range of tools and services available to researchers changes constantly, and we anticipate that more software and platforms will emerge in the future as the value and nature of collaboration evolves. It is necessary to be aware of and distinguish between commercial versus free and open-source software (FOSS) services and services that target a mainstream audience versus those that were developed for researchers, with research-specific features for data privacy and security, intellectual property protection measures, and licensing.

Table 2

A summary of selected tools for collaborative authoring of research manuscripts. RTF = Rich Text Format, LaTeX = The LaTeX Project, https://www.latex-project.org/, R = The R Project for Statistical Computing https://www.r-project.org. All features described are those from the free versions of these services.

Feature | Authorea | CryptPad | Google Docs | Overleaf | HackMD | GitHub/GitLab
Free access for users2 | Yes | Yes | Yes | Yes | Yes | Yes
Open-source platform | No | Yes | No | No | Yes, CodiMD | Yes
Version control | Yes | Yes | Yes | Yes | Yes | Yes
Multiple format types | Yes | No | No | Yes | Yes | Yes
Formatting for direct journal submissions possible | Yes | No3 | No | Yes | No (only gists/markdown templates) | No (with few exceptions)
Interactive group editing | Yes | Yes | Yes | Yes | Yes | Yes
Automatic chapter/figure numbers | Yes | No | No | Yes | No | No
Reference management | Yes | No | Yes, via Zotero plugin | Yes | No | Not as default
Flexible commenting on parts of the text possible | No, unless within a LaTeX writing block | No, only via chat | No | Yes | Yes | No
Exporting the project straight to GitHub possible | No, but can be synchronised | Yes, can completely be self-hosted | No | Yes | Yes | N/A
Authoring format | LaTeX, Markdown, RTF | RTF | RTF | LaTeX (with RTF overlay) | Markdown | RTF, Markdown, R, TeX/LaTeX/… multiple4

Overleaf and Authorea are popular authoring tools built specifically for the creation of scholarly works and their subsequent online submission to a range of scholarly publishing outlets. A key difference between the two is that Overleaf relies on the LaTeX format (with a rich text option), whereas Authorea supports LaTeX, Markdown, and HTML within the same document. Overleaf and Authorea are proprietary platforms, both built on top of Git, a FOSS version control system that enables efficient and distributed workflows. Both services can handle input from multiple users simultaneously in a single project, with any edits automatically version controlled. All contributions are tracked, recorded, and time-stamped, which allows for easy reproducibility since all steps of creation are documented. Projects can also be shared publicly or made private. In the first case, anyone who has access to the generated public link can contribute to the project if they wish, while in the second case, only directly invited participants can contribute. An additional benefit of using Overleaf or Authorea is the availability of templates for multiple academic journals, which makes preparing manuscripts for submission and reformatting them to meet the criteria of different journals convenient in both platforms. Overleaf features in-built connectors to Zotero and other reference management systems,5 while Authorea makes use of the cross-platform BibTeX format to integrate literature from the reference management tool of your choice.6

Due to its instant availability and the comfort of linking your Google account with other related services, and despite the public criticism and concerns surrounding issues of data usage and text mining for commercial purposes, Google Docs has become an increasingly popular authoring tool for researchers. It now provides a connector to the FOSS reference management tool Zotero for easy in-text references that are automatically formatted to the citation style of your choice or that of a specific journal.

An interesting FOSS alternative to proprietary and commercial platforms like Authorea and Overleaf is CryptPad. With a promise to encrypt transmitted data, this office suite puts user privacy first and thus ensures that no third parties will be able to illicitly access what you and your group are doing. It features a whole collection of tools, including collaborative spreadsheet and rich text document editors similar to Google Docs; a Kanban task management board; modules for small surveys and code sharing; and online presentations and a whiteboard, which might come in handy for those wanting to use online technology to support their seminars. However, CryptPad has a more minimalistic set of in-text editing features, which can hinder some forms of collaborative authoring: working with more complex text formatting, image descriptions, or tables can be challenging, and it can be hard to collaborate on projects that contain graphics.

GitHub and GitLab can also be used for collaborative authoring. As a noteworthy example of how textual collaboration in scholarly communication using GitHub works, see The Journal of Open Source Software (JOSS), which primarily runs on the platform. Similarly building on a GitHub-based workflow, Manubot is a toolchain that allows for writing, tracking, and converting Markdown-based text into a variety of output formats via a collection of pre-configured tools such as Pandoc (Himmelstein et al. 2019). We have not yet had the chance to try Manubot in more detail, but it appears to be an option for those technically savvy enough to play around with customizing and tweaking their Git workflow. Our experience with GitHub Markdown, however, has been generally positive. For example, the Foundations for Open Scholarship Strategy Development document (Tennant et al. 2019a) was primarily authored using collaborative tools originally designed for software development and written in Markdown, and it was easy to publish it both as a dynamic webpage and as a preprint (see Recommendation 10).
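
As a rough sketch of what such a Markdown-first pipeline can look like, the snippet below wraps a single Pandoc conversion step. It assumes Pandoc is installed locally (plus a LaTeX engine for the PDF target) and that the manuscript lives in manuscript.md with a references.bib bibliography; both filenames are placeholders rather than conventions of any particular tool.

```python
import subprocess

def build_outputs(source="manuscript.md", bibliography="references.bib"):
    """Convert a Markdown manuscript to HTML and PDF with Pandoc,
    resolving citations from a BibTeX file via --citeproc."""
    for target in ("manuscript.html", "manuscript.pdf"):
        subprocess.run(
            ["pandoc", source, "--citeproc",
             f"--bibliography={bibliography}", "-s", "-o", target],
            check=True,  # raise an error if the conversion fails
        )

if __name__ == "__main__":
    build_outputs()
```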

Last but not least, and somewhat closing the gap between collaborative text editing and code and issue management via GitLab/GitHub, HackMD deserves a mention. For those who know how to write in Markdown,7 HackMD offers a split-screen online editor in which Markdown input is directly rendered to formatted text. Users can feed HackMD directly with Markdown files/templates from GitHub/GitLab snippets (aka ‘gists’) or files stored in Dropbox/Google Drive, and export options include HTML output (styled and raw), open document (LibreOffice) files (in beta), and export of projects to GitHub. HackMD also offers Markdown templates for open online presentation slides8 as well as a GitBook-style book template.9

Pro tip: Using Zotero integration with groups

When working on a text as a group, each contributor sometimes has their own set of references that they will want to add to the manuscript that is being written. For this, Zotero offers the ‘Group’ feature, which participating Zotero users can easily use to add references to a shared ‘Group Library.’ This group collection of references, compiled collaboratively as a basis for referencing, can be shared with all contributors as well as readers, making this open data easily accessible to everybody. Examples of group Zotero libraries include one maintained by the Center for Open Science (https://perma.cc/64GV-2HAT) and the collaborative MOOP library at https://bit.ly/2xn82dA, compiled by the authors of this article.
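
If a group also wants to pull such a shared library into scripts or other parts of the workflow, one possible approach is the community-maintained pyzotero client, sketched below; the group ID and API key are placeholders, and a read key can be created in your Zotero account settings.

```python
from pyzotero import zotero

GROUP_ID = "1234567"      # numeric ID of the shared Zotero group (placeholder)
API_KEY = "your-api-key"  # personal key with read access to the group (placeholder)

library = zotero.Zotero(GROUP_ID, "group", API_KEY)

# List recently added top-level items in the shared group library.
for item in library.top(limit=10):
    data = item["data"]
    print(data.get("date", ""), "-", data.get("title", "untitled"))
```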

Data security and privacy

It is important to be aware of the regional and political context of platforms and how this reflects or determines the communities that engage with them. For example, Google Docs is currently prohibited in China, and therefore potential collaborators based there would be automatically excluded. Similarly, European research institutions might have policies in place that prohibit the sharing of preliminary research results on servers outside the European Union, which would rule out online service providers whose servers are based outside the EU. Some collaborators may reject participation on certain platforms on principle, for ethical reasons (e.g., Google Docs, due to the aforementioned user privacy issues). It is worth remembering that cloud platform providers might have provisions in place that allow them to remove or delete content in their writing environments without any warning if they deem it to violate their terms of service (for example, Google Docs10). When choosing a platform, one should carefully consider the trade-offs between versatility and user privacy with regard to keeping control over your own (or your funder’s) data.11 Contributors must know in advance who owns the data, under what privacy and security policies, and how and when they are able to share it within the collaborative writing process. A full discussion of the geopolitical constraints on data sharing in international collaborations is beyond the scope of this article (but see, for instance, van den Broek and van Veenstra 2015).

Depending on the MOOP topic and where the contributors reside, regional, national, and institutional aspects of research data security and data privacy apply. The online platforms presented here differ widely in their respective terms of reference, and we suggest investigating compliance with, for example, the EU General Data Protection Regulation (GDPR).12 Regulation (EU) 2016/679 came into force within the European Union (EU) and the European Economic Area (EEA) in 2018. This law standardises the regulations for storing personal data within the European Union, and breaching it can incur high fines. According to the GDPR, personal data should not be made publicly available without explicit, informed consent; data should be stored on servers within the territory of the EU/EEA; and users should have the right to have their data erased. Irrespective of where in the world the MOOP team members reside, we suggest always applying the highest standards of data privacy and security for everyone involved and ensuring that the providers of digital tools and services used for collaborative writing and other steps in the research workflow comply with these as well.

Communication

In addition to communicating through the authoring environment, it is also advisable to maintain parallel forms of regular communication. While all the platforms mentioned above have commenting functionalities, sometimes this is not enough, and dedicated channels can help maintain fluent communication between collaborators throughout the duration of a project. Email, communication channels such as Slack, or a variety of open-source alternatives, such as Mattermost, Riot, and Zulip, are useful (Table 3). Some of these require more technical skills to set up, and therefore it is important to decide which are most helpful and pragmatic for your purposes and the team members’ preferences. Advance testing by the core working group can help streamline this. Regular calls and meetings are useful to clarify any outstanding issues that cannot be resolved in the manuscript itself. Expectations around communication need to be specified by the core group, including how it will be maintained after the initial startup phase (e.g., do participants need to provide regular updates to the core group? How regularly? Will the core group or project leader update all participants about each stage of the publishing process, from peer review to final publication?).

Table 3

A summary of major tools for communicating while collaborating on research manuscripts, each of which at least one of the contributing authors of this paper has experience with. The table highlights a variety of features and aspects of openness that we consider relevant, and it may serve as a preliminary guide to help you make an informed decision for your next project.

Communication tool | Slack | Mattermost | Riot | Zulip | Twist | Gitter | Trello
Key function elements | Chat | Chat (+Issue Board when linked with GitLab/GitHub) | Chat | Chat | Chat | Chat | Issue Board-based project management
Free for users | Yes (with limited functionality) | Yes | Yes | Yes | Yes (with limited functionality) | Yes | Yes (with limited functionality)
Open source | No | Yes | Yes | Yes | No | Yes | No
Message archive limit | Archive of 10K most recent messages on free plan | Unlimited | Unlimited | Unlimited | One month of messages | Unlimited | 10 Team Boards per free account
Group function | Yes (open and closed) | Yes | Yes | Yes | Yes | Yes | Yes
Call function | Yes for private; only paid for groups | No | Yes | Yes, meet.jit.si video chat integration (open source) | No | No | Yes, via add-on ‘Power-Up’ extensions incl. Slack, BlueJeans video

Potential hurdle: Some of these services have a seemingly high barrier to entry in terms of the technological requirements and skills necessary to communicate (e.g., knowledge of LaTeX and/or Markdown), which can be off-putting to newcomers. As there are multiple available platforms, involvement in multiple projects might mean managing multiple accounts on multiple platforms at a time, which can be distracting and frustrating.

Solution: For Overleaf, there is a free online introduction to LaTeX.13 Kirstie Jane Whitaker has developed a friendly GitHub intro workshop,14 and Kris Shaffer has looked into using a GitHub/GitLab-based production workflow for academics.15 All three are useful introductions to these platforms. An increasing demand for interoperability between service providers has become clear; the Open Science Framework is one example of a platform that integrates with Google Docs, Dropbox, and GitHub so that users can seamlessly collect all relevant files into one digital project. It is also feasible to ‘stitch together’ different sections of a project conducted across various platforms. Either way, we suggest settling early on a core set of tools that are suitable for most people involved.

5. Define a clear article structure from the beginning

Let us assume that you have a project, a platform, and a core team. It is now time to start getting your ideas down on the screen/digital paper. The first step is to define the article structure: the main topics and themes to be covered, their arrangement in the paper, and the subsections within them. A useful approach to trigger the co-writing process is to have a couple of sentences and keywords in each subsection, written by the core group, about what to cover within it; this sort of ‘dump strategy’ can be as rough as needed to start. This will be especially useful for newcomers who might be more unfamiliar with a MOOP-like process and for those who might have issues with writing and do not know where to begin. At this point, as we discuss in Recommendation 6, you will begin to promote the MOOP online and welcome more participants. Make it as clear as possible what the project structure is and who is assigned to each section in order to minimize participation friction. Do not worry about getting this text immaculate right away, as it is likely to evolve as others contribute. Highlight structural elements such as section headings in red to make them stand out if needed. For tools like Google Docs, this outline can be viewed on the left-hand panel if the sections are structured in the form of headings/subheadings, and this feature makes in-article navigation much simpler for everyone involved. Keep a temporary (or permanent) table of contents if necessary.

A personal tendency towards perfectionism may prevent some people from participating, due to the inherently continuous and imperfect, always-beta nature of a MOOP. Yet actively collaborating on such projects can be a good learning experience for perfectionist authors, helping them become comfortable with sharing and collaborating on a project before it has reached a near-complete state. The core team may need to help collaborators who are struggling to contribute by providing an additional space where ideation is welcome and by emphasising that imperfection is part of the process. This can entail providing a mechanism external to the collaboration platform (some form of a sandbox) where people can share their ideas to be integrated at a later stage. Asking for contributions on specific issues also helps to reduce the resistance to participate.

Potential hurdle: Not everyone might agree on the planned structure, and that structure might be subject to change as the project evolves.

Solution: Simply be aware that often there are multiple routes to achieving the same goal. Remain open to suggested changes to the structure and the potential impact that such changes can have on the overall article.

6. Promote your MOOP online

After you have set up the project structure and assembled a core working group, the next key step is to make it all open. The key difference between a MOOP and a traditional research paper is the former’s fundamentally open, online, and participatory nature (Dall’Olio et al. 2011). This open participation might start before the project even formally begins (e.g., when a project instigator encourages engagement with a topic on social media and starts recruiting potential collaborators), or it might occur at a later stage, when a MOOP core group puts out a general call for contributors after the project structure has been established, seeking to bring in new perspectives throughout the writing and editing process. Open sharing and collaboration are fundamental to FOSS, Open Education, Wikis, and other similar communities and concepts (Willinsky 2005; Tennant et al. 2020). This open model encourages us to break out of our personal social bubbles by allowing anyone inside or outside the research community to freely participate in the knowledge generation and dissemination process. This participation can potentially be via any social media or networking platform, whether geared primarily to academic audiences or not. Make sure to use welcoming language (e.g., non-gendered), and be conscious that there are many audiences who could be receiving, and be receptive to, your messaging. Be aware that this opens up the MOOP to other forms of expertise beyond just academic researchers (e.g., workers from the private sector or NGOs, librarians, hobbyists). Directly contacting people with relevant interests and expertise can also help to increase participation. Many social media platforms are typically used by specific geographic and demographic audiences (e.g., Baidu is popular primarily in China, and VK in Russia), and therefore your choice of where to share the invitation to participate will dictate which audiences you reach from around the world. Using a variety of communication channels will enable a wide range of potential contributors to be reached. Below we list popular social media platforms known to us and apply selected criteria of openness to help inform readers’ decisions about platforms to use for future projects (Table 4).

Table 4

A summary of major social media services used by researchers.

Social media tool | Twitter | Facebook | LinkedIn | Instagram | Mastodon | Researchgate | Academia.edu
Free to use | Yes | Yes, but you agree to give away your personal data for customized advert and marketing profiling | Yes, extra features for premium users | Yes, see Facebook | Yes | Yes, see Facebook; extra features for premium users | Yes, see Facebook; extra features for premium users16
Presence of researchers | Yes | Yes, but often non-professionally | Yes | Yes | Slow increase | Yes | Yes
Open source | No | No | No | No | Yes | No | No
Inclusion in altmetrics scores17 | Yes | Yes | Yes | No | No | No | No
For profit | Yes | Yes | Yes | Yes | No | Yes | Yes
Group functions | No | Open, closed, and secret groups | Yes | No | Yes | Yes | No
Access to platform data via Application Programming Interface (API) or tool? | Tool and API | API | Tool | No | API | None of the above | None of the above

The benefits to openly sharing a MOOP via social networks are potentially enormous (Bik and Goldstein 2013; Greenhow and Gleason 2014). Bringing together diverse groups (e.g., demographically, geographically, or intellectually) to share ideas and existing knowledge is clearly beneficial for the continued production of knowledge (Grijs 2015; Hsiehchen, Espinoza, and Hsieh 2015). Ideally, a more diverse group of contributors will offer many intersecting, overlapping, and conflicting perspectives that will allow them not only to establish common ground but also to critically engage with and challenge each other. However, groups can also tend to make more extreme or polarised decisions than individuals, particularly for evaluations of risk or attitudes, and this should be something to be mindful of throughout the MOOP process (Moscovici and Zavalloni 1969).

Pro tip: The stop loss

If you join a MOOP as a participant, you need to know that every group has a different dynamic. Some teams communicate frantically; others talk online sporadically and only when absolutely necessary. Some teams might use a lot of slang and insider jokes, while other teams communicate in a much more formal manner. Some teams have a very clear hierarchy; others have decentralized governance structures, and you can volunteer to lead subprojects at any moment. If for some reason you feel that you do not fit in with the team, you do not feel comfortable, or you do not understand the intentions of some other participants, you can consider a few options. You can talk to the project leader to clarify the scope of your responsibilities within the project and then focus on the content of your contribution. You can assemble a small group to work closely together on a number of tasks and subsequently report the progress to the rest of the MOOP group when milestones are reached. However, if these strategies do not improve your level of comfort, it is okay not to participate. It is always hard on the first day when you are the new kid on the block, but as a rule of thumb, if you still do not feel comfortable with the group dynamic after a week or two, the best choice for yourself and for others might be to leave this particular MOOP group.

Potential hurdle: An open call for contribution/co-authorship might lead to two types of self-selection bias. First, followers who notice the call will likely be in the same bubble/community as the initiator of the project. Second, even if the call is shared widely, certain people might find it easier to opt in, such as those with more extroverted personalities or those with deeper knowledge of or more experience with a subject.

Solution: Contributors from typically underrepresented groups might be more likely to join when explicitly welcomed and invited. Some people might feel like they are not experts on topics on which they are in fact quite knowledgeable, and helping people feel secure that they have something to contribute can encourage involvement. This is just one active step to ensure a welcoming environment.

7. Be actively aware of the different forms of inclusivity

Writing a MOOP can be very intense: any number of people might be writing, editing, and commenting on the document in real time, or there might be multiple parallel instant messaging threads. For newcomers, it can also be difficult, for a number of reasons, to get started. Given the open model of contributing to a MOOP, contributors to your project might come from a variety of cultures and can include people coming from outside the traditional academic establishment. Despite having a high level of expertise on the project topic, contributors might not be familiar with the typical structure and style or the particular editorial conventions of academic articles; this is not a lack of expertise but simply a consequence of different backgrounds. Consequently, they might need additional support to navigate the requirements of the format. Many academic tools for writing articles, such as citation managers and LaTeX editors, can be unfamiliar and require additional guidance. For example, Greshake Tzovaras et al. (2019) was written in LaTeX through Overleaf, which was perceived as a barrier to efficient contribution by some authors. When selecting the tools for your MOOP, keep these considerations in mind by, for example, choosing a newcomer-friendly tool chain or offering additional support to newcomer participants when needed.

If parts of the paper are already written, it may be difficult for newcomers who join at later stages in the process to decide where they can best contribute. Depending on the MOOP, the call for contributors might stay open until as late a stage as project completion. A safe way for a newcomer to contribute might be to comment on existing text, but this process can quickly get confusing with a large number of authors, especially for contributors who are new to the project and do not know the other contributors. Not everyone will be comfortable operating in what can often be a high-energy environment, which is why the core team should stay on top of moderating and clearing the comment sections. Discussions focussed on specific sections can be quickly resolved in a call with the contributors involved.

Be mindful of the number of communication channels that are utilized in order to avoid ‘platform fatigue.’ Mental health amongst researchers is already threatened (e.g., Evans et al. 2018), and participation in a MOOP should not add to that. It makes sense to have one place to write the manuscript and another place for discussions, but adding even more channels can make participants feel powerless/undercut, uninformed, and sidelined. If more channels need to be added, use only the most feasible and practical ones for a given group, based on all members’ agreed-upon preferences.

Finally, a potentially off-putting aspect of a MOOP is participants’ fear that their ideas will be stolen, with or without credit, if they are shared publicly prior to any sort of formal publication. However, since there is a record of who is making contributions and sharing ideas, if this scenario does happen, it should be addressed in a similar way to cases of ‘scooping’ of preprints (Sarabipour et al. 2019). The risk of scooping can also be mitigated by building a team with high integrity and close personal relationships. For example, when working on a project together, participants can meet online regularly and share general life advice and support, engaging on a more personal level. This personal connection can help prevent unfair behaviours.

Pro tip: Be explicitly inclusive of regional diversity

It can be very easy for a MOOP to become dominated by contributors from North America and Western Europe. It is good to be mindful of how this can introduce bias and be off-putting for contributors from the rest of the world. One way to actively overcome this obstacle is to make sure, at as early a stage as possible, that wide geographic diversity is represented among contributors and that relevant literature from geographically diverse authors is consulted and included in the reference list.

Of MOOPs and MOOTs

Another important aspect of diversity to consider is the question of linguistic variety. Participants should have an informed discussion about which language to create a MOOP in. If a group is primarily sharing information in, for example, English, this automatically makes it more difficult for non-native English speakers in a potential audience to engage with the project (see Tietze and Dick 2013). One potential solution to this is the optional expansion of your envisioned MOOP project towards what we here identify as a Massively Open Online Translation, or MOOT. To do so, it is crucial to have motivated volunteers who are willing to take on the important task of transforming a given set of information into a new linguistic context and to enable those working on a MOOT to do this in a digital environment of their choosing.

For example, within the Open Science MOOC (Massive Open Online Course),18 some members are exploring a variety of options to link the requirements of translation management with the underlying open approach to content development already established via GitHub (Tennant et al. 2019b). Among the plethora of existing online translation management solutions,19 they have found the localization management platforms Crowdin and Weblate to be particularly promising because these platforms offer free plans for dedicated FOSS projects.20 Keep in mind, though, that both services are owned by for-profit companies, and the free open-source tiers might be discontinued in the future, so an option to export any project’s data (translation files) should always inform the choice of translation platform. A somewhat different, and fully open, approach to MOOT collaboration is that of translatewiki: a FOSS platform that offers a central space where translators and project managers can convene. Staying true to the Wiki logic, translatewiki enables registered users to translate the content of any listed project,21 so users can either offer translation capacity or add a new project, define the necessary parameters of translation, and then let the translator community work on it.

8. Handle conflicts quickly and professionally

Given that MOOPs invite contributions from people with a variety of professional backgrounds, contributors may not be completely familiar with the norms of writing a MOOP, even if they are familiar with the standard rules for writing research articles. Since a MOOP is almost entirely an online process, and online interactions sometimes foster more callous or hostile behaviour, participants need to consciously ensure that they treat their collaborators with the same level of professional kindness and human courtesy that they would show them in real life. When you are editing or commenting, be aware that it is someone else’s work, which should be respected and valued no matter how much you disagree with it. Take care when deleting or editing others’ work (use ‘suggesting edits’ functions where available), and treat ‘human-content’ interactions the same as you would ‘human-human’ ones. It is also worth bearing in mind a key rule of management: praise in public, criticize in private. If a participant shares inappropriate content or their work is of poor quality, always make sure to speak to them in private when explaining the situation. However, when praising another MOOP participant, do this in public as part of creating a positive collaboration atmosphere.

There are times when conflicts arise from the expression of different opinions or viewpoints, especially over contested ideas. While intellectual conflict is generally good for scholarship, there are times when an impasse might be reached. Conflict resolution will be situation-dependent and based on the leadership structure. With a core team in place and an established Code of Conduct, it is possible to arbitrate such disputes. Tensions often arise during peer-to-peer work, so if possible, have someone with editorial experience act as arbitrator to help resolve them. No matter what, a disagreement is never a reason to allow combative behaviour.

Pro tip: Formulate, share, and enforce a Code of Conduct

A Code of Conduct, usually written by the core team at the beginning of a project, is a document that summarizes the rules of communication and indicates which behaviours are preferred and which are not welcome. Consent to the rules covered by the Code of Conduct should be a prerequisite for taking part in the project and not tacitly assumed. This document should also guide newcomers on how to communicate in order to best integrate with the community and how to solve potential conflicts (e.g., the Contributor Covenant22). It should remain open to constructive feedback and criticism and be iteratively evaluated over time by the community. A Code of Conduct is only as valuable as its enforcement, so the core team should be committed to following through on handling transgressions proportionally and transparently.

Potential hurdle: Several participants disagree in the comments about a fundamental element of the paper.

Solution: Achieving consensus is never easy, especially on controversial topics and in scholarly research, where such controversy is critical for driving progress. If a situation looks like it will not resolve itself, sometimes the best approach is for the project leader to take the discussion private and arbitrate between the opposing sides in order to find a resolution. For example, the parties involved might come to a consensus by finding better wording, phrasing something in a more balanced way, or moving controversial passages to another section (or deleting them entirely if they are unnecessary to the central point of the paper). Outlining both arguments and sorting them into different sections of the article might also be a solution.

9. Maintain motivation until the end

Coordinating a large number of participants can become quite difficult at times, which is why good rules, governance, and preparation are crucial to the process. If several people are helping, an appropriate and efficient division of labour should help to execute the project even faster. Make sure project participants have mutual respect and empathy as well as common goals and motivations; these qualities might be more important than the number of participants. It might be easier to work with 30 people who have compassion than with three who are more competitive in nature.

If the project runs on for a long time, MOOP participants might feel that completion is endlessly out of reach or that the project will never be finished. Human motivation also decays with time: even if participants are highly motivated for the first few weeks, their engagement slowly decreases afterwards, and the project might stagnate if certain goals have not been met by that point. It remains critical to maintain motivation through regular communication, demonstrable progress, and achieved milestones. A well-developed internal communication strategy and project roadmap can be key here.

It is also paramount to create an optimal level of comfort for all participants, which can be facilitated by anyone familiar with the process but especially by the core team. There are two ways to go about this. First, have a clear set of guidelines for how to contribute, such as the contributing guidelines that accompany many FOSS projects. Second, be alert to confusion arising among contributors, and have a guidance protocol at hand to help. For example, the project leader or members of the core group could provide a contact email address that people can use to discuss and address any concerns. This approach might not work at scale, though, and a formal protocol will be more useful for larger projects. Establishing these things early on can help to maintain motivation as the project advances over time.

10. Finish your MOOP

The final step, once the project has been completed, is to prepare your MOOP for submission to a journal for peer review. Before this point, make sure that all authors agree that the article is ready for publication and agree on the publication procedure (for example, copyright, journal choice, and potential article-processing charges all need to be considered). Individual contributors should make sure that their efforts have been recorded and recognised so that they can receive appropriate credit.

As a MOOP goes through peer review, depending on the extent and rigour of the input already provided, the review process might be largely redundant. If 30 people have been continuously authoring, editing, and reviewing a project, what difference will two or three more make? This does not mean the peer review process is irrelevant for a MOOP, just that the ‘principle of many eyes’ means that most errors should have been spotted by the time the project is completed—akin to a ‘bazaar’ model of production (Raymond 2001). On the other hand, the MOOP could end up with incoherent sections that feel mashed together, in which case having an external pair of eyes check the narrative and structure is important. Either way, it is usually worthwhile to share your MOOP as a preprint. Most journals now ‘allow’ you to share a preprint in parallel with the review process, which is an excellent way of rapidly making your work public with an authoritative stamp, ensuring it is openly available, and soliciting wider feedback than the typically highly constrained review process would provide.

If, after undertaking this brilliantly web-enabled dynamic authoring and iterative collaborative review process, you still choose to go through the laborious traditional publishing stage, there is one potential obstacle to be aware of: not all journals and editors may welcome the MOOP process. Essentially all of the communication and informal review have been done in the open beforehand, which could affect the likelihood of your MOOP being accepted for publication. On the other hand, journals might like the process, as it reduces their relative workload. If the possibility of a negative response concerns you, it is best to contact your target journal as early in the process as possible. However, we are collectively unaware of any journal that at present forbids submission of a MOOP as part of a formal policy, and we have never experienced such a refusal as MOOP authors. This lack of negative response from journals is probably due to the relatively recent advent and current rarity of MOOPs. It might be useful in the future for journals to provide an explicit stamp, metadata tag, or badge showing that an article was produced as a MOOP.

Potential hurdle: Working on a MOOP requires flexibility in managing people but tight control of deadlines, because there is always some aspect of the project that might be further improved or extended. When a large number of people are editing the same text, opinions will always be divided over whether the output is satisfactory. It is easy to launch a project and get a number of enthusiastic people on board; it is much harder to wrap up a project and leave the participants feeling satisfied about how they invested their time.

Solution: At some point, either the project leader or core team needs to make the final decision that the project is ready for closure or submission.

Conclusions

We have outlined 10 core recommendations for helping to write Massively Open Online Papers, or MOOPs (Figure 1), and hope that these prove useful in inspiring new forms of online collaboration. In many cases, these recommendations reflect what might be viewed as an ideologically FOSS or Wiki-style development workflow for research articles, and we anticipate that more services and tools will emerge in the future that take advantage of the strengths of such workflows in combining continuous authoring and ‘peer review.’ However, we acknowledge that at the moment this is still quite an exploratory process, and the social dynamics associated with such tools remain complex. There are many aspects of MOOPs not covered in this manuscript, and we encourage future ventures to find optimal strategies for MOOP-like authoring models. For example, all of the authors here are from Western Europe and therefore have broadly similar cultural approaches to writing that might not be reflected in other parts of the world. We would welcome further discussion around the cultural differences associated with online collaborative authoring. In the future, it may be beneficial to have a database of available MOOP initiatives, where individuals can also register their interest in contributing.

Notes

1Robert Hoffman, “Public invitation to contribute to…Principles for the post-GWAS functional characterisation of risk loci,” WikiGenes Collaborative Publishing, accessed December 23, 2019 on Internet Archive, https://web.archive.org/web/20130110163114/http:/www.wikigenes.org/e/pub/e/85.html. 

2See e.g., “Tip of the Week: Overleaf and Reference Managers,” Blog, Overleaf, updated 2020, https://www.overleaf.com/blog/639-tip-of-the-week-overleaf-and-reference-managers. 

3See “Cite,” Authorea Help, updated 2020, https://support.authorea.com/en-us/category/cite-r1vanh/. 

4Here, we mean ‘free’ as in it is accessible either with an account that you have to sign up for or without registration. Some of the listed services may offer pro features with costs involved. 

5CryptPad has an in-built template feature, but since citation styles, etc. do not work with the plain-text editor, its use for journal submissions is rather limited. For more, see https://cryptpad.fr/faq.html#keywords-template. 

6Cf. GitHut, Carlo Zapponi, last modified 2014, https://githut.info/. 

7See this Markdown Cheatsheet, GitHub, Adam Pritchard, last modified May 20, 2017, https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet. 

8See “Make Presentation Slides with HackMD,” HackMD Tutorial Book, HackMD, https://hackmd.io/c/tutorials/%2Fs%2Fhow-to-create-slide-deck. 

9See “How to Create a Book,” HackMD, https://hackmd.io/s/how-to-create-book. 

10Louise Matsakis, “Google Docs Is Randomly Flagging Files for Violating Its Terms of Service,” Motherboard, October 31, 2017, https://www.vice.com/en_us/article/zmz3yw/why-is-my-google-doc-locked-terms-of-service-bug. 

11On that note, see also Chris Hartgerink, “OK Google: Delete My Account (No Wait. No Really.),” Medium, February 15, 2018, https://medium.com/read-write-participate/ok-google-delete-my-account-no-wait-no-really-a0f8bbd26265. 

12“Data Protection,” European Commission, https://ec.europa.eu/info/law/law-topic/data-protection_en. 

13“Free Online Introduction to LaTeX,” Overleaf, updated 2020, https://www.overleaf.com/learn/latex/Free_online_introduction_to_LaTeX_(part_1). 

14“A Friendly Github Intro Workshop,” GitHub, https://kirstiejane.github.io/friendly-github-intro/. 

15Kris Shaffer, “Push, Pull, Fork: GitHub for Academics,” Hybrid Pedagogy, May 26, 2013, http://hybridpedagogy.org/push-pull-fork-github-for-academics/. 

16See Sarah Bond, “Dear Scholars, Delete Your Account at Academia.Edu,” Forbes, January 23, 2017, https://www.forbes.com/sites/drsarahbond/2017/01/23/dear-scholars-delete-your-account-at-academia-edu/#3c7926882d62. 

17“Sources of Attention,” Altmetric, https://www.altmetric.com/about-our-data/our-sources/. 

18Open Science MOOC, https://opensciencemooc.eu/. 

19For a variety of open solutions, see “18 Open Source Translation Tools to Localize Your Project,” Opensource.com, Red Hat, updated 2019, https://opensource.com/article/17/6/open-source-localization-tools. 

20Weblate: free for open source projects under an open license, although some limitations apply; Crowdin: open source for software, plus a free academic license option ‘if your project has educational purposes’ (“Plans, Pricing, and Free Trial,” Crowdin. https://crowdin.com/pricing#annual). 

21Including many dedicated FOSS platforms/services such as OpenStreetMap, MediaWiki, and Dissem.in—so if you fancy helping them out, have a go at it. 

22“A Code of Conduct for Open Source Projects,” Contributor Covenant, Coraline Ada Ehmke, last modified 2014, https://www.contributor-covenant.org/. 

Acknowledgements

We are grateful for the contributions by Chris Hartgerink, Veronika Cheplygina, and Johanna Havemann to the previous version of this article shared as a preprint (https://doi.org/10.31222/osf.io/et8ak). Samantha MacFarlane helped to improve the style and clarity of this paper beyond recognition, and we are deeply grateful for her editorial skill. We wish to thank Tony Ross-Hellauer (aka Master of Google Docs) for coming up with the term MOOP and everyone who has been part of one of these collaborations in the past. The authors also wish to extend their thanks to Barbara Rivera-Lopez for useful discussion on this topic and to the anonymous reviewers at KULA for providing valuable input that helped to shape this paper for the better. This article also went through a round of peer review at PLOS Computational Biology, and we thank Giovanni M. Dall’Olio and two anonymous reviewers there for their additional insight. Finally, our thanks to Alberto Pepe from Authorea for helping to clarify some of Authorea’s features.

Competing Interests

The authors have no competing interests to declare.

Author Information

JPT: Palaeontology and Open Scholarly Communication. Jonathan Tennant unfortunately passed away on April 9th 2020.

NB: New methods for data analysis in connectomics, mentoring & career development.

BGT: Bioinformatics, Community Science.

PM: Data Science, Data Stewardship, Open Scholarly Communication.

TS: Cultural Media Studies, and the intersections of Open Education, Open Science and Open Scholarship.

References

  1. Bielczyk, Natalia, Ayaka Ando, AmanPreet Badhwar, Chiara Caldinelli, Mengxia Gao, Amelie Haugg, Leanna Hernandez, Kaori Ito, Daniel Kessler, and Daniel Lurie. January 2019. “Effective Self-Management for Early Career Researchers in the Natural Sciences.” Working paper, OHBM Student and Postdoc Special Interest Group. DOI: https://doi.org/10.17605/OSF.IO/W6EMK 

  2. Bik, Holly M., and Miriam C. Goldstein. 2013. “An Introduction to Social Media for Scientists.” PLOS Biology 11(4): e1001535. DOI: https://doi.org/10.1371/journal.pbio.1001535 

  3. Bosman, Jeroen, and Bianca Kramer. 2015. “101 Innovations in Scholarly Communication: How Researchers Are Getting to Grip with the Myriad of New Tools.” Impact of Social Sciences (blog), LSE Blogs. November 11, 2015. http://blogs.lse.ac.uk/impactofsocialsciences. 

  4. Brand, Amy, Liz Allen, Micah Altman, Marjorie Hlava, and Jo Scott. 2015. “Beyond Authorship: Attribution, Contribution, Collaboration, and Credit.” Learned Publishing 28(2): 151–55. DOI: https://doi.org/10.1087/20150211 

  5. Cheruvelil, Kendra S., Patricia A. Soranno, Kathleen C. Weathers, Paul C. Hanson, Simon J. Goring, Christopher T. Filstrup, and Emily K. Read. 2014. “Creating and Maintaining High-Performing Collaborative Research Teams: The Importance of Diversity and Interpersonal Skills.” Frontiers in Ecology and the Environment 12(1): 31–38. DOI: https://doi.org/10.1890/130001 

  6. Dall’Olio, Giovanni M., Jacopo Marino, Michael Schubert, Kevin L. Keys, Melanie I. Stefan, Colin S. Gillespie, Pierre Poulain, et al. 2011. “Ten Simple Rules for Getting Help from Online Scientific Communities.” PLOS Computational Biology 7(9): e1002202. DOI: https://doi.org/10.1371/journal.pcbi.1002202 

  7. Evans, Teresa M., Lindsay Bira, Jazmin Beltran Gastelum, L. Todd Weiss, and Nathan L. Vanderford. 2018. “Evidence for a Mental Health Crisis in Graduate Education.” Nature Biotechnology 36(March): 282–84. DOI: https://doi.org/10.1038/nbt.4089 

  8. Frassl, Marieke A., David P. Hamilton, Blaize A. Denfeld, Elvira de Eyto, Stephanie E. Hampton, Philipp S. Keller, Sapna Sharma, et al. 2018. “Ten Simple Rules for Collaboratively Writing a Multi-Authored Paper.” PLOS Computational Biology 14(11): e1006508. DOI: https://doi.org/10.1371/journal.pcbi.1006508 

  9. Greenhow, Christine, and Benjamin Gleason. 2014. “Social Scholarship: Reconsidering Scholarly Practices in the Age of Social Media.” British Journal of Educational Technology 45(3): 392–402. DOI: https://doi.org/10.1111/bjet.12150 

  10. Greshake Tzovaras, Bastian, Misha Angrist, Kevin Arvai, Mairi Dulaney, Vero Estrada-Galiñanes, Beau Gunderson, Tim Head, et al. 2019. “Open Humans: A Platform for Participant-Centered Research and Personal Data Exploration.” GigaScience 8(6). DOI: https://doi.org/10.1093/gigascience/giz076 

  11. Grijs, Richard de. 2015. “Ten Simple Rules for Establishing International Research Collaborations.” PLOS Computational Biology 11(10): e1004311. DOI: https://doi.org/10.1371/journal.pcbi.1004311 

  12. Himmelstein, Daniel S., Vincent Rubinetti, David R. Slochower, Dongbo Hu, Venkat S. Malladi, Casey S. Greene, and Anthony Gitter. 2019. “Open Collaborative Writing with Manubot.” PLOS Computational Biology 15(6): e1007128. DOI: https://doi.org/10.1371/journal.pcbi.1007128 

  13. Hoffmann, R. 2008. “A Wiki for the Life Sciences Where Authorship Matters.” Nature Genetics 40: 1047–51. DOI: https://doi.org/10.1038/ng.f.217 

  14. Holcombe, Alex O. 2019. “Contributorship, Not Authorship: Use CRediT to Indicate Who Did What.” Preprint, submitted April 18, 2019. PsyArXiv. DOI: https://doi.org/10.31234/osf.io/dt6e8 

  15. Hsiehchen, David, Magdalena Espinoza, and Antony Hsieh. 2015. “Multinational Teams and Diseconomies of Scale in Collaborative Research.” Science Advances 1(8): e1500211. DOI: https://doi.org/10.1126/sciadv.1500211 

  16. Katz, Daniel S., Lois Curfman McInnes, David E. Bernholdt, Abigail Cabunoc Mayes, Neil P. Chue Hong, Jonah Duckles, Sandra Gesing, et al. 2018. “Community Organizations: Changing the Culture in Which Research Software Is Developed and Sustained.” Preprint, submitted November 20. http://arxiv.org/abs/1811.08473. 

  17. Lakens, Daniël, Anne M. Scheel, and Peder M. Isager. 2018. “Equivalence Testing for Psychological Research: A Tutorial.” Advances in Methods and Practices in Psychological Science 1(2): 259–69. DOI: https://doi.org/10.1177/2515245918770963 

  18. Lurie, Daniel, Daniel Kessler, Danielle Bassett, Richard F. Betzel, Michael Breakspear, Shella Keilholz, Aaron Kucyi, et al. July 2018. “TVWG Resting TVC Review.” Working paper, Time Varying Working Group. DOI: https://doi.org/10.17605/OSF.IO/FA6ZR 

  19. Moscovici, Serge, and Marisa Zavalloni. 1969. “The Group as a Polarizer of Attitudes.” Journal of Personality and Social Psychology 12(2): 125–35. DOI: https://doi.org/10.1037/h0027568 

  20. Raymond, Eric S. 2001. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Cambridge, MA: O’Reilly Media, Inc. 

  21. Sarabipour, Sarvenaz, Humberto J. Debat, Edward Emmott, Steven J. Burgess, Benjamin Schwessinger, and Zach Hensel. 2019. “On the Value of Preprints: An Early Career Researcher Perspective.” PLOS Biology 17(2): e3000151. DOI: https://doi.org/10.1371/journal.pbio.3000151 

  22. Tennant, Jonathan, Jennifer Elizabeth Beamer, Jeroen Bosman, Björn Brembs, Neo Christopher Chung, Gail Clement, Tom Crick, et al. 2019a. “Foundations for Open Scholarship Strategy Development.” Preprint, submitted January 30. DOI: https://doi.org/10.31222/osf.io/b4v8p 

  23. Tennant, Jonathan P., Bruce Becker, Tanja de Bie, Julien Colomb, Valentina Goglio, Ivo Grigorov, Chris Hartgerink, et al. 2019b. “What Collaboration Means to Us: We Are More Powerful When We Work Together as a Community to Solve Problems.” Collaborative Librarianship 11(2). Available at: https://digitalcommons.du.edu/collaborativelibrarianship/vol11/iss2/2. 

  24. Tennant, Jonathan P., Ritwik Agarwal, Ksenija Baždarić, David Brassard, Tom Crick, Daniel Dunleavy, Thomas Evans, et al. 2020. “A Tale of Two ‘Opens’: Intersections between Free and Open Source Software and Open Scholarship.” SocArXiv. DOI: https://doi.org/10.31235/osf.io/2kxq8 

  25. Tietze, Susanne, and Penny Dick. 2013. “The Victorious English Language: Hegemonic Practices in the Management Academy.” Journal of Management Inquiry 22(1): 122–34. DOI: https://doi.org/10.1177/1056492612444316 

  26. van den Broek, Tijs, and Anne F. van Veenstra. 2015. “Modes of Governance in Inter-Organizational Data Collaborations.” ECIS2015 Completed Research Papers. Paper 188. ISBN 978-3-00-050284-2. DOI: https://doi.org/10.18151/7217509 

  27. Vicens, Quentin, and Philip E. Bourne. 2007. “Ten Simple Rules for a Successful Collaboration.” PLOS Computational Biology 3(3): e44. DOI: https://doi.org/10.1371/journal.pcbi.0030044 

  28. Weinberger, Cody J., James A. Evans, and Stefano Allesina. 2015. “Ten Simple (Empirical) Rules for Writing Science.” PLOS Computational Biology 11(4): e1004205. DOI: https://doi.org/10.1371/journal.pcbi.1004205 

  29. Willinsky, John. 2005. “The Unacknowledged Convergence of Open Source, Open Access, and Open Science.” First Monday 10(8). DOI: https://doi.org/10.5210/fm.v10i8.1265 
