Open Social Scholarship Annotated Bibliography

This annotated bibliography responds to and contextualizes the growing ‘Open’ movements and recent institutional reorientation towards social, public-facing scholarship. The aim of this document is to present a working definition of open social scholarship through the aggregation and summation of critical resources in the field. Our work surveys foundational publications, innovative research projects, and global organizations that enact the theories and practices of open social scholarship. The bibliography builds on the knowledge creation principles outlined in previous research by broadening the focus beyond conventional academic spaces and reinvigorating central, defining themes with recently published research.

The forward-facing, networked knowledge production discussed in these collections, by scholars including John Maxwell, Ray Siemens, Susan Brown, and Constance Crompton, is also reflected in this bibliography.
In order to encompass the broad field of open knowledge production and circulation, this bibliography extends its focus beyond academic contexts to the multifarious manifestations of community-based research, including citizen science and citizen-scholar projects. Projects are documented in this bibliography as examples of how academic researchers can benefit from partnering with active citizen scholars, such as the large crowdsourced initiative Transcribe Bentham and the Canadian-based Linked Modernisms (Ross, Christie, and Sayers 2014). The increasingly popular citizen science and citizen scholarship movements draw attention to research partially or wholly conducted by non-experts, typically volunteers who receive training necessary for collecting and interpreting data for a specific research purpose. Many of the resources included in this bibliography address the challenges of public scholarship and use case studies to explore how to develop an ethical, collaborative, and dialogic university-community partnership.
This bibliography also considers the role of open knowledge and technology in community partnerships and global activism. The advent of digital technology has created unprecedented opportunities to mobilize crowds and rapidly share information with a wide, public audience. The effect of these technological advancements can be recognized in a number of notable movements, such as Black Lives Matter and #MeToo. The growth of cyberactivism and use of online tools, such as Twitter, in social protest has had a significant impact on promoting political activism, mobilizing certain portions of society, and enhancing dissemination potentials for activist causes (Sandoval-Almazan and Gil-Garcia 2014). Online platforms provide an opportunity to make social justice movements more visible, and the merging of social justice initiatives with online technologies has made knowledge more dynamic. This bibliography considers how technologies facilitate knowledge management and mobilization, as well as how specialized research can play an active role in burgeoning global justice movements.

Intent and History of the Bibliography
The 'Open Social Scholarship Annotated Bibliography' was compiled in 2015-2016 by a collaborative team at the Electronic Textual Cultures Lab (ETCL). This document was inspired by two previous annotated bibliographies authored by ETCL members in collaboration with the INKE Research Group: the initial 'Social Knowledge Creation: Three Annotated Bibliographies' (Arbuckle et al. 2013) and an updated version, 'An Annotated Bibliography of Social Knowledge Creation' (Arbuckle et al. 2017). The 2013 publication provided a snapshot of contemporary scholarship, initiatives, and research technologies related to social knowledge creation. The later iteration of the bibliography updated the materials with publications authored between 2013 and 2016 and expanded its scope by including resources on crowdsourcing, open access, public humanities, digital publishing, and collaborative games. The 2017 bibliography also provided a definition of social knowledge creation: 'acts of collaboration in order to engage in or produce shared cultural data and/or knowledge products' (Arbuckle et al. 2017, 29). This bibliography builds further on the research collected in these two surveys by updating and adding recently published materials on common topics, including crowdsourcing, the history of knowledge production, and the 'Open' movements. Additionally, this bibliography collates several entirely new sections that demonstrate the broader, more outward-facing focus of this document, including 'Community Engagement' and 'Action and Activism.' Given the overlap in subjects, theories, and practices between 'An Annotated Bibliography of Social Knowledge Creation' and the 'Open Social Scholarship Annotated Bibliography,' replicated entries have been marked with a cross symbol (+). This bibliography aims to capture a particular moment in the open social scholarship movement. Since its compilation, the field has shifted and expanded.
As such, this bibliography is necessarily not exhaustive, and future iterations would benefit from an even wider scope that includes additional and even more diverse open social scholarship resources. In particular, this material could be expanded to include literature on open social scholarship in minority communities, which has grown in prominence since this document's conception.
The authors of the 'Open Social Scholarship Annotated Bibliography' enacted social knowledge creation practices in the assemblage of this bibliography by collaboratively setting the intellectual direction of the work, compiling resources, and annotating them. Research was carried out on platforms that facilitate collaborative research, including Google Drive and Zotero. This category also explores the motivations for institutions and researchers to openly publish, or refrain from publishing, their data (Murray-Rust 2008; Piwowar and Vision 2013).

Section II. Community-based and Collaborative Forms of Open Knowledge
This section addresses collaborative forms of knowledge production in contemporary society and the ubiquity and accessibility of digital tools that facilitate knowledge production. Instead of focusing on collaborations within field-specific or academic contexts, publications in this section focus on interdisciplinary collaborations, on university-community collaborations, and on knowledge production by citizens not affiliated with universities, including crowdsourcing, citizen science, and citizen scholarship. These modes of research have drastically expanded the scope of questions that can be asked. Although the steepest increase in citizen science is seen in the social sciences, it contributes to other disciplines as well. The Cornell Lab of Ornithology (CLO), for example, focuses on environmental studies projects and has been practicing citizen science for over twenty years. The CLO has thousands of participants gathering tens of millions of observations each year, demonstrating the power of crowdsourcing and the impact of nonacademic participants on research projects. In their article 'Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy,' CLO researchers provide a model for setting up a successful citizen science project. While the article evinces that open social knowledge was being practiced long before the digital age, its focus is on how open knowledge is currently practiced and systematized. For example, many funding agencies require research organizations and individuals to have a public-facing element to their projects. This can be enacted in multiple ways, such as having community members involved in the project (through citizen science or crowdsourcing) or by openly publishing the data and results.
The 126 entries in this section are divided into five categories with between 9 and 54 annotations each. The 'Community Engagement' category focuses on university representatives who are invested in creating and maintaining partnerships with community members, often in the form of goal-oriented projects that benefit the broader society. Resources in this section detail the benefits of these partnerships, both for the university and the community, as well as challenges that may arise during this collaboration and how to overcome them. In order to ensure that working with outside groups is professionally rewarding, authors argue for the need for university administrations to formally recognize university-community partnerships. A number of resources also discuss the role of technology in community engagement and collaboration. The 'Citizen Science' category includes research initiatives that are partially or wholly conducted by nonscientists, in most cases by volunteers who receive the necessary training to collect and interpret data for a targeted research investigation. The authors generally argue that the rise of citizen science is due to advances in technology that allow the collection of data by non-professionals. Another factor is that funding agencies are increasingly seeking the public's approval of scientific research endeavours, since taxpayer dollars often fund these initiatives. Moreover, authors unanimously agree that, if done properly, citizen science can go a long way in educating the public, supporting scientific research, and improving the ecological environment through targeted nature-based research. The third category, 'Crowdsourcing,' refers to projects built on information gathered by large groups of individuals through digital means. Crowdsourced data is quickly becoming a common element of many academic projects.
The resources collected in this category define crowdsourcing and offer a rich depiction of existing crowdsourcing practices, as well as suggestions for optimal implementation. The 'Collaborative Scholarship' category addresses the rise of disciplinary and interdisciplinary research partnerships. An extended study of collaboration throughout the life cycle of the seven-year INKE project is communicated through a series of articles that explore how this collective evolved over time and offer advice about how to develop and maintain productive team relationships, how to effectively integrate new team members into a project, and how to deal with unexpected challenges that may arise in collaborative environments.

Section III. Knowledge in Action
Resources in this category investigate how knowledge is mobilized and implemented in real-world settings. Instead of defining knowledge as static, something that sits on shelves and in machines, authors in this category trace the dynamism of knowledge and how it is made useful to others. This focus can also be seen as a shift away from overly specialized niches of knowledge to a more practical approach that investigates to what end knowledge is created and to what extent it is utilized. This section also addresses and assesses the impact of technology on society at large, including the public voice in political movements and decisions. It explores how technology facilitates communication and mobilizes crowds, both virtually and in real life, to partake in various actions and activist movements. The role of technology and open knowledge in facilitating social justice in various scenarios is also addressed. The 75 entries in this section are divided into five categories with between 9 and 23 annotations each: 1. Knowledge Mobilization; 2. Data Management; 3. Prototyping; 4. Social Justice and Open Knowledge Facilitated by Technology; and 5. Action and Activism. The 'Knowledge Mobilization' category includes works that reference the dissemination of research output, as well as knowledge engagement by groups outside of the pertaining research team. Notably, Colin R. Anderson and Stéphane M. McLachlan advocate for knowledge mobilization as a practice that opposes the models of knowledge transfer that often reign in academic environments and manifest a hierarchical transmission of knowledge (2015). This top-down structure is challenged by giving voice to typically marginalized groups, mostly those outside of academia, by establishing productive channels of communication (Anderson and McLachlan 2015).
Authors in this category acknowledge the value of implementing knowledge mobilization strategies and delve into possibilities, problems, and solutions, using concrete examples that employ a variety of theoretical frameworks. The 'Data Management' category homes in on effective methods for organizing data and documents through the application of systematic mechanisms. Collected and annotated resources in this category address metadata and database management, as well as data visualization. Overall, the core foci of this section are data lifecycles, infrastructural mechanisms, and effective governance of digital information. Scholarly prototyping, a field that has proliferated over the last two decades, is addressed in the 'Prototyping' category. By experimenting with conventional forms of scholarly communication, the research prototypes in this category offer alternative modes of production, presentation, and dissemination that are supported by the digital medium. Although they have different end goals, all the prototypes in this category are experimental and innovative in their respective fields. The 'Social Justice and Open Knowledge Facilitated by Technology' category engages with the effects that the digital medium has on social justice operations. Authors argue that open knowledge is a tool for social justice and demonstrate how it can advance diverse scholarly fields, as well as society more generally. Overall, this category explores the various technologies and approaches that enable the development of open knowledge and social justice. Finally, the 'Action and Activism' category describes how digital media impact the scope, outreach, and visibility of activist groups and movements.

Forms of Open Knowledge and their History
Many institutions have historically privileged the open circulation of knowledge. The resources in this category include historiographical accounts of the development of the public library system in the Western world, with a particular focus on the United Kingdom and the United States (Besser 2004; Hamlyn 1946; Harris 1999; Jordan 2015; Kelly 1966, 1973). Historically, it was the Philosophical Transactions (1665), the oldest and longest-running scientific journal, that pioneered the debates and arguments involved in the decision to make privately circulated knowledge accessible to a public that was predominantly interested in partaking in this knowledge acquisition (Willinsky 2006). Resources detail the rise of the philosophy of public access in seventeenth-, eighteenth-, and nineteenth-century institutions. Open knowledge is a historically based value system with a long tradition (Hamlyn 1946), and the historical publications included exemplify how knowledge was discussed and debated through publication. A number of the resources were written in response to previous research or as a means of summarizing an ongoing debate, thereby emphasizing the conversational foundations of the publications. Overall, this category demonstrates a strong British and American commitment to circulating knowledge products broadly (Besser 2004).
A. Annotations

+ Besser, Howard. 2004. "The Past, Present, and Future of Digital Libraries." In A Companion to Digital Humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth. Oxford: Blackwell.
Besser provides a history of digital libraries and argues for their continued importance in humanities disciplines. Libraries, archives, and museums can use high-quality digital surrogates of original material from different repositories so that they appear to be catalogued within the same collection. The author notes that libraries have long upheld ethical traditions, clientele service, stewardship, and sustainability in addition to facilitating use of their collections. Besser details the philosophies of metadata. To correct current problems facing digital libraries, the author suggests that web architecture should no longer violate conventional library practices of providing relative location information for a work, as this impinges on the ability of users to access the material.

Burke discusses the various agents and elements of social knowledge production with a specific focus on intellectuals and Europe in the early modern period (until c. 1750). He argues that knowledge is always plural and that various types of knowledge develop, surface, intersect, and play concurrently. Burke relies on sociology, including the work of Émile Durkheim, and critical theory, including the work of Michel Foucault, as a basis to develop his own notions of social knowledge production. He acknowledges that the church, scholarly institutions, the government, and the printing press have all had a significant effect on knowledge production and dissemination, often affirmatively but occasionally through restriction or containment. Furthermore, Burke explores how both 'heretics' (humanist revolutionaries) and more conventional academic structures developed the university as a knowledge institution.

+ Eisenstein, Elizabeth L. 1979. The Printing Press as an Agent of Social Change. Cambridge: Cambridge University Press.
Eisenstein highlights the role of the printing press as an agent of social change by adopting a historical approach that investigates the shift from script to print. She studies the implications of this transformation on three time periods specifically: the Renaissance, the Reformation, and the rise of modern science. One of the central thoughts of the book is what Eisenstein terms the 'Unacknowledged Revolution' that took place after the invention of the printing press, a time when public access to print media facilitated the growth of public knowledge and formulation of individual thought. Another achievement of print was the standardization and preservation of previous knowledge, which was a much more challenging endeavor in the manuscript generation. According to Eisenstein, this shift marked a crucial step in the development of humankind. By focusing on dissemination, standardization, preservation, and their effects on historical processes, Eisenstein provides a coherent argument about the social effects of the historical transition to the print medium.
Bath and Schofield reflect on the rise of the e-book by contemplating the various moving parts involved in its history and production. They focus on, and contribute to, the scholarly engagement with e-books, and they provide a comprehensive survey of theorists, including Johanna Drucker, Elizabeth Eisenstein, N. Katherine Hayles, Matthew Kirschenbaum, Jerome McGann, D.F. McKenzie, and Marshall McLuhan. Bath and Schofield integrate these theorists into a larger argument that suggests that both a nuanced understanding of book history and a comprehensive familiarity with digital scholarship are necessary to fully grasp the material and historical significance of the e-book. The authors conclude with a call to book history and digital humanities specialists (a.k.a. 'scholar-coders') to collaborate and develop new digital research environments together.

Belojevic, Nina. 2015. "Developing an Open, Networked Peer Review System." Scholarly and Research Communication 6(2): n.p. DOI: https://doi.org/10.22230/src.2015v6n2a205.
Belojevic presents the Personas for Open, Networked Peer Review wireframe prototype: an open, networked peer review model initiated by Belojevic and Jentery Sayers in 2013 that was further developed by the Electronic Textual Cultures Laboratory, in partnership with University of Victoria Libraries, the Humanities Computing and Media Centre, and the Public Knowledge Project. In this environment, articles undergo open peer review and can be commented on by a specific group of reviewers or the public. The prototyping process followed an approach similar to the one described in Katie Salen and Eric Zimmerman's Rules of Play: Game Design Fundamentals, in which they outline common game design principles. Belojevic describes how the project moved from iterative prototyping to agile development, an approach that permits researchers to break down the project into smaller chunks. This approach allows stakeholders to ensure that their goals are being met at every stage and scholars and researchers to maintain the quality of the project. Further research will focus on determining the aspects of agile development that are adaptable for the project in order to facilitate a balance between project development and deliverables, while being flexible enough to pursue and integrate novel insights that may appear during the prototyping process.

+ Borgman, Christine. 2007. Scholarship in the Digital Age: Information, Infrastructure, and the Internet. Cambridge, MA: MIT Press.
Borgman lays out research questions and hypotheses concerning the evolving scholarly infrastructure and modes of communication in the digital environment. She deduces that the inherent social elements of scholarship endure, despite new technologies that alter significantly the way scholarship is performed, disseminated, and archived. Scholarship and scholarly activities continue to exist in a social network of varying actors and priorities. Notably, Borgman focuses on the 'data deluge'-the increasing amount of generated data and data accessed for research purposes. Meditating on the influences of large data sets, as well as how these data sets will be preserved in keeping with library and archival conventions, forms a significant node in the book. Overall, Borgman synthesizes the various aspects of contemporary scholarship and reflects on the increasingly pervasive digital environment.

+ Bowen, William R., Matthew Hiebert, and Constance Crompton. 2014. "Iter Community: Prototyping an Environment for Social Knowledge Creation and Communication." Scholarly and Research Communication 5(4): n.p. DOI: https://doi.org/10.22230/src.2014v5n4a193.
Bowen, Crompton, and Hiebert discuss the features and challenges of Iter Community, a collaborative research environment. They also discuss A Social Edition of the Devonshire Manuscript, focusing on its human and computer social engagement. The authors organize the article into three sections: 1) a historical and conceptual framework of Iter Community, 2) an update on the state of Iter Community (at writing), and 3) a perspective on A Social Edition of the Devonshire Manuscript. They conclude that Iter Community's vision is to provide a flexible environment for communication, exchange, and collaboration, which will evolve with its participants' priorities and challenges.

Brown, Susan, and John Simpson. 2015. "An Entity By Any Other Name: Linked Open Data as a Basis for a Decentered, Dynamic Scholarly Publishing Ecology." Scholarly and Research Communication 6(2): n.p. DOI: https://doi.org/10.22230/src.2015v6n2a212.

Brown and Simpson propose that linked open data enables more easily navigable scholarly environments that permit better integration of research materials and greater interlinkage between individuals and institutions. They frame linked open data integration as an ecological problem in a complex system of parts and relationships. The different parts of the ecology co-evolve and change according to the relationships in the system. The authors suggest that tools are needed for establishing automated conditions; for evaluating the provenance, authority, and trustworthiness of linked open data resources; and for facilitating corrections and enhancements. The authors explain that an ontology negotiation tool would be a most valuable contribution to the Semantic Web. Such a tool would represent an opportunity for collaboration between different sectors of the knowledge economy and would allow the Semantic Web to develop as an evolving space of knowledge production and dissemination.

Brown, Susan, and John Simpson. 2014. "The Changing Culture of Humanities Scholarship: Iteration, Recursion, and Versions in Scholarly Collaboration Environments." Scholarly and Research Communication 5(4): n.p. DOI: https://doi.org/10.22230/src.2014v5n4a191.

Brown and Simpson discuss versions and versioning in contemporary scholarship, archiving, and data preservation. They present dynamic textuality, collaborative textuality, granulated and distributed textuality, and interdependent textuality. They also discuss technical considerations in order to highlight the cyclical influence between culture and technology (sections study control, cost, collaboration, conflicts and management, and representation, mostly within the context of digital texts). Brown and Simpson explain that collaborative digital objects are subject to modification, remediation, and revision, as 'textuality is increasingly granular, distributed, and interdependent' (n.p.). The authors conclude that the dynamic aspect of the culture of scholarship allows for the community to contribute to the sustainability of cultural scholarship and record.

Cohen, Dan. 2010. "Open Access Publishing and Scholarly Values." Dan Cohen (blog). https://dancohen.org/2010/05/27/open-access-publishing-and-scholarly-values/. Archived at https://perma.cc/78MH-PK43.
Cohen builds on the notions of the supply and demand side of scholarly communication, as well as the value system of scholars, in order to make a case for increasing open access scholarship and being more receptive to scholarship that does not adhere to the conventional academic publishing system. According to Cohen, the four sentiments that stand in the way of embracing open access scholarship are impartiality, passion, shame, and narcissism. Cohen uses impartiality in relation to the pressure scholars feel to publish in established, toll access venues for a number of reasons, including legitimate concerns such as career growth.
He argues that open access publishing can take place in parallel to more conventional forms of academic publishing. Cohen also criticizes the commercial apparatus of the publishing system that takes advantage of scholars and their labour and passion, which is expressed in writing. He argues for the need to reorient the ways in which scholarship is produced and published in order to increase access, and also to break the chain within a system that is exploiting academics. Cohen argues that the act of accepting certain aspects of the digital medium and rejecting others is a 'shameful hypocrisy' (n.p.). The examples he provides are, firstly, using the digital medium as the primary source for research yet avoiding it as a means of publishing and, secondly, talking about access and the need for academics to be more inclusive but avoiding the necessary steps towards putting these notions into practice. Finally, the narcissistic factor is related to the reputability of publishing in typical venues; Cohen counters this by saying that open access publishing is likely to get better readership and to spread ideas more widely, and could also be added to the CV. The author concludes by inviting us to envision and enact a more straightforward and virtuous model for scholarly communication.

Crompton, Arbuckle, and Siemens address the process of building a digital social edition of a manuscript that involves consultation with, and contribution from, various communities. The article is based on a case study of A Social Edition of the Devonshire Manuscript, a Wikibooks edition of the first known miscellany that features women and men writing together in English. Since A Social Edition of the Devonshire Manuscript is published on Wikibooks, it includes a traceable revision history and is available for collaborative work. The Wikibooks platform also provides a safety net in case users tamper with the content in bad faith; the authors detail a user incident that was easily reversible due to the Wikibooks versioning options. Work on A Social Edition of the Devonshire Manuscript involved a series of consultations with various communities and advisory boards, primarily through Skype and Iter-based interactions, but also through social network platforms such as Twitter. Researchers in the Electronic Textual Cultures Lab (ETCL) often carried out suggested changes to the Wikibooks edition that arose from these consultations. One of the primary outcomes of this process involved rethinking the authority of the editor within a project that also involves citizen scholar contributions and a number of researchers contributing to the work to different extents. The authors regard this project as an example of a process-driven digital social edition that practices traced versioning and involves various communities working and contributing to a project that is, in its own way, meaningful to all. This is done in conjunction with experimentation with the digital medium.

+ Crompton, Constance, Raymond G. Siemens, and Alyssa Arbuckle, with the Devonshire Manuscript Editorial Group. 2015. "Enlisting 'Vertues Noble & Excelent': Behavior, Credit, and Knowledge Organization in the Social Edition." Digital Humanities Quarterly 9(2): n.p. http://www.digitalhumanities.org/dhq/vol/9/2/000202/000202.html.
Crompton, Siemens, and Arbuckle consider the gender factors involved in social editions, drawing on their experience developing A Social Edition of the Devonshire Manuscript: a Wikibooks edition of the sixteenth-century multi-author verse miscellany, the Devonshire Manuscript. The authors argue that while the Wikimedia suite can often devolve into openly hostile online spaces, Wikimedia projects remain important for the contemporary circulation of knowledge. The key, for the authors, is to encourage gender equity in social behavior, credit sharing, and knowledge organization in Wikimedia, rather than abandoning it for a more controlled collaborative environment for edition production and dissemination.

"Social Interaction Technologies and the Future of Blogging." In Blogging in the Global Society: Cultural, Political and Geographical Aspects, edited by Tatyana Dumova and Richard Fiordo. Hershey, PA: Information Science Reference.
Dumova addresses the social potential of blogging, specifically the ways in which blogs permit people to engage in social interactions, build connections, and collaborate with others. She argues that blogging should not be studied in isolation from the social media clusters that function together to sustain each other. She also notes that blogging is an international phenomenon, since over 60% of all blogs created after the 1990s are written in languages other than English. Dumova broadly traces the development of blog publishing platforms. She concludes that network-based peer production and social media convergence are the driving forces behind the current transformation of blogs into increasingly user-centric, user-driven practices of producing, searching, sharing, publishing, and distributing information.
Erickson, Lagoze, Payette, Van de Sompel, and Warner ruminate on transforming scholarly communication to better serve and facilitate knowledge creation. They primarily target the current academic journal system; for the authors, this system constrains scholarly work, as it is expensive, difficult to access, and print biased. Erickson et al. propose a digital system for scholarly communication that more effectively incorporates ideals of interoperability, adaptability, innovation, documentation, and democratization. Furthermore, the proposed system would be implemented as a concurrent knowledge production environment instead of a mere stage, annex, or afterthought for scholarly work.

Eve, Martin Paul. 2015. "Open Access Publishing and Scholarly Communication in Non-Scientific Disciplines." Online Information Review 39(5): 717-32. DOI: https://doi.org/10.1108/OIR-04-2015-0103.
Eve presents an overview of the current open access debate in non-scientific (non-STEM) disciplines. Eve argues that non-STEM disciplines have consistently lagged behind in their approach to open access policies and practices. He attributes this gap to a variety of economic and cultural factors, and argues that these specific challenges or objections have stunted the growth of open access in these disciplines. Eve suggests that his article is far too short and biased to do justice to the complexity of the issues he raises; however, it is his hope that the insights therein spur action and critical appraisal from the community at large. Academia needs to consider what is needed from a scholarly communications infrastructure and simultaneously build pragmatic and non-damaging transition strategies in order to utilize open access dissemination to its full advantage.
+ Fitzpatrick, Kathleen. 2007. "CommentPress: New (Social) Structures for New (Networked) Texts." Journal of Electronic Publishing 10(3): n.p. DOI: https://doi.org/10.3998/3336451.0010.305.
Fitzpatrick meditates on the current state and future possibilities of electronic scholarly publishing. She focuses her consideration on a study of CommentPress, a digital scholarly publishing venue that combines the hosting of long texts with social network features. Fitzpatrick argues that community and collaboration are at the heart of scholarly knowledge creation-or at least they should be. Platforms like CommentPress acknowledge the productive capabilities of scholarly collaboration and promote this fruitful interaction between academics. Although Fitzpatrick admits that CommentPress is not the only, or best, answer to the questions of shifting scholarly communication, she celebrates its emergence as a service for the social interconnection and knowledge production of authors and readers in an academic setting.
+ Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press.
Fitzpatrick surveys academic publishing and calls for its reform. She argues for more interactivity, communication, peer-to-peer review, and a significant move toward digital scholarly publishing. Fitzpatrick demonstrates how the current mode of scholarly publishing is economically unviable. Moreover, tenure and promotion practices based primarily on institutional modes of scholarly publishing need to be reformed. Fitzpatrick acknowledges certain touchstones of the academy (peer review, scholarship, sharing ideas) and how these tenets have been overshadowed by priorities shaped, in part, by mainstream academic publishing practices and concepts. She details her own work with CommentPress and the benefits of publishing online with an infrastructure that enables widespread dissemination as well as concurrent reader participation via open peer review.
+ Fjällbrant, Nancy. 1997. "Scholarly Communication-Historical Development and New Possibilities." In Proceedings of the IATUL Conference. Indiana: Purdue University Library. http://docs.lib.purdue.edu/iatul/1997/papers/5/.
In order to study the widespread transition to electronic scholarly communication, Fjällbrant details the history of the scientific journal. Academic journals emerged in seventeenth-century Europe, and the first journal, Journal des Sçavans, was published in 1665 in Paris. According to Fjällbrant, the scholarly journal initially developed out of a desire for researchers to share their findings with others in a cooperative forum. As such, the journal had significant ties with the concurrent birth of learned societies (i.e., the Royal Society of London and the Académie des Sciences in Paris). As their primary concern was the dissemination of knowledge, learned societies began seriously experimenting with journals. Fjällbrant lists other contemporaneous forms of scholarly communication, including the letter, the scientific book, the newspaper, and the anagram system. The journal, however, emerged as a primary source of scholarly communication because it met the needs of various stakeholders: the general public, booksellers, libraries, authors who wished to make their work public and claim ownership, the scientific community invested in reading and applying other scientists' findings, publishers who wished to capitalize on production, and academic institutions that required metrics for evaluating faculty.
+ Grumbach, Elizabeth, and Laura Mandell. 2014. "Meeting Scholars Where They Are: The Advanced Research Consortium (ARC) and a Social Humanities Infrastructure." Scholarly and Research Communication 5(4): n.p. DOI: https://doi.org/10.22230/src.2014v5n4a189.
Grumbach and Mandell investigate the Advanced Research Consortium (ARC) infrastructure in the context of scholarly engagement, focusing on digital project peer review, aggregation and search, and outreach services. The authors emphasize the importance of meeting scholars where they are for the sake of success and productivity. They also outline the histories of Nineteenth-century Scholarship Online (NINES) and ARC, showing how these have assisted the scholarly community through the areas of focus listed above. The article concludes with a note that the ARC nodes' directors are not necessarily digital humanists, which helps bring together conventional scholarly and new digital infrastructures.
+ Jones, Steven E. 2014. "Publications." In The Emergence of the Digital Humanities, 147-77. New York: Routledge.
Jones explores the current state of scholarly publishing and the role of the digital humanities. He argues that now, more than ever, academic practitioners are able to take the means of producing scholarly work into their own hands. Rather than relying on scholarly communication systems already in place, researchers can now experiment with different modes, media, and models of publication. Jones considers digital publishing and engagement of academic work to be symptomatic of the deep integration and interplay of computational methods with contemporary scholarship in general, and with digital humanities in particular.
+ Lane, Richard J. 2014. "Faith-Based Electronic Publishing and Learning Environments as a Model for New Scholarly Publishing Applications." Scholarly and Research Communication 5(4): n.p. DOI: https://doi.org/10.22230/src.2014v5n4a188.
Lane explores the popular eTheology platforms Olive Tree and Logos, and the possibilities for the uptake of their information management and design models. Lane details the advantages of popular, or non-academic, digital knowledge spaces and argues for their potential application to secular electronic publishing. The most advantageous element of this proposal may be the suggestion to tailor applications to communities of users-which Olive Tree and Logos do, as described in the article-in order to develop a more integrated and dynamically engaged scholarly publishing system that includes user analysis.
+ Lewis, Vivian, Lisa Spiro, Xuemao Wang, and Jon E. Cawthorne. 2015. Building Expertise to Support Digital Scholarship. Council on Library and Information Resources. https://www.clir.org/pubs/reports/pub168/pub168.
Lewis, Spiro, Wang, and Cawthorne investigate the necessary expertise for robust and sustainable digital scholarship (DS). The authors list the components of expertise, laying out their methods (such as site selection and interviews) and findings (including analysis, study limitations, and challenges). Lewis et al. discuss the skills, competencies, and mindsets important for digital scholarship and list the factors upon which digital scholarship depends. Additionally, the authors study the characteristics of organizations that enable continuous learning to nurture expertise and knowledge creation. They examine DS expertise in a global context, the role of the research library and campus computing, and the challenges faced by digital scholarship organizations. Based on their observations of successful programs, the authors offer some recommendations for digital scholars, leaders of digital scholarship organizations, university and host organizations, organizations that fund digital scholarship, and the digital scholarship community. They conclude that sharing and communication among individuals allows for remarkable learning in digital scholarship.
+ Liu, Alan. "Imagining the New Media Encounter." In A Companion to Digital Literary Studies, edited by Raymond Siemens and Susan Schreibman. Oxford: Blackwell.
Liu introduces a volume edited by Siemens and Schreibman that brings together narratives about the new media encounter, as told from the perspective of scholars, theorists, and practitioners working at the intersection of literary studies and digital new media. He offers a narrative of the origin and development of new media and its encounters with socio-political, historical, and subjective registers, ultimately claiming that its dynamic and manifold nature elicits numerous encounter narratives. Liu draws on a number of central theorists to point to the manifold, often contrasting characteristics of new media that further complicate its clear-cut definition. Given this context, Liu explains that the goal of this volume is to introduce the various stories of the new media encounter, and the messiness and imaginative possibilities integral to it. Essays in this volume fall under three main sections-'Traditions,' 'Textualities,' and 'Methodologies'-and work together to address the potentials of new media in scholarly and cultural contexts.
+ Lorimer, Rowland. 2013. "Libraries, Scholars, and ..."
Lorimer briefly details the last forty years of scholarly publishing to explicate the current state of affairs. He asserts that a reorganization of the academic publishing infrastructure would greatly encourage forthright contributions to knowledge, especially concerning academic journals and monographs. The splitting of the university press from the university (except in name), coupled with funding cuts and consequent entrepreneurial publishing projects, has hampered the possibilities of academic publishing. By integrating all of the actors of digital scholarly communication in an inclusive collaboration-libraries, librarians, scholars on editorial boards, technologically inclined researchers, programmers, digital humanists, and publishing professionals-digital technology could bear significant benefits for the future of scholarship and knowledge creation.
From his perspective within the Canadian Institute for Studies in Publishing at Simon Fraser University, Maxwell ruminates on the current state of university-level training in publishing studies, as well as its future role. He considers the shifting economy and the rise of digital media and practices to be major factors influencing the current Canadian academic and nonacademic publishing scene. Maxwell suggests that the university has a pivotal role to play in reinvigorating publishing by encouraging a supportive community of practice as well as openness to creativity, innovation, and flexibility. Overall, Maxwell underlines the importance of academic publishing studies in the evolving publishing scene.
+ Maxwell, John W. 2015. "Beyond Open Access to Open Content." Scholarly and Research Communication 6(3): n.p. DOI: https://doi.org/10.22230/src.2015v6n3a202.
Maxwell calls for radical openness in scholarly publishing-that is, moving beyond the ideas of open access towards a cultural transformation. He argues that as the humanities are re-imagined in light of digital media, there is an increasing need for old practices to be thrown away instead of merely reconceived. For Maxwell, the print-based journal economy relies on limited access in order to maintain a profit. The economics of open access, however, could adapt to the shifting market demands opened up by the web, which depend upon interconnection and interlinkage. Maxwell turns to agile publishing and its mission statement of 'release early, release often' as an example of a more open movement. He questions how scholarly work can be remixed, combined, reassembled, taken apart, and inscribed through an iterative process.
O'Donnell, Hobma, Cowan, Ayers, Bay, Swanepoel, Merkley, Devine, Dering, and Genee present a research mission summary for the group behind the Lethbridge Journal Incubator and detail how this project provides graduate students with early experience in scholarly publishing. The Lethbridge Journal Incubator trains graduate students in technical and managerial aspects of journal production under the supervision of scholar-editors and professional librarians. The project introduces students to the core elements of academic journal production workflows and provides training in copyediting, preparation of proofs, document encoding, and the use of standard journal production software. Using circle graphs, the authors demonstrate the significant increase in research time devoted to production tasks that improve research ability or knowledge. For O'Donnell et al., the key innovation of the Lethbridge Journal Incubator is its alignment of journal production sustainability with the educational and research missions of the university.
The authors attribute the slow growth of open access to attitudes among those who pay for the production and dissemination of research. By unlocking the training and administrative support potential of the production process, the Lethbridge Journal Incubator promotes access within the University of Lethbridge.
ReKN is a large-scale collaborative project that spans the University of Victoria, the University of Toronto, and Texas A&M University. The authors detail the planning phase of ReKN, a project that aims to centralize and integrate research and production in a single online platform that will serve the specific needs of early modern scholars. The authors aim to develop and implement ReKN as a dynamic, holistic scholarly environment. For a further update, please see the follow-up article, also included in this bibliography, which reflects on the first six months of ReKN development.
+ Powell, Siemens, Bowen, Hiebert, and Seatter. 2015. "Transformation through Integration: The Renaissance Knowledge Network (ReKN) and a Next Wave of Scholarly Publication." Scholarly and Research Communication 6(2): n.p. DOI: https://doi.org/10.22230/src.2015v6n2a199.
Powell, Siemens, Bowen, Hiebert, and Seatter explore the first six months of the Andrew W. Mellon-funded Renaissance Knowledge Network (ReKN). The authors focus on the potential for interoperability and metadata aggregation of various Renaissance and early modern digital projects. They examine how interconnected resources and scholarly environments might integrate publication and markup tools. Powell et al. consider how projects like ReKN contribute to the shifting practices of contemporary scholarly publishing. For a detailed exploration of the planning phase of ReKN, please see the earlier article on the project, also included in this bibliography.
+ Saklofske, Jon. 2014. "Exploding, Centralizing and Reimagining: Critical Scholarship Refracted Through the NewRadial Prototype." Scholarly and Research Communication 5(2): n.p. DOI: https://doi.org/10.22230/src.2014v5n2a151.
In light of the focus of the Implementing New Knowledge Environments (INKE) research team on the ways in which digital environments affect the production, dissemination, and use of established venues for academic research, the NewRadial prototype has been extended for further investigation of this research direction. NewRadial is a data visualization environment that was originally designed as an alternative way to encounter and annotate image-based databases. It allows users to engage with humanities data outside of scholarly paradigms and the linear nature of the printed book, and encourages user contributions through collective commentary rather than isolated annotation. This prototype investigates a number of questions, such as whether the aforementioned venues can coexist in their present form, the ways in which scholarship can be visualized through time and space, how critical ideas are born and how they evolve, and whether the collaborative elements of Alternate Reality Games and Massively Multiplayer Online Role-Playing Games can be adopted into the peer review process and secondary scholarship. This prototype is a response to the established view of a finished work existing in a print-based format and is, rather, a way of experimenting in an interactive and dynamic digital environment that invites dialogue and collaborative curation-as well as numerous alternative narrative opportunities.
Saklofske presents this model as an alternative solution to typical, isolated forms of monographs and linear narrativization. Saklofske, who is a proponent of open social scholarship, argues that this type of scholarship is an essential part of the transformation of scholarly research and communication in a way that would take advantage of the digital medium, rather than propagate conventional forms of knowledge creation into this environment. This type of research platform is also more inclusive and public-facing.
+ Saklofske, Jon, and ...
Schreibman, Siemens, and Unsworth edit a collection of essays by practitioners who, for the first time in a single volume, both address the digital humanities as a discipline in its own right and think of the ways in which the discipline connects to more standard humanities practices. The collection presents various disciplinary perspectives on the digital humanities and describes evolving modes of scholarly research. According to the editors, one of the main principles uniting the various subfields of the digital humanities is that they are as concerned with the practical application of knowledge to concrete environments as they are with the theoretical. Together, the essays offer a historical record of how digital humanities has been practiced and has evolved over the past half-century, its present state, and its potential futures. The editors believe that the discipline has the potential to work with human records on an unprecedented scale, and to recognize patterns and connections that would have otherwise remained unnoticed.
+ Shearer, Kathleen, and William F. Birdsall. The Transition of Scholarly Communications in Canada. http://www.moyak.com/papers/scholarly-communications-canada.pdf.
Shearer and Birdsall analyze the impact of technology and economy on the scholarly communication process and outline a conceptual framework for the latter, along with corresponding actors, drivers, and issues. They start by addressing the scholarly communication system, and emphasize the economic and social importance of knowledge. The authors also discuss the actors within the process, which they categorize as researchers, publishers, libraries, and users. Shearer and Birdsall identify technology, globalization, economics, changing patterns of research, the increasing quantity of scholarly publications, and public policy as 'external forces [to], or drivers of,' the system (4). They address issues such as changing knowledge needs, alternative publishing models, copyright, licensing, intellectual property, interoperability and technical infrastructure, and access and retrieval. The authors conclude that there are many transformations occurring in the scholarly communication system that may have various impacts that are yet to become clear, which calls for a multidisciplinary research agenda.
Siemens's introduction to this report focuses on the rethinking of scholarly communication practices in light of new digital forms. He meditates on this topic through the framework of ad fontes-the act, or conception, of going to the source. Siemens argues that scholars should look at the source, or genesis, of scholarly communication. The source, for Siemens, includes more than the seventeenth-century inception of the academic print journal: it also includes less formal ways of communicating and disseminating knowledge (i.e., verbal exchanges, epistolary correspondence, and manuscript circulation). In this way, scholars can look past the popular, standard academic journal and into a future of scholarly communication that productively involves varied scholarly traditions and social knowledge practices.
Siemens et al. address the need for designing new knowledge environments, taking into consideration the evolution of reading and writing technologies, the mechanics and pragmatics of written forms of knowledge, and the corresponding strategies of reading-as well as the computational possibilities of written forms due to emerging technology. The authors highlight the importance of prototyping as a research activity and outline corresponding research questions, which target the experiences of reading, using, and accessing information, as well as issues of design. They discuss their research methods, which include digital textual scholarship, user experience evaluation, interface design prototyping, and information management. Siemens et al. conclude that the various reading interface prototypes produced by INKE allow the transformation of methods of engagement with reading materials.
+ Siemens, Raymond G., and David Moorman, eds. 2006. Mind Technologies: Humanities Computing and the Canadian Academic Community. Calgary: University of Calgary Press.
Siemens and Moorman edit a collection of essays that result from an awareness of the increasing role of the computer in the academy and its central role in enhancing humanities research and pedagogy in particular. The collection focuses on how scholars in Canada utilize computational methods, described here within the Humanities Computing framework, in the arts and humanities. This collection was preceded by various discussions and planning, with the Mind Technologies conference sessions-hosted by the Consortium for Computing in the Humanities/Consortium pour ordinateurs en sciences humaines (COCH/COSH) and the Social Sciences and Humanities Research Council of Canada (SSHRC) at the University of Toronto-being one of the most central and recent. The essays describe the various terminologies and research directions that fall within Humanities Computing and offer a broad range of applications in academic contexts.
+ Siemens, Raymond G., and Kenneth Price, eds. 2013. Literary Studies in the Digital Age. New York: Modern Language Association.
Price and Siemens bring together an anthology of essays that address the changing modes of knowledge production and dissemination in the digital age. In their introduction, the authors offer an explanation of digital humanities and digital literary studies and how they developed historically, as well as their present form and the various branches and opportunities that stem from them. They highlight some of the significant aspects of digital humanities, such as the ability to bridge theory and practice, as well as the collaborative element at the heart of most research carried out in the field. The authors also point to the various aspects that have been affected by the digital turn, such as textual editing, access to primary materials, and the development of online databases that can feed various projects, among others. The essays also address various ways in which computers can assist literary studies and the ways in which these technologies can deal with humanities-oriented questions and concerns.
+ Siemens, Dobson, Ruecker, Cunningham, Galey, Warwick, and Siemens. 2011. "HCI-Book? Perspectives on E-Book Research, 2006." Papers of the Bibliographical Society of Canada/Cahiers de la Société bibliographique du Canada 49(1): 35-90. http://web.uvic.ca/~siemens/pub/2011-HCI-Book.pdf.
Siemens, Dobson, Ruecker, Cunningham, Galey, Warwick, and Siemens examine the book in various domains as the electronic book emerged, with a specific focus on the role and importance of digital and analog books in humanities scholarship. They contextualize electronic book research, aiming to understand and describe principles of humanistic interaction with knowledge objects. The authors lay out core strategies for designing those objects, and study principles of evaluation and implementation of new technologies. They also investigate human-computer interaction possibilities and those of the electronic book. The authors take various elements into consideration, including audience, interface and design, and form and content. When studying readers and users, the authors consider user studies and usability assessment, the importance of user studies in the humanities, and previous studies of humanities users. They also examine features of books and e-books, such as tangibility, browsability, searchability, referenceability, hybridity, sustainability, and access-and investigate uses of books and e-books, as well as digital archives. The authors' research also examines aspects of books and e-books (material, symbolic, and formal) and develops prototypes in various directions. They conclude by noting that team members, having worked together for six years, have been able to create the relationships and processes necessary to work through the challenges of multidisciplinary research collaborations.
+ Stein, Bob. 2015. "Back to the Future." Journal of Electronic Publishing 18(2): n.p. DOI: https://doi.org/10.3998/3336451.0018.204.
Stein considers the digital book as a place rather than an object or tool-a place where readers gather, socially. He details the experiments with social platforms conducted at his Institute for the Future of the Book, including the creation of an online social edition of McKenzie Wark's Gamer Theory and current work with SocialBook. SocialBook is an online, collaborative reading platform that encourages readers to comment on the text and interact with each other. Stein gestures to historic social reading practices, and infers that platforms like SocialBook are closely aligned with these traditions.
+ Stranack, Kevin. Starting a New Scholarly Journal in Africa. Public Knowledge Project. https://pkp.sfu.ca/files/AfricaNewJournal.pdf.
Stranack outlines the factors involved in starting a high-quality, sustainable academic journal. He situates this discussion within an African context, arguing that African knowledge production is crucial for local African communities and for the academic community at large. He presents the benefits of starting a journal on personal, institutional, disciplinary, and national levels, and addresses certain challenges involved, such as the necessary financial input and the need for time and people who will dedicate themselves to such an undertaking. In this booklet, Stranack addresses the elements involved in starting an academic journal; his discussion spans the types of existing journals and methods, economic models for journals, the open access versus limited access debate, and tips on how to promote and sustain a successful journal. Stranack concludes by highlighting the merit of a journal that makes valuable contributions to academic society at large.
Vandendorpe contemplates Wikisource, a project of the Wikimedia Foundation, as a potential platform for reading and editing scholarly books. He comes to this conclusion after considering what the ideal e-book or digital knowledge environment should look like. For Vandendorpe, this artefact must be available on the web; reflect the metaphor of a forest of knowledge, rather than a container; situate the reader at the centre of the experience; and be open, reliable, robust, and expandable. Wikisource, the author concludes, has the potential to meet these criteria. Vandendorpe highlights that Wikisource enables quality editing and robust versioning, and has various display options. He also outlines areas of development for Wikisource to become an ideal candidate for hosting the aforementioned type of knowledge creation.
Veletsianos addresses the extent to which open scholarship is enacted in institutions that lack a formal infrastructure to support such research.
This is carried out through a case study on Tall Mountain University-a public, not-for-profit North American institution-specifically by working with faculty members. According to the case study, there are a number of ways in which open scholarship is carried out, with certain practices being favoured over others; some examples include open access manuscripts and educational resources, social media, and open teaching/pedagogy. Another finding is that some faculty members publish their materials openly on the internet without attaching open licenses, and that platform settings and institutional protocol also affect the extent to which the material is accessible. Despite these findings, Veletsianos states that open scholarship is still a relatively narrow practice at the institution. The author outlines possible limitations of the research, such as open practices that may not have been revealed in the case study and possible limitations of Google Scholar (the search engine used for this research) that may prevent the study from being exhaustive. The study is also descriptive and does not address the motivations behind practicing open scholarship.

Open Access
The open access movement has infiltrated many aspects of today's world and has been increasingly advocated for in academic settings, often under the banner that access to knowledge is a human right, and that knowledge should be openly available rather than hidden behind paywalls. The objective of this category is to present critical publications that focus on the accessibility of information, primarily scholarly research. Authors approach open access from two different vantage points: either by focusing on open access as a theory or as a fundamental human right (Suber 2004), or by treating open access as a practical problem and offering suggestions on how to successfully implement its principles in a capitalist society. Several articles explore the perception of open access publishing in order to establish why scholars might shy away from sharing their research in this type of venue. Overall, these studies found that despite the community's agreement with and understanding of the benefits of open access publishing, free information is seen as less prestigious and therefore less valuable when it comes to academic promotions and tenure. The majority of publications discuss the roles of scholars and libraries in the open access movement, but some also touch on the importance of policy-makers and the larger community in advocating for change.
+ Björk, Bo-Christer. 2004. "Open Access to Scientific Publications - An Analysis of the Barriers to Change?" Information Research 9(2): n.p.

A. Annotations
Björk asserts that the rise of the internet has changed how information is disseminated. While the methods may have changed, the economic ramifications of scholarly publishing have stayed the same. This has created a need to rethink the current publishing system. Björk proposes an open access system and defines open access as free access to information that can be read and shared for noncommercial purposes without any payments or restrictions. He also acknowledges, however, that systems are difficult to change; legal barriers, a lack of infrastructure, and the need to develop a new business model stand in the way. Björk argues that, while most people recognize the need for an open access system, it will take more than simple awareness for the system to be put into action.
+ Bohannon, John. 2013. "Who's Afraid of Peer Review?" Science 342: 60-5. DOI: https://doi.org/10.1126/science.342.6154.60.
Bohannon details an experiment where a spoof paper was submitted to a myriad of open access journals and uses the results of this experiment as evidence for the lack of thorough, robust peer review. Bohannon submitted this spoof paper, using a fake name and institution, to relevant online journals. The spoof paper was designed to be mundane with obvious flaws in its methodology. At the time of publication, Bohannon had submitted the paper to 304 journals. More than half of the journals (157) accepted the paper, 98 journals rejected the paper, and the remaining 49 journals had not responded. Only 36 of the submissions generated any type of review comments that recognized the fatal flaws in the experiment's methodology; ultimately, 16 of those papers were still accepted despite the poor reviews. Bohannon closes the article by suggesting that open access journals might not be to blame for these deficiencies and that the same result may have been produced if the experiment had been conducted with subscription journals.
The Canadian Association of Research Libraries (CARL) advocates for open access because of the benefits it grants users, primarily the ability to read and utilize knowledge freely. This mode of dissemination benefits funding agencies, since their investment has a maximized return, as well as the researchers, since their scholarship is distributed to a wider audience. CARL aligns its principles with the Budapest Open Access Initiative (BOAI) Declaration and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. From BOAI, CARL adopts and propagates the practice of publishing scholarly literature in open access. It also follows the Berlin Declaration in its decision to publish all original scientific research results and related data and metadata in open access. Thus, CARL's vision applies to the output of the scholarly work it funds, with criteria that emphasize that copyright requirements are met and that the product is consistent with the highest peer review standards. CARL continues to work to implement open access standards and deal with the challenges that arise with this type of knowledge dissemination.

Chan, Leslie. 2004. "Supporting and Enhancing Scholarship in the Digital Age." Canadian Journal of Communication 29(3): 277-300.
Chan argues that the key goal of open access is to maximize the impact of research by reaching the largest number of readers possible. This impact can be measured by counting citation references connected to specific articles. Chan summarizes a study conducted by the Institute for Scientific Information that found, when studying 190 journals, that those with open access and those with proprietary access showed no difference in impact. However, Chan argues that this data is invalid because it took the journal itself, not the individual article, as its unit of measurement. Conducting his own research, Chan finds that there was, in fact, an impact factor difference of 300% in favour of open access articles. For Chan, knowledge is a public good and must be distributed as openly as possible.

Chang examines how librarian authors participate in open access publishing. In particular, the author studies librarians who contribute articles to journals in the field of library and information science. Chang's case study examined 19 open access library and information science journals in which 1,819 articles were published between 2008 and 2013. Of the 1,819 articles, 55.6% of the authors were librarians (the next highest category was scholars, at 33.5%), and 53.7% of the articles were co-authored or collaborative. The majority of these partnerships were librarian-librarian collaborations, but librarian-scholar co-authorships ranked second highest. Overall, the results demonstrate that open access publishing offers an opportunity for librarians to move from the more typical research support role into more of a knowledge creation role. Chang concludes that more authors need to publish on open access platforms in order for them to survive.
Cohen builds on the notions of the supply and demand side of scholarly communication, as well as the value system of scholars, in order to make a case for increasing open access scholarship and being more receptive to scholarship that does not adhere to the conventional academic publishing system. According to Cohen, the four sentiments that stand in the way of embracing open access scholarship are impartiality, passion, shame, and narcissism. Cohen uses impartiality in relation to the pressure scholars feel to publish in established, toll access venues for a number of reasons, including legitimate concerns such as career growth.
He argues that open access publishing can take place in parallel to more conventional forms of publishing. Cohen also criticizes the commercial apparatus of the publishing system that takes advantage of scholars and their labour and passion, which is expressed in writing. He argues for the need to reorient the ways in which scholarship is produced and published in order to increase access, and also to break the chain within a system that is exploiting academics. Cohen argues that the act of accepting certain aspects of the digital medium and rejecting others is a 'shameful hypocrisy' (n.p.). The examples he provides are, firstly, using the digital medium as the primary source for research yet avoiding it as a means of publishing and, secondly, talking about access and the need for academics to be more inclusive but avoiding the necessary steps toward implementing these notions in practice. Finally, the narcissistic factor is related to the reputability of publishing in typical venues; Cohen counters this by saying that open access publishing is likely to get better readership and to spread ideas more widely, and could also be added to the CV. The author concludes by inviting us to envision and enact a more straightforward and virtuous model for scholarly communication.

"Publishing in Open Access Journals in The Social Sciences and Humanities: Who's Doing It and Why." ACRL Fourteenth National Conference. http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/confsandpreconfs/national/seattle/papers/85.pdf.

Eve presents an overview of the current open access debate in non-scientific (non-STEM) disciplines, and argues that non-STEM disciplines have consistently lagged behind in their approach to open access policies and practices. He attributes this gap to a variety of economic and cultural factors, arguing that these specific challenges or objections have stunted the growth of open access in these disciplines.
Eve suggests that his article is far too short and biased to do justice to the complexity of the issues he raises; however, it is his hope that the insights therein spur action and critical appraisal from the community at large. Academia must consider what is needed from a scholarly communications infrastructure and simultaneously build pragmatic and non-damaging transition strategies in order to utilize open access dissemination to its full advantage.

Fund provides a general outline of the potential for open access to enlarge the scope of conversations in the scientific community. He compares the open access journal industry to the music industry since the turn of the century and to the personal computing industry of the past several decades. In 2008, Spotify introduced a completely new business model that questioned the mechanism of buying music altogether, and the music industry could not continue its former economic policies. He argues that the biggest hurdle for the breakthrough of open access is not the absence of publication channels, nor a lack of cooperation on the publishers' end, but the entrenched structure of the library system, its decision-making mechanisms, and the circulation of funds.

Gargouri et al. compare open access (OA) and non-open access articles that have been archived in a repository due to self-selection or to journal mandate. They challenge the OA Advantage hypothesis, which claims that articles published in OA according to self-selection are cited significantly more than those that are published because of mandate. According to the hypothesis, the articles that researchers choose to archive are their 'best' work, or articles with the widest scope or most applicable research, which naturally leads to higher citation hits. By adopting a social science approach to test this claim, Gargouri et al. conduct a comprehensive study on journals that contain articles from all the aforementioned categories.
The study found that there is actually no reduction in the OA Advantage for mandated OA articles (60%) in comparison to self-selected OA ones (15%). Another finding is that the impact or number of citation hits was not affected by whether the article was published through self-selection or mandate; the main increase of exposure results from being published in OA. Overall, the authors disprove the OA Advantage hypothesis and point to publishing in OA as a productive means of increasing exposure.
The 'Tri-Agency Open Access Policy on Publications' provides a preamble in which the authors discuss the importance of agencies, such as the Social Sciences and Humanities Research Council (SSHRC), in advancing research. It highlights the role played by barrier-free access to research and knowledge, as well as how the internet has contributed to open access, multi-disciplinary, and collaborative scholarship. The webpage contains the policy objective and statement, which addresses peer-reviewed journal publications (online repositories and journals) and publication-related research data. The authors also provide information about the implementation date and compliance with the policy, and a policy review. They conclude with links to additional information resources.

Heath, Jubb, and Robey. 2008. "E-Publication and Open Access in the Arts and Humanities in the UK." Ariadne 54: n.p. http://www.ariadne.ac.uk/issue/54/heath-et-al/. Archived at https://perma.cc/NKM4-E3T5.
Heath, Jubb, and Robey present an overview of the role of monographs, e-texts, and other e-books in arts and humanities related disciplines. Monographs are still relatively unpopular for open access publication in the humanities since many people find them difficult to read and prefer the printed form. The survey takes as its principal focus the activities of the UK's Arts and Humanities Research Council (AHRC) and the Research Information Network (RIN) to highlight a range of issues with open access monographs, journals, repositories, electronic publication of theses, and data publication. The authors point to the success of JSTOR as an open access repository; it is well known, however, that JSTOR requires libraries to pay a substantial subscription fee for access. The survey suggests that there is limited, slow progress in changing attitudes toward electronic publication. The academic community needs to develop a broader and more well-informed dialogue about its needs in regard to digital publication, and the issues endemic to publishing a monograph electronically. The authors propose that creating open access data repositories is not enough for attitudes in academia to change; substantial cultural changes in research practices must take place, and researchers should be encouraged to deposit their data as they complete research. The survey covers 14 potential funding streams for open access research data repositories. The authors argue that the lack of full, core funding and of a direct funding stream through payment-for-use poses considerable financial challenges to the directors of such repositories. The collections that are maintained without funding are in significant danger of being lost to bit rot and other technological challenges.

Lorimer comments on the state of scholarly publishing in Canada.
He argues that, while open access policies are accepted in principle, they are not abided by in practice due to a lack of understanding and a need for the publishing sector to maximize revenue. The dynamics of open access are difficult and many individuals refuse to acknowledge this by claiming that a mere shift in business model will right scholarly production. Lorimer asserts that bold action in the implementation of open access practices across the community would be both irresponsible and self-defeating at this time. Instead, he advocates for deeper engagement by academic consumers who can exert more power in the marketplace as part of the shifting dynamics of print and digital publishing. He suggests seven best practices for proceeding into the new world of scholarly communication. One of these is to support effective and efficient studies, by disinterested researchers, of the full potential and dynamics of open access.

Codex was started with the aim of helping librarians demystify the publication process, and to give new authors a chance to publish and gain experience in the field. She acknowledges issues of occasional fraud and scandal in open access publications, but argues that prioritizing accountability of information providers and upholding publisher ethics will help open access to become a mainstream vehicle for scholarly production.

Maxwell calls for radical openness in scholarly publishing, that is, moving beyond the ideas of open access towards a cultural transformation. He argues that as the humanities re-imagine themselves in the light of digital media, there is an increasing need for old practices to be thrown away instead of merely reconceived. For Maxwell, the print-based journal economy relies on limited access in order to maintain a profit. The economics of open access, however, could adapt openness to the shifting market demands opened by the web, which depend upon interconnection and interlinkage.
Maxwell turns to agile publishing and its mission statement of 'release early, release often' as an example of a more open movement. He questions how our scholarly work can be remixed, combined, reassembled, taken apart, and inscribed through an iterative process. Maxwell asserts that education, publishing, and scholarship can all be cultures of transformation.

McGregor and Guthrie write on open access from their perspective at ITHAKA: a not-for-profit organization that focuses on the wide dissemination of knowledge and is best known for JSTOR, a large-scale digital library service. McGregor and Guthrie argue that free access alone is not sufficient for ensuring the broad dissemination, uptake, and impact of knowledge. The authors shift focus from access to what they term 'productive use,' and they outline a series of conditions that they deem necessary for a scholarly resource to be considered effective from a knowledge-building perspective. These conditions include literacy, technology, awareness, access, know-how, and training. McGregor and Guthrie conclude that a sustained commitment to these conditions will inevitably heighten scholarly impact and bring the world one step closer to the goal of universal access to knowledge.

Meadows argues that open access should merely be the beginning of new trends of openness and access to scholarly resources. She summarizes and evaluates a series of public, low-cost access initiatives started between 1990 and 2014, including Access to Research; Electronic Information for Libraries; the International Network for Access to Scientific Publications; the New School for Social Research's Journal Donation Project; patientACCESS; and Research for Life. Meadows argues that these initiatives are valuable for publishers because they increase access to, and usage of, content beyond core markets.
While Meadows acknowledges that open access is definitely not a one-size-fits-all challenge, publishers, small businesses, and medium enterprises all have something to gain from the movement: the opportunity to engage new audiences.

Ober et al. discuss the Princeton/Stanford Working Papers in Classics (PSWPC), an open access collection of working papers in classics that have become standard points of reference for ancient studies. The PSWPC allows scholars to circulate pre-publication works for popular reception. These are, however, non-reviewed and rely solely on the reputation of the authors who have submitted them. Ober et al. note that the certification process has been observed as the primary distinguishing asset of non-open access publication. They argue, however, that quality standards determined by users rather than by a few editors provide evidence in favour of open access in humanities and social science departments. The PSWPC was developed in order to encourage other departments at the two universities to consider open access and digital circulation as economical resources for university publication.

O'Donnell, Hobma, Cowan, Ayers, Bay, Swanepoel, Merkley, Devine, Dering, and Genee present a research mission summary for the group behind the Lethbridge Journal Incubator and detail how this project provides graduate students with early experience in scholarly publishing. The Lethbridge Journal Incubator trains graduate students in technical and managerial aspects of journal production under the supervision of scholar-editors and professional librarians. The project introduces students to the core elements of academic journal production workflows and provides training in copyediting, preparation of proofs, document encoding, and the use of standard journal-production software. Using circle graphs, the authors demonstrate the significant increase in research time devoted to production tasks that improve research ability or knowledge.
For O'Donnell et al., the key innovation of the Lethbridge Journal Incubator is its alignment of journal production sustainability with the educational and research missions of the university. The authors attribute the slow growth of open access to attitudes among those who pay for the production and dissemination of research. By unlocking the training and administrative support potential of the production process, the Lethbridge Journal Incubator promotes access within the University of Lethbridge.

Peekhaus and Proferes conduct the first systematic exploration of North American library and information science faculty's awareness of, attitudes toward, and experience with open access scholarly publishing. Following a thorough literature review, the authors argue that the sustained annual growth in journals in the past five decades has resulted in a contemporary multibillion-dollar scholarly publishing industry that is dominated by a handful of commercial behemoths who receive resources and funding from the wealthiest higher education institutions. Their survey indicates that while 80% of respondents had submitted an article to a subscription-based journal in the last year, only 37% had done the same in an open access journal. Further, just over half of the respondents had ever published with an open access journal. In terms of using institutional repositories, 35% of the respondents had deposited an article. Overall, engagement with open access is related to perceptions of faculty rank and promotion. While experience with open access platforms alleviates some concerns, a substantial bias remains.

Prelinger seeks to understand how moving images problematize archival practices, and how the archive can reconcile legacy practices with new cultural functions. He outlines the history of the archive's role in film preservation and how access to film materials has largely been conceded to web services such as YouTube and the Internet Archive.
Open access, for Prelinger, is an important asset for film studies, as he notes that the field is of great interest to nonacademic audiences as well. The author is sceptical, however, as he does not see open access as a career-enhancing alternative for scholars who publish in comparatively expensive and limited access journals. Digital literacy goes hand-in-hand with rethinking access and copyright for the film archive. Prelinger argues that archival ethics should generally favour use over the fear of abuse, and that archives should cease to be wholesale repositories that rely on presenters, producers, and scholars to distribute the knowledge contained within them.

The Policy RECommendations for Open Access to Research Data in Europe (RECODE) Project Consortium provides an overview of the RECODE project and introduces the five interdisciplinary case studies in open access research data that helped in the examination of important challenges. The report includes a summary of the project findings and general recommendations. In addition, it studies targeted policies for funders, research institutions, data managers, and publishers, and provides practical guides for developing policies. The report also includes resources to further the policy development processes and their implementations. The authors conclude with a list of grouped resources and project partners.

Their findings address issues of usability, navigation, and accessibility across three institutional repositories at Argentinean universities. In an online survey conducted by the authors, 1,009 individuals from the three universities responded to their queries about using open access institutional repositories. 81% of the respondents had a positive attitude towards freely disseminating their scholarly works. However, barriers of interface design, organization, terminology, and inconsistent metadata requirements prevented the use of the system.
The authors propose a new prototype in order to help alleviate these issues.

Suber examines how humanities and social science scholars can promote open access within their own disciplines. He identifies some of the roadblocks of open access publishing in the humanities and social sciences and proposes avenues that circumvent these barriers. Despite the internet creating an opportunity for low-cost distribution of knowledge, the humanities and social sciences have been slow to take up open access practices. Suber argues that this is due to a number of factors: the high cost of journals, low funding of research, high rejection rates of journals, low demand for open access (compared to the sciences), and copyright issues. Suber suggests that the following practices be used to navigate or circumnavigate these issues: use software to manage costs of peer review, do without copyeditors, encourage universities to pay processing fees, experiment with retroactive peer review, explore open access archiving, and publish open access books.

Tanenbaum. "North American Campus-Based Open Access Funds: A Five-Year Progress Report." SPARC. https://sparcopen.org/wp-content/uploads/2016/01/OA-Fund-5-Year-Review.pdf.
Tanenbaum provides an overview of the successes and challenges of campus-based open access funds across North America. The report provides quantitative data to show how the funds have encouraged authors to get involved with open access publishing. It also includes a qualitative analysis of the successes, challenges, level of satisfaction, and communication with faculty and administration. The author notes that launching funds at more institutions will highlight the impact of this mechanism on scholarly communication and adds that 'SPARC anticipates an ongoing involvement in campus-based Open Access Funds' (5).

Waters and Meisel summarize the findings of the 2007 annual report of scholarly publishing initiatives. The Mellon Foundation began two initiatives in 2007: the first initiative aimed to increase the capacity of university presses to publish first books by junior scholars in fields with constrained opportunities; the second sought to strengthen the substantive relationship between home institutions and university presses. Waters and Meisel briefly outline historical concerns about the roles and functions of university presses, and efforts to support scholarly publishing. They acknowledge early projects, such as Johns Hopkins University Press's Project Muse, as well as the Mellon Foundation's 1998-99 establishment of Gutenberg-e and History E-Books, which tested the hypothesis that monographs authored for electronic media would be cheaper to produce than those authored for printed media. The goal of the two aforementioned initiatives is to strengthen both humanistic scholarship and the institutions upon which it depends.

Willinsky delves into the digital life of scholarly journals that was sparked approximately 340 years after their inception in print. Willinsky states that in 2000, nearly 75% of journals had online editions, and nearly 1,000 peer reviewed journals appeared only in digital form.
The ease of accessing information is unprecedented; institutions, however, are simply unable to keep up with their own production of published research: no university can afford to provide access to all information. Willinsky turns to open access by first arguing that it is not a single economic model but rather a collection of economic models that fit different situations. Willinsky argues that open access scholarship is a tradition that goes back to ancient collections, such as the fabled collection of Alexandria, and then to the establishment of massive public libraries in nineteenth-century America. He argues that digital publishers are responsible for exploring new technologies and economic models to improve public access to materials. Willinsky uses the New England Journal of Medicine as an example of a publisher that grants open access to issues six months after they are published and, on the first Monday of the month, makes the issue immediately accessible at no cost digitally. His discussion focuses on definitions of opening the scholarly community, access, copyright, a history of associations, economic factors, cooperative scholarship, development, the role of the public, the influence of politics, and the establishment of rights, as well as methods of reading and indexing open access materials. Willinsky concludes his study with an index of different models for open access publication, a chart that includes scholarly association budgets, journal management economies, and sections on indexing serial literature and creating metadata for journal cataloguing.

Zheng and Li study the awareness of Texas A&M University (TAMU) faculty regarding open access publishing. The authors assess their attitudes toward and willingness to contribute to institutional repositories and investigate their perceptions of newer open access trends and resources. The survey results suggest that tenured faculty have a higher engagement rate with open access journals in their fields.
A lack of awareness, however, surrounds processes to deposit materials in institutional repositories: 84% of respondents did not know the institutional repository deposit process at all. Similarly, a quarter of the respondents indicated that they did not know enough about open access to form an opinion on institutional repositories and could not see depositing their work as counting toward merit raises, tenure and promotion, or annual evaluation. Attitudes remain the greatest barrier towards increasing open access publication in academic settings.

Open Source
Open source is an umbrella term that generally refers to the practice of sharing, modifying, and reusing software code freely and houses a number of initiatives, such as the free software and open source movements. These initiatives have a rich history; they are responsible for the open structure of the internet and serve as prominent voices in the defense of user interest in contemporary internet policy debates, such as the battle over privacy-related issues (Kelty 2008). This category covers materials related to the development of open source programs, from its origins with Linux and Apache to its potential for collaborative software development (Godfrey and Tu 2000; Hars and Ou 2001; Lerner and Tirole 2002). The resources range from the theoretical to the technical and political. Many of the articles focus on Apache and Linux as the primary models for economic success in the open source software community. In this category, the open source software model is frequently juxtaposed with the commercial model, represented by Microsoft, to reveal comparative successes and areas of improvement (West 2003). Overall, this category traces the historical development of open source and speculates about its potential directions.
Bastian, Heymann, and Jacomy describe the development of their open software tool, Gephi, which is used for graph and network analysis. The software presents large networks in real time using a 3D render engine. This technique uses the graphic card while leaving the CPU memory free for other computing. The user interface is structured into workspaces, through which algorithms, filters, or tools can be easily added, even by those with little programming experience. Visualization modules can be exported as SVG or PDF files, and Rich SVG Export - a powerful SVG exporter - is included with Gephi. Dynamic networks can be played in Gephi as movie sequences. The architecture of the software is interoperable, and data sources can be created easily to communicate with existing software, third party databases, and web services. The authors note that they are searching for better ways to adapt the user interface to users' needs. The program has successfully been used for internet link and semantic network case studies.

Bonaccorsi, Andrea, and Cristina Rossi. 2003. "Why Open Source Software Can Succeed." Research Policy 32 (7): 1243-58. DOI: https://doi.org/10.1016/S0048-7333(03)00051-9.
Bonaccorsi and Rossi discuss the questions of motivation, coordination, and diffusion raised by the emergence of open source software. They note that hierarchical coordination emerged without proprietary rights, yet open source systems are diffused in environments dominated by proprietary standards. The authors attempt to understand how an immense group of unpaid programmers have advanced open source technology to its present stage. The hobbyist groups and hacker culture, which consists of programmers trained in engineering and physics fields, are noted as the primary groups participating in the development of open source software. This hybrid business model - whereby companies and software houses produce software, give it away to customers free of charge, and shift the value towards additional services (packaging, consultancy, maintenance, updating, and training) - is suggested as a productive alternative. The recent tendency for open source programs to become more user friendly will enable even wider diffusion into increasingly broad communities.

Bruns, Axel. 2008. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang.
Bruns' book on blogs, Wikipedia, Second Life, and other virtual landscapes argues that the conventional distinction between producers and users of content no longer accurately describes creative, collaborative, and ad hoc engagement with content in user-led spaces. User-led content production is built instead on iterative and evolutionary development models in which large communities make a number of very small incremental changes to established knowledge bases. He uses the concept of 'produsage' to describe changes to user-led content management systems. The comparative significance of the distinction between producers and users of content has faded over time. The opening chapters detail open source software development; later ones move to case studies of news blogs, citizen journals, Wikipedia, and what he terms the 'produsage of folksonomies,' referring to knowledge structures that encapsulate economic environments of their own. He discusses 'produsage' in terms of education, video games, and creative structures, and concludes with a chapter on how democracy itself can be re-examined in light of the 'produsage' structure.

Childs, Merilyn, and Regine Wagner. 2015. "Open Sourced Personal, Networked Learning and Higher Education Credentials." In Open Learning and Formal Credentialing in Higher Education, edited by Shirley Reushle, Amy Antonio, and Mike Keppell, 223-44. Hershey, PA: IGI Global.
Childs and Wagner claim to present an Imaginarium in their article, which they define as a type of place dedicated to imagination that may struggle to exist in institutional reality. The authors use imaginary reconstruction (fiction writing in research spaces) to construct a plausible and comprehensible text that offers an alternative to current institutional thought and practice. The text weaves in and out of fiction, with the first sections discussing global citizenship and research performed by the United Nations. The authors stage their semi-fictional discussion at the 46th triennial Australian Labor Party National Conference of December 3, 2011. A same-sex marriage protest in support of dropping queer-phobic Australian legislation occurred outside of the event, and the authors create a character, Ludmilla, to narrate some of their concerns. They follow her involvement in the protest and narrate many of the works she would have encountered prior to that moment in order to provide evidence of the type of person who would benefit from recognition of prior learning (RPL) developments. This includes authentic and service learning, ePortfolios, learning pathways between vocational and university studies, and open learning practices in higher education.

Chopra, Samir, and Scott Dexter. 2009. "The Freedoms of Software and Its Ethical Uses." Ethics and Information Technology 11(4): 287-97. DOI: https://doi.org/10.1007/s10676-009-9191-0.
Chopra and Dexter explain that the difference between free and proprietary software is that the latter restricts user actions through an end user license agreement while the former eliminates restrictions on users. They explain the concept of free software, discussing software freedom, the Freedom Zero problem, the ethical use of scientific knowledge, and scientific knowledge and property rights. They then discuss community discourse and Freedom Zero (the freedom to use a software in any way or for any purpose), explaining that Freedom Zero supports deliberative discourse within the development and user communities. When exploring the ethical uses of software, the authors consider whether Freedom Zero is misguided and whether a free software licensor could be liable for granting Freedom Zero. They conclude that Freedom Zero facilitates a broader debate about software's larger social significance. Dahlander and Magnusson consider the relationship between firms and communities in regard to open source software. Open source software is not directly controlled by firms but resides within the communities that form around them. The authors use Nordic open source software firms as a case study to examine symbiotic, commensalistic, and parasitic approaches to handling firm-community relationships. Firms release source code in order to get their products widely adopted, as this increases the likelihood of attracting skilled developers and a higher pace of technological development. The authors collected annual reports, company directories, business and specialist press, and homepages to get an idea of the competitive environment, important milestones, and outside perceptions of four firms: MySQL, Cendio, Roxen, and SOT. The authors note that people working within communities share innovations, which are often not protected by intellectual property rights. Firms in turn benefit from this and can create and maintain relationships with these communities.
They claim that open source software is more than just a fad, and that it will come to define the industry rather than be a symptom of it. Fitzgerald, Brian. 2006. "The Transformation of Open Source Software." MIS Quarterly 30(3): 587-98. DOI: https://www.doi.org/10.2307/25148740. Fitzgerald contends that the open source software (OSS) movement has metamorphosed into a mainstream and commercially viable form of software development, which he labels OSS 2.0. He argues that describing the open source community as a collective of supremely talented developers who volunteer their services to develop high quality software is a myth. He characterizes the first phase of the OSS movement as Free Open Source Software (FOSS) and creates charts that illustrate what he sees as the defining characteristics of both the earlier and the developing movements. Fitzgerald discusses aspects of FOSS, including product licensing and support. The discussion then shifts to OSS 2.0, in which, Fitzgerald notes, the design and analysis phases have become more deliberate than in FOSS. Market creation strategies, value added service enabling, and product support are discussed. He concludes by suggesting that open source research exacerbates problems when scholars continue to focus their efforts on repeatedly studying project characteristics and developer motivation. Godfrey, Michael W., and Qiang Tu. 2000. "Evolution in Open Source Software: A Case Study." In ICSM '00 Proceedings of the International Conference on Software Maintenance (ICSM '00). Washington, DC: IEEE Computer Society.
Godfrey and Tu note that most studies of software evolution to date have been performed on systems developed within a single company using standard management techniques. The authors use the open source Linux operating system kernel as a case study for further investigation. As of the time of writing, Linux included over two million lines of code and did not have a tightly planned development model. The authors use graphs to demonstrate the growth of the smaller core subsystems, the arch subsystems, and the driver subsystems in Linux over its lifespan. The Linux kernel has been very successful, and the authors comment that 'black-box' examination has not been enough: researchers must investigate the nature of the subsystems and explore their evolutionary patterns to understand how and why the system as a whole has evolved. Hars and Ou discuss the success of the Linux operating system, which has demonstrated the viability of open source software. The authors discuss both internal factors (intrinsic motivation, altruism) and external rewards (future returns, personal needs) that motivate the development of open source software such as the Linux kernel. The authors briefly discuss the history of open source software, beginning in the 1950s, when software was sold together with hardware, and macros and utilities were freely exchanged in user forums. Hars and Ou create a timeline from the 1950s until the year 2000, when Novell, Real, and other software companies released versions of their products that run on Linux. The authors also provide pie charts of respondent demographics (programmer types and highest degree earned). Kelty investigates the history and structure of the internet with a specific focus on free software, defined as the collaborative creation of software code that is made freely available through a specific use of copyright law.
He argues that the open structure of the internet can be tied to the Free Software Movement, a social movement that formally originated during the development of the GNU/Linux operating system. Kelty categorizes practitioners who participate in these types of social movements as the 'recursive public,' responsible for reorienting power relations around modes of creation and dissemination of a massive body of virtual information. The Free Software Movement binds together lawyers, hackers, geeks, and professionals from all types of disciplines to form the recursive public that is still actively defending users' interests in the maintenance of an open internet. Kogut and Metiu discuss the open source software production model as one that exploits the distributed intelligence of participants in internet communities. The authors term these broad communities 'communities of practice.' Kogut and Metiu argue that open source practices avoid the inefficiencies of a strong intellectual property regime and implement concurrent design and testing of modules, thereby saving financial and labour expenditures. Two types of open source models are examined as case studies using a chart that logs different projects, including Zope, Mozilla, MySQL, Python, KDE, GIMP, and GNOME. The authors proceed to a discussion of Linux and its history. Open source is touted as the emergence of a production model ideally suited to the properties of information that can be digitally coded and distributed.
Lakhani and von Hippel explore how the mundane, but necessary, task of field support is organized in the Apache web server software. They also discuss the motivations of project participants and note that the community has a strong support system. The effort expended by information providers to develop Apache returns direct learning benefits. The authors suggest that the free, voluntary labour of the open source community is often undertaken with the goal of self-education, rather than materialist concerns with capital. A brief history of Apache is provided as well as an analysis of the field support system. The study collects data on questions posted by information seekers, responses to these questions, growth in Apache web server sites, and a survey of the motivations of various respondents for undertaking such work. The authors conclude with the suggestion that it is important to analyze the micro-level functioning of successful open source projects to understand how and why they work. Lerner and Tirole discuss the general history and cultural impact of the open source movement. The authors discuss major programs, including Linux, Apache, Sendmail, and the Perl language for writing Common Gateway Interface scripts. These programs were designed as a standard means of delivering dynamic content on the web. Lerner and Tirole point out how open source software has challenged economic paradigms of individual incentives, corporate strategies, organizational behaviour, and innovative process. They argue that, contrary to appearances, the open source movement is well accounted for by standard economic theory and point to areas for further research. The authors examine individual motivations for participating, how people assess good projects and leadership, and how these motivations often mix. Raymond's manifesto for open source social politics begins with an overview of early programmers.
This group of programmers assembled before the term 'programmer' entered the vernacular in its present-day meaning. They were heavily associated with scientific batch computing. Open source hacker culture developed out of the rise of interactive computing, which was propagated on the ARPAnet in its early years. The Massachusetts Institute of Technology (MIT), Stanford, and Carnegie Mellon University were important centres of computer science and artificial intelligence research. Raymond details the rise of Unix software, whose developers were considered by commercial developers to be a group of upstarts using primitive tools. The author devotes most of his analysis to proprietary Unix, the early free Unix, and how Linux brought hacker culture from the fringes of public consciousness to its current prominence. Roberts, Jeffrey A., Il-Horn Hann, and Sandra A. Slaughter. 2006. "Understanding the Motivations, Participation, and Performance of Open Source Software Developers: A Longitudinal Study of the Apache Projects." Management Science 52 (7): 984-99.
Roberts, Hann, and Slaughter conduct a study to understand the motivations, participation, and performance of open source software developers. The authors evaluate their model with survey and archival data collected from a longitudinal field study of software developers in the Apache projects. Payment for contributing to Apache projects is positively related to developers' status motivations but negatively related to use-value criteria. The authors rely heavily on psychology literature and some analysis of Apache and its contributors. Rosenzweig claims that the field of historical scholarship is notoriously individualistic. He notes that only six percent of over thirty-two thousand scholarly works indexed since 2000 have more than one author, and less than two percent have three or more authors. Rosenzweig argues that the cooperation and freedom of Wikipedia have transformed it into the most important demonstration of the principles of the free and open source software movement. He discusses Wikipedia both as a tool for historiography and as an expression of history itself. According to the author, professional historians should pay attention to Wikipedia because students do. Wikipedia and Linux demonstrate that there are alternative models for producing encyclopaedias and software beyond the hierarchical, commercial model represented by Microsoft. Schloss et al. discuss mothur, their comprehensive software package that allows users to analyze community sequence data with a single piece of software. It can be used to trim, screen, and align sequences, as well as calculate distances, assign sequences to operational taxonomic units, and describe the diversity of eight marine samples. The authors present a table that outlines the features of the software. They also provide a flow chart of the number of tags sampled. In the future, the team hopes to develop computational tools to describe and analyze microbial communities.
The authors note that although mothur goes a long way towards improving the efficiency of data analysis, researchers should still take care to ensure that their experiments are well designed and thought out and that their results are biologically plausible. Truscello, Michael. 2003. Von Hippel and von Krogh discuss what they see as the two models of innovation in organization science: the private investment model, which assumes returns to the innovator result from private goods and efficient regimes of intellectual property protection, and the collective action model, which operates with the assumption that under conditions of market failure, innovators collaborate in order to produce public goods. The authors provide a brief history of open source software and include a list of a few case studies. Von Hippel and von Krogh take these projects as examples of private-collective innovation models for the industry. The authors conclude with the suggestion that interpretation of subtle phenomena in organization science will be aided by contextual and behavioural understanding of project activities, as well as a broad set of data and methods. Von Krogh, Georg, Sebastian Spaeth, and Karim R. Lakhani. 2003. "Community, Joining, and Specialization in Open Source Software Innovation: A Case Study." Research Policy 32 (7): 1217-41. Von Krogh, Spaeth, and Lakhani develop an inductive theory of the open source software innovation process. They focus on the creation of Freenet, a project that aims to develop a decentralized and anonymous peer-to-peer electronic file-sharing network. They analyze data from multiple sources documenting the Freenet software development process and propose relationships among joining script, specialization, contribution barriers, and feature gifts. The authors provide a reference model and graphical overview of the Freenet architecture, as well as a diagram that evaluates project size based on email activity.
In conclusion, the authors note that there are no empirical studies or solid theory building on the social aspects of software development.
Von Krogh, Georg, and Eric von Hippel. 2006. "The Promise of Research on Open Source Software." Management Science 52 (7). Weber's book-length study surveys the success of software and source code that is freely distributed. He discusses how property underpins the social organization of cooperation and production in the digital era, and how older models of production can no longer be followed given the success of systems such as Linux and Apache. The success of open source software in this highly competitive industry subverts several assumptions about how software firms should be run, as well as about the distribution and circulation of products. Weber discusses the history of open source in addition to basic definitions of the field and its methods of distribution, circulation, and production. The interactions between open source software and the disciplines of business and law are also examined, with Weber suggesting that these have all changed drastically with the advent of open source code distribution. West, Joel. 2003. "How Open is Open Enough?: Melding Proprietary and Open Source Platform Strategies." Research Policy 32 (7). West and Gallagher discuss open innovation for the generation, capture, and employment of intellectual property within and by firms. Firms need to find creative ways to exploit internal innovation and incorporate external innovation into their developments. Additionally, they need to find ways to motivate outsiders to supply an ongoing stream of external innovations to supplement their own developments. The paradox is that rival firms could easily manipulate these innovations. However, pooled research, product development, spinouts, selling complements, and attracting donated complements are suggested as strategies to tackle the challenges created by the development of open source software for traditional firms.
The authors note that more studies need to be conducted on virtual teams, cultural openness, technological modularization, and public/private collaboration. Bastian, Mathieu, Sebastien Heymann, and Mathieu Jacomy. 2009.

Open Data
Open data concerns the availability and accessibility of research data to the public. Research data may include government, university, institutional, corporate, and educational materials (Stadler, Lehmann, Hoffner, and Auer 2012). In an early proposal for the Open Data Kit (ODK), the authors suggest that current services are inflexible, closed source, and based on closed standards. When this article was published in 2009, the ODK had not yet been developed as a tool, and the authors provide arguments for funding agencies to give consideration to their proposal. As a case study, the authors discuss AMPATH Kenya, the most comprehensive initiative in the country to combat HIV. AMPATH opted to use the ODK to improve its methods of data collection and retrieval. The ODK research team argues that the ability to collect data is key to the success of many organizations in the developing world. Auer et al. outline the basic premises and mission of DBpedia: a community effort to extract structured information from Wikipedia and make the data available for access on the web. DBpedia provides support for sophisticated queries against Wikipedia datasets. Infobox templates, categorization information, images, geo-coordinates, links to external sites, and the various language editions of Wikipedia form the nucleus of information that is extracted and queried. DBpedia operates using three mechanisms: linked data, the SPARQL protocol, and downloadable RDF dumps. Its datasets can be accessed royalty-free via the GNU Free Documentation License. The authors then provide instructions on how to efficiently search through DBpedia to access relevant materials, a list of third-party user interfaces, and a catalogue of related work. In the future, the research team wishes to further automate the data extraction process to increase the currency of DBpedia's dataset and to synchronize it with changes in Wikipedia. Bauer, Florian, and Martin Kaltenböck. 2012. Linked Open Data: The Essentials. Vienna: edition mono/monochrom.
https://www.reeep.org/sites/default/files/LOD-the-Essentials_0.pdf. Bradley et al. use The Spectral Game to frame their discussion of leveraging open data and crowdsourcing techniques in education. The Spectral Game is a game used to assist in the teaching of spectroscopy in an entertaining manner. It was created by combining open source spectral data, a spectrum-viewing tool, and appropriate workflows, and it delivers these resources through the game medium. The authors evaluate the game in an undergraduate organic chemistry class and argue that The Spectral Game demonstrates the importance of open data for remixing educational curricula. Brown, Susan, and John Simpson. 2015. "An Entity By Any Other Name: Linked Open Data as a Basis for a Decentered, Dynamic Scholarly Publishing Ecology." Scholarly and Research Communication 6(2): n.p. DOI: https://doi.org/10.22230/src.2015v6n2a212.
Brown and Simpson propose that linked open data enables more easily navigable scholarly environments that permit better integration of research materials and greater interlinkage between individuals and institutions. They frame linked open data integration as an ecological problem in a complex system of parts and relationships. The different parts of the ecology co-evolve and change according to the relationships in the system. The authors suggest that tools are needed for establishing automated conditions; for evaluating the provenance, authority, and trustworthiness of linked open data resources; and for facilitating corrections and enhancements. The authors suggest that an ontology negotiation tool would be a most valuable contribution to the Semantic Web. Such a tool would represent an opportunity for collaboration between different sectors of the knowledge economy and would allow the Semantic Web to develop as an evolving space of knowledge production and dissemination. Davies, Tim. 2010. "Open Data, Democracy and Public Sector Reform: A Look at Open Government Data Use from Data.gov.uk." Open Data Impacts. http://www.opendataimpacts.net/report/.
Davies explores the use of open government data (OGD) from the United Kingdom website data.gov.uk. Davies begins with a theoretical discussion of open government data by arguing that the digital turn has undermined the government's monopoly on data processing and interpretation. By contrast, the open data movement aspires to promote transparency and accountability by empowering citizens. In this exploratory case study, Davies details who uses OGD, how OGD is being used, and the potential implications OGD has for the public sector. This empirical study uses a variety of research methods and draws on survey, interview, and participant-observation data. Overall, Davies found that OGD was used by an overwhelmingly male audience with occupations in the private sector, the public sector, and at academic institutions. The use of open data generally fell into five categories: data to fact, data to information, data to interface, data to data, and data to service. This study highlights real-world, practical uses of OGD and lays the groundwork for future research to test the adequacy and applicability of Davies' typologies. Di Noia, Tommaso, Roberto Mirizzi, Vito Claudio Ostuni, Davide Romito, and Markus Zanker. 2012. "Linked Open Data to Support Content-Based Recommender Systems." In Proceedings of the 8th International Conference on Semantic Systems, 1-8. New York: ACM. http://dl.acm.org/citation.cfm?id=2362501. Di Noia, Mirizzi, Ostuni, Romito, and Zanker analyze the open data approach for supporting content-based recommender systems. The research team performs an evaluation of MovieLens, the historical dataset for movie recommender systems. The researchers link this data to DBpedia datasets and perform one-to-one mapping. Evidence shows that 298 of 3,952 mappings in MovieLens have no correspondence with DBpedia.
Their content-based recommender system leverages the knowledge encoded in the semantic datasets of linked open data with DBpedia, Freebase, and LinkedMDB to collect metadata (such as actors, directors, or genres) on movies.
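The content-based approach described above can be illustrated with a short sketch. Everything below is a hypothetical simplification rather than the authors' actual system: the SPARQL query shows the general form of a metadata request to DBpedia (the `dbo:` properties are real DBpedia ontology terms, but the chosen film and returned values are illustrative), and similarity between movies is computed here with a simple Jaccard measure over small hand-made metadata sets standing in for retrieved linked data.

```python
# Illustrative sketch (not the authors' system): content-based movie
# recommendation over linked-open-data-style metadata.

# A SPARQL query of the kind one might send to DBpedia's public endpoint
# to gather metadata for a single film.
EXAMPLE_QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?starring ?director WHERE {
  <http://dbpedia.org/resource/Pulp_Fiction> dbo:starring ?starring ;
                                             dbo:director ?director .
}
"""

def jaccard(a: set, b: set) -> float:
    """Similarity between two metadata sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(target: str, catalog: dict, k: int = 2) -> list:
    """Rank other movies by metadata overlap with the target movie."""
    scores = [
        (title, jaccard(catalog[target], feats))
        for title, feats in catalog.items()
        if title != target
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [title for title, _ in scores[:k]]

# Toy catalog: each movie maps to a set of (property, value) pairs of the
# kind that could be extracted from DBpedia.
catalog = {
    "Pulp Fiction": {("director", "Tarantino"), ("genre", "Crime"),
                     ("starring", "Travolta")},
    "Reservoir Dogs": {("director", "Tarantino"), ("genre", "Crime"),
                       ("starring", "Keitel")},
    "Grease": {("director", "Kleiser"), ("genre", "Musical"),
               ("starring", "Travolta")},
}

print(recommend("Pulp Fiction", catalog))  # → ['Reservoir Dogs', 'Grease']
```

In a full system, the metadata sets would be populated from SPARQL results rather than written by hand, and a weighted similarity measure would typically replace plain Jaccard; the sketch only shows why richer linked metadata yields more informative overlaps between items.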
Geiger and von Lucke explore the free usage of stored public sector data. The authors state that it is not enough to simply put data online; it must be considered, weighed, and determined if and where data can be published. Görlitz and Staab provide tips for federated data management and query optimization for linked open data. For the authors, complex queries are the only means of leveraging the full potential of linked open data. The authors argue that a federation infrastructure is necessary for linked open data, and they provide the architecture for their own model. The basic components of this model are a declarative query language, a data catalog, a query optimizer, a data protocol, result ranking, and provenance information. Data source federation combines the advantages of both centralized repositories and explorative query processing for efficient query execution and returning complete results. This model allows for transparent querying of distributed linked open data sources. The authors suggest that the SPARQL standard does not support all requirements to efficiently process federated queries. To improve this, the authors recommend focusing on join order optimization (the optimization of basic graph patterns). Gurstein is supportive of the open data project but maintains that the impact on poor and marginalized communities must be investigated. Policy should ensure that there is a wide basis of opportunity for effective data use. He uses Solly Benjamin's research on the impact of the digitization of land records in Bangalore as evidence of the potential for land surveyors, lawyers, and other high-ranking officials to exploit gaps in titles, take advantage of mistakes in documentation, and identify opportunities and targets for crimes. Gurstein creates a seven-point framework for making effective use of open data.
This should be combined with training on computer/software use, accessible formatting of datasets, interpretation training, and a supportive advocacy network for the community. Hartung, Lerer, Anokwa, Tseng, Brunette, and Borriello present the development of the Open Data Kit (ODK), which contains four tools: collect, aggregate, voice, and build. The collect platform renders complex application logic and supports the manipulation of data types. Aggregate provides a 'click to deploy' server that supports data upload and storage transfer in the cloud. Voice renders application logic using automated phone prompts that the user responds to with the keypad. Build is a drag and drop application designer that generates the logic used by the tools. The ODK was created to empower individuals and organizations and allow them to build services for distributing data in developing countries. The authors provide outlines of tool designs and charts of system architecture, a list of the drivers and clients employed by their program, and a list of organizations that support open source applications such as ODK. The tool uses a modular, extensible, and open source design to allow users to choose the tools best suited for their own specific deployments. The International Council for Science (ICSU), the InterAcademy Partnership (IAP), the World Academy of Sciences (TWAS), and the International Social Science Council (ISSC), in the accord from the first Science International meeting, address the opportunities and challenges of the data revolution in the realm of global science policy. They describe the digital revolution as a world-historical event, considering the amounts of data produced and their effect on the research industry. The authors characterize big data by volume, variety, velocity, and veracity. Another important element is linked data and its importance for the Semantic Web. The accord also addresses the open data imperative for various reasons.
For example, in terms of 'self-correction,' the openness and transparency of relevant data allow testing and reproducibility, while, in terms of non-replicability, attempts to replicate data have often proven unsuccessful, which again calls for transparency in the publication of data and metadata. The document also contains principles of open data, which include boundaries of openness, enabling practices, and responsibilities (of scientists; research institutions and universities; publishers; funding agencies; professional associations, scholarly societies, and academies; and libraries, archives, and repositories). Janssen provides an overview of the current discussion on open government data and the right to information. She argues that the open government data movement has close ties with the Right to Information movement in their promotion of access to government information as a fundamental right and their push for greater availability of data held by government bodies. Janssen argues that access to government information is a key component of any transparency and accountability process for government activities. Transparency results in better-informed citizens who can contribute to governmental processes and express meaningful views with regard to government policy. Janssen concludes that the two movements should be seen as complementary and argues that they can promote each other through legislation. Janssen, Charalabidis, and Zuiderwijk provide a political analogue to many of the barriers preventing true, open data publication. Open government demands that the government give up control and that the public sector undergo considerable transformation. The authors use systems theory to draw attention to the distinctions between systems that are open to their environment and systems that are closed.
The authors deem several points of open access rhetoric to be myths: that publicizing data will automatically yield benefits, that all information should be unrestrictedly publicized, that publishing public data is the whole of the task, and that every constituent can make use of open data. Finally, the myth that open data will result in open government is refuted. Janssen, Charalabidis, and Zuiderwijk suggest that open data only becomes valuable through use and that research demands more inquiry into the conversion of public data into services of public value. Johnson argues that scholarly discussions of information justice should subsume the question of open data. His article examines the embedding of social privilege in datasets, the different capabilities of data users, and the norms that data systems impose through disciplinary functions. For Johnson, open data has the potential to exacerbate rather than alleviate social injustices. Data sovereignty should trump open data, and active pro-social countermeasures need to be taken to ensure ethical practices. Johnson calls for information pluralism, which would embrace, rather than problematize, the messiness of data. He argues that an information justice movement is vital for drumming up the participation necessary to make information pluralism a reality. Johnson calls for further inquiry into how existing social structures are perpetuated, exacerbated, and mitigated by information systems. Kalampokis, Tambouris, and Tarabanis create a stage model for open government data. For the authors, governments have a mandate to enable and facilitate data consumption by both citizens and businesses. A lack of information on available data poses considerable difficulty to the field. The objective of the article is to supplement existing eGovernment stage models by providing a roadmap for open government data re-use and enabling evaluation of relevant initiatives.
The stage model is made up of four parts: aggregation of government data, integration of that data, integration of government data with formal nongovernment data, and integration of government data with formal and social nongovernment data. Public agencies are advised to make their data easily and quickly available online. The authors recommend that open government data initiatives be thoroughly studied so that important data sets can be identified for each stage of the model. An accessible open data approach within the sciences would allow the disciplines to generate a wealth of tools, apps, and datasets that facilitate the discovery and re-circulation of data. Murray-Rust argues for the need for open access publishing initiatives in the sciences and provides an outline of several early initiatives in the field. He discusses concepts such as reuse, mash-up, community norms, and permission barriers. Most of the data filed in chemistry, for example, is published as a collection of facts, and open access publishing could help re-orient the method by which data in the sciences is collected. Murray-Rust provides paper extracts with structures from organic chemistry as examples of the types of data that could be openly distributed in the sciences. He concludes his observations with the argument that the sciences should adopt Open Notebook science in parallel with formal publications in order to achieve the goal of liberating old data.
Piwowar and Vision argue that reusing data and opting for a data management policy that makes use of open citation are effective means of facilitating science. This type of policy allows these resources to circulate and contribute to discussions far beyond their original analysis. The authors discuss the advantages and challenges of making research publicly available. Piwowar and Vision conduct a small-scale manual review of citation contexts and use attribution, through mentions of data accession numbers, to explore patterns in data reuse on a larger scale. The researchers determine that data availability is associated with citation benefit and that data reuse is a demonstrable component of citation benefit. Vision considers open data and the social contract of scientific publishing. He begins with an appraisal of the scientific enterprise's effectiveness in providing scientists with a means to publish their findings and receive credit for their work. To improve upon these standards, however, Vision believes that data needs to be included in the arrangement, and that the printed page can no longer be the unit of measurement for attributing scholarship and research. Vision believes that publishers can assist this process by having journals require data archiving at the time of publication. Un-archived data files are often misplaced, corrupted, or rendered obsolete over time. Vision moves on to a discussion of Dryad: a tool that promotes data citations by assigning unique DOIs and compiling data in a shared repository. He concludes with the suggestion that permanent archives for research data would allow the social contract of publishing to give authors and their data their due.

Knowledge Mobilization
Knowledge mobilization refers to the dissemination of research output, as well as its uptake by groups other than the researcher or researchers who developed it. Authors in this category acknowledge the gap between the amount of research produced and how much of this knowledge is actually implemented in practice. They offer a number of strategic knowledge mobilization steps to address this issue, including measures such as developing optimal implementation strategies through planned action theories (Graham, Tetroe, and KT Theories Research Group 2007), strengthening the link between research, policy, and practice (Cooper and Levin 2010), and working through the various theoretical models of knowledge utilization (Landry, Amara, and Lamari 2001). Anderson and McLachlan advocate for knowledge mobilization as a practice that challenges the models of knowledge transfer that often reign in academic environments and can manifest through a hierarchical transmission of knowledge from the top down (2015). This hierarchical structure is challenged by giving voice to typically marginalized groups (mostly those outside of academia) through establishing productive channels of communication (Anderson and McLachlan 2015). Other approaches implement fairly novel techniques, such as network analysis, in order to measure knowledge mobilization in community-based organizations, a technique that enables organizations to serve as a reliable voice for the various groups they represent (Gainforth et al. 2014). Notably, most knowledge mobilization strategies unfold in interdisciplinary realms where collaborative practices between different groups are the foundation of success (Cooper and Levin 2010). Landry, Amara, and Lamari conduct research on how faculty members in Canadian universities promote the utilization of their research.
Present theories on knowledge utilization generally fall into three categories: instrumental use (knowledge used for decision making and problem solving), conceptual use (knowledge that provides new ideas, theories, or hypotheses), and symbolic use (knowledge used for legitimizing views); however, the authors argue that these categories fail to take into account all the complexities of knowledge utilization, and therefore call for new theories for measuring this process (2001). Authors in this category highlight the value of implementing knowledge mobilization strategies, and delve into possibilities, challenges, and solutions based on concrete examples that employ both new and old theoretical frameworks. Anderson and McLachlan attempt to create a transformative research paradigm that champions knowledge mobilization over the institutional knowledge transfer model, in which the scientific community occupies the elite central stage and transmits knowledge from the top down. Primarily, this is done by disrupting power relations and including typically marginalized agents, such as farmers and other community-based researchers, within the scientific conversation. The authors conduct a case study on the Participatory Action Research (PAR) program in the Canadian Prairies in order to highlight the processes involved. They suggest that shifting to knowledge mobilization is a messy but necessary step in order to achieve an inclusive, useful, and reflective scholarship and practice. The authors employ three major strategies in order to bring the academic and nonacademic actors involved in the project closer together. The first is layering, which involves determining the right language and the appropriate level of detail and complexity so that communication is accessible to all parties involved, instead of adhering to alienating jargon.
The second communication strategy, building bridges, works to overcome the boundaries that separate knowledge mobilizers in terms of their 'epistemological, discursive, and disciplinary divides' (8). This can be as simple as meeting in an informal, friendly setting where all parties are encouraged to voice their opinions in a respectful environment. The final knowledge mobilization strategy, transmedia, allows actors to present their ideas through different media formats in order to communicate them more effectively and to a wider audience. Although this study succeeds at bringing academic and community-based researchers into communication with each other, the institutional hierarchy, which still operates according to a knowledge transfer model (versus a knowledge mobilization model), often undermines these efforts. Cooper and Levin describe the challenges associated with knowledge mobilization and suggest various methods to overcome them. They define knowledge mobilization as an emerging field dedicated to strengthening the links between research, policy, and practice across various disciplines and sectors. The authors assert that gaps between research, policy, and practice are the result of two major factors: the absence of research impact evidence, and the fact that knowledge mobilization is often interdisciplinary and therefore lacks a formalized system. Cooper and Levin point out that the Canadian Institutes of Health Research (CIHR) and the Canadian Health Services Research Foundation (CHSRF) have supported the majority of contributions to knowledge mobilization. They assert that collaborative practices are vital to knowledge mobilization since overall improvement depends on different groups working together. The authors also present the Research Supporting Practice in Education (RSPE) program, which is dedicated to empirical studies in various educational settings.
They conclude by providing a list of quick, attainable practices that can improve knowledge mobilization in various environments. Gainforth et al. present a method for measuring the feasibility of utilizing network analysis to trace the flow of knowledge mobilization within a community-based organization. They address the challenges that arise in conducting network analysis in community-based organizations and research environments and provide practical and ethical solutions. Based on a case study of a specific group operating within the organization, the authors demonstrate that network analysis is able to generate a rich description of the processes of a community-based organization that practices knowledge mobilization. The major limitations of the study include the lack of a comparison group against which the efficiency of the method could be tested, the fact that network analysis is only able to provide information about a specific moment within a study rather than an ongoing process, and the fact that the researchers were unable to retest their findings with the network analysis instrument and had to take the results at face value. Despite these limitations, the authors assert that network analysis is a rich knowledge mobilization method and is useful for helping community-based organizations attain their goal of being a reliable voice for the various communities they work with. Graham and Tetroe determine the primary planned action theories within the science implementation field. The motivation for this study stems from a desire to remediate the gap found in implementing research into practice, and from recognizing that implementation practices themselves are more successful when situated within a conceptual framework.
The study was carried out in the fields of education, health sciences, management, and social sciences, where 31 planned action theories were identified and analyzed for their origin, meaning, logical consistency, generalizability and parsimony, testability, and usefulness. The authors assert that the selection of a model should be based on a review of how its elements relate to the action categories derived from their theory analyses, and that the specific needs of end-users should be an integral part of the planning and evaluation process. Graham and Tetroe point out that many models have not yet been tested and urge those who use them to record and share their experiences in order to enrich research in the field. Keen and Tan (2009) propose a knowledge fusion framework for leveraging work in knowledge management so that it can serve as a vehicle for knowledge mobilization in the field. They point out major gaps in existing knowledge and assert that a clear distinction between knowledge management and knowledge mobilization ought to be maintained in order to produce a meaningful discourse instead of perpetuating the blurry definitions of the past. The authors argue that knowledge management itself is axiomatic rather than definitional. Partitioning these multiple domains is necessary for linking them to the existing body of knowledge and for identifying their theories and practices. The four main partitions of knowledge fusion are knowledge management, knowledge mobilization, knowledge embodiments, and knowledge regimes. These partitions are aimed at making links to the existing body of knowledge and practice related to knowledge management. The authors claim that this framework is an attempt to contextualize and shape knowledge management rather than to homogenize it or to serve as a model or theory.
Landry, Amara, and Lamari explore the extent to which social science research is used in Canada, how this use is distributed across disciplines within the field, and what determines the utilization of this research. Instead of basing their studies on tracing how policy makers employ this knowledge, the authors focus on how individual researchers promote the usage of their research. The authors provide an overview of existing theoretical models of knowledge utilization, including the science push model, the demand pull model, the dissemination model, and the interaction model. They conduct a survey of 3,252 faculty members from 55 Canadian universities, whom they asked about the extent of the utilization of their research. The authors also study whether there is a difference in this utilization across social science disciplines; the results show that research carried out in professional social sciences, such as social work and industrial relations, is more frequently used than research in disciplinary social sciences, such as anthropology, economics, political science, and sociology. They conclude that knowledge utilization primarily depends on the behaviour of researchers and the users' context rather than the research product itself. Their findings also show that existing theories are inefficient in measuring the utilization of research since the process is far more complex than these theories propose. Lavis addresses the processes involved in public policymaking when carrying out institutional arrangements and the need for timely knowledge translation in these cases. The author conducts research in the health sciences and observes that public policymakers are sometimes asked to make fairly quick decisions without relevant high-quality research at hand.
He argues that knowledge translation can make meaningful connections between research and public policymaking in a number of ways: through systematic reviews that address the questions asked by public policymakers; through 'push' efforts by researchers or interested parties that present research about a certain issue to policymakers; through the 'user pull' method, by which these same groups can help policymakers identify the relevant research on a topic they are working with; and through the 'friendly front ends' method, which refers to systematic reviews presented in a graded-entry format. Lavis strongly advocates bridging the gap between research and policymaking by improving knowledge translation processes. Orlikowski presents a knowledge-in-practice perspective on successful work in complex organizational environments, with a focus on the processes involved in the effective distribution of organizational knowledge in global product development settings. Her main argument is that effective work is the result of properly distributed organizational knowledge that is carried out in everyday practices. She conducts a case study on Kappa, a large software company headquartered in the Netherlands with branches in other countries. Apart from certain variables involved in success, such as creativity, leadership, and strategic positioning, the author argues that overall success is primarily grounded in the way in which employees go about everyday practices related to the 'temporal, geographic, political, cultural, technical, and social boundaries they routinely encounter in their work' (256). Specifically, success is decided by the ways that employees navigate and negotiate these boundaries. According to Orlikowski, the social aspect plays a vital role in successfully managing projects, and frequent communication ensures that participants are aware and up to date regarding their work and its distribution.
She concludes that Kappa owes much of its success to knowing how to efficiently distribute product development and organize knowledge in a continuous and stable manner. Phipps shares his perspective on knowledge mobilization as a practitioner who has been delivering various knowledge mobilization services in a university-based setting for over five years. He defines knowledge mobilization as 'a suite of services, actions, and activities that work together to support research outreach and engagement' (2), one that connects researchers and decision makers. Phipps describes six services developed by the knowledge mobilization unit that fall under four general methods: producer push, user pull, knowledge exchange, and co-production. He argues that a successful knowledge mobilization strategy may be achieved when researchers and decision makers communicate effectively and are supported by trained brokers who can utilize the appropriate knowledge mobilization services in order to meet decision makers' needs. Phipps provides general recommendations that may help in a knowledge mobilization action plan, including finding appropriate leaders, collecting data that may serve as a basis for evaluation over time, finding grants for seed funding, and hiring the right knowledge brokers. Ward, House, and Hamer attempt to categorize the scholarship on knowledge transfer into an organized conceptual framework. They do so by identifying 28 models in the knowledge transfer literature and subjecting them to a thematic analysis aimed at identifying the processes involved in transferring knowledge into action. According to the authors, the five main components involved in knowledge transfer are problem identification and communication; knowledge/research development and selection; context analysis; knowledge transfer activities or interventions; and knowledge/research utilization.
These five components are generally categorized into three knowledge transfer processes, which can be linear, cyclical, or dynamic multidirectional. The authors state that the importance and applicability of these components within the conceptual framework are unknown, which is why their study utilizes the framework as a basis for drawing research from various case studies. Ideally, they hope to create a model that can serve as an infrastructure for planning and evaluating the processes involved in knowledge transfer.

Community Engagement
Certain university representatives are invested in creating and maintaining partnerships with community members, often in the form of goal-oriented projects that benefit society in some way. This category primarily focuses on university-community partnerships and how they evolve over time. Earlier sources focus on why such collaboration is important and why it should become a common practice in the university (O'Meara, Sandmann, and Giles 2006). More recent resources focus on the aftermath of such integration; they discuss the benefits of these partnerships for the university and the community, as well as the obstacles and challenges that arise when representatives of these two groups collaborate and how to overcome them (Barnes et al. 2009; Butin 2010). A central issue addressed is the need for university administrations to adapt to community engagement by appropriately rewarding students and scholars who engage in such work, and to ensure that working with communities is career-enhancing. Barnes et al. present an approach to community partnerships developed by and practiced at Michigan State University. The approach focuses on community voices and is developmental, dynamic, and systematic in nature. The authors provide a brief history of university outreach and engagement since the 1980s, as well as a visual diagram of key terms in the University's approach to outreach. The strategy stresses asset-based solutions and aims to build community capacity for collaborative networks. The authors provide a list of challenges in current university partnerships and assess engagement efforts. Future research will examine how scholars, communities, and conveners define partnership success. Bennett, in his introduction to the collection, suggests that younger generations are increasingly disconnected from conventional politics and government. However, the percentage of youth involved in civic engagement in non-governmental areas has increased.
He explains that communication channels take many forms, including official communication tools and online social community networks. The collection's authors discuss how online networks can inspire conventional political participation, and how digital technologies can be used to expand the boundaries of politics and public issues. In general, the authors suggest that there is a need for a transparent global debate about how digital media reshapes the expectations and prospects of youth in democratic societies.
Bowdon, Melody A., and Russell G. Carpenter, eds. 2011. Higher Education, Emerging Technologies, and Community Partnerships: Concepts, Models and Practices. Hershey, PA: IGI Global.
Bowdon and Carpenter collect essays from 88 teachers, professors, and community leaders in a book that argues that technologies are being used in increasingly compelling ways to forge partnerships between college students, staff, faculty members, and the communities around them. The authors note that college and high school students are taking a lead in the process of creating valuable partnerships in local and global communities. The chapters include observations on successful partnerships between universities and other groups, as well as on the practical and theoretical meanings that technological tools have for different populations. Other issues addressed include the fact that capacity-building for technology use remains a critical objective in many regions of the world, and that the challenges of online education heighten as it increasingly becomes a staple of academic training. Butin's book on the theoretical and practical applications of service learning in community engagement covers a variety of topics that range from the conceptualization of service learning to the establishment of institutional programs that create spaces for service learning in higher education.
He provides examples of majors, minors, and certificate programs at a range of institutions that encourage service learning. The book concludes with a range of suggestions about how higher education institutions can embrace a scholarship of engagement and a discussion of current trends in service learning, as well as the implications that these hold for the future. Butin argues that democratic community engagement is a vital aspect of linking colleges and communities, and that service learning is an established institutional method of encouraging such partnerships. Butin articulates a model for an 'engaged' campus that he envisions can be practiced through academic programs focused on community engagement. Certificate programs, minors, and majors provide a complementary vision for the deep institutionalization of civic and community engagement that can help revitalize what he terms an apprenticeship of liberty for students, faculty, and staff. Butin identifies a major problem in the institutional 'engagement ceiling,' which is the low institutionalization of sustained investment for civic engagement in education (1). He concludes his study by suggesting that the egalitarian, horizontal, and equally legitimate model of knowledge construction is missing in higher education because academic knowledge and its development, critique, and expansion are understood as the purview of highly specialized researchers. Community engagement, according to Butin, needs to be done in academic spaces that foster and strengthen the very qualities that academics are looking for in community partnerships.
Butin, Dan. "When Engagement is Not Enough: Building the Next Generation of the Engaged Campus." In The Engaged Campus: Certificates, Minors, and Majors as the New Community Engagement, edited by Dan Butin and Scott Seider, 1-11. New York: Palgrave Macmillan.
Butin discusses the practical applications of majors, minors, and certificate programs within institutions and their potential to reform the relationship between community and institution. It is clear, Butin argues, that the theoretical arguments of the last quarter century have questioned every assumption, enactment, and orientation of community and engagement. He argues that the community engagement movement in its present state still lacks the rigorous scholarship necessary for its incorporation into higher education. The next direction of community engagement in higher education, according to Butin, must engage in efforts at border crossings and must embrace critical academic spaces. This includes moving away from what he sees as an ineffectual model of 'hallway activists' where theory and practice are disjointed. Butin and Seider edit this collection of essays that argue for the vital role of higher education in both citizenship and the creation of rich civic and community life. A concept central to this collection is conceiving the goal of education as an aspirational idea for democracy, as well as personal, social, and political responsibility for a more just and equitable world. Reflection is a major concept and practice in the type of service learning discussed in the essays. The book focuses on service learning programs, experiential learning, and the role of interdisciplinary, active, and engaged research. The editors and authors seek to dismantle the boundaries between action and knowledge and create a model for publicly engaged campuses through certificates, majors, and minors in community partnerships. Cantor and Lavine claim that today's system of tenure and promotion comes at a high price that is costly to communities and deprives them of relationships with educational partners. The authors note a gap between the appraisal of creative scholars who are committed to the public good and those who are promoted.
Portland State University is used as an example of an institution that has accepted the blurred boundaries between research, teaching, and engagement, which are all hallmarks of excellence in public scholarship. What is important for the future development of public scholarship is that faculty and evaluators do not advise junior colleagues to postpone public scholarship if that is where their interests lie. The institution, the authors argue, needs more flexible definitions of scholarship, research, and creative work.
Caplan, Scott E., Elizabeth M. Perse, and Janice E. Gennaria. 2007. "Computer-Mediated Technology and Social Interaction." In Communication Technology and Social Change: Theory and Implications, edited by Carolyn A. Lin, and David J. Atkin, 39-57. Mahwah, NJ: Lawrence Erlbaum Associates.
Caplan, Perse, and Gennaria explore how and why people use instant messaging, email, and chat rooms in a social context. The authors provide a brief historical background on these technologies, as well as the social implications that result from a shift between computer-mediated communication (CMC) and face-to-face communication. The authors find that teens and young adults provide most of the traffic in computer-mediated social forums. The reduced presence of non-verbal cues allows users to selectively control the quantity, quality, and validity of the personal information available to other participants. As computer-mediated social interaction increases in popularity, physical location will become a less salient predictor of whom people interact with. Communication scholars need to adapt communication theories to evolving technologies and changing contexts to understand the uses and effects of computer-mediated social interaction technologies.
Deuze, Mark, Axel Bruns, and Christoph Neuberger. 2007. "Preparing for an Age of Participatory News." Journalism Practice 1 (3): 322-38.
Deuze, Bruns, and Neuberger argue that journalism must rethink and reinvent itself in the wake of declining public trust in news. The authors believe that news journalism will be gathered, selected, edited, and communicated by professionals, amateurs, producers, and consumers. The authors include findings from emerging practices in the Netherlands, Germany, Australia, and the United States. The four case studies used are the American Bluffton Today, the Dutch Skoeps, the German Opinio, and the Australian On Line Opinion. These digital resources, the authors argue, provide clear and workable alternatives to the standard separation of journalists, their sources, and the public. Due to the highly accessible flow of information available digitally to the public, journalism can no longer leave large sections of the citizenry disenfranchised from participation, nor omit valuable insights into political and social processes. Dumova addresses the social potential of blogging, centring on the ways in which blogs permit people to engage in social interactions, build connections, and collaborate with others. She argues that blogging should not be studied in isolation from the social media clusters that function together to sustain each other. She also notes that blogging is an international phenomenon, since over 60% of all blogs created after the 1990s are written in languages other than English. Next, Dumova broadly traces the development of blog publishing platforms. She concludes that network-based peer production and social media convergence are the driving forces behind the current transformation of blogs to increasingly user-centric, user-driven practices of producing, searching, sharing, publishing, and distributing information.
Gahran details the benefits of using SeeClickFix, an open-access, web-based widget for illuminating local issues, spurring community discourse, and sparking story ideas. Users can file public reports on local issues and vote for individual reports when they would like to see a specific issue resolved. The widget allows users to plot locations on a Google Map interface so that users within a geographic area can view a list of individual reports in that area. Having this widget on a site makes it easier to stay aware of community-reported issues and maintain greater engagement with the broader geographic area that the individual or group in question is part of. Hall edits this collection of essays on various community and university relationships within Canada. The book includes topics on Canadian social economy research partnerships from 2005-2011, new proposals for evaluating the research partnership process, respect for and learning from communities, and the British Columbia-Alberta research alliance's effects on social economy. The appendices of the collection include region-specific information, such as the BC and Alberta Node and the Atlantic Node. This book focuses on the outcomes of previous grant offers and university-community partnerships, and the role of funding in university related partnerships. Hart and Northmore argue that the development of effective audit and evaluation tools is still at a formative stage in university communities and public engagement activities. The literature search, which was based on articles written in or after the year 2000, confirmed the authors' suspicion that the development of the appropriate tools for auditing and evaluating public engagement is still at its outset. The University of Brighton's Corporate Plan is used as a case study for further elaboration, which includes engagement with the cultural, social, and economic life of the localities, region, and nation as its primary precept.
The authors suggest that this case study demonstrates that back and forth dialogue between practitioners, researchers, and community members is essential to the audit and evaluation process. Hart and Wolff draw on the experiences of local community-university partnership activities at the University of Brighton to offer what they perceive as a pragmatic framework for future community-university partnerships. The authors argue that unless the discussion is framed in a way that shows that academics are trying to understand community members, academics will have considerable difficulty in demonstrating the practical application of scholarly knowledge. The Community University Partnership program at the University of Brighton was established in 2003 to enhance the capacity of the community and university for engagement with mutual benefit, and to ensure that the university's resources are fully available to and used by local and sub-regional communities. The authors conclude by addressing both the cultural and spatial dimensions of the terrain and their impact on community-university partnerships within a community of practice framework. Hiebert, Bowen, and Siemens introduce Iter Community, a public-facing web-based platform prototyped by the Electronic Textual Cultures Lab and Iter: Gateway to the Middle Ages and Renaissance, with a specific focus on how this platform is geared towards facilitating social knowledge creation. The authors argue that the emerging area of research known as social knowledge creation promotes critical interventions into the more conventional processes of academic knowledge production; this type of research is increasingly made more convenient by emerging technologies that allow research groups to more actively participate in and contribute to the dissemination of their work and communication with other partners.
The Iter Community page is meant as a critical intervention into modes of scholarly production and publication, and models how the implementation of functionalities that support social knowledge creation can facilitate novel research opportunities and invite scholars and members of the community to participate in the creation of knowledge. The platform facilitates online knowledge production and dissemination in ways that ultimately enhance research practices and community outreach.
Holland, Barbara A., and Judith A. Ramaley. 2008. "Creating a Supportive Environment for Community-University Engagement: Conceptual Frameworks." In Engaging Communities: Proceedings of the 31st HERDSA Annual Conference, Rotorua, July 1-4. http://www.herdsa.org.au/system/files/Holland%20%26%20Ramaley.pdf.
Holland and Ramaley argue that the changing nature of knowledge production, global issues, and the role of education affect intellectual strategies, relationships, societal roles, and expectations of how universities prepare students for the workplace. Educational institutions must increasingly embrace multidisciplinary and collaborative frameworks in order to address the evolving community landscape. The study concludes with the authors' recommendation that universities stop using communities as laboratories for research and learning, and rather collaborate with and acknowledge the essential expertise and wisdom that resides in communities. This shift will transform current understandings and prompt academics to understand themselves as learners, and to respect community leaders as experts in their own right.
Hoy, Ariane, and Matthew Johnson, eds. 2013. Deepening Community Engagement in Higher Education. New York: Palgrave Macmillan.
Hoy and Johnson collect diverse essays on approaches to community engagement in higher education that stress the development of students as civic-minded professionals, as well as community-centred approaches that allow institutions to build more productive partnerships with community leaders. The Bonner High Impact Initiative embraces this approach with the goal of transforming curricula, including approaches to engagement and institutional structures and practices. The authors hope to share their methods of engaging with communities as a means of allowing institutions to craft roles for themselves as stewards of place, civic learning, and agents of change. The essays included cover student leadership, pedagogy, institutional architecture, and community partnerships. Jenkins and Deuze elaborate that shifts in communication infrastructure are bringing about a tension between democratization and the concentration of power. The current global digital culture, the authors suggest, should be understood as Lev Manovich's culture of remix and remixability, where user-generated content exists both within and outside commercial contexts, supporting and subverting corporate control. Media branding choices are made as frequently in boardrooms as they are in teenagers' bedrooms due to the use of mobile web technologies. Contemporary media operates through a complex web of temporary connections and relationships between media companies and public stakeholders. Liquid differentiation is increasingly the model of corporate production: a formerly linear product is infused with unconventional new media formulas, hybrid genres, and transmedia strategies in its next installment to keep the brand marketable. The authors point to how these diverse forms of media further the capitalist agenda of constructing citizens as individualized and perpetually connected consumers.
Jensen and Helles take up Horace Newcomb and Paul Hirsch's model for studying television as a cultural forum and use it as the frame of reference for studying the internet. A cultural forum is the most common reference point for public issues and concerns in a particular society. The internet is a distinctive kind of medium that comprises different communicative genres. The authors find that blogs, social network sites, and other recent genres attract much public and scholarly attention; however, ordinary media users are still inclined to engage with typical broadcasting media. While the internet is displacing television, the authors argue that it will not replace it completely, and that future studies should focus on the plurality of cultural forums in a given society. Lampe, LaRose, Steinfield, and DeMaagd address the barriers to social media use for public policy informatics. For the authors, social media has the potential to foster interactions between policy makers, government officials, and their constituencies. The authors refer to this framework as Governance 2.0 and use AdvanceMichigan as a case study. AdvanceMichigan is a social media implementation designed to crowdsource feedback from stakeholders of Michigan State University Cooperative Extension. This organization approaches the education process in such a way that students can apply their knowledge to a range of critical issues, needs, and opportunities. The organization is planning to return to traditional methods for collecting data from stakeholders due to the challenges of crowdsourcing data. The authors conclude with a discussion of how to create compelling technologies tailored to correctly scaled tasks for an audience likely to use social media sites.
Lin, Carolyn A., and David J. Atkin. 2007. Communication Technology and Social Change: Theory and Implications. Mahwah, NJ: Lawrence Erlbaum Associates.
Lin and Atkin edit this anthology, which discusses significant outcomes of technology adoption and use. Throughout the volume, authors explain how communication and information technologies facilitate social change. The editors organized the collection of essays to enhance the understanding of these social change outcomes by readers, scholars, students, and practitioners from a theoretical standpoint that examines the effects of communication technology on different social environments. The technologies examined by the authors include video and home entertainment, online technology education and entertainment, and cultural attitudes toward paper and electronic documents. The editors argue from the standpoint of social change, namely that advancements in communication technology have shaped political perspectives around the globe on issues such as the Iraq War.
McNall, Reed, Brown, and Allen identify a lack of substantial agreement on the characteristics of effective community-university partnerships for research. This, they argue, is due to a lack of empirical research on the relationship between the characteristics of these partnerships and their outcomes. Qualities of effective partnership include shared leadership, two-way open communication and constructive conflict resolution, participatory decision-making, shared resources, and well-organized meetings with collaboratively developed agendas. The authors use a survey to understand the purposes of these partnerships, as well as their group dynamics. Through their survey, the authors demonstrate that efficient partnership management is related to increased research on a community issue, that collaboratively created knowledge is associated with better service outcomes for clients, and that shared power and resources are adversely correlated with an increase in funding for community partners and organizations.
Milakovich, Michael E. 2011. Digital Governance: New Technologies for Improving Public Service and Participation. New York: Routledge.
Milakovich studies the application of digital information and communication technologies and their role in reforming governmental structures, politics, and public administration. He notes that governments are transitioning from electronic government to digital governance, which emphasizes citizen participation and the accessibility of information technology. Organizational bodies have shifted from bureaucracy-centred to customer-centric service operation in order to restore public trust in both governing and corporate bodies. Milakovich devotes several chapters to the social implications of virtual learning, methods of applying digital technologies to governance, and a discussion of global attitudes and patterns towards digital governance in the international community. O'Meara, Sandmann, Saltmarsh, and Giles discuss the professional lives of faculty members and their intimate ties to the academic mission. The authors discuss how the conceptualization of faculty-community engagement influences the questions asked of the institution and the kinds of recruitment, support, and professional growth that it provides in turn. They provide a brief history of institutional community engagement and faculty work since the 1980s. The lack of clear boundaries between the work and lives of publicly engaged scholars necessitates studies, frameworks, and methods that weave faculty work with theories from different social sciences and research methods. Demographics, identities, life experiences, epistemologies, personal goals, institutions, disciplines, and department contexts all influence attitudes towards community involvement in institutional settings. The authors also provide a critique of perspectives used to study faculty community engagement and argue that an approach with multiple lenses is needed, including social, psychological, and cultural dimensions. -29, 2004 in Racine, Wisconsin.
The conference was convened to examine the current and evolving role of higher education institutions, especially those operating within coalitions, consortia, and state systems, in order to catalyze change on issues that affect communities and society. The event was also designed as a forum for groups with common interests and consisted of a series of working groups with established partnerships. The issues covered in the proceedings include how faculty can overcome the few incentives and little preparation given to them to engage in community improvement, and how universities can recognize working with communities as career enhancing. These discussions focus on university-community relations and their long-term sustainability. Silka, Cleghorn, Grullon, and Tellez use their community-based participatory research group, the Lawrence Research Initiative Working Group (RIWG), as a case study for creating guidelines for ethical community-based research. The authors seek to move beyond the problems identified by the Agency for Healthcare Research and Quality and begin to include tribal nations and research centres. The primary focus is to develop an ethical and non-exploitative relationship on the part of the institution. They introduce a set of guiding documents-the RIWG documents-that outline strategies for dealing with the challenges of multiple layers of partners, coping with changing committee memberships, and providing tips on technical research language to help strengthen communication. The research team recommends that other communities adapt the RIWG documents for their own use. They hope to shift understanding toward community decision making as a necessity rather than a luxury. Silka and Renault-Caragianes discuss the problems that have previously faced community-university partnerships. These partnerships often involve powerful university scholars with relatively disempowered community members.
Funding agencies are now calling for researchers to set up partnerships in order to investigate health disparities in poor urban communities. The challenge currently facing this type of partnership is to move beyond existing guidelines that were not designed to provide ethical guidance, and to work with the community in establishing mutual respect. The research agenda, usefulness and purpose of said research, and research methods all need to be determined by discussions with the community. . "International Perspectives on Community-University Partnerships." Metropolitan Universities Journal: An International Forum 22(2): 3-162.
Silka and Toof claim that communities struggle to create research guidelines for ethical collaborative research within their localities. The authors use the Mayor's Health Task Force Research Initiative Working Group from Lawrence, Massachusetts, as a case study. The task force addresses research ethics in a community where families struggle with limited resources and face many health disparities. An earlier study on high levels of pollution in the area did not take the Lawrence area's residents' concerns seriously, and was unable to answer the community's questions as to how their approaches to research were selected, who would receive the results, who would own the data, and what would be done with the saliva samples collected. Research committees must involve community members in discussions of how problems should be investigated and what kind of purposes the research aims to achieve. Sturm, Eatman, Saltmarsh, and Bush's work grew out of the realization that the long-term success of diversity, public engagement, and student success initiatives requires that these efforts be more fully integrated into institutional settings. They explain their concept of full participation, which is an affirmative value focused on creating institutions that enable people to thrive and realize their capabilities. They note that a lack of integration of diversity, public engagement, and student success efforts in university architecture limits the efficacy and sustainability of the institution's work. The authors argue that public engagement will encourage and enable full participation of diverse groups and communities, which is a critical attribute of legitimate and successful public engagement. The institutions that take account of public engagement enhance the legitimacy, levels of engagement, and robustness of higher education. 
Whitmer, Ogden, Lawton, Sturner, Groffman, Schneider, and Hart discuss how solutions to current environmental problems can be developed through collaborations between scientists and stakeholders. Societal partners are active throughout both the research and knowledge transfer processes; they help identify problems for research and develop strategies for applying the outcomes of that work. The article provides some examples of science-related programs, including Georgetown University's program on Science in the Public Interest, which promotes direct dialogue with engaged and interested public groups on critical scientific issues. The authors also address topics related to developing a peer community and sustainability issues in linking knowledge with action. According to the authors, institutions should evaluate faculty by recognizing research and activities that advance scientific knowledge and improve outcomes for human and natural systems.


Citizen Science
Citizen science refers to research that is partially or wholly conducted by nonscientists, typically by volunteers who receive the training necessary to collect and interpret data for a target research investigation. In recent years, citizen science projects have become much more prominent, especially in the social sciences. Authors argue that this is due to the advancement of technology that allows the collection and transfer of data by nonprofessionals, as well as the recent demand of funding agencies that researchers seek the public's approval of scientific research endeavors, since taxpayer dollars often fund these initiatives. The Cornell Lab of Ornithology (CLO) is one of the central organizations that has been practicing citizen science for over twenty years; in this category, CLO researchers provide a model for setting up a successful citizen science project. Some of the central issues brought to light in this category revolve around the question of data reliability, which often depends on the clarity of instructions, the type of training, and the level of motivation of participants. Prestopnik and Crowston propose one way of increasing this motivation through a gaming model that includes a crowdsourcing element; they propose that gaming may encourage a more engaged practice and more accurate data (2011). Research also addresses the educational role of citizen science for individual participants and the necessary steps that project leaders must take to ensure reaching these goals, as well as the benefits of integrating citizen science in undergraduate curricula (Jordan, Ballard, and Phillips 2012). Authors especially encourage making data, results, and their interpretation available to the public in open access, and all unanimously agree that if done properly, citizen science can go a long way in educating the public, supporting scientific research, and improving the environment more generally. Bonney et al.
provide a model for citizen science based on the past two decades of the Cornell Lab of Ornithology (CLO) citizen science projects, an organization that is deeply engaged with environmental studies projects and has had thousands of participants gather tens of millions of observations each year. The authors assert that citizen science is especially useful in projects that require the gathering of vast amounts of data over the span of many years. They outline the various steps of their model, which involve choosing a scientific question and forming a science-based interdisciplinary team comprising scientists, educators, evaluators, and technologists to lead the project and develop the necessary standards and materials to carry it out. This is followed by recruiting and training citizen participants in the required skills for gathering the appropriate data. This data is immediately displayed to the public in open access, after which it is analyzed and interpreted. These results are then disseminated and the outcomes are measured. The authors emphasize the necessity to innovate current data management, scientific analysis, and educational research practices in order to accommodate the growing scope and level of citizen science. Gallo and Waitt describe a citizen science program-the Invaders of Texas-that relies on local volunteers to gather data on invasive species in certain parts of Texas. This data is uploaded into a public database and serves as a point of reference for policymakers, scientists, and resource managers to make various decisions about weed distribution and to be aware of the scope of invasive species at a given time. The volunteers in the citizen science program receive proper training in order to provide detailed information about the target weed, such as its physical attributes, the GPS coordinates of where it was spotted, time of observation, level of damage to the crop, and other information. 
This information is stored in the Invaders of Texas database, an open source web application with an embedded Google Maps interface that supports export in a variety of formats. The authors conclude that making such projects a more common and collaborative endeavor could benefit the ecosystem as a whole.

A. Annotations
Jordan, Rebecca C., Heidi L. Ballard, and Tina B. Phillips. 2012. "Key Issues and New Approaches for Evaluating Citizen-Science Learning Outcomes." Frontiers in Ecology and the Environment 10(6): 307-9. DOI: https://doi.org/10.1890/110280. Jordan, Ballard, and Phillips call attention to the educational role of citizen science projects and focus on their importance in developing ecological literacy at the individual, communal, and program levels. The authors are concerned with whether citizen science projects carry out the educational goals they set forth. They argue that team members ought to develop an evaluation plan to trace whether project activities allow these learning goals to be achieved, whether the goals are clearly defined, and what the concrete measures of success for both of these points are. The balance between carrying out tasks and achieving learning goals should primarily be calculated according to the length of the volunteer's participation and the difficulty level of the tasks they carry out. The authors suggest that a more comprehensive approach-one that would take into consideration the wider scope of impacts, ranging from the individual to the community level-should be implemented. They point out that various types of citizen science projects have resulted in positive outcomes for the community, including 'increased social capital, community capacity, and trust between scientists, managers, and the public' (309). Mayer argues that phenology-the relationship between recurring annual events and seasonal changes, such as the blooming of flowers-lends itself to citizen science when many people record these observations. Mayer addresses the data quality issue and states that evidence from various studies shows that clear and straightforward instructions result in reliable data from volunteers.
She discusses various recent and long-term phenology projects, such as the National Ecological Observatory Network (NEON) and Feedwatcher, with a specific focus on how both projects address the challenge of sustaining ongoing citizen observations. This is an issue in phenology, since it is long-term observation over many years that adds real value to a project. Another issue is that such long-term research does not fit well with traditional funding agencies, since they often give out shorter-term grants than such research requires.
Newman, Greg, Andrea Wiggins, Alycia Crall, Eric . "The Future of Citizen Science: Emerging Technologies and Shifting Paradigms." Frontiers in Ecology and the Environment 10(6): 298-304. DOI: https://doi.org/10.1890/110294. Newman et al. speculate about the future of citizen science in conjunction with rapidly evolving technologies. They propose suggestions for project managers to integrate technology in a way that would help their research appeal to a wider audience. The authors describe how the various steps of citizen science projects, including 'gathering teams/resources/partners, defining research questions, collecting and managing data, analyzing and interpreting data, disseminating results, and evaluating program success and program outcomes' (299) may change in the future. They foresee a future in which technology could allow volunteers to have more agency and responsibility in science projects, and argue that this could eventually balance out the hierarchy between scientists and volunteers into more of a partnership. Newman et al. focus on wireless sensor networks that may help link the laboratory to the environment and help volunteers collect, analyze, and interpret data. They recommend that project managers encourage the use of open data and open source software, and utilize technology in a way that would increase motivation, retention, and ethnic diversity. Oberhauser and LeBuhn point to the various benefits of including undergraduate students in citizen science and advocate for increased citizen science hands-on training during undergraduate years. They argue that the type of learning that citizen science invokes is valuable since it is an inquiry-based practice that encourages students to pose questions, gather and interpret data, and draw conclusions. 
The authors focus on two citizen science projects, the Monarch Larva Monitoring Project (MLMP) and the Great Sunflower Project (GSP), in which students participate in data collection, data analysis, and the creation of independent or group research projects. The authors are the project managers of these initiatives and provide a number of examples of how students behave and learn in such environments. Students working on these projects range from volunteers to paid assistants, and are engaged in the midst of the scientific process rather than merely performing tedious tasks. Oberhauser and LeBuhn believe that the three areas of undergraduate studies that would most benefit from citizen science are data collection, class projects, and research opportunities. . "Citizen Science as a Tool for Conservation in Residential Ecosystems." Ecology and Society 12(2): 11. http://www.ecologyandsociety.org/vol12/iss2/art11/.
Phillips et al. address the role that citizen science plays in implementing conservation strategies in residential lands for positive impacts on biodiversity. The authors argue that the value suburban and urban residential lands can contribute to our understanding of ecosystems is only beginning to be acknowledged. They propose a framework for using citizen science in order to gather data that may help gain insight into conservation studies in this newly emerging field. The volunteers gather data, often over long periods of time, based on their training and the research questions set by the science-led team interested in observing certain occurrences, such as watershed-based monitoring. They base their framework on the citizen science model developed at the Cornell Lab of Ornithology that is outlined in this section of the bibliography (see Bonney et al.) and adapt it to their own research question. Phillips et al. conclude that using citizen science in conservation research in residential areas can help not only in tackling scientific questions, but also in implementing and monitoring various management strategies at a large scale, which can eventually result in long-term improvements in the environment. Prestopnik and Crowston discuss the role of gaming in improving the present citizen science model, especially in terms of increasing motivation and data quality. This is done by presenting Citizen Sort, a social-computational system that functions as a game for crowdsourced science. According to the authors, the major challenges of the study include measuring abstract concepts such as the level of motivation of the users, whereas more tangible attributes of measurement are related to the quality and completeness of the dataset. 
The main task of Citizen Sort, when launched, will be to identify whether introducing a gaming element into citizen scholarship results in a more engaged practice (instead of the somewhat tedious process of gathering data), or whether gaming is distracting or uninteresting in this context. The game itself will be built using a system assemblage approach, meaning that it will incorporate different features and technologies in order to appeal to users. Users will be asked to either upload or identify photographed species according to preset parameters that vary from game to game. Their interaction with the games will be recorded in order to determine which games are the most fun and motivational and result in the highest data quality; this information will be used to improve the games. Purdam explores the practical implications of citizen social science in a real-world setting. The volunteers are trained by social scientists to systematically collect observational data throughout their routine daily lives rather than go out of their way to target the specific focus of the study. The research focus is recording the number of people seen begging in Central London, specifically because London is a densely populated city with a high incidence of begging and limited research in this field. The main concerns raised relate to the methodology, the quality of the data collected, the ethical implications that observation of others raises, the presentation of the data, and the potential value of a citizen-engaged model for social science research. The findings of this research are of interest to social science researchers and policymakers, including international charities and local authorities. The ethical issue of surveillance is strongly acknowledged, especially in terms of what this type of research could mean if it were formalized and its scope increased.
However, the authors argue that the form of surveillance in this study adheres to ethical standards since the observers were not spies and lacked political interest in their subjects, and since the study is based on an ethically approved research design and followed a strict set of instructions. One limitation that is emphasized is the fact that the targets were only observed for a short time, which fails to provide the researchers with facts that may help improve their situation. Other limitations were found in the sample size, the strategies that were adopted, and the potential for overgeneralization. The authors believe that such citizen social science, backed up with new theoretical frameworks, could help with research projects that explore inequality and oppression in a coordinated way. Rotman, Preece, Hammock, Procita, Hansen, Parr, Lewis, and Jacobs conduct a study that borrows from a motivational model in order to determine the various incentives of volunteers to participate and perform well in citizen science projects related to ecological scientific research. Although many successful citizen science projects exist, many do not fully take advantage of the collaborative possibilities between scientists and volunteers; hence, studying the motivation of each party and designing an environment that rewards and motivates all parties could drastically improve the field altogether. After conducting the study, the authors found that volunteers are primarily motivated by their curiosity, drive for learning, and desire for conservation, whereas the scientists were primarily motivated by their career and scientific advancement more generally. They also found that the two most important motivational moments for volunteers are the first encounter with the project and group, and the wrapping up of a project, when volunteers decide whether or not to participate in other projects.
The authors also contribute a dynamic model that displays the engagement cycle of the participants throughout the different stages of the project. Silvertown calls attention to burgeoning citizen science projects, especially in the environmental sciences, and addresses the main underlying reasons for such exponential growth. The first is the availability of tools that facilitate the gathering and dissemination of information to and from the public by the volunteers themselves. The second is the fact that citizen science is carried out by volunteers who bring a diverse set of skills, thereby significantly cutting down on project costs. Finally, he states that funding agencies now require scientific research to incorporate an element of project-related outreach, and a means to ensure that the public values taxpayer-funded work; having members of the public directly participate in scientific research allows scientists to reach this goal. Despite its established roots, dating from the nineteenth century, the author points out that citizen science is underrepresented in formal scientific literature because the term itself is fairly recent and the practice has yet to fit within the standard methods of scientific research that are based on hypothesis testing. He concludes by pointing to guidelines for good practice in citizen science, outlining various challenges that may spring up, and arguing for the benefits of citizen science in large-scale projects. Wiggins and Crowston engage in a discussion of citizen science in terms of the common attributes many projects share and attempt to provide a theoretical sampling that future citizen science projects may rely on. The authors argue that the majority of scholarship on citizen science is invested in describing the process of integrating volunteers into the various levels of scientific research, without taking into account macrostructural and sociotechnical factors.
They believe that this comes at the expense of crucial design and process management. Wiggins and Crowston identify and discuss five distinct typologies witnessed in various citizen science projects: action, conservation, investigation, virtuality, and education. The authors classify these typologies by major goals and the extent to which they are virtual. One of the main motivations for developing these typologies is to describe the existing state of citizen science and to make accessible the necessary conditions for successful citizen science projects.

Crowdsourcing
Crowdsourcing projects, typically built on information gathered by large groups of unrelated individuals through digital means, are becoming more common in academia. In this category, authors define crowdsourcing and explore the most common trends and essential practices. Crowdsourcing projects in the digital humanities typically engage participant contribution by adding to existing resources or creating new ones, especially in terms of charting, locating, sharing, revising, documenting, and enriching materials. Some exemplary projects are included, such as the Transcribe Bentham project, which successfully brings together crowdsourcing and public engagement into a scholarly framework, and Prism, a textual markup tool that supports multiple interpretations of text through close reading (Walsh et al. 2014). Authors also propose ways to moderate input from users with unknown reliability. The category provides a rich snippet of existing crowdsourcing practices and offers suggestions for optimal implementation. This definition was used to select the case studies for the current research. The researchers found two major trends in the 36 initiatives included in the study: crowdsourcing projects either use the crowd to (a) integrate/enrich/configure existing resources or (b) create/contribute new resources. Generally, crowdsourcing projects asked volunteers to contribute in terms of curating, revising, locating, sharing, documenting, or enriching materials. The 36 initiatives surveyed were divided into three categories in terms of project aims: public engagement, enriching resources, and building resources. Causer and Terras reflect on some of the key discoveries that were made in the Transcribe Bentham crowdsourced initiative. Transcribe Bentham was launched with the intention of demonstrating that crowdsourcing can be used successfully for both scholarly work and public engagement by allowing all types of participants to access and explore cultural material.
Causer and Terras note that the majority of the work on Transcribe Bentham was undertaken by a small percentage of users, or 'super transcribers.' Only 15% of the users have completed any transcription, and approximately 66% of those users have transcribed only a single document, leaving a very select number of individuals responsible for the core of the project's production. The authors illustrate how some of the user transcription has contributed to our understanding of some of Jeremy Bentham's central values: animal rights, politics, and prison conditions. Overall, Causer and Terras demonstrate how scholarly transcription undertaken by a wide, online audience can uncover essential material.

Causer, Tonra, and Wallace discuss the advantages and disadvantages of user-generated manuscript transcription, using the Transcribe Bentham project as a case study. The intention of the project is to engage the public with the thoughts and works of Jeremy Bentham by creating a digital, searchable repository of his manuscript writings. Causer, Tonra, and Wallace preface the article by setting out five key factors the team hoped to assess in terms of the potential benefits of crowdsourcing: cost effectiveness, exploitation, quality control, sustainability, and success. Evidence from the project showcases the great potential of open access TEI-XML transcriptions for creating a long-term, sustainable archive. Additionally, users reported that they were motivated by a sense of contributing to a greater good and/or by recognition. In the experience of Transcribe Bentham, crowdsourced transcription may not have been the cheapest, quickest, or easiest route; the authors argue, however, that projects with a longer time-scale may find this method both self-sufficient and cost-effective.

Estellés-Arolas and González-Ladrón-de-Guevara present an encompassing definition of crowdsourcing, arguing that the flexibility of crowdsourcing is what makes it challenging to define.
They demonstrate that, depending on perspective, researchers can have vastly divergent understandings of crowdsourcing. By conducting a detailed study of current understandings of the practice, Estellés-Arolas and González-Ladrón-de-Guevara form a global definition that facilitates the distinguishing and formalizing of crowdsourcing activities. Using textual analysis, the authors identify crowdsourcing's three key elements: the crowd, the initiator, and the process. They advance a comprehensive definition that highlights the individuals, tasks, roles, and returns associated with crowdsourcing. They present a verification table, with nine categories, that can be used to determine whether or not an initiative falls under the classification of crowdsourcing. Estellés-Arolas and González-Ladrón-de-Guevara suggest that further research should be done to understand the relationship between crowdsourcing and other associated concepts, such as outsourcing.

Franklin, Kossmann, Kraska, Ramesh, and Xin discuss the importance of including human input in query processing systems, which are limited in dealing with certain subjective tasks and often return inaccurate results. The authors propose CrowdDB, a system that allows for crowdsourced input when dealing with incomplete data and subjective comparison cases. The authors discuss the benefits and limitations of combining human effort with machine processing, and offer a number of suggestions to optimize the workflow. Franklin et al. envision the field of human input combined with computer processing to be an area of rich research due to its improvement of existing models and enablement of new ones.

Gahran, Amy. 2012. "SeeClickFix: Crowdsourced Local Problem Reporting as Community News." Knight Digital Media Center, September 19. http://www.knightdigitalmediacenter.org/blogs/agahran/2012/09/seeclickfix-crowdsourced-local-problem-reporting-community-news.html. Archived at https://perma.cc/T6NF-EGZX.
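The human-in-the-loop querying idea behind CrowdDB, discussed above, can be sketched in a few lines: when the stored data cannot answer part of a query, the system routes that gap to human workers. This is a toy illustration only, not CrowdDB's actual interface; the function and field names are invented.

```python
# Toy sketch of human-in-the-loop query processing: missing values are
# resolved by a "crowd" callback standing in for a microtask platform.

def crowd_query(records, field, ask_crowd):
    """Return each record's value for `field`, consulting the crowd
    when the stored data is incomplete, and caching the answer."""
    results = []
    for rec in records:
        value = rec.get(field)
        if value is None:                  # incomplete data:
            value = ask_crowd(rec, field)  # route to human workers
            rec[field] = value             # cache the crowd's answer
        results.append(value)
    return results

# Usage: a stub "crowd" that fills in a missing headquarters field.
companies = [{"name": "IBM", "hq": "Armonk"}, {"name": "ACME", "hq": None}]
answers = crowd_query(companies, "hq", lambda rec, f: "Springfield")
# answers == ["Armonk", "Springfield"]
```

Caching the crowd's answer back into the record mirrors the broader design question Franklin et al. raise: human input is slow and costly, so a system should ask each question as few times as possible.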

A. Annotations
Gahran details the benefits of using SeeClickFix, an open access web widget used for illuminating local issues, spurring community discourse, and sparking story ideas. Users can also file public reports on local issues and vote for individual reports when they would like to see a specific issue resolved. The widget allows users to plot locations on a Google Map interface so that users within a geographic area can view a list of individual reports in that area. Having this widget on a site makes it easier to stay aware of community-reported issues and to maintain greater engagement with the broader geographic area that the individual or group in question is part of.

Ghosh, Kale, and McAfee address the issue of how to moderate the ratings of users with unknown reliability. They propose an algorithm that can detect abusive content and spam, starting with approximately 50% accuracy on the basis of one example of good content and reaching complete accuracy after a number of entries using machine-learning techniques. They believe that rating each individual contribution is a better approach than rating the users themselves based on their past behaviour, as most platforms do. According to Ghosh, Kale, and McAfee, this algorithm may be a stepping-stone in determining more complex ratings by users with unknown reliability.

Holley defines crowdsourcing and makes a number of practical suggestions to assist with launching a crowdsourcing project. She asserts that crowdsourcing uses social engagement techniques to help a group of people work together on a shared, usually significant initiative. The fundamental principle of a crowdsourcing project is that it entails greater effort, time, and intellectual input than is available from a single individual, thereby requiring broader social engagement. Holley's argument is that libraries are already proficient at public engagement but need to improve how they work toward shared group goals.
Holley suggests ten basic practices to assist libraries in successfully implementing crowdsourcing. Many of these recommendations centre on project transparency and motivating users.

Lampe, LaRose, Steinfield, and DeMaagd address the barriers to social media use for public policy informatics. For the authors, social media has the potential to foster interactions between policy makers, government officials, and their constituencies. The authors refer to this framework as Governance 2.0, and use AdvanceMichigan as a case study. AdvanceMichigan is a social media implementation designed to crowdsource feedback from stakeholders of Michigan State University Cooperative Extension. This organization approaches the education process in a way that allows students to apply their knowledge to a range of critical issues, needs, and opportunities. The organization is planning to return to traditional methods of collecting data from stakeholders due to the challenges of crowdsourcing data. The authors conclude with a discussion of how to create compelling technologies tailored to correctly scaled tasks for audiences likely to use social media sites.

"'By the People, For the People': Assessing the Value of Crowdsourced, User-Generated Metadata." Digital Humanities Quarterly 9(1): n.p. http://www.digitalhumanities.org/dhq/vol/9/1/000204/000204.html.
Manzo, Kaufman, Punjashitkul, and Flanagan make a case for the usefulness of folksonomy tagging when combined with categorical tagging in crowdsourced projects. The authors open with a defense of categorization, arguing that classification systems reflect collection qualities while allowing for efficient retrieval of materials. However, they admit that these positive effects are often diminished by folksonomy tagging, which promotes self-referential and personal task-organizing labels. The authors suggest that a mixed system of folksonomic and controlled vocabularies be put into play in order to maximize the benefits of both approaches while minimizing their challenges. This is demonstrated through an empirical experiment in labeling images from the Leslie Jones Collection of the Boston Public Library, followed by an evaluation of the helpfulness of the tags.

Ridge examines how crowdsourcing projects have the potential to assist museums, libraries, and archives with the resource-intensive tasks of creating or improving content about collections. Ridge argues that a well-designed crowdsourcing project aligns with the core values and missions of museums by helping to connect people with culture and history through meaningful activities. Ridge synthesizes several definitions of crowdsourcing to present an understanding of the term as a form of engagement in which individuals contribute toward a shared and significant goal by completing a series of small, manageable tasks. Ridge points to several examples of such projects to illustrate her definition. She argues that scaffolding the project by setting up boundaries and clearly defining activities helps to increase user engagement by making participants feel comfortable completing the given tasks. Ridge sees scaffolding as a key component of mounting a successful crowdsourcing project that offers truly deep and valuable engagement with cultural heritage.
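The mixed tagging model described in the Manzo et al. annotation above can be sketched simply: free-form folksonomy tags are kept for discovery, while any tag that maps into a controlled vocabulary is normalized for reliable retrieval. The vocabulary and synonym mapping below are invented examples, not the study's actual data.

```python
# Illustrative sketch of combining folksonomic and controlled vocabularies:
# crowd-supplied tags are normalized against a (hypothetical) synonym map,
# then split into controlled terms and residual free tags.

CONTROLLED = {"portrait", "street scene", "aviation"}
SYNONYMS = {"planes": "aviation", "airplane": "aviation", "face": "portrait"}

def classify_tags(user_tags):
    """Split crowd-supplied tags into controlled terms and free tags."""
    controlled, free = set(), set()
    for tag in (t.strip().lower() for t in user_tags):
        term = SYNONYMS.get(tag, tag)  # normalize known synonyms
        (controlled if term in CONTROLLED else free).add(term)
    return controlled, free

ctrl, free = classify_tags(["Airplane", "foggy day", "planes"])
# ctrl == {"aviation"}; free == {"foggy day"}
```

Keeping the unmatched free tags, rather than discarding them, preserves the personal and self-referential labels that the authors argue still aid discovery.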
Rockwell demonstrates how crowdsourcing can facilitate collaboration by examining two humanities computing initiatives. He exposes the paradox of collaborative work in the humanities by summarizing the 'lone ranger' past of the humanist scholar. He asserts that the digital humanities are, conversely, characterized by collaboration because the field requires a diverse range of skills. Rockwell views collaboration as an achievable value of the digital humanities rather than a transcendent one. He presents case studies of the projects Dictionary and Day in the Life of Digital Humanities to illustrate the limitations and promises of crowdsourcing in the humanities. Rockwell argues that the main challenge of collaboration is the organization of professional scholarship. Crowdsourcing projects provide structured ways to implement a social, counterculture research model that involves a larger community of individuals.

Wiggins and Crowston discuss citizen science in terms of the common attributes many projects share and attempt to provide a theoretical sampling on which future citizen science projects may rely. The authors argue that the majority of scholarship on citizen science is invested in describing the process of integrating volunteers into the various levels of scientific research, without taking into account macrostructural and sociotechnical factors. They believe that this comes at the expense of crucial design and process management. Wiggins and Crowston identify and discuss five distinct typologies witnessed in various citizen science projects: action, conservation, investigation, virtuality, and education. The authors classify these typologies by major goals and the extent to which they are virtual. One of the main motivations for developing these typologies is to describe the existing state of citizen science and to make accessible the necessary conditions for successful citizen science projects.

Rockwell, Geoffrey. 2012

Collaborative Scholarship
Collaborative scholarship in academia is rapidly gaining prevalence, as evident in the increase in both disciplinary and interdisciplinary research partnerships on individual campuses and across universities. The possibility of virtual correspondence fueled by the internet is one of the primary catalysts for this development. Authors in this category address the benefits and challenges of collaboration, and suggest essential practices. This category includes an extended study on collaboration throughout the life cycle of a seven-year project, Implementing New Knowledge Environments (INKE). Siemens reflects on collaboration at the end of every funded year of the project and explores how it evolves over time, how to develop and maintain positive and productive team relationships, how to integrate new team members into a project in the most optimal way, and how to deal with many other challenges that may occur within collaborative environments. Authors also address partnerships in virtual communities and the importance of utilizing platforms designed to facilitate collaboration. Overall, this category is meant as a solid starting point for those preparing to launch collaborative projects.

Arbuckle, Belojevic, Hiebert, and Siemens, with Wong, Siemens, Christie, Saklofske, Sayers, and the INKE and ETCL research groups, provide three annotated bibliographies anchored in social knowledge creation. They claim that their project transiently represents interrelational research areas, and that it emphasizes '(re)shaping processes that produce knowledge' (n.p.). The authors address the work's intent, highlighting the importance of collaboration and open source. The authors discuss the principles to which this bibliography attends, addressing topics such as the book, print, remediation of culture, and interaction and collaboration. In addition, they explore the importance of digital tools and gamification to the practice of social knowledge creation.
The three main parts of this document are social knowledge creation and conveyance, game-design models for digital social knowledge creation, and social knowledge creation tools. Each of these sections begins with an introduction that presents an overview of the section's content, and ends with a complete alphabetical list of selections.

Brown addresses the affordances of web technologies that facilitate collaborative modes of online scholarly knowledge production. She argues that collaboration in the humanities still lags behind the natural and social sciences. Brown discusses the key principles researchers ought to consider when choosing a platform for collaborative scholarship, as well as the components of work processes and workspaces that help implement these principles in a project. She defines 'best' practices as both the control over scholarly processes that bring together a number of contributors and those practices that most optimally address interoperability, preservation, reuse, and the various ethical and professional considerations involved with group work. This article focuses on approaches to systems and standards that enable collaborative knowledge production online rather than on ways to coordinate collaborative relationships.

Crompton, Mash, and Siemens study the use of microdata formats as a means to include larger groups of researchers and editors working on a digital social edition. They also provide readily parsable data about the content of A Social Edition of the Devonshire Manuscript, the main object of their study. The authors argue that adopting linked data standards allows for interconnection between texts and virtual collaboration across projects and scholars. Crompton, Mash, and Siemens explain how Resource Description Framework in Attributes (RDFa) is well suited to academic projects and elaborate on the idea of encoding for the Semantic Web.
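A minimal sketch of the kind of RDFa encoding at issue: attributes such as `about` (the subject), `property` (the predicate), and the element's text (the object) embed machine-readable statements directly in markup. The fragment, vocabulary URI, and identifiers below are invented for illustration, not drawn from the edition itself.

```python
# Extract (subject, property, text) triples from a simple RDFa fragment.
from html.parser import HTMLParser

FRAGMENT = (
    '<div about="#poem-1" vocab="http://example.org/terms/">'
    '<span property="author">Margaret Douglas</span>'
    '<span property="title">My heart is set</span>'
    '</div>'
)

class RDFaTriples(HTMLParser):
    """Collect triples from RDFa `about`/`property` attributes."""
    def __init__(self):
        super().__init__()
        self.subject, self.prop, self.triples = None, None, []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a:       # new subject for nested statements
            self.subject = a["about"]
        if "property" in a:    # predicate; object is the element text
            self.prop = a["property"]

    def handle_data(self, data):
        if self.prop:
            self.triples.append((self.subject, self.prop, data))
            self.prop = None

parser = RDFaTriples()
parser.feed(FRAGMENT)
# parser.triples == [("#poem-1", "author", "Margaret Douglas"),
#                    ("#poem-1", "title", "My heart is set")]
```

Because the statements live in ordinary attributes, contributors can focus on data entry in the markup itself, which is the shift in encoder attention the authors describe.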
They discuss technical decisions that shift the encoder's focus to data entry rather than the technical details of encoding. In their conclusion, the authors suggest that, with the RDFa enhancement, A Social Edition of the Devonshire Manuscript will provoke new research questions around the culture and contexts of the Tudor court.

"Virtual Communities of Practice: Design for Collaboration and Knowledge Creation." In Proceedings of the European Conference on Products and Processes Modelling.

Kondratova and Goldfarb discuss knowledge dissemination and collaboration in online communities. They conduct a study on design functionality by looking at portal types that include institutional, governmental and organizational, professional, and social portals. The study includes 80 criteria grouped under content, discussion forum functionality, features, tools and learning modules, search functionality, membership, and topic experts. Based on this study, the authors develop a new template, as they believe that there is a need for further similar investigations.

Rotman, Preece, Hammock, Procita, Hansen, Parr, Lewis, and Jacobs conduct a study that borrows from a motivational model in order to determine the various incentives for volunteers to participate and perform well in citizen science projects related to ecological research. Although many successful citizen science projects exist, many do not take full advantage of the collaborative possibilities between scientists and volunteers; studying the motivation of each party and designing an environment that rewards and motivates all parties could therefore drastically improve the field. The authors found that volunteers are primarily motivated by curiosity, a drive for learning, and a desire for conservation, whereas the scientists were primarily motivated by their careers and scientific advancement more generally. They also found that the two most important motivational moments for volunteers are the first encounter with the project and group, and the wrapping up of a project, when volunteers decide whether or not to participate in other projects. The authors also contribute a dynamic model that displays the engagement cycle of participants throughout the different stages of a project.
Siemens begins by identifying a contrast between conventional humanities research and digital humanities research: while the humanities disciplines have relied on predominantly solo research efforts, digital humanities research involves the collaboration of various individuals with a wide spectrum of skills. Siemens argues that the collaborative nature of academic research communities, especially in the humanities, has been understudied. This article is a step toward remedying that gap by examining the results of interviews conducted on the topics of teams, team-based work experiences, and team research preparation. The interview subjects identified both benefits and challenges of team research, among them relationship building with potential for future projects, communication challenges, funding, and team member retention. In conclusion, Siemens articulates a list of five essential practices: (i) deliberate action by each team member; (ii) deliberate action by the project leader; (iii) deliberate action by the team; (iv) deliberate training; and (v) balance between digital and in-person communication.

Siemens addresses the advantages and challenges of collaborative work in the first year of the seven-year funded Implementing New Knowledge Environments (INKE) project, a group of 35 researchers from Canada, England, Ireland, and the United States who focus specifically on Interface Design, Textual Studies, User Experience, and Information Management. The study is carried out in an interview format with seven collaborators, including graduate research assistants, researchers, members of the administrative team, and others. Findings indicate that team members share similar views on collaboration, saying that it yields grander research results and helps attain established goals, and that working together productively requires a certain skill set.
The advantages of collaborative work include allowing graduate students and researchers to interact with the larger community, and allowing members of that community to learn and acquire various skill sets from each other. Disadvantages involve accountability within the project and to the funding agencies, the time-consuming nature of the project, the necessity of travel for in-person meetings, and other potential personal or institutional tensions. Siemens summarizes the benefits and challenges of the first year of the INKE project, and argues that the skill set acquired by participating in such a project may be useful in future academic and non-academic work.

Siemens explores how collaborative practices evolve over the span of a project, using the second year of funded research of Implementing New Knowledge Environments (INKE) as a case study. As with the previous study based on year one, the study is carried out through a series of interviews with various researchers and administrators of INKE. The results show that INKE's members have developed closer relationships, allowing research to progress; the graduate research assistants also stated that their work experience has deepened their academic and collaborative skills. Some of the major challenges have to do with human resources, including the difficulty of securing postdoctoral fellows with technical skills and a project manager, mostly due to competition with other disciplines for these professionals. A number of members and sub-research areas were in a period of transition, which resulted in a slight restructuring of the project. Siemens offers a number of potential solutions to these challenges and addresses ways to structure the workflow during transitional periods that would help maintain the flow of the project and swiftly integrate newcomers.
She ends her article with various recommendations on how to sustain successful collaboration, including holding face-to-face meetings (in formal and informal settings) of geographically distributed team members, utilizing the most optimal methods of knowledge transfer in moments of transition, and taking into account the ways in which partner universities' policies may affect the project and its internal dynamics.

Siemens addresses the collaborative nature of the Implementing New Knowledge Environments (INKE) project at the close of the third year of funded research. The purpose of this investigation is to document the nature of collaboration so that teams can benefit in future scenarios from past lessons learned; there is also a notable lack of scholarship focused on collaboration despite its increased adoption in the academic sphere. Siemens frames the third year as a transitional one for INKE, since it is the period in which there was a change in sub-research areas, partners, and team members. The study is conducted through various interviews with team members, and the data analysis is carried out through a grounded theory approach. The major observations that emerged in relation to transitional phases and how best to manage them include an acknowledgment that the integration of new team members into a project takes time, and that an account of the project and team relationships, as well as project documentation, may be helpful. Other essential parts of this process are formal and informal face-to-face meetings. According to Siemens, selecting individuals with a collaboration-oriented mindset is useful, since they are more likely to accommodate the short timespan allotted for new team member integration while still meeting research objectives.

Siemens discusses the fourth year of the Implementing New Knowledge Environments (INKE) project, and focuses on how the nature of the collaboration evolved over this period of the project.
As with the other studies of INKE's collaboration, this one was carried out through a series of interviews with the researchers, graduate researchers, and administrative members of the team, using a series of extendable questions analyzed with a grounded theory approach. Siemens argues that year four reflects a more mature period of the project, in which the collaboration has morphed into a more fulfilling and closely bound relationship: researchers from one area feel more involved with research in other areas, and all team members, including research assistants, recognize their roles as important and rewarding. Siemens states that a significant development in year four is team members' improved ability to balance INKE-related work with outside research, with INKE's research sometimes even motivating other endeavors. One major challenge that persists is coordinating across four time zones with few possible meeting windows. Overall, the interviews demonstrate that the team acknowledges the need for, and benefits of, working together to attain research objectives. Siemens concludes with a set of suggestions for other teams working in a collaborative atmosphere.

Siemens addresses the sixth year of the Implementing New Knowledge Environments (INKE) project, namely the collaborative aspect of the long-term, large-scale project as it nears completion (in the seventh year). This study is carried out through a set of semi-structured interviews analyzed with a grounded theory approach. According to Siemens, the team found collaboration to be a positive and beneficial experience overall, one continuously strengthened through face-to-face interactions. Another finding pointed to how large-scale projects are in a constant state of transition, where changes in the pace of the project also affect the pace of work on an individual level and the nature of the collaboration.
Recurring challenges that sprang up in earlier years and continued throughout the project include working at a distance with team partners and the hiring and retention of postdoctoral fellows and research assistants. The documentation of this collaborative process and the lessons learned throughout the years serve as a foundation for the next grant application and future collaborations.

Siemens outlines the administrative structure to be executed in the Implementing New Knowledge Environments (INKE) project. The document is meant to serve as an agreement between the various individuals and groups involved in INKE on how members will work with each other over the upcoming years of collaboration in order to achieve the goals outlined in the research application. Siemens breaks down the administrative structure of the project and the various groups involved, and presents the guidelines and responsibilities for each group. The groups consist of various researchers and partners, as well as administrative divisions overseeing and advising the project. The author also discloses the guidelines concerning intellectual property of knowledge created as part of the project, as well as the adopted policies for co-authoring work within the INKE framework. In the latter part of the document, Siemens includes excerpts from the grant application that address the broad range of key stakeholder areas, and from the project charter that outlines how the work will be disseminated, the future of the project, and the nature of the work between members of INKE.

Prototyping
Prototyping, or the modeling of digital objects, has proliferated within and outside of scholarly environments over the last two decades. The resources in this category generally fall under experimental and product-based prototyping. Experimental prototyping is often carried out in academic contexts by approaching the prototyping process itself as a research endeavor that aims to manifest a thought process, explore a concept, or answer a question. Product-based prototyping aims to generate a robust product that is published and used. The two can overlap: in experimental prototyping, the digital object often moves through stages of development and is eventually disseminated for use by others; product-oriented prototypes often maintain an element of experimentation in order to deliver refined objects to their users. The research prototypes in this category experiment with conventional forms of scholarly communication by offering alternative modes of production and dissemination that are supported by the digital medium (Belojevic 2015). The more product-oriented prototypes explore alternative design methods to ensure usability and access for their users. Regardless of the end goal, all prototypes in this category carry an experimental quality and seek to innovate particular aspects of their respective fields.

In the prototype Belojevic describes, articles undergo open peer review and can be commented on by a specific group of reviewers or the public. The prototyping process followed an approach similar to the one described in Katie Salen and Eric Zimmerman's Rules of Play: Game Design Fundamentals, which outlines common game design principles. Belojevic describes how the project moved from iterative prototyping to agile development, an approach that permits researchers to break down the project into smaller chunks. This approach allows stakeholders to ensure that their goals are being met at every stage, and scholars and researchers to maintain the quality of the project.
Further research will focus on determining the aspects of agile development that are adaptable for the project in order to facilitate a balance between project development and deliverables, while remaining flexible enough to pursue and integrate novel insights that may appear during the prototyping process.

"Inclusive Interface Design for Seniors: Exploring the Health Information-Seeking Context." Journal of the American Society for Information Science and Technology 58(11): 1610-17. DOI: https://doi.org/10.1002/asi.20645.

Given, Ruecker, Simpson, Sadler, and Ruskin demonstrate a prototype of a web-based resource for identifying and providing information on medications for senior citizens, built around an image-based retrieval system interface. The prototype stems from a lack of research on the usefulness of existing web-based resources and from an understanding of the complexities that arise in accommodating the various needs of seniors. The authors conduct a case study with twelve people, six men and six women aged between 65 and 80, who are comfortable using computers; participants are asked to search for, identify, and find information on a number of medications using two different resources: a common commercial consumer-driven database (www.drugs.com) and the prototype developed for the study. With the former, participants were unable to identify the proper medication and generally found the platform crowded and confusing, with a lack of images to disambiguate between the forms and colors of medications. Given et al.'s prototype, by contrast, was built with usability theory in mind, an approach that sets users' needs at the forefront of the design process and is meant to accommodate users with various impairments and abilities in a straightforward, simple, and effective manner. Most of the participants were able to find information on the medications using the prototype, and reported that it was easier and more straightforward to use. However, certain drawbacks still exist, such as the general overflow of information and ambiguity about the exact physical attributes of a medication, such as its color and shape. Further studies will work on ensuring that the prototype goes further in aiding the needs of its potential users.

"A Brief Taxonomy of Prototypes for the Digital Humanities." Scholarly Research and Communication 6(2). DOI: https://doi.org/10.22230/src.2015v6n2a222.
Ruecker's paper intervenes at the intersection of digital humanities and design as he presents a taxonomy for project prototyping. Ruecker begins by acknowledging the wide variety of prototype taxonomies previously proposed before turning to his own. Ruecker suggests that prototypes should be categorized based on the kind of project they support: production-driven, experimental, and provotypes (provocative prototypes). Ruecker recognizes that prototyping is defined and understood differently depending on the community, and uses various examples, such as Xinyue Zhou's nationalist baby bottles and Juan Salamanca's crosswalk, to illustrate different types of prototypes and to trace their evolution. Ruecker argues that any of these categories of prototypes can be used effectively for pedagogy. In conclusion, Ruecker argues that clearly distinguishing the purposes of different prototypes can help manage and encourage their use.

"Design of a Rich-Prospect Browsing Interface for Seniors: A Qualitative Study of Image Similarity Clustering." Visible Language 41(1): 4-22.
Ruecker, Given, Simpson, Sadler, and Ruskin apply inclusive design principles to an interface that gives seniors access to healthcare resources. Their goal is to test whether an alternative visual browsing interface would help seniors with pill identification. They detail inclusive design principles and pose research questions that address the effect of interface design on the usability of online drug databases. The authors discuss their results by comparing interfaces, looking at search task results from www.drugs.com and the prototype they produced, and drawing out implications for design. They also outline their future research plans, which include expanding search features for drug databases and developing interfaces that adhere to inclusive design theory.

Ruecker, Radzikowska, and Sinclair. Visual Interface Design for Digital Cultural Heritage: A Guide to Rich Prospect Browsing. Burlington, VT: Ashgate.
Ruecker, Radzikowska, and Sinclair apply a rich-prospect browsing approach to the design of cultural heritage collections: a methodology that involves the investigation of an entire collection rather than a focus on the search-oriented interfaces that have been used in the past. The authors dedicate the book to explaining the various affordances of rich-prospect browsing, laying out the theoretical framework for the approach and balancing it with various prototypes and examples. They also address some central principles of rich-prospect browsing, such as the importance of meaningful representation of each item in a collection and which affordances are most beneficial for various audiences. The book includes a total of nine prototypes that test rich-prospect browsing by balancing theory and practice; together, these are meant to serve as a guide for those who might want to implement this approach in their own projects. The authors specifically address how to determine the types of affordances to include and how to steer them toward the potential audiences of a cultural heritage collection.

In light of the focus of Implementing New Knowledge Environments (INKE) on the ways in which digital environments affect the production, dissemination, and use of established venues for academic research, the NewRadial prototype has been extended to investigate this research direction further. NewRadial is a data visualization environment originally designed as an alternative way to encounter and annotate image-based databases. It allows users to engage with humanities data outside of scholarly paradigms and the linear nature of the printed book, and it encourages user contributions through collective commentary rather than isolated annotation.
The prototype investigates a number of questions: whether the aforementioned venues can coexist in their present form, how scholarship can be visualized through time and space, how critical ideas are born and evolve, and whether the collaborative elements of Alternate Reality Games and Massively Multiplayer Online Role-Playing Games can be adopted into the peer review process and secondary scholarship. The prototype responds to the established view of a finished work existing in a print-based format by experimenting instead with an interactive and dynamic digital environment that invites dialogue and collaborative curation, as well as numerous alternative narrative opportunities.

Saklofske, Jon. 2016. "Digital Theoria, Poiesis and Praxis: Activating Humanities Research and Communication through Open Social Scholarship Platform Design." Scholarly and Research Communication 7(2): n.p. DOI: https://doi.org/10.22230/src.2016v7n2/3a252.

Saklofske states that although research has drastically changed in the last two decades, scholarly communication has remained relatively stable, adhering to standard scholarly forms of publication as a result of materialist economies. Saklofske argues for the necessity of innovating digital means of scholarly communication with theoria, poiesis, and praxis in mind. He offers a number of case studies that experiment with unconventional ways of carrying out research that utilize these concepts, among them the NewRadial prototype: an online environment that brings in secondary scholarship and debate, where outside information can be added to and visualized alongside the primary data without affecting the original databases. NewRadial is taken as a model for other spaces that facilitate such dynamic organization and centralized spacing as an alternative to typical, isolated forms of monographs and linear narrativization.
Saklofske, a proponent of open social scholarship, argues that this type of scholarship is an essential part of transforming scholarly research and communication in a way that takes advantage of the digital medium rather than propagating conventional forms of knowledge creation in this environment. This type of research platform is also more inclusive and public facing.

Siemens, Warwick, Cunningham, Dobson, Galey, Ruecker, Schreibman, and the INKE Research Group investigate the 'conceptual and theoretical foundations for work undertaken by the Implementing New Knowledge Environments research group' (n.p.). They address the need to design new knowledge environments, taking into consideration the evolution of reading and writing technologies, the mechanics and pragmatics of written forms of knowledge and the corresponding strategies of reading, and the computational possibilities that emerging technology opens for written forms. The authors highlight the importance of prototyping as a research activity and outline corresponding research questions, which target the experiences of reading, using, and accessing information, as well as issues of design. They discuss their research methods, which include digital textual scholarship, user experience evaluation, interface design prototyping, and information management. Siemens et al. conclude that the various reading interface prototypes produced by INKE allow a transformation of the methods of engagement with reading materials.

Stafford, Shiri, Ruecker, Bouchard, Mehta, Anvik, and Rossello conduct a user-centred evaluation of Searchling, an experimental visual interface that combines a thesaurus, query, and document space and is based on rich-prospect and metadata-enhanced visual interfaces for an improved search experience.
The study is carried out with fifteen participants, most of them researchers at the University of Alberta, and examines the usefulness of understanding different types of information clustering and the relationships between them. The assigned task asks participants to explore as many features of the interface as possible without paying particular attention to the content. The study draws on both qualitative feedback and user rankings of items on a 5-point Likert scale. Results show that, among the numerous positive impacts, the most useful feature of the interface is that it solves the problem of formulating queries, which is considered the greatest problem with other search tools. According to the participants, the most prominent limitation is that Searchling fails to isolate keyword search terms on a specific topic.
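Evaluations like the Searchling study summarize 5-point Likert rankings numerically. A minimal sketch of how such rankings might be tallied appears below; the ratings are invented for illustration (the study's actual scores are reported in the article), and only the mean and score distribution are computed.

```python
# Hypothetical 5-point Likert ratings (1 = not useful, 5 = very useful),
# one score per participant. Illustrative data only, not the study's results.
from collections import Counter

ratings = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3, 4, 5, 3, 4, 4]  # fifteen participants

mean_rating = sum(ratings) / len(ratings)
distribution = Counter(ratings)  # how many participants gave each score

print(f"Mean rating: {mean_rating:.2f}")
for score in range(1, 6):
    print(f"{score}: {'#' * distribution[score]}")
```

A real analysis would pair such a summary with the qualitative feedback the study also collected.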

Data Management
Data management concerns effective methods for organizing data and documents through the application of a systematic mechanism. The works included here address metadata, database management, and data visualization (Fear 2011; Hedges, Hasan & Blanke 2007). Some articles investigate ethical uses of data obtained from research, as well as accountability mechanisms and guidelines to ensure that collected data is properly managed, stored, and preserved. The resources in this category address what can be done with data collected from research projects and how research can be conducted more efficiently, with specific attention paid to data preservation and curation strategies. Overall, the resources address the lifecycle of data management and the necessary infrastructural mechanisms for effective governance of digital information.

Akers and Doty conduct a survey on disciplinary differences in faculty research data management practices and perspectives. The authors divide faculty members into four broad research domains: arts and humanities, social sciences, medical sciences, and basic sciences. The percentages of faculty per area are considered, as well as attitudes toward open access data and familiarity with basic terms of data management. The survey also seeks to understand faculty attitudes toward digital documentation and preservation. Both authors worked to give Emory University researchers Shibboleth authentication access to the DMPTool, which walks researchers through creating data management plans for grant proposals. The authors also point out that OpenEmory, the current institutional repository, does not warrant further research data development and that more effort could be focused on facilitating the deposit of data in disciplinary repositories or on setting up instances of the Dataverse Network.
Serious consideration of both similarities and dissimilarities among disciplines can guide academic librarians in developing a range of data management related services.

Corrall, Kennan, and Afzal analyze current trends in library support for research. Funding bodies increasingly view libraries as 'bottomless pits' rather than as self-evidently positive support for researchers, especially as the web becomes more accessible and user friendly (qtd. in Wood, Miller & Knapp 3). According to the authors, e-research should provide libraries with the impetus to extend their services beyond the material archive. Libraries in the US, such as MIT's, are quicker to adapt to digital services: in 2009 the Association of Research Libraries found 21 libraries that already provide infrastructure or support for e-science and another 23 that intend to do so. The authors conducted a questionnaire that asked respondents about their organizations, bibliometrics, research data management, and future plans. Corrall, Kennan, and Afzal suggest that academic librarians involved in research support need to understand governmental and institutional research agendas so that they can support strategy and policy development and implementation.

Crompton, Mash, and Siemens study the use of a microdata format to include larger groups of researchers and editors working on a digital social edition. They also provide readily parsable data about the content of A Social Edition of the Devonshire Manuscript, the main object of their study. The authors argue that adopting linked data standards allows for interconnection between texts and virtual collaboration across projects and scholars. Crompton, Mash, and Siemens explain how Resource Description Framework in Attributes (RDFa) is well suited to academic projects and elaborate on the idea of encoding for the Semantic Web.
They discuss technical decisions that shift the encoder's focus to data entry rather than the technical details of encoding. In their conclusion, the authors suggest that, with the RDFa enhancement, A Social Edition of the Devonshire Manuscript will provoke new research questions about the culture and contexts of the Tudor court.

Fear's article explores data management at the University of Michigan, investigates the factors that have shaped researchers' practices, and seeks to understand the motives for extending or inhibiting changes in data management practices. She argues that institutions should have an interest in protecting the data of their researchers. For Fear, improving infrastructure for data sharing and accessibility is one way of improving data management standards. Her survey asks, among other things, whether researchers believe data to be personal information, how they manage their data over the short term, what kinds of data management plans they provide when applying for funding, what their methods are for preserving data over the long term, and how familiar they are with the basics of data management. The study concludes with the observation that data management is part of a continuum of processes that tend to blur together as researchers move from document to document. According to Fear, researchers regard separating data management from other research activities as confusing and counterproductive.
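The subject-predicate-object triples that underlie RDFa and the other linked data approaches discussed above can be sketched in a few lines. The following is a conceptual illustration only, with invented subject and predicate names; a real project would use a dedicated library such as rdflib and full URIs rather than these short labels.

```python
# A minimal in-memory triple store sketch: facts as (subject, predicate, object)
# 3-tuples, with None acting as a wildcard in queries. This mirrors the RDF
# data model conceptually; the names below are illustrative, not the actual
# encoding of A Social Edition of the Devonshire Manuscript.

triples = {
    ("DevonshireMS", "hasContributor", "Mary Shelton"),
    ("DevonshireMS", "hasContributor", "Margaret Douglas"),
    ("DevonshireMS", "createdAt", "Tudor court"),
    ("Mary Shelton", "livedAt", "Tudor court"),
}

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None matches anything."""
    return {
        (s, p, o)
        for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

# Who contributed to the manuscript?
contributors = {o for (_, _, o) in match("DevonshireMS", "hasContributor")}
print(contributors)
```

Because every fact shares the same triple shape, independently produced datasets can be merged and queried together, which is the interconnection across projects that linked data standards are meant to enable.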
Harth, Hose, and Schenkel edit an anthology that covers the concept of linked data management. The anthology begins by describing how modern computers still struggle with the idiosyncratic structure and semantics of natural language due to ambiguity. The authors outline many of the key concepts in emerging linked data management systems, including RDF vocabularies and foundational terms such as the Semantic Web. Lists of SPARQL and OWL queries are given, and the authors state that the novel Web of Data requires new techniques and ways of thinking about databases, distributed computing, and information retrieval. Topics range from the digital architecture of linked data applications, to the Bigdata RDF graph database, to different methods of query processing.

Hedges, Mark, Adil Hasan, and Tobias Blanke. 2007. "Management and Preservation of Research Data with iRODS." In Proceedings of the ACM First Workshop on CyberInfrastructure: Information Management in eScience, 17-22. http://dl.acm.org/citation.cfm?id=1317358.
Hedges, Hasan, and Blanke provide recommendations for the management and preservation of research data using the integrated Rule-Oriented Data System (iRODS). iRODS is a recently developed automated, scalable digital preservation tool equipped with a Rule Engine, which allows the system to react actively to events. The Rule Engine allows iRODS data grids to exceed previous limitations through a flexible mechanism for implementing application-specific processing. The article provides information on driver requirements for managing large amounts of data, curation and preservation, automation, and transparency, as well as a list of rules used to implement preservation and data management. iRODS can execute rules conditionally and can define multiple rules that simultaneously implement alternative means toward the same goal. The authors conclude that they will continue with an analysis of the different preservation strategies and procedures currently followed by the Arts and Humanities Data Service archive in order to increase the automation and reliability of the preservation process.

Henty, Weaver, Bradbury, and Porter. "Investigating Data Management Practices in Australian Universities." APSR. http://eprints.qut.edu.au/14549/1/14549.pdf.
Henty, Weaver, Bradbury, and Porter conduct a survey on changing expectations for the provision of data management infrastructure in Australian universities. Most of the respondents are academic staff, with significant postgraduate student participation and a low response rate from emeritus or adjunct professors. The questions are oriented toward researcher awareness of digital data, the types of digital data collected, the sizes of the data selections, the software used for analysis and manipulation of digital assets, and research data management plans. The questions also concern institutional responsibility and structure for data management, such as whether researchers outside the team are allowed to access shared research data, and how the data is accessed and used. Henty et al. compile data from the Queensland University of Technology, the University of Melbourne, and the University of Queensland.

Jackson, Antonioletti, Dobrzelecki, and Neil Chue Hong. 2011. "Distributed Data Management with OGSA-DAI." In Grid and Cloud Database Management, 63-86. DOI: https://doi.org/10.1007/978-3-642-20045-8_4.

Jackson, Antonioletti, Dobrzelecki, and Hong outline the OGSA-DAI framework for sharing and managing distributed data. The system can manage and share relational data, XML files, and RDF triples. The chapter provides basic definitions of workflows and how they are executed, list markers and how they are used to group outputs, concurrent execution, and client requests, and discusses how to access the OGSA-DAI framework. Several graphs and taxonomies illustrate workflows and workflow execution. The authors note that data delivery is slower through web services than through direct methods such as FTP and GridFTP, and they outline OGSA-DAI's approach to security, distributed query processing, relational views, interoperability, and performance requirements, along with a list of related programs.
Complete data abstraction is not possible with the program; however, it can be used to build higher-level capabilities and to enhance distributed data management.

Jackson, Antonioletti, Hume, Blanke, Bodard, Hedges, and Rajbhandrari contribute their research to conference proceedings on digital data management of ancient and classical materials. The 'islands of data' they refer to are resources that sit apart from larger repositories and are often largely inaccessible. The authors discuss the LaQuAT project: an initiative that attempts to link and query ancient texts through the cooperation of a group of diverse experts from different institutions. The article describes the databases constructed under the LaQuAT initiative, including Project Volterra, a database of Roman legal texts and associated metadata, and the HGV, a database of papyrological metadata in relational and TEI XML formats. Problems shared by initiatives such as these include the contamination of data by control characters, which can invalidate XML documents. Database drivers and lack of funds can also pose considerable roadblocks.

Johnston, Lafferty, and Petsan offer advice on how to train researchers in data management through a scalable, cross-disciplinary approach. The authors describe the curriculum, implementation, and results of research data management training offered by the University of Minnesota Libraries. Johnston, Lafferty, and Petsan describe Minnesota's 'Creating a Data Management Plan' workshop, which trains university faculty and researchers in the basics of file management, metadata standards, and data accessibility. The research team conducts a survey to understand workshop attendees' roles, college affiliations, and the most useful parts of the workshop. The workshop leaders introduced a team-teaching approach that has had an overwhelmingly positive impact on the libraries' ability to respond to research data management needs.
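The metadata-standards basics that such training covers can be made concrete with a small sketch. The record below is loosely modeled on Dublin Core descriptive fields; the field names and the validation rule are illustrative assumptions, not the actual curriculum of the Minnesota workshop.

```python
# A sketch of a minimal descriptive-metadata record for a research dataset,
# loosely modeled on Dublin Core fields. Field names and the completeness
# check are illustrative assumptions only.

REQUIRED_FIELDS = {"title", "creator", "date", "format", "rights"}

def missing_fields(record):
    """Return the set of required descriptive fields absent from a record."""
    return REQUIRED_FIELDS - record.keys()

record = {
    "title": "Survey responses, data management practices",
    "creator": "Example Research Group",
    "date": "2011-06-01",
    "format": "text/csv",
}

print(missing_fields(record))  # the record lacks a 'rights' statement
```

Even a simple completeness check like this illustrates why consistent metadata makes deposited data discoverable and reusable later.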
Jones, Ball, and Ekmekcioglu provide a summary of their tool, the Data Audit Framework, which gives organizations the means to identify, locate, and assess the current management of their research assets. The framework was designed to be applied without dedicated or specialist staff, making librarians suitable auditors for the program. Common issues plaguing data management at the institutional level are storage, metadata, lack of awareness of data policy, and a lack of long-term legacy data mechanisms. The authors argue that institutional data policies with guidance on best practices in data creation, management, and long-term preservation would greatly assist departments in maintaining digital assets. They then provide a list of organizations from which departments can receive advice on best practice, along with services that can equip postgraduates and department members with the support needed to produce sound data management plans. The Data Audit Framework identifies main data issues, including areas where data is at risk, and helps to develop solutions.

Krier and Strasser. Data Management for Libraries: A LITA Guide. Chicago: ALA TechSource.
Krier and Strasser's guide to data management for libraries is intended for libraries in the early stages of initializing data management programs at their institutions. The opening chapters define data management, different types of research data, curation, and the data lifecycle. The guide contains advice on how to start a new service and point-form questions to help readers decide what kind of plan works best for their institution. The authors suggest identifying researchers who are receptive to working with the library and who request assistance with data management plans or curation services. An overview of descriptive, administrative, and structural metadata is provided, along with an explanation of metadata's role in data management. The differences between storage, preservation, and archiving are discussed, along with definitions of domain and institutional repositories. The authors then briefly describe the preservation process. The final chapters loosely cover access and data governance issues that have caused problems with data management in the past.

Lewis begins his chapter by asking the rhetorical question of whether managing data is a job for university libraries. He argues that helping to manage data as part of the global research knowledge base is part of the university library's role; however, the scale of the challenge requires concerted action by a range of stakeholders, not all of whom are employees of the library. Lewis advises that institutions develop several policies for research data management, including developing the library workforce's confidence with data, providing research data advice, developing research data awareness, teaching data literacy to postgraduate students, bringing data into undergraduate research-based learning, developing local data curation capacities, identifying required data skills with LIS schools, leading local data policy development, and influencing national data policy.
Non-trivial research funding is needed for these initiatives and should be funneled through a primary 'pathfinder' phase of two years, supported by the major research councils. Lewis concludes with the observation that developing such training requires award-bearing programs (Masters-level training for data managers and data scientists looking to pursue career-track positions in data centres), accredited short-course provision, and training for data librarians.
Research Data Canada. 2013. "Research Data Canada Response to Capitalizing on Big Data: Towards a Policy Framework for Advancing Digital Scholarship in Canada." http://www.rdc-drc.ca/wp-content/uploads/Research-Data-Canada-Response-to-the-Tri-Council-Consultation-on-Digital-Scholarship.pdf.
Research Data Canada looks at foundational elements for digital scholarship in Canada: stewardship, coordination of stakeholder engagement, and development of capacity and future funding parameters. The document emphasizes the importance of coordination and the need for clear guidelines and policies in order to achieve exemplary digital scholarship in Canada. The authors suggest that addressing the following four areas would strengthen the paper: long-term data curation, development of data professionals, data generated by government-based research and private research data, and engagement with the international data community. The authors conclude by committing, on behalf of Research Data Canada, to full engagement in the ongoing discussion.

Romary, Laurent. "Data Management in the Humanities." ERCIM News 89, April 3. https://ercim-news.ercim.eu/en89/special/data-management-in-the-humanities.
Romary describes several data management tools in the humanities. The first tool Romary describes is HAL, a multi-disciplinary open access archive for the deposit and circulation of scientific research documents, regardless of publication status. The author then shifts focus to the Digital Research Infrastructure for the Arts and Humanities (DARIAH) project, which aims to create a solid infrastructure to ensure the long-term stability of digital assets and the development of a wide range of services around the original tools. DARIAH depends on the notion of digital surrogates, which can be metadata records, scanned images, digital photographs, or any kind of extract or transformation of existing data. A unified data landscape for humanities research would stabilize the experience of researchers in circulating their data. Romary suggests that an adequate licensing policy must be defined to assert the legal conditions under which data assets can be disseminated, and that researchers involved with projects such as DARIAH need to converse with data providers on how to create a seamless data landscape.

Sakr, Sherif, and Eric Pardede. 2012. Graph Data Management: Techniques and Applications. Hershey, PA: IGI Global.

Sakr and Pardede's anthology of essays on techniques and applications of graph data management covers the use of graphs in the Semantic Web, social networks, biological networks, protein networks, chemical compounds, and business process models. The mechanisms for the main types of graph queries are prioritized throughout the collection, and authors consider both algorithmic and applied perspectives. The book covers data storage, labeling schemes, data mining, matrix decomposition, and clustering vertices in weighted graphs. The editors claim that the anthology provides a comprehensive perspective on how graph databases can be effectively utilized in different situations.
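The graph queries prioritized in such collections reduce to a few primitives: representing nodes and edges, looking up neighbors, and testing reachability. The sketch below shows these primitives with an adjacency list and a breadth-first search; the node names are invented for illustration and do not come from the anthology.

```python
# A sketch of basic graph-query primitives: a directed graph stored as an
# adjacency list, with reachability tested by breadth-first search.
# Node names are illustrative only.
from collections import deque

graph = {
    "paper_a": ["dataset_1", "paper_b"],
    "paper_b": ["dataset_2"],
    "dataset_1": [],
    "dataset_2": [],
}

def reachable(start, goal):
    """Return True if goal can be reached from start by following edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("paper_a", "dataset_2"))  # paper_a -> paper_b -> dataset_2
```

Production graph databases layer indexing, pattern matching, and query languages on top of exactly these operations.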
Schmidt, Waas, Kersten, Carey, Manolescu, and Busse discuss the emerging XMark benchmark and its role in XML data management. The authors argue that XML is in great need of new benchmarks to provide coverage for XML processing. The XMark benchmark features a toolkit for evaluating the retrieval performance of XML stores and query processors. It contains a workload specification, a scalable benchmark document, and a comprehensive set of queries designed to feature natural and intuitive semantics. The authors then provide an outline of XML query processing and related work, a database description, the hierarchical element structure of XML, and the benchmark queries. Schmidt et al. conduct an experiment that uses six different systems to measure size/bulk-load-time ratios operating with XMark. The experiment concludes with an analysis of the essential primitives of XML processing in data management systems. The authors suggest that a W3C standard still needs to be defined and that specifications should be updated.

Surkis and Read provide an introductory resource for librarians who have had little or no experience with research data management. Basic concepts are defined, such as the fluidity of data in process and analysis, as well as the data lifecycle. The authors suggest that the line between publications and data is blurry, and that data management is essential to making data and publications discoverable. This, they argue, is a central task of the librarian. The authors recommend the online course MANTRA: Research Data Management Training to introduce librarians and researchers to the topic.

Venugopal, Buyya, and Ramamohanarao provide a taxonomy of data grids for distributed data sharing, management, and processing. The authors describe grid computing as a paradigm that aggregates geographically distributed storage and network resources to provide unified, secure, and pervasive access to their combined capabilities.
The study contains a comprehensive discussion of data replication, resource allocation, and scheduling. Venugopal, Buyya, and Ramamohanarao focus on the architecture of data grids, as well as the fundamental requirements of data transport mechanisms, data replication systems, and resource allocation and job scheduling.

Ward, Freiman, Malloy, Jones, and Snow cover the goals and methods of Incremental, a program that identifies institutional requirements for digital research data management and pilots relevant infrastructure projects. The majority of the projects piloted are soft infrastructure designed to break down the barriers that information professionals have unintentionally built through the use of specialist terminology. The authors note that researchers organize their data in an ad hoc fashion and that a lack of clear file naming practices and version control leads to difficulties when retrieving legacy data later. Language barriers and late starts to digital preservation are both substantial barriers to accessing legacy works and new research. Researchers indicated that they desire diverse, web-based modes of training (online tutorials, videos, and interactive learning resources). The authors argue that collating and repurposing existing guidance, training, and support will be effective in the long run.

Wilson, Martinez-Uribe, Frazer, and Jeffreys suggest that the University of Oxford needs to develop a centralized institutional platform for managing data through all stages of its life cycle, one that mirrors the institution's highly federated structure. The Bodleian Libraries are currently developing a data repository system (Databank) that promises metadata management and resource discovery services. Researchers are given the role of guiding and validating each strand of data development as projects progress. Institutional data management is favoured over the establishment of national repositories.
The authors conclude with the suggestion that data management might be better placed in, or integrated with, cloud-based services that are implemented in institutions but do not belong to them.

Yakel, Elizabeth. 2007. "Digital Curation." OCLC Systems and Services: International Digital Library Perspectives 23(4): 335-40. DOI: https://doi.org/10.1108/10650750710831466.
Yakel's article on digital curation provides an overview of the basic work necessary to ensure that digital objects are maintained, preserved, and available for future use. Yakel remarks that digital curation is becoming an umbrella concept that includes digital preservation, data curation, records management, and digital asset management. She briefly traces the history of the term 'digital curation' from its first use in the National Science Foundation's 2003 report to a later article by Liz Lyon. The Digital Curation Centre in the UK defines digital curation as the maintenance of, and the adding of value to, a trusted body of digital information for current and future use. Yakel provides official definitions of the term from various organizations. The article concludes by suggesting that the range of diverse definitions of digital curation has brought the scientific, educational, and professional communities together with governmental and private sector organizations.

Social Justice and Open Knowledge Facilitated by Technology
Castelli, Donatella, Simon J. E. Taylor, and Franco Zoppi. 2010. "Open Knowledge on E-Infrastructure: The BELIEF Project Digital Library." IST-Africa, 2010, 1-15.
Castelli, Taylor, and Zoppi discuss the Bringing Europe's eLectronic Infrastructures to Expanding Frontiers (BELIEF) Project, which aims to ensure the development and adoption of e-infrastructures on a worldwide scale. They focus on providing users with documentation that matches their search criteria based on their professional profiles. The authors outline the objectives of the project, introduce the methodology, provide a technology description, and discuss system developments. In the results section, Castelli, Taylor, and Zoppi analyze the impact of the Digital Library on the target audience and explain that the community grew remarkably due to the successful outcomes of the organized events. They study statistical data on users' provenance, top sites, top operations, and yearly trends. On business benefits and sustainability, Castelli, Taylor, and Zoppi summarize their views by explaining that the implementation and operation costs of a Digital Library include training Content Providers' Correspondents, maintaining the network of liaisons necessary to promote the community leveraged by the running Digital Library, and performing harvesting operations for OpenDLib Administrators. They conclude that the Digital Library was implemented effectively, especially in terms of harmonizing metadata from the various information sources.
Chopra explains that the difference between free and proprietary software is that the latter restricts user actions through end user license agreements while the former eliminates restrictions on users. He starts by explaining the concept of free software, discussing software freedom, the Freedom Zero problem, the ethical use of scientific knowledge, and scientific knowledge and property rights. He then discusses community discourse and Freedom Zero (the freedom to use software in any way or for any purpose), explaining that Freedom Zero supports deliberative discourse within development and user communities. When exploring the ethical uses of software, Chopra considers whether Freedom Zero is inaccurate and whether a free software licensor could be liable for granting Freedom Zero. The author concludes that Freedom Zero facilitates a broader debate about software's larger social significance.

Christians, Clifford G. 2015. "Social Justice and Internet Technology." New Media and Society 18(11): n.p. DOI: https://doi.org/10.1177/1461444815604130.

Christians looks at internet technology from a social/cultural point of view. He claims that relativism is a crisis in ethics, proposing an intellectual flow as a coherent articulation of social justice with internet technology. This flow consists of ontological realism, justice as intrinsic worthiness, and a human-centred philosophy of technology. Christians discusses relativism as a watershed for ethics, addressing media ethics in particular and listing several problems, including the Gamergate controversy, Wikileaks.org, privacy in Facebook networks, red envelopes in China, and online hate speech. He then moves to naturalism, explaining that moral anti-realism denies the validity of an intellectual apparatus for ethics, and that philosophical realism is needed for a credible concept of social justice. The author proposes that justice in the moral realist sense is grounded in the inherent dignity of the human species. Christians denies the epistemology of the neutral view of internet technology and proposes that instead of looking for technical improvements in instruments, one needs to reconceive the technology itself. The author then poses the issue of the common good, emphasizing the importance of community. He concludes that through social justice, individuals can do great things to help with the development of today's world.

Dunlop, Judith M., and Graham Fawcett. 2008. "Technology-Based Approaches to Social Work and Social Justice." Journal of Policy Practice 7(2): 140-54. DOI: https://doi.org/10.1080/15588740801937961.
Dunlop and Fawcett investigate the need for integration of conventional and electronic advocacy models in the field of social work. They investigate whether social workers can assist organizations in entering the information age by using technology-based approaches to help disadvantaged populations and by implementing electronic advocacy practices to promote social justice in local communities. The authors offer a historical background on social work advocacy by examining conventional and electronic advocacy practices. They continue by exploring types of social or free software that could be used by nonprofit organizations and investigate the application of social software to social work advocacy practices in the age of technology. Dunlop and Fawcett conclude that there is a need for technologically competent social workers for the purpose of organizing virtual communities and providing leadership in electronic advocacy practice.

Edwards and Hoefer address social work advocacy efforts and explore the potential of Web 2.0 technology in the field. The authors study social work advocacy, explaining that various approaches allow social workers to succeed in their advocacy efforts, including communicating with decision-makers, resource management, and information sharing. They also investigate how Web 2.0 allows decentralized knowledge building by going through examples of social media such as blogs, RSS feeds, wikis, podcasting, video sharing, social networking, and social bookmarking. When discussing web advocacy, Edwards and Hoefer describe ways social work advocates use Web 2.0. The article continues with a presentation of previous research and an explanation of the methods employed, such as sampling and data collection. The results include two sections: the use of various internet components, and differences between general social work organizations and state chapters of the National Association of Social Workers. The authors then discuss the results and explain that social work organizations do not often use Web 2.0 or earlier web technologies for advocacy. They conclude that Web 2.0 technologies enhance inclusion in political discourse, accessibility of information, and the formation of relationships that strengthen advocacy efforts.

Farrington, John, and Conor Farrington. 2005. "Rural Accessibility, Social Inclusion and Justice: Towards Conceptualisation." Journal of Transport Geography 13(1): 1-12. DOI: https://doi.org/10.1016/j.jtrangeo.2004.10.002.
Farrington and Farrington explain the concept of accessibility in the rural context and discuss its central placement in the social inclusion and social justice policy agenda. The authors discuss accessibility and the welfare concept in human geography, accessibility as normative and relative, and accessibility from the perspective of social inclusion and social justice. They also elaborate on accessibility and policy, noting that accessibility and social justice are two characteristics of policy adjustment. They list four dimensions that add value and accessibility to a construct of social justice and its application: space and location, sustainability, integration within the structural view of the causes of social exclusion, and empowerment of citizens through participation. Farrington and Farrington conclude that the accessibility of life opportunities is a necessary condition for social inclusion and justice.

Goldkind, Lauri. 2014.

Goldkind studies the barriers to advocacy practice, draws the conceptual framework, and provides her hypotheses, in which she claims that organizations use electronic technologies because their organizational cultures are conducive to them. The author uses two demographic characteristics (organizational age and budget size) in her study. She concludes that organizational success depends on the capacity of organizations to invest in social media tools while being attentive to policy advocacy.

Kline, Jesse. 2013. "Why Canada has Third World Access to the Internet." National Post, September 24. http://news.nationalpost.com/full-comment/jesse-kline-why-canada-has-third-world-access-to-the-internet. Archived at https://perma.cc/BH8L-K33K.

Kline addresses the internet problem that Canada is facing, namely that it pays some of the highest rates for internet access among countries in the developed world. The author argues that this is a result of the lack of competition in the Canadian marketplace: existing companies have created a monopoly, and Canada has a relatively small population that is widely dispersed across the country, making investment costly. Kline argues that in order to remediate this problem, the Canadian government ought to allow new competitors easier entry by cancelling the foreign ownership restrictions currently imposed on Internet Service Providers (ISPs) and wireless carriers in order to create a more profitable opportunity to invest in Canada. Another solution Kline proposes is to have municipalities support building network infrastructure by removing current bureaucratic impediments to placing cable in new houses, as well as banishing restrictions on where wireless towers can be built. The author concludes that this problem requires immediate attention.

Langman claims that new types of internet-based social movements and cyber activism require new kinds of theorization. She explains various perspectives on social movements, including resource mobilization, framing and meaning construction, political process, new social movements, and the Frankfurt School. Langman also explores domination and ideology, as well as the adverse consequences of globalization, and moves toward a critical theory of internetworked social movements (ISMs). The author studies electronic media and 'Virtual Public Spheres,' collective identities and social movements, internetworked social movements, alternative media (blogs, global civil societies, alternative professional networks, and radical geeks), global justice, global forums, anti-war movements (global justice, the World Social Forum movement, anti-war mobilization), and the move from virtual networks to cyberactivism.
She concludes that the legacy of critical theory offers a comprehensive framework to chart new forms of social mobilization and to inspire participation in the struggle for global justice.

. "Reconstructing the Internet: How Social Justice Activists Contest Technical Design in Cyberspace." Media Culture Journal 9(1): n.p. http://journal.media-culture.org.au/0603/10-milberry.php.

Milberry explores how activists have shaped the internet to fit technical needs and movement goals. She begins by exploring geeks and global justice, namely how tech activism joins the free software ethos with concerns for social justice, explaining that the novelty of tech activism lies in the incorporation of the democratic goals of the global justice movement (GJM) into the technology itself. Milberry elaborates on the concept of politicizing technology, arguing that tech activists in global justice return to computer technology development for their political action. She addresses movements such as Indymedia, Free Software, and Wild Wild Wikis. The author concludes that since the internet is socially constructed, users are able to contribute to its development by shaping its future direction, allowing it to bridge the gap between geek and activist communities and to support a digital infrastructure for progressive worldwide activism.

Paliwala, Abdul. 2007. "Free Culture, Global Commons and Social Justice in Information Technology Diffusion." Law, Social Justice and Global Development Journal 1: n.p.

Paliwala explores the role of digital intellectual property rights in the realm of the digital divide between developing countries of the Global South. He starts by exploring intellectual property rights in information technology at the World Summit on the Information Society (WSIS) and the World Intellectual Property Organization (WIPO). The author also studies the nature of change in production relations in the Age of Information, and the importance of Free and Open Source Software and Content (FOSS-C) movements. Paliwala then investigates the potential for digital social justice with regard to the application of arguments

Action and Activism
Most often, activism involves campaign-based practices calling for social or political action. In recent years, there has been an increase of activism in the digital medium due to the wider scope of outreach and visibility available online. This category offers resources on how the internet and social media help enhance social activism. It contains articles on hacktivism (i.e., hacking computer systems for political and social purposes), activist blogging, and social networks such as Twitter. Certain authors approach online activism critically, and others study the growth of activism by looking into how specific websites allow for greater potential in political and social work. Various case studies are referenced in this category to highlight the potential of online tools in specific revolutionary movements, including the Dutch campaign 'Wij vertrouwen stemcomputers niet,' which translates to 'We do not trust voting computers' (a campaign against electronic voting, Oostveen 2010), and 'Stop the War Coalition' (a British anti-war organization). This category also includes contemporary examples of how activism has been carried out through various online platforms and addresses the effectiveness of such practices.

A. Annotations
Krishnan, and Catona focus on factors, such as source and content, that increase how viral a message on Twitter becomes in times of crisis. They also address top tweeters versus top retweeted users by studying what characteristics might differentiate the former from the latter, and stress the importance of content during a political protest. The authors discuss source characteristics (credibility, sociability, and connectivity) and content choices (language intensity, information sharing, and social action) to assess persuasiveness according to the number of retweets. The dataset was obtained from TwapperKeeper and contains 150,000 tweets from 45,000 users posted between January 25th and February 11th, 2011 using the #Jan25 hashtag. The findings show that users closer to the action, with more media affiliation, longer account duration, and more followers, are more likely to be retweeted. The authors also found that sharing information did not affect the number of retweets and that social action features were absent. They conclude that microblogging should be further explored by communication scholars, since tools like Twitter can be utilized in various ways and allow immediate sharing of information as an outlet when other channels are blocked.

Deibert examines the use of the internet and the World Wide Web by citizen users lobbying against the Multilateral Agreement on Investment (MAI). He highlights various reasons why this case is instructive, emphasizing that press accounts, academic studies, and state and civil society participation in this campaign suggested a strong connection between anti-MAI activism and the internet. He suggests that the internet helped groups to pressure politicians and publicize anti-MAI views. Deibert explores the inclusion of nonstate actors in international policy processes, nongovernmental organizations' legitimacy, and the internet's configuration as a viable public sphere. He claims that a rethinking of the architecture of global politics is necessary for the inclusion of citizen networks into the world operating system. He also studies the role of the internet in the success of the citizen networks and the alternative results of the campaign without the internet. Deibert continues to look at the global public policy implications, elaborating on issues regarding domestic and international forum actors, the multiplicity and diversity of citizen networks and their issues of classification, the inclusion of citizen networks, and internet governance public policy. He concludes that the MAI case exemplifies how the internet is responsible for boosting the reach of citizen networks and refutes arguments suggesting that the internet is an unsustainable platform, thereby legitimizing civil society networks in world politics.

. "Wikipedia's Sexism Toward Female Novelists." New York Times, April 24, 2013. http://www.nytimes.com/2013/04/28/opinion/sunday/wikipedias-sexism-toward-female-novelists.html. Archived at https://perma.cc/X526-R7NJ.
Filipacchi addresses an important gender-related activity on Wikipedia that occurred in April 2013: female authors listed on the 'American Novelists' Wikipedia page were being moved to the category 'American Women Novelists,' whereas male authors listed on the same page remained. Wikipedia editors justified this migration by stating that there were too many entries under the 'American Novelists' category. What Filipacchi finds most troubling is that the original page was not then renamed 'American Male Novelists,' which implies that American novelists are de facto male. The entries were moved in alphabetical order, and after some investigation, the author noticed that the same was true for Haitian novelists. Filipacchi notified Word of Mouth (WOM), a listserv of published female writers, about this problem, and its members responded with outrage and immediate action to remediate the issue. She concludes by highlighting the need for Wikipedia editors and users to acknowledge the weight of such decisions.

. "'These Days Will Never Be Forgotten…': A Critical Mass Approach to Online Activism." Information and Organization 25(1): 52-71. DOI: https://doi.org/10.1016/j.infoandorg.2014.12.002.

Ghobadi and Clegg explore the phenomenon of social activism by studying its dynamics in a web environment. They present an overview of the literature available on online activism, the theoretical perspectives of political systems, and their vulnerability to change. Their research methodology consists of three complementary cases, including data collection and data analysis. The results of their longitudinal study demonstrate that online activism helped organize social movements. For their cross-case analysis, they examined three key elements (initial conditions, interventions, and subsequent conditions) in order to better understand the formation of collective action.

Häyhtiö and Rinne study individualized political participation and activity on the internet by looking into the Finnish internet protest campaign against gossip journalism (May 2006). They claim that this study provides insight into the dynamics, patterns of change, and variety of political activity on the internet. The authors set the case study in the context of reflexive politics, referring to the politicization of private worries and issue-specific questions, and also focus on the motivations of the protest from the perspective of political consumerism. Häyhtiö and Rinne explore how the repertoire and forms of citizen-oriented politics are transformed into individualized politics through a complex multi-spatial environment. The authors discuss the phenomenon of reflexive politics, explaining that reflexivity proposes active interaction between the individual and the surrounding world, and claiming that politicized issues are enhanced by personal interests and aims. They discuss reflexive politics emerging from an incident at the 2006 Eurovision Song Contest, where a Finnish hard-rock band, Lordi, performed in monster costumes and won the contest that year.
This incident triggered a debate since it was frowned upon by conservatives, who associated the band with sacrilege; but it also induced a sense of national cultural identity for the general population, elicited by the symbolic Finnish attire of the band. Häyhtiö and Rinne address the political consumerism that motivated the internet campaign by studying the political aftermath of the Lordi incident, as well as de-medialisation (the circulation of unfiltered and unedited communication) and self-made publicness as an arena of politics (347). The authors conclude that access to the internet makes it possible for any individual to participate in public discussions and shape their agenda in online forums. They also assert that self-made publicness emphasizes anti-hierarchical free spaces.

Howard, Agarwal, and Hussain study various cases during which governments have censored and interfered with internet networks. They claim that democratization movements preceded technologies like the mobile phone and the internet, but that these technologies have allowed individuals to build networks, create social capital, and organize political action. The authors suggest that digital media and online social networking applications have affected the organization of dissent around the world. They explain that authoritarian regimes were able to control broadcast media easily during political crises before the age of digital media; new media has complicated this task and occasionally forces regimes to disable their national information infrastructures. Howard, Agarwal, and Hussain claim that by collecting known incidents in which the state intervenes in information networks, one can map out the contours of crisis situations, political risks, and civic innovations for the purpose of understanding the relationship between state power and civil society. They conduct a comparative case analysis of instances in which regimes disconnected portions of their national digital infrastructures. In doing so, Howard, Agarwal, and Hussain define the range of situations in which states have hindered substantial segments of their national information infrastructure. They reveal that democracies, not just authoritarian regimes, also disconnect their communication networks. The authors claim that the internet is an information infrastructure largely independent of the state, making it an incubator for social movements. Howard, Agarwal, and Hussain discuss states' tactics, explaining that two themes govern state interference: protecting political leaders and state institutions, and preserving the public good. They conclude that information infrastructure is itself politics, as its disconnection creates stop-gap measures that reinforce public expectations of global connectivity.

Lam, Uduwage, Dong, Sen, Musicant, Terveen, and Riedl recognize the role of Wikipedia as a central public venue of knowledge in contemporary scholarship, populated by thousands of volunteer editors. However, an important imbalance in the structure of Wikipedia is that the number of male editors vastly outweighs that of female editors. Lam et al. explore possible explanations for such a radical imbalance, and whether it affects the types of topics that are covered more thoroughly in the online encyclopedia. Lam et al. apply quantitative statistical analysis to the English data available on Wikipedia. The results show that the gender gap is substantial and that it does indeed skew coverage quality on certain topics in Wikipedia, which affects its goal of being a high-quality, comprehensive, open encyclopedia.
Results also show that female editors are more likely to leave Wikipedia sooner than males; that the gender gap has not been closing over time; that articles with high female editor involvement are often more highly disputed; that female editors face more reversion than males; and that female editors have a higher chance of being indefinitely blocked. All of these findings point to a Wikipedia culture that is resistant to female participation. Lam et al. conclude that more research needs to be done and concrete steps taken to address this gender imbalance and its underlying reasons.

Losh argues that examining theories of hacking and hacktivism (hacking computer systems for political and social purposes), as well as the nonviolent political investment in digital tools, is becoming increasingly important. She supports this claim by pointing to the relationship with political protests in educational institutions and in the realms of coding and programming. Starting with the issue of digital dissent, Losh explains that not everyone using software for political dissent thinks that hacktivism and research go together naturally. The author uses the example of HyperCities to show how GIS-based digital humanities practices have adopted digital mapping technologies originally used in the human rights work of NGOs. She argues that this potentially positions digital humanities scholars as agents of change. Losh continues to discuss electronic civil disobedience, describing it as a form of political resistance in the field of digital humanities, one that is becoming more prominent in professional associations. She also addresses critical information studies, tackling digital ephemera, political coding, and performative hacking. When talking about hacking the academy, the author suggests that change in the university is necessary and that partnerships between social actors and political interests are encouraged. Losh asks whether or not hacktivism is relevant to the field and claims that the answer depends on the context. She concludes that systems may need to be broken in order to understand how the relationship between symbolic representation (humanists) and political representation (activists) is formed.

. "The Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with Propensity Score Estimation." PLOS One 8(6): n.p. DOI: https://doi.org/10.1371/journal.pone.0065782.

Mako Hill and Shaw demonstrate a novel approach to calculating the demographics of Wikipedia contributors. They design their calculation as a response to the 2008 Wikimedia Foundation and United Nations University at Maastricht (WMF/UNU-Merit) survey, which sought to calculate the demographics of contributors to Wikipedia and flagged the gender imbalance (less than 13% female contribution) that resulted in the Wikimedia Foundation's initiative to raise the number of female contributors to Wikipedia by 25%. According to the authors, this estimate is inaccurate since it fails to take into account the numerous complexities involved in calculating the number of contributors. This has resulted in a discrepancy in the actual number of women contributors to Wikipedia, which, according to Mako Hill and Shaw's calculations, is 26.8% higher than the WMF/UNU-Merit estimate, at a total of 16.1% instead of 12.7%. They also found that married people and parents are underestimated, and that students' and immigrants' contributions to Wikipedia have been overestimated. Although they acknowledge that there is no ultimate formula to calculate the exact ratio, they claim that the study described in this article takes a sounder approach than the original WMF/UNU-Merit study.
McDonald explores Anonymous, an international collaboration of hacktivists and activists, particularly its emergence and role in the campaign against Scientology (2008) and in the Occupy Wall Street movement (2011). According to McDonald, Anonymous emphasizes dimensions of digital culture, including the ephemeral, the grotesque, and memes. He argues that the analysis of contemporary conflicts and political mobilizations (such as the Arab Spring, Occupy Wall Street, and the Indignados movement in Spain) needs to encompass new forms of communication in the digital realm. McDonald looks beyond collective identity and networks to focus on movements, information, and communication. The author also addresses the legacy of Indymedia and its networking practices. In addition, he addresses the use of masking as a social practice, stating that it is done primarily to symbolize the transformation from one state to another, or to access a form of power, rather than merely as a means of concealing an identity. He explains that microblogs like 'I am the 99%' and Anonymous cannot be classified as networking practices, as they construct singularity through the relationship between what is visible and what is not. He concludes that this article is the beginning of an exploration of practices framed around masking, the ephemeral, contingency, creativity, temporality, and refusal of a fixed identity in the realm of collective action, power, and conflict in online spaces.

Merry analyzes the content of 200 environmental group websites in order to answer two questions: 'to what extent has the Internet disrupted patterns of resource accumulation and voice among interest groups?' and 'how do groups use the Internet to connect citizens to government, and to what extent do group characteristics explain those uses?' (110). Merry attempts to assess the effects of the internet on interest groups by reviewing the nature of interest group politics before the internet, suggesting that the internet disrupts the uneven distribution of resources and political influence. She explains that the internet allows groups to build stronger links between citizens and government by enhancing political participation. The author argues that the popularity of groups' websites is tied to their financial resources. Merry explains that there are two criteria for the inclusion of groups: that they work on national or regional level environmental issues and that they have websites. According to the study, political participation is encouraged by campaigns that include action alerts and requests for donations. She concludes that the internet has helped smaller and lesser-known organizations and suggests that groups use their websites for purposes of information dissemination and political participation.

Merry discusses the implications of social media in public policy and government issues. She approaches Twitter as a microblogging site and evaluates how environmental interest groups use it for purposes of public outreach. Merry focuses on features of Twitter and how they shape interest group advocacy, exploring the implications of the activities of Twitter groups and the role they play in defining problems, specifically those related to the government. She claims that the content analysis she conducts of communications about the oil spill in the Gulf of Mexico (2010) shows that Twitter was a quicker and more sustainable medium for interest groups during the disaster. In a section on cyberactivism and the policy process, the author explains that the internet is integral to advocacy strategies, as it is a low-cost medium for activists and a platform that supports information dissemination. She also addresses Twitter's unique features and implications for advocacy, highlighting its transformative potential for interest groups. Merry follows an event-centered approach to conduct the study at hand, focusing on environmental groups and their responses to the disaster. The results of the study consider the speed of communication, patterns of issue attention over time, policy-relevant content, hyperlinks, and hashtags. She concludes that the medium used by interest groups plays a role in framing events and speeding up dissemination.

Mihailidis studies the perceptions of young adults in terms of their social media habits and dispositions. He explores how young people's perception of social media influences the ways they use these tools to engage in daily social and civic life. He also studies how they use social networks to engage in conversations around public issues. Mihailidis asks, 'how do college students use social media for daily information and communication needs?' and 'how do college students perceive social media's role in daily life?' (1061). He conducts a study of 873 college students from nine universities by asking them to complete a 57-question survey. From these, the author selected a sample of 71 participants. The author discusses the findings in separate sections: peer content drives news consumption and political expression, extending relationships through public networks, social learning, and leisure. Several findings emerged, including resistance to using popular social networks in professional settings, which suggests that the tools in question are not considered suitable for serious matters. Furthermore, the article suggests that the tools are social amplifiers that lack real context for concrete civic uses. A third implication is that the tools integrate various forms of content, disrupt the typical information flow, and facilitate peer-to-peer communication.
Mihailidis concludes that social media enhances expression, sharing, creation, consumption, and collaboration, and that full recognition of the opportunities provided by social media is essential in order to preserve the value of social media tools in daily civic life.

. "Reconstructing the Internet: How Social Justice Activists Contest Technical Design in Cyberspace." Media Culture Journal 9(1): n.p. http://journal.media-culture.org.au/0603/10-milberry.php.
Milberry explores how activists have shaped the internet to fit technical needs and movement goals. She begins by exploring geeks and global justice, namely how tech activism joins the free software ethos and concerns for social justice, explaining that the novelty of tech activism is in the incorporation of democratic goals of the global justice movement (GJM) into the technology itself. Milberry elaborates on the concept of politicizing technology, arguing that tech activists in global justice return to computer technology development for their political action. She addresses movements such as Indymedia, Free Software, and Wild Wild Wikis. The author concludes that since the internet is socially constructed, users are able to contribute to its development by shaping its future direction, allowing it to bridge the gap between geek and activist communities, and supporting a digital infrastructure for progressive worldwide activism. Oostveen studies the exchange of emails between citizens and campaigners, specifically how such interactions can inform campaign tactics. She refers to Ennis and Schreuers's (1987) notion of weak supporters who claim to be neglected in the social movement literature. She studies whether individuals are engaged in political writing in the internet age by reviewing emails sent by citizens to the general email addresses of a campaign. She also investigates reasons for which citizens used emails as their means of communication with activists, particularly whether or not these emails influenced the tactics of the activists and contributed to the outcomes of the campaign. She introduces the Dutch campaign she is working with, 'Wij vertrouwen stemcomputers niet,' which translates to 'We do not trust voting computers.' The content of the emails studied varied between the following categories: proponents vs. opponents, complaints, off topic, information provision, volunteering, discussion and alternative solutions, and strategic input. 
She concludes that those who emailed received personal replies, and that receiving serious feedback is a positive experience that increases citizens' sense of political efficacy and commitment while also making the activists more aware of their own strategies.

Pickerill. "Symbolic Production, Representation, and Contested Identities: Anti-War Activism Online." Information, Communication & Society 12(7): 969-93. DOI: https://doi.org/10.1080/13691180802524469.
Pickerill explores the value of the symbolic dimension of collective action. She conducts three cumulative forms of analysis that aim to explain how the symbolic domain is used. The author explains the strategic choices behind this use and links these representational choices to the subjective experience of individuals and to their processes of political identity construction. This is done through five case studies of British anti-war and peace organizations: the Stop the War Coalition, Faslane 365, the Society of Friends, Justice Not Vengeance, and the Campaign for Nuclear Disarmament. She starts by studying online representations and collective identities, claiming that treating collective action as cognitive praxis is essential to understanding the operations and achievements of social movements. Pickerill then unpacks how groups represent themselves, focusing on multiple online interventions, the use of iconic and confirmatory material, and the use of representation as projection. She also explains representational strategies, including organizational principles and ideological frameworks; diversity and frame bridging; and information and communication technologies (ICTs) alongside the enduring importance of face-to-face communication. The last section investigates the changing politics of identity through the case of Muslim anti-war activists. The author concludes that groups have used ICTs in three common ways: in multifarious formats and interventions, in confirmatory ways, and as a means of symbolizing power and alliance with other groups.

Presner, Shepard, and Kawano. HyperCities: Thick Mapping in Digital Humanities. Cambridge: Harvard University Press. https://escholarship.org/uc/item/3mh5t455.
In this monograph, Presner, Shepard, and Kawano explore digital humanities mapping. They begin by emphasizing the multiplicity implied by the prefix 'hyper' and draw on the multiple spaces, media, records, and participants in hypertexts and HyperCities. They describe HyperCities as a series of evolving maps of real cities overlaid with thick information documenting each place's past, present, and future. The authors first introduce their audience to the theories and ideas of HyperCities, thick mapping, and digital humanities before turning to the specifics of the HyperCities project. Presner, Shepard, and Kawano highlight the collaborative authorship of the book and use different fonts to signal each of their authorial voices. They argue that by including multiple voices (their own and those of other project leaders) they are able to open up 'windows' into the HyperCities project as well as showcase the texture and variety of digital mapping projects more generally. Overall, Presner, Shepard, and Kawano make clear how digital mapping initiatives endeavour to recreate representations of place by bringing together the methods, content, and values of humanities research.

Ritzer and Jurgenson discuss the rise of prosumer capitalism. They define prosumption as a dual focus on production and consumption, rather than a privileging of one over the other. The authors present an overview of production, consumption, and prosumption, and investigate prosumer society and capitalism in the age of the prosumer. Ritzer and Jurgenson address 'the inability of capitalists to control contemporary prosumers and their greater resistance to the incursions of capitalism' (22). They explain that one 'cannot ignore the gains for individuals as reasons for the rise of prosumption' (25). The authors explore the possibility of a new economic form emerging through Web 2.0 and investigate elements of abundance and effectiveness.
They conclude with the notion that prosumer capitalist companies stand back from prosumers rather than attempt to control them.

Sandoval-Almazan and Gil-Garcia present a model for analyzing the use of social media tools in social and political activism and apply this model to three protests in Mexico. They also investigate the effects of these tools on political activism. The authors aim to understand the process and the various stages of development of the new form of activism known as Cyberactivism 2.0. Dividing their article into five sections, Sandoval-Almazan and Gil-Garcia present a literature review on social movements, cyberactivism, and internet technologies, and propose a stage-based model of cyberactivism and social media. They delineate their research design and methods, emphasizing that their work aims to analyze the evolution of activism using technology and to combine standard and online forms of data collection. The authors also trace the evolution of cyberactivism by studying three cases in Mexico covering Cyberactivism 1.0, Twitter activism, and Cyberactivism 2.0. When discussing the results and implications, Sandoval-Almazan and Gil-Garcia identify differences and similarities across the three cases. They conclude that the use of online tools in social protest has had a significant impact on promoting political activism, mobilizing certain portions of society, and enhancing the dissemination potential of activist causes.

Tambini examines the use of computer-mediated communication (CMC) by civic networks and how it encourages democratic citizenship by studying the general implications of new media for democratic communication. He argues that CMC has implications for the formation and organization of political identity, and that realizing CMC's democratic potential requires unrestricted access to digital media.
Tambini explores the civic networking movement and how it became a trend through the interplay of different strategies in political contexts; the author also explores various network designs. He explains that new civic networking occurs while institutions of democratic communication are at stake due to migration and multiculturalism. Tambini also claims that mass access and user-friendliness are new phenomena and constitute a turning point in public spheres. The author aims to understand the problem of regulating media in relation to the broader realm of social, political, and technological change. He also discusses computer-mediated communication and democracy and investigates ways of using CMC to rejuvenate active citizenship, including information provision and access to information; preference measurement (referenda, polls, and representation); deliberation; and will formation and organization. On issues in network design, he lists bias, regulation, and access. Tambini concludes that civic networking is still in its early stages and emphasizes the value of its future.

Van Aelst and Walgrave discuss the impact of the growth of the internet on political processes. They argue that information and communication technologies facilitate participation in politics, which is thus made easier, faster, and more universal. The authors focus on anti-globalization protests and the formation of new social movements affected by these new media. Van Aelst and Walgrave study three conditions that establish movement formation: a shared definition of the problem as a basis for collective identity, actual mobilization of participants, and a network of different organizations. They provide an overview of transnational protest actions against globalization and investigate the limitations and opportunities of the internet.
They select 17 websites for their data, choosing sites of organizations mentioned in news reports on major anti-globalization protests. The websites were divided into three groups: sites devoted to single events, social organizations or action groups fully or partly engaged in anti-globalization, and supportive organizations. The study asks whether the sites define anti-globalization in a similar manner, what mobilization functions they fulfill, and how the organizations are linked. They conclude that globalization is framed as an economic problem with negative consequences for human beings and the environment, that politically there is a lack of democratic legitimacy, and that the websites analyzed hyperlink to each other, creating a network of related organizations.

Van Laer and Van Aelst explain how the internet contributes to shaping the collective action repertoire of social movements involved in social and political change. They explain that the internet allows new forms of online protest activities and enhances offline forms of social movements. The authors define social movements as networks of informal interaction engaged in causes based on shared collective identity, and their action repertoire as the means available for collective use to make claims on individuals and groups. They focus particularly on unorthodox and unconventional political behavior. Discussing a typology of the new digitalized action repertoire, Van Laer and Van Aelst study the actual possibilities the internet provides. They present four quadrants of the digital action repertoire: internet-supported action with a low threshold, internet-supported action with a high threshold, internet-based action with a low threshold, and internet-based action with a high threshold.
The authors explore the limitations of the internet and the action repertoire of social movements, arguing that digital media are fundamentally unable to create stable ties between activists to help maintain collective action. They conclude that the internet has changed the action repertoire of social movements by allowing existing action forms to reach more people in faster and easier ways, and by creating new tools for activism.

Groups/Initiatives/Organizations Discussing Open Social Scholarship
This category presents a list of groups, initiatives, and organizations engaging with some aspect of open social scholarship. Advocacy for open access to information is the most dominant trend among the groups listed. In general, the organizations agree that publicly funded research should be accessible to the wider public and not siloed behind institutional paywalls. Core values of education, collaboration, and the human right to access information regardless of geographical or cultural factors are echoed among many of the initiatives presented in this list. Several of these groups work within a specific geographical region and tailor their outreach to the needs of that region (African Commons Project, Alliance of German Science Organisations, FinnOA). Other organizations operate on an international level by appealing to general principles of openness and fairness (IFLA Open Access Taskforce, Max Planck Society, Open Humanities Alliance). Many of the groups cite the Budapest Open Access Initiative and the Berlin Declaration as foundational to their mission (Federation for the Humanities and Social Sciences, Open Access Working Group; Canadian Association of Research Libraries). This category brings together groups that advocate for open, collaborative modes of knowledge production and dissemination.
Michael W. Carroll, Heather Joseph, Mike Rossner, and John Wilbanks lead Access2Research, an open access campaign that promotes reform in academic journal publishing. Access2Research is committed to making publicly funded research open access. Significantly, in 2012, the working group launched a petition demanding that the United States government require all journal articles arising from taxpayer-funded research be made openly available.
The African Copyright & Access to Knowledge Project, active from 2007 to 2011, was committed to investigating the relationship between African national copyright environments and access to learning materials. The African Copyright & Access to Knowledge Project probed this relationship within the context of A2K: a framework that protects user access to knowledge. The project conducted five environmental scans of copyright contexts across African nations and used the collected data to draft country reports, a comparative review, and executive policy briefs.
The Alliance for Taxpayer Access is a collective of researchers, practitioners, educators, publishers, and institutions that support barrier-free access to taxpayer-funded research. The Alliance is committed to four principles of open access: taxpayers are entitled to barrier-free access of research they fund, widespread access to published information is foundational to a country's investment in research, information should be shared in cost-effective ways in order to stimulate engagement, and facilitating access will result in information being used by individual taxpayers. The Alliance for Taxpayer Access is directed by the Scholarly Publishing & Academic Resources Coalition.
The Alliance of German Science Organisations is a coalition of Germany's top research organizations. The Alliance is committed to developing and implementing new research policies and addressing the challenges young researchers are facing. Members of the Alliance include the Alexander von Humboldt Foundation, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), the Fraunhofer-Gesellschaft, the German Academic Exchange Service, the German Council of Science and Humanities (Wissenschaftsrat), the German National Academy of Sciences Leopoldina, the German Rectors' Conference, the Helmholtz Association of German Research Centres, the Leibniz Association, and the Max Planck Society.
Founded in 2007, the American Academic & Scholarly Research Center promotes academic research activities and the sustained development of global resources, especially in developing nations. It is committed to knowledge innovation, dissemination, and collaboration. To support its mandate of encouraging practical, interdisciplinary research, the American Academic & Scholarly Research Center launched its first journal in 2009 and recently started a second, multilingual journal. The Center hosts annual international conferences in order to unite like-minded individuals around the aims of the organization.
The Association for Computers and the Humanities (ACH) is a digital humanities society. The organization is committed to cultivating professional communities and disseminating digital humanities research. The Association is based in the United States but hosts conferences around the world and boasts an international membership. ACH, along with the Alliance of Digital Humanities Organizations, publishes three peer-reviewed journals, one of which is open access.
The Australasian Open Access Strategy Group is committed to four main principles: advocating, collaborating, raising awareness, and building capacity. The organization supports open access outcomes for publicly funded research in Australia and New Zealand. By collaborating with researchers and other organizations, the Australasian Open Access Strategy Group raises awareness of, and support for, open access initiatives.
The Authors Alliance supports authors who want to engage with their communities through the broad reading of their work. The Alliance helps authors harness the power of digital networks for distributing knowledge. The Authors Alliance assists authors in navigating the complexities of print, copyright, and digitization in hopes of maximizing public access and supporting fair use. The organization bridges the gap between the author and the public in order to disseminate knowledge broadly.

The Harvard Open Access Project (HOAP) is committed to opening access to research both within the university and beyond it. By using a combination of consultation, collaboration, and community building, HOAP aims to make knowledge accessible and reusable, maximizing the return on society's investment in innovative research. The project developed the Open Access Tracking Project, which uses folksonomy tagging to provide real-time updates on open access and related news. HOAP was launched in 2011 and is supported by the Berkman Center for Internet & Society.
Bioline International is a not-for-profit publishing group committed to facilitating open access for residents of developing countries. With the goal of reducing the South-to-North knowledge gap, Bioline International provides a distribution platform through which peer-reviewed journals disseminate information on topics such as biodiversity, conservation, health, and international development. Bioline makes it possible for research coming out of developing nations to have a place on the global stage. Some of the journals in the Bioline cooperative include Zoological Research, African Population Studies, Rwanda Medical Journal, and the Journal of Health, Population, and Nutrition.
The Canadian Association of Research Libraries (CARL) is a federation of 29 of the country's university libraries and two of Canada's national institutions. Members of the CARL community work together to improve access to knowledge; support students, faculty, and researchers; promote sustainable and effective communication; and share best practices. It is CARL's mission to support knowledge creation, dissemination, preservation, and public policy in order to enable broad access to scholarly information.
Founded in 2013, the Center for Open Science is a nonprofit technology company that provides free and open services in order to increase information inclusivity and transparency while also working to align more closely with the values of scientific research. The Center for Open Science operates on three mission components that guide its development of sound scientific research: openness, integrity, and reproducibility. The Center works with scientists, developers, research institutions, and publishers to build a community and infrastructure that fosters this type of open science.
The Center for the Study of the Public Domain is located at Duke University Law School. While much of society's contemporary attention, resources, and care has gone into protecting exclusive intellectual property rights, the Center for the Study of the Public Domain is devoted to restoring the economic, cultural, and technological balance by focusing on materials in the public domain.

Compute Canada. 2014. "Sustainable Planning for Advanced Research Computing (SPARC)." Compute Canada. https://www.computecanada.ca/news/compute-canada-announces-sustainable-planning-for-advanced-research-computing-sparc/. Archived at https://perma.cc/EU2W-VJYR.
The Canada Foundation for Innovation is renewing Compute Canada's national platform and positioning it as a funding body for domain-specific data projects, with a budget for cyberinfrastructure initiatives that is meant to significantly benefit Canada's research community. In preparation, Compute Canada will bring together researchers and institutions to develop Sustainable Planning for Advanced Research Computing (SPARC), a discussion forum meant to address the kinds of investments that will position Canada as a leader in science and innovation, especially in sectors that rely heavily on digital infrastructure. The article addresses the various roles of SPARC, which focus on providing the support necessary for the growth of digital infrastructure and the infrastructure necessary for Compute Canada's upcoming service offering. Compute Canada also outlines the steps it will take to assemble the appropriate input for SPARC over the summer of 2014.
The Confederation of Open Access Repositories (COAR) is a not-for-profit organization based in Gottingen, Germany. Founded in 2010, COAR is a global confederation of over 100 libraries, universities, research institutions, government funders, and various other partners. COAR joins together major research networks and the broader repository community in order to build capacity, policy, and practices to support global open access. Their mission is to create an international knowledge commons that enhances accessibility to and visibility of information.
The European University Association (EUA) is one of the largest and most comprehensive university collectives. The EUA has more than 850 members across 47 countries, representing over 17 million students. It is the vision of the EUA to advance the continued development of culture, society, technology, and economy in Europe. The EUA has a working group devoted to the study of open access policy.

Federation for the Humanities and Social Sciences. 2018b. "Our Members." Federation for the Humanities and Social Sciences/Fédération des sciences humaines. http://www.ideas-idees.ca/about/members. Archived at https://perma.cc/ZAS4-JRAF.
The Federation for the Humanities and Social Sciences is dedicated to the promotion of research and teaching in order to advance towards a more inclusive and democratic society; this is done by supporting research and discussions that deal with critical issues within public and academic contexts. The Federation has a vast networked membership, with over 160 universities, colleges, and scholarly associations, altogether representing over 91,000 voices of various humanities and social sciences specialists. The three membership types at the Federation are scholarly association members (graduate students and researchers selected for excellent research and leadership qualities); institutional members (universities and colleges); and affiliate members (organizations that have similar agendas to the Federation and are dedicated to enhancing postsecondary education and research).
Founded in 2003, FinnOA is a collective that supports and promotes open access to scientific research. From its inception, FinnOA has been committed to a variety of open publication and dissemination platforms that value transparency in scientific publishing. Now, partnered with individuals in academia, libraries, and data management, FinnOA is focused on resolving issues related to open access and publicly funded research data.

Force11. "About Force11." https://www.force11.org/about. Archived at https://perma.cc/W4E3-XS2L.
Force11 is a cooperative of scholars, librarians, archivists, publishers, and research funding bodies that have joined together to help facilitate a better means of creating and sharing knowledge. Force11 has grown from a small community of like-minded individuals into a bold and diverse working group in support of open access. The collective leverages information technologies and multimedia in order to reach and educate the greater community. Force11 welcomes new members who value and support their manifesto in favour of openness.
The mission of the Foundation for Open Access Statistics (FOAS) is to support free software, open access publishing platforms, and reproducible research in statistics. FOAS encourages researchers to make information publicly available online and accessible to members of the academic community at large. The open access journal FOAS supports is one of the few that is free for both authors and readers. FOAS advocates for open source code, and for the materials and information necessary to reproduce mathematical results.
The Free Knowledge Institute (FKI) is a collective of networks and communities that support, facilitate, and enable the study, sharing, and collaborative development of free knowledge and free technologies. FKI supports a just, free knowledge society through sustainable collaboration and empowerment. FKI's mission is to educate people about open access and open source ideologies so that they can become effective participants and advocates in their own domain. The tenets of their collective include flexibility, collaboration, shared purpose, shared values, and communication.
The Minister of Industry presents Digital Canada 150 2.0, a plan that aims to equip all Canadian citizens with the digital skills and tools they need to succeed in today's world, to help individuals and communities seize opportunities in the global digital economy, and to connect and protect Canadians online. The five pillars of Digital Canada 150 are connecting Canadians, protecting Canadians, economic opportunities, digital government, and Canadian content. The plan's website addresses each of these pillars individually, defines what they mean, and tracks recent accomplishments in each category by posting updates, policies, success stories, and other information.

International Community for Open Research and Education. 2016. "Home." http://www.icore-online.org. Archived at https://perma.cc/ATY7-5U2G.
The International Community for Open Research and Education (ICORE) aims to support, promote, and enhance open access to research and education worldwide. The overall mission of the community is to re-establish the openness that was the default in scholarship from the inception of journal publishing until as recently as a few decades ago. ICORE's five main objectives are to promote open access as a fundamental social objective; to support the implementation of strategies and services for facilitating open access; to foster cooperation between policy makers, researchers, educators, and students; to facilitate the transfer of current research into the deployment of future research; and to encourage innovative research that benefits the other objectives of the association. ICORE currently supports two active working groups and hosts a workshop series entitled 'Openness for All.'

Knowledge Exchange. n.d. "About Us." http://www.knowledge-exchange.info/about-us. Archived at https://perma.cc/FQ96-54UX.
Knowledge Exchange understands that digital technologies open opportunities for advanced research and higher education. It is the vision of the organization that open scholarship is acknowledged and taken up as one of these opportunities. Knowledge Exchange argues that opening up access to scientific research and encouraging collaboration will improve transparency, engender trust, increase effective use of data, and support wider participation in research. The group's mission is to support their five partner organizations on the road to achieving a shared vision of open scholarship.
The '"Think Piece" on a DI Roadmap' investigates important steps towards 'a robust and sustainable digital infrastructure for research in Canada' (1). The authors address challenges such as governance and coordination, the policy and planning framework, and data management within the DI roadmap framework, and they propose methods to expand the roles and responsibilities of organizational structures. The first phase of action consists of developing a collaborative national coalition for going forward, implementing priority working groups, pursuing refinements to the DI funding system, giving priority to the data management pillar of the DI ecosystem, and articulating a value proposition. In the second phase, the authors propose engaging the government and the private sector, developing expertise and capacity, developing middleware and software, and establishing more structured communications and engagement with research communities and institutions. The authors conclude that there is an ongoing need for 'engagement with individuals within and external to this network [as it is] critical to communicating and realizing the vision for the sector' (17).

Open Knowledge International is a global, nonprofit network that champions openness through advocacy, technology, and training. Its mission is for everyone to have access to key information and the ability to understand and use it to shape their lives. The organization aims for open knowledge to be a foundational concept and for knowledge to create power for the many, not the few. Open Knowledge International works to achieve these aims by developing an international network of individuals, opening up information, and providing stewardship and consulting services.
The Open Scholarship Initiative is a global cooperative established with the goal of creating a space for dialogue about, and unification under, open access practices. It unites a group of high-level, international, scholarly publishing decision makers in a series of annual meetings where ideas can be shared and common, actionable solutions can be established. Their aim is to improve the scholarly publishing system over the course of their 10-year effort (2016-2025).
OpenAIRE envisions itself as a bridge between research stakeholders and the world of scholarly publication. It was created to support the implementation of open access policies through an aggregated repository of open resources. Currently, OpenAIRE is working with 50 partners, from the European Union and beyond, on the OpenAIRE 2020 project. This 3.5-year research and innovation project aims to promote open scholarship and to improve the accessibility, discoverability, and reusability of research publications. The initiative brings together research professionals, organizations, libraries, and data experts in a truly collaborative partnership.

Organisation for Economic Cooperation and Development. 2015. "Making Open Science a Reality." OECD Science, Technology and Industry Policy Papers 25.
The Organisation for Economic Cooperation and Development states that open science represents an effort to make publicly funded research accessible in digital format, and it provides a rationale for open science. The authors discuss key actors in open science, including researchers, government ministries, research funding agencies, universities and public research institutes, libraries, repositories, data centres, private nonprofit organizations and foundations, private scientific publishers, and businesses. They also examine policy trends in open science, which can take the form of mandatory rules, incentives, or funding. Their main findings approach open science as a means and not an end. The authors also explore open access to scientific publications and define open access in an exploratory manner, looking at it from various perspectives with an interest in its legal implications.
The Public Knowledge Project was established in 1998 by John Willinsky at the University of British Columbia. Since its inception, the project has expanded and evolved to include multiple universities in North America, though it is located primarily at Simon Fraser University. The Public Knowledge Project creates open source software, such as Open Journal Systems, Open Monograph Press, and Open Conference Systems. Additionally, it conducts research in order to improve scholarly publishing. The core team is made up of approximately twenty developers, researchers, students, librarians, and staff.

Ridley, Appavoo, and Pagotto present the Integrated Digital Scholarship Ecosystem (IDSE) project of the Canadian Research Knowledge Network (CRKN). CRKN is a partnership of 75 Canadian universities working toward increasing digital content for research in Canada, which has a significant impact on Canadian research and academic libraries. The article relates the findings of a study on digital scholarship within these institutions, focusing specifically on the first phase of the IDSE, an initiative dedicated to advancing research in Canada by exploring the state of the digital landscape. Important issues addressed in phase one include the role of the library, preservation, the development of a research agenda, promotion and tenure, and ways to sustain the voice of a community. IDSE, according to the authors, is an ecosystem that helps identify the areas of digital scholarship that need to be addressed.
The Right to Research Coalition was founded in 2009 to promote open scholarly publishing. The group holds that no student should be denied access to the information they need for study simply because their institution cannot afford high subscription fees. They argue that open information improves education, democratizes access, increases the impact of scholarship, and advances research. The Right to Research Coalition now represents almost seven million students across the world, and it works to educate about and advocate for the universal adoption of open access policies.
The Scholarly Publishing and Academic Resources Coalition (SPARC) aims to democratize access to knowledge by opening up the dissemination of research outputs and educational materials. SPARC works with interested parties to create opportunities to promote change, both in infrastructure and culture, and to make 'open' the default for research and education. SPARC has over 200 members across North America and is closely affiliated with other international open access organizations. The group values collective action and collaboration. Some of their current initiatives include organizing International Open Access Week, creating campus open access policies, and recognizing work in the field with the SPARC Innovator Award.
The Social Sciences and Humanities Research Council (SSHRC) represents the interests of both the public and private sectors of academia and is a major facilitator of research through grants and fellowships awarded to Canadian researchers. SSHRC has a wide network of faculty; postdoctoral, doctoral, and master's students; and other researchers and research partnerships. 'Dialogue' is the SSHRC eNewsletter, which publishes news related to academia and funding, as well as announcements of various opportunities and deadlines. 'Dialogue' also features original research conducted in Canada and around the world.
The University of Cambridge hosts an open access repository of scholarly research outputs, ranging from datasets and media collections to articles and conference presentations. The university's aim is for this archive to help make scholarship widely accessible. The repository is supported by an Open Access team and an Open Data team, both of which assist in making research widely and freely available.
Wikipedia is a multilingual, online, open access encyclopedia. The foundational principle of Wikipedia is that the encyclopedia is developed from a neutral point of view and that anyone can use, edit, and distribute it. Wikipedia is written collaboratively, mostly by anonymous volunteers. Since its inception in 2001, Wikipedia has grown into one of the most widely referenced websites, with more than 38 million articles and 374 million unique visitors each month.