Funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) in 2016, Project Endings anticipates the future endangerment of knowledge in the digital humanities. Given that the fluid nature of the digital does not lend itself well to end points, it is not surprising that large numbers of DH projects are simply abandoned for lack of planning and, often, lack of resources. Moreover, even completed and archived projects may not be preserved in such a way as to guarantee future access to their content. Project Endings seeks to mitigate this danger by proposing concrete solutions that can be put into practice now. We are contributing to the scholarly conversation about digital project planning and preservation through conference presentations and publications while simultaneously preparing ‘Endings and Archiving Tool Kits’ that will be made widely available to the DH community.

Our research began with lessons learned from four case studies at the University of Victoria: the Robert Graves Diary Project, created by Elizabeth Grove-White and the University of Victoria Libraries; the Nxa’amxčín Database and Dictionary, directed by Ewa Czaykowska-Higgins; Le mariage sous l’Ancien Régime, led by Claire Carlin; and the Map of Early Modern London, directed by Janelle Jenstad. Three developer-programmers—Martin Holmes, Stewart Arneil, and Greg Newton—and three librarians—Lisa Goddard, John Durno, and Matt Huculak—complete the Project Endings team. We have also performed a thorough review of publications on ending and archiving, and we are studying current practice through a multiple-choice online survey of DH practitioners (see Figures 1 and 2 for responses to some of our questions), followed by in-depth interviews with a subset of the respondents. The survey and ongoing interviews illustrate several major challenges to ending and archiving a DH project, and they suggest the urgent need for new strategies.

Figure 1 
Figure 2 

We are already acting on an innovative preservation strategy that began with the four case studies and is now being used for several other DH projects based at the University of Victoria. Besides archiving in multiple repositories, long-term curation requires preservation not only of static data such as XML files but also of the experience of using a given web application and its associated tools. Our programmers are rewriting current web applications using only technologies with a reliable long-term future. Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript (JS) are all managed by well-funded standards bodies, have a history of slow but steady evolution and standardization, and enjoy universal support across browsers and computing platforms. Together, HTML5, CSS, and JS constitute the most robust and successful combination of web technologies yet, forming the basis not only for websites but also for standalone applications on portable devices (as part of the ‘Open Web Platform’) and for the user interfaces of most cloud-based services. From our four case-study projects, we know that, for digital editions and text collections, almost all of the current functionality that depends on server-side technologies such as XML databases can be reproduced using pure HTML5, CSS, and JavaScript. Such an implementation is not a practical approach for a project still underway, in which data changes frequently, because it involves the generation of large numbers of static files with substantial duplication of material. But once a project is over and the data is ‘frozen,’ it makes perfect sense to ‘static-ize’ it. We have chosen two approaches to generating static versions of each site:

  1. Writing code modules to generate all the output pages needed, similar to our current practice on dynamic sites; and
  2. Processing a version of the actual existing site, using a web crawler to create a static version.

We are also investigating solutions for the loss of search functionality, which is typically dependent on server-side indexing and processing. For relatively small sites with highly structured text, we are creating a JavaScript-based client-side search engine, divorced from the live web and based on pre-computed indexes. The FullProof project provides a working example of this approach. For larger sites, we will investigate and test methods of configuring our static web content, leveraging all available encoded metadata so that it will be optimally indexed by present and future web search engines, on the assumption that, in addition to HTML, CSS, and JS, search engines of some kind will remain an intrinsic part of the web. We will develop examples of search interface pages preconfigured to work with Google and other search tools, demonstrating how key document features (document type, metadata, etc.) can be discovered by the search engine and used in conjunction with ordinary text search.
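The pre-computed, client-side indexing described above can be sketched as follows; this is an illustrative example only, not FullProof's actual API, and the document identifiers and texts are invented. At ‘freeze’ time, an inverted index mapping each token to the documents containing it is generated and shipped as a static JSON file; in the browser, a search is then a simple lookup with no server involved:

```javascript
// Sketch of a pre-computed, client-side search index.
// Document ids and contents are hypothetical examples.
const documents = {
  doc1: 'The marriage contract under the Ancien Regime',
  doc2: 'A map of early modern London streets',
};

// Build-time step: tokenize each document into an inverted index
// (token -> list of document ids), suitable for serializing as static JSON.
function buildIndex(docs) {
  const index = {};
  for (const [id, text] of Object.entries(docs)) {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      (index[token] = index[token] || new Set()).add(id);
    }
  }
  // Convert Sets to arrays so the index can be written out as JSON.
  return Object.fromEntries(
    Object.entries(index).map(([t, ids]) => [t, [...ids]])
  );
}

// Client-side step: look up each query term and intersect the results,
// so only documents matching every term are returned.
function search(index, query) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (terms.length === 0) return [];
  return terms
    .map((t) => index[t] || [])
    .reduce((acc, ids) => acc.filter((id) => ids.includes(id)));
}

const index = buildIndex(documents);
console.log(search(index, 'early London')); // documents containing both terms
```

Because the index is computed once, when the data is frozen, the browser needs only to fetch a static JSON file and run plain JavaScript over it, keeping search fully functional with no server-side processing.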

We are committed to disseminating our Endings and Archiving Tool Kits widely by the spring of 2020. We are also developing a ‘preservation seal of approval’: projects beyond our case studies can be deemed ‘Endings compliant’ if they meet the basic standards described here, and five additional projects housed at the University of Victoria have already satisfied these requirements. An outline of our ongoing work is available on a University of Victoria Online Academic Community WordPress site: