---------- Forwarded message ----------
From: Rasa Bočytė <rboc...@beeldengeluid.nl>
Date: Fri, Apr 6, 2018 at 9:11 AM
Subject: Re: [Reprozip-users] Web Archiving
To: vicky.stee...@nyu.edu

Dear Vicky,

Thank you for your response! I have been searching in vain for scalable
approaches to web preservation for the last couple of months, so I am very
excited to have finally found your project, which deals with similar issues
and describes exactly what we need for our archiving purposes.
Automatically packaging and describing environments and dependencies is the
biggest challenge for us, and the ReproZip approach would be really useful.

If I understand correctly, ReproZip can describe the environments that are
necessary to run a particular piece of software or a web application used
to create a dynamic website (Nikola in the case of your website, or Django
with Stacked Up). Would it work if there is no such software? With the
websites that I am working with, we do have access to the web servers. If
the content sits in a MySQL database, I guess you would be able to package
the environment with ReproZip, but could it capture the environment if the
content sits on the server simply as files and folders, with no software
involved?

Another question I have relates to the packaging process. How does it work
for dynamic websites? I had a look at the video that you mentioned in your
email, and what I would like to know is how ReproZip would deal with
content that is generated on the fly through user interaction. Is it
similar to Webrecorder <https://webrecorder.io/>, in that it only records
the dependencies and transactions that the user clicks on, or could it
automatically capture the whole environment without user interaction?

I am sorry to bother you with all these questions. I come from a
non-technical background, so it is difficult to understand all the
technical intricacies. But I would love to test your tools as part of my
research, and even if I do not manage to do that, I will definitely mention
ReproZip as a very promising approach for web preservation.


On 5 April 2018 at 22:35, Vicky Steeves <vicky.stee...@nyu.edu> wrote:

> Hello Rasa,
> As the resident librarian on the team, I am really happy to see this email
> on the ReproZip users list!
> We are mainly exploring the possibilities of packing with dynamic sites,
> but within the domain of data journalism. Once we have worked with those
> use cases, we can certainly go beyond to other dynamic sites. Data
> journalism is a good place to start because of the nature of the work and
> how data journalism applications are served to the web --- lots of
> containers, databases, interactive websites, etc. In order to pack anything
> with ReproZip, we (or anyone using ReproZip!) need access to the original
> environment and source files. We basically need access to the server, and
> then we can pack the dynamic site. We recorded the process of packing a
> dynamic website and put it on YouTube, which might be helpful:
> https://www.youtube.com/watch?v=SoE2nEJWylw&list=PLjgZ3v4gFxpXdPRBaFTh42w3HRMmX2WfD
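
For readers following along, the capture-and-replay cycle shown in the video
looks roughly like this (a sketch based on the ReproZip documentation; the
server command and the package name `mysite` are placeholders):

```shell
# Run the web server under ReproZip's tracer on the original machine;
# browse the site while it runs, then stop the server so the trace is saved.
reprozip trace ./manage.py runserver

# Bundle the traced files, system packages, and metadata into one archive.
reprozip pack mysite.rpz

# On any other machine, unpack and replay, e.g. with the Docker unpacker.
reprounzip docker setup mysite.rpz mysite/
reprounzip docker run mysite/
```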
> ReproZip automatically captures technical and administrative metadata. You
> can view the technical and administrative metadata collected in this sample
> config.yml file: https://gitlab.com/snippets/1686638. The config.yml has
> all the metadata from the .rpz package. That particular yml file I just
> linked comes from an experiment of mine: packing a website made with
> Nikola, a static site generator, with the site served to Firefox. This is
> the same Nikola website, served to Google Chrome:
> https://gitlab.com/snippets/1686640. The config.yml file is human
> readable, but very long (lots of dependencies!). We still need to get the
> descriptive metadata from the users, though.
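
To give a sense of what that captured metadata looks like, here is a heavily
abbreviated sketch of a config.yml (the field names follow the linked
examples; all the values below are illustrative, not from a real trace):

```yaml
# Trimmed illustration of a ReproZip config.yml
runs:
  - architecture: x86_64
    distribution: [ubuntu, '16.04']
    hostname: my-server
    argv: [nikola, serve]
    environ: {LANG: en_US.UTF-8, PATH: /usr/local/bin:/usr/bin:/bin}
    workingdir: /home/vicky/site
packages:
  - name: python3.5
    version: 3.5.1-3
    packfiles: true
    files:
      - /usr/bin/python3.5
other_files:
  - /home/vicky/site/conf.py
```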
> ReproZip can visualize the provenance of the processes, dependencies, and
> input/output files via a graph. The documentation and examples of those can
> be found here: https://docs.reprozip.org/en/1.0.x/graph.html. We are in
> the process of integrating a patch for transforming this static graph into
> an interactive visualization using D3.
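
For reference, that graph is generated from the .rpz file with the
`reprounzip graph` subcommand, along these lines (`mysite.rpz` is a
placeholder; rendering assumes Graphviz is installed):

```shell
# Emit the provenance graph in Graphviz DOT format from the package.
reprounzip graph graph.dot mysite.rpz

# Render the DOT file, e.g. to PNG, with Graphviz.
dot -Tpng graph.dot -o graph.png
```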
> As for Docker, I too do not trust it. However, ReproZip simply *uses*
> Docker, but does not rely on it. ReproZip works on a plugin model -- so the
> .rpz file is generalized and can be used by many virtualization and
> container tools. We are in the process of adding an unpacker for
> Singularity, for example. If Docker goes out of business or ceases to exist
> tomorrow, we can still unpack and reuse the contents of .rpz files. We
> actually wrote a paper about how ReproZip could be used for digital
> preservation, available open access here: 
> https://osf.io/preprints/lissa/5tm8d/
> In regards to emulation vs. migration, this is a larger conversation in
> the digital preservation community as you probably know. There is a benefit
> to serving a website (or any digital object really) in the original
> environment, for serving it up to users later on. We've seen this in the
> art conservation world, where time-based media preservation has forced
> archivists to engage with emulation more seriously. Yale University offers
> emulation as a service, where they provide access to various old operating
> systems and software to allow folks to interact with digital objects in
> their original environment. The video game community also has many
> discussions about this, with emulators really rising out of that community.
> A high-quality migration can be as effective as emulation at preserving the
> original look and feel of complex digital objects -- but whether that
> approach scales, I'm not sure.
> Cheers,
> Vicky
> Vicky Steeves
> Research Data Management and Reproducibility Librarian
> Phone: 1-212-992-6269
> ORCID: orcid.org/0000-0003-4298-168X/
> vickysteeves.com | @VickySteeves <https://twitter.com/VickySteeves>
> NYU Libraries Data Services | NYU Center for Data Science
> On Wed, Apr 4, 2018 at 3:28 PM, Rémi Rampin <remi.ram...@nyu.edu> wrote:
>> ---------- Forwarded message ----------
>> From: Rasa Bočytė <rboc...@beeldengeluid.nl>
>> Date: Wed, Apr 4, 2018 at 8:03 AM
>> Subject: Re: [Reprozip-users] Web Archiving
>> To: Rémi Rampin <remi.ram...@nyu.edu>
>> Dear Remi,
>> thank you for your response! It is good to hear that other people are
>> working on similar issues as well!
>> Could you tell me a bit more about your work on packaging
>> dynamic websites? Are you working on specific cases or just exploring the
>> possibilities? I would be very interested to hear how you approach this.
>> In our case, we are getting all the source files directly from content
>> creators and we are looking for a way to record and store all the
>> technical, administrative and descriptive metadata, and visualise
>> dependencies on software/hardware/file formats, etc. (similar to what
>> Binder does). We would try to get as much information as possible from the
>> creators (probably via a questionnaire) about all the technical details as
>> well as the creative processes, and preferably record it in a machine- and
>> human-readable format (XML) or a README file.
>> At the end of the day, we want to see whether we can come up with some
>> kind of scalable strategy for website preservation or whether this would
>> have to be done very much on a case-by-case basis. We have been mostly
>> considering migration as it is a more scalable approach and less
>> technically demanding. Do you find that virtualisation is a better strategy
>> for website preservation? Within the archival community, at least, we have
>> heard some reservations about using Docker since it is not considered a
>> stable platform.
>> Kind regards,
>> Rasa
>> On 3 April 2018 at 19:11, Rémi Rampin <remi.ram...@nyu.edu> wrote:
>>> 2018-03-30 08:59 EDT, Rasa Bočytė <rboc...@beeldengeluid.nl>:
>>>> What I am most interested in is to know whether ReproZip could be used
>>>> to:
>>>> - document the original environment from which the files were acquired
>>>> (web server, hardware, software)
>>>> - record extra technical details and instructions that could be added
>>>> manually
>>>> - maintain dependencies between files and folders
>>>> - capture metadata
>>> Hello Rasa,
>>> Thanks for your note! We have been looking specifically at packing
>>> dynamic websites, so your email is timely. In short, ReproZip can do the
>>> bullet points you’ve outlined in your email.
>>> ReproZip stores the kernel version and distribution information
>>> (provided by Python’s platform module
>>> <https://docs.python.org/3/library/platform.html>). That is usually
>>> enough to get a virtual machine (or base Docker image) to run the software.
>>> It also stores the version numbers of all packages from which files are
>>> used (on deb-based and rpm-based distributions for now).
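
As a concrete illustration, the kind of environment information the platform
module exposes can be seen with a few lines of Python (this just shows the
module's output on the current machine; it is not ReproZip's actual capture
code):

```python
import platform

# Kernel/OS identification of the machine being traced.
print(platform.system())    # e.g. 'Linux'
print(platform.release())   # kernel version, e.g. '4.4.0-116-generic'
print(platform.machine())   # architecture, e.g. 'x86_64'
print(platform.platform())  # one combined identification string
```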
>>> For instructions, ReproZip stores the command-lines used to run the
>>> software that is getting packaged. It can also record which files are
>>> “input” or “output” of the process, which is useful when reproducing, to
>>> allow the user to replace inputs and look at outputs easily. We try to
>>> infer this information, but we rely on the user to check this, and provide
>>> descriptive names for them (however this means editing the YAML
>>> configuration file, so our experience is that not everyone takes the time).
>>> They can also edit the command-lines/environment variables.
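
For the curious, those input/output annotations live in a section of the
config.yml along these lines (a sketch; the name and path here are
illustrative):

```yaml
inputs_outputs:
  - name: database          # descriptive name supplied by the user
    path: /srv/site/db.sqlite3
    read_by_runs: [0]       # runs that read this file (treated as an input)
    written_by_runs: [0]    # runs that write it (treated as an output)
```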
>>> I am not entirely sure what you mean by “dependencies between files and
>>> folders”. The path to files in the original environment is stored, so the
>>> reproduction can happen with the exact same setup.
>>> As for metadata, ReproZip can automatically capture technical and
>>> administrative metadata in the config.yml file included in the ReproZip
>>> package. The user (archivist, in your case!) will need to add descriptive
>>> metadata to the package in something like a README, a finding aid, or by
>>> archiving the ReproZip package in a repository and filling in the required
>>> metadata form.
>>> We have a few examples of ReproZip packing and preserving websites. We
>>> packed a general blog made with Django (a small web application using a
>>> SQLite3 database) -- you can view more information about that website and
>>> the instructions for unpacking it here
>>> <https://github.com/ViDA-NYU/reprozip-examples/tree/master/django-blog>.
>>> We’ve also packed a data journalism app called Stacked Up, a dynamic web
>>> application to explore the textbook inventory of Philadelphia public
>>> schools (data is stored in a PostgreSQL database) -- you can view more
>>> information about that website and the instructions for unpacking it
>>> here
>>> <https://github.com/ViDA-NYU/reprozip-examples/tree/master/stacked-up>.
>>> You can view all of our examples in our examples site
>>> <https://examples.reprozip.org/>.
>>> Best regards,
>>> --
>>> Rémi Rampin
>>> ReproZip Developer
>>> Center for Data Science, New York University
>> --
>> *Rasa Bocyte*
>> Web Archiving Intern
>> *Netherlands Institute for Sound and Vision*
>> *Media Parkboulevard 1*, 1217 WE  Hilversum | Postbus 1060, 1200 BB
>> Hilversum | *beeldengeluid.nl <http://www.beeldengeluid.nl/>*
>> _______________________________________________
>> Reprozip-users mailing list
>> Reprozip-users@vgc.poly.edu
>> https://vgc.poly.edu/mailman/listinfo/reprozip-users


*Rasa Bocyte*
Web Archiving Intern

*Netherlands Institute for Sound and Vision*
1217 WE  Hilversum | Postbus 1060, 1200 BB  Hilversum | beeldengeluid.nl