Posters/demonstrations

POSTER instructions

Poster Slam

The "poster slam" will take place in a plenary session and lasts one hour.
Each poster will be presented in one minute (!) with your "slam"
speech and a single slide.
The objective is to hear the key points of your poster in a less
formal manner than the one you would adopt in a paper session. So
please, just "come as you are" and, if you wish, be creative! But don't
fear the exercise.

Poster Sessions

Two poster sessions will follow, with the same posters. They take place
in the atrium of the plenary sessions venue. The second poster session
takes place at the same time as the SIG meetings, but if you intend to
participate in a SIG meeting you won't have to stay at your poster for
the second session.

Displaying your Poster

When you arrive and register, please inform the conference desk that
you have a poster to hang. Instructions and timing will be given to
you then.
The posters will be displayed on large movable grids and will be
attached to the grids using tape or clips.
The preferred format is A0 (841 × 1189 mm) in portrait orientation.
If your poster is in landscape orientation, please contact us.
Please also note that poster presenters will have to take down their posters at 18:00 on Thursday, October 30th.

Posters List

VIGLIANTI, Raffaele (Maryland Institute for Technology in the Humanities)

Keywords: stand-off, tools, web apps

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

Encoding stand-off markup by hand is notoriously difficult and error-prone; TEI projects, which generally rely on manual encoding, often shy away from stand-off techniques because of the considerable managerial overhead. Good authoring tools can help in producing solid stand-off markup, particularly if the linked elements are part of the same XML document. CoreBuilder is a web app aimed at supporting encoders in creating stand-off encoding via a user interface.[1] Users can set the XML elements and attributes that the tool should create to link selected content together. This poster will address the rationale for creating CoreBuilder, as well as demo the app itself.

Stand-off markup is an essential encoding technique for modelling secondary hierarchies within the main hierarchy of an XML document. The TEI makes extensive use of this technique to model correspondence and alignment (<link>), temporal synchronization (<timeline>), aggregation (<join>), and more. TEI elements for stand-off rely on linking mechanisms that connect the element to a target element's identifier, for example: <link target="#element1 #element2"/>. Efficient project management remains essential to deal with links across different documents (what Bański calls "remote stand-off")[2] and with project-specific requirements.

The Freischütz Digital project, for example, employs remote stand-off markup to model a critical apparatus of variants across several TEI transcriptions of sources of the libretto for Carl Maria von Weber's opera Der Freischütz.[3] CoreBuilder was created for this project's encoders to create apparatus entries within a visual environment. Encoders can select elements that form a variant from multiple sources and generate stand-off apparatus entries, which can be further revised or removed via the user interface. The tool automatically creates the links using the identifiers from the sources, thus substantially reducing human error.
CoreBuilder is currently being generalized to support the use of any stand-off TEI element.
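The linking mechanism described above can be sketched in a few lines. The sketch below is a hypothetical illustration of what a tool like CoreBuilder automates (the element names and xml:id values are invented for the example), not code from the app itself:

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def make_link(ids):
    """Build a stand-off <link> whose @target points at each selected element."""
    link = ET.Element(f"{{{TEI_NS}}}link")
    link.set("target", " ".join(f"#{i}" for i in ids))
    return link

# Two elements an encoder might have selected in the interface.
doc = ET.fromstring(
    f'<text xmlns="{TEI_NS}">'
    '<seg xml:id="element1">first</seg>'
    '<seg xml:id="element2">second</seg>'
    '</text>'
)
selected = [seg.get(XML_ID) for seg in doc.iter(f"{{{TEI_NS}}}seg")]
link = make_link(selected)
print(link.get("target"))  # @target built from the selected ids
```

Because the @target values are harvested directly from the source document's xml:id attributes rather than typed by hand, the most common class of stand-off error (a dangling or mistyped pointer) cannot occur.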

Bibliography
  • [1] https://github.com/raffazizzi/coreBuilder
  • [2] Bański, P. (2010). Why TEI Stand-off Annotation Doesn't Quite Work: and why you might want to use it nevertheless. In Proceedings of Balisage: The Markup Conference 2010. Balisage Series on Markup Technologies, vol. 5.
  • [3] Viglianti, R., Schreiter, S. and Bohl, B. (2013). A stand-off critical apparatus for the libretto of Der Freischütz. TEI Members Meeting 2013, Sapienza Università di Roma, 2–5 October.

BÉRANGER, Marine (PROCLAC UMR7192 Research Lab – EPHE and CNRS, Paris, France); HEIDEN, Serge (ICAR UMR5191 Research Lab – Lyon University and CNRS, France); LAVRENTIEV, Alexei (ICAR UMR5191 Research Lab – Lyon University and CNRS, France)

Keywords: Akkadian language, cuneiform writing, letters corpus, TXM portal, linguistic analysis

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

This proposal will show the results of a project whose goal is to outline the different Mesopotamian scribal traditions and to understand the complexity of letter writing, based on a corpus of (currently) 350 Akkadian letters written on clay tablets in the Old Babylonian dialect between 2002 BC and 1595 BC. First, the demonstration will use the desktop version of TXM, installed on a laptop, to show: the different available ways to import the TEI-encoded letters into TXM for various kinds of analysis at the word or character (cuneiform sign) level; a KWIC concordance of the cuneiform signs that were erased by the scribe during the writing of a letter; how one can identify the vocabulary characteristic of a place of composition, a circumstance or a period (based on the tablets' metadata); and how to visualize the similarity or dissimilarity between letters. In a second stage, the demonstration will use the web portal version of TXM, installed on a server (http://portal.textometrie.org/demo) and accessed through a web browser, to show how the same corpus can be browsed, read and analyzed online, as in illustration 1 below.

[Illustration 1] Screenshot of a web browser accessing a TXM portal, displaying a synoptic view of the edition of tablet AS 22 plate 3 n° 6 composed of facsimile / cuneiform / transliterated facets, with a KWIC concordance of the 'na' syllable below; the seventh occurrence of the syllable is highlighted in pink in the edition facets. Facsimile image from R. M. Whiting, Jr., Old Babylonian Letters from Tell Asmar, Assyriological Studies (22), 1987, Oriental Institute of the University of Chicago, <https://oi.uchicago.edu/research/publications/assyriological-studies>; transliteration from the Archibab research team, <http://www.archibab.fr>.

LETRICOT, Rosemonde (LARHRA UMR5190, France; Université Jean Moulin Lyon 3, France); HOURS, Bernard (LARHRA UMR5190, France; Université Jean Moulin Lyon 3, France); SYLVAIN, Boschetto (LARHRA UMR5190, France); BERETTA, Francesco (LARHRA UMR5190, France)

Keywords: digital edition, annotation with ontology, interaction with database, interoperability, digital tools for historians

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The project of a critical edition of the Mémoires of Léonard Michon, a Lyonnais chronicle in 7 volumes held at the Musées Gadagne in Lyon, seeks to take part in the digital humanities turn in order to harness computational power within a research project. Developed in collaboration with the Pôle Histoire numérique of the LARHRA (UMR 5190), the project aims to publish online the edition of an 18th-century manuscript for which a wealth of historical information has been collected. For the digital version, the choice of methods and technologies naturally turned to XML/TEI to mark up the transcription of this source. The markup describes not only the structural elements of the text but also the elements of interest to historical research, namely named entities, dates and places. The LARHRA also maintains a collaborative database, made available to researchers to centralise, cumulatively, the historical information extracted from source studies, according to a data-atomisation method called SyMoGIH. It was important to be able to link the XML/TEI editing work with the information collected in parallel in this information system, which was achieved by inserting unique dereferenceable identifiers inside the TEI tags themselves. In this way, once the website of the critical edition has been developed, it will be possible to present readers both with all the text excerpts relating to a named entity and with the historical data that has been collected, which will complement their reading.
On the scholarly side, cross-referencing the information drawn from the text with that gathered from archival documents makes it possible to envisage a representation of the relations between the individuals mentioned in the Mémoires, and to extend the research perspectives through network-analysis methods. This poster presents an example of the articulation between an XML/TEI text edition and its critical apparatus held in a relational database.

GALLERON, Ioana (Université Stendhal Grenoble 3, France); WILLIAMS, Geoffrey (Université Stendhal Grenoble 3, France)

Keywords: French 18th c theatre, de Boissy, Topoi, corpus linguistics

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The Boissy project sets out to create a complete digital version of the theatrical works of the French playwright Louis de Boissy (1694-1754). The aim is to create a freely available TEI XML version of his works, both printed and manuscript, and gradually to link these to his other productions, to corpora constructed from contemporary language texts, and to an electronic version of Antoine Furetière's dictionary in its second edition of 1701. One application of this corpus is to imagine ways of automating the identification of literary topoi and other significant literary recurrences using TEI. Topoi are of huge importance in establishing histories of ideas, of sensibilities and of representations, but their national and international fortunes have so far been approached only through case studies. The multiplication of electronic corpora allows for a much-needed quantitative survey. Boissy's theatre, produced for the prestigious Comédie-Française as well as for the less well regarded theatres of the Italians and of the Parisian fairs, constitutes an interesting field of observation for the permanences and novelties in drama writing in the middle of the eighteenth century. The challenge is to train the computer to recognise literary structures beyond those observable through the standard markup of performance texts as recommended by the TEI Guidelines. To achieve this, the team is engaged in defining the linguistic features of a topos and looks at the possibilities of translating these into TEI markup. Interaction between two communities of use, that of scholars of French drama and that of corpus linguists, is therefore key to the project and constitutes the main topic of this poster.

GAIFFE, Bertrand (ATILF-CNRS (UMR 7118), France); STUMPF, Béatrice (ATILF-CNRS (UMR 7118), France)

Keywords: critical edition, publishing process

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

In 2008, thanks to an ERC grant, a team began editing the first French translation of the City of God by Raoul de Presles. In 2011, in Würzburg, we gave an account of our edition process even though nothing had been published yet. Today, books 1 to 3 have been published, and books 4 and 5 are expected in July this year (both by Champion, Paris). We are now able to describe our TEI-based edition workflow and explain why TEI proved a good choice even when we aim only at a traditional paper book. Our edition, on paper, follows the convention of having footnotes split into three levels, dedicated respectively to the text edition, variant readings, and bibliographic/historical references. We also have three indexes whose top-level entries are the most frequent forms appearing in the text. We will describe some of the compromises we had to make. For instance, editors sometimes want to add prose into automatically generated notes (cf. <corr> or <rdg>); sometimes, too, a "non-formal" note has to land in a level where one would expect formal notes only. We will also describe some of the whitespace nightmares we fell into, even though most of our solution actually consists in very careful proofreading.

References

La Cité de Dieu de saint Augustin traduite par Raoul de Presles (1371–1375), Livres I à III. Édition du manuscrit BnF fr. 22912, O. Bertrand (ed.), Paris, Champion, 2013.
Gaiffe, B. and Stumpf, B. (2011). "A large scale critical edition: first translation of St Augustine's City of God by Raoul de Presles" (Annual Conference and Members' Meeting of the TEI Consortium), Würzburg, 10 October 2011.

TURSKA, Magdalena (University of Oxford, United Kingdom)

Keywords: tei simple, processing model, abstraction, interoperability

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The Guidelines of the Text Encoding Initiative (TEI) Consortium have been used across numerous disciplines, producing huge numbers of TEI collections. These digital texts are most often transformed for display as websites and camera-ready copies. While the TEI Consortium provides XSLT stylesheets for transformation to and from many formats, there is little standardisation and no prescriptive approach across projects towards processing TEI documents. The TEI Simple project aims to close that gap with its Simple Processing Model (SPM), providing baseline rules for processing TEI into various publication formats while offering the possibility of building customised processing models within the TEI Simple infrastructure. For the first time in the history of the TEI there exists a sound recommendation for a default processing scheme, which should significantly lower the barriers for entry-level TEI users and enable better integration with editing and publication tools. Possibly of even greater significance is the layer of abstraction provided by the TEI SPM, separating high-level editorial decisions about processing from low-level, output-format-specific intricacies and final rendition choices. The SPM aims to offer maximum expressivity to the editor while encapsulating the implementation details in the TEI Simple function library. A limited fluency in XPath and CSS should be enough to tailor the default model to a specific user's needs in the majority of cases, significantly reducing the time, cost and level of technical expertise necessary for TEI Simple projects. This presentation aims to explain both the theoretical foundations of the TEI SPM and the practical aspects of using it with a collection of TEI documents that can be converted to TEI Simple. It is hoped that editors, curators and archivists, as well as developers dealing with TEI, will benefit from employing the TEI SPM in their workflows. All outputs of the TEI Simple project are freely available under open licences.

SPADINI, Elena (Huygens ING, Netherlands, The); TURSKA, Magdalena (Oxford University); BRIGHTON, Mischa (University of Cologne); SPINAZZÈ, Linda (National University of Ireland, Maynooth)

Keywords: standoff, markup, pointing, overlap, interoperability, encoding, limitation, interpretation, enrichment

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

In this poster, we propose two different approaches to using standoff markup in XML-TEI. What? "Markup is said to be standoff, or external, when the markup data is placed outside of the text it is meant to tag" (<tei-c.org>). Why? One of the most widely recognised limitations of inline XML markup is its prohibition of element overlap; standoff has been considered a possible solution to this. On a theoretical level, inline markup embeds one interpretation into another; on a very practical level, standoff reduces distraction while encoding. Standoff may also be a step towards interoperability (see Schmidt 2014 in the TEI Journal). Finally, all these characteristics facilitate the enrichment and further annotation of existing digital texts. How? Having reviewed the current bibliography on TEI standoff markup, we propose two different approaches: the first using the TEI Feature Structures module; the second reusing existing TEI elements in a different structure. Several issues will be addressed, such as pointing mechanisms, changes to the base text, and layers of markup (for visualisation, searching, processing). We will finally consider the pros and cons of the two approaches.
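As a minimal illustration of how standoff sidesteps the overlap prohibition, the sketch below (an invented example, not taken from the poster) keeps two overlapping phrase annotations outside the base text as TEI <span> elements pointing at word ids, then resolves them programmatically; neither annotation has to nest inside the other:

```python
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Base text: each word carries an xml:id; the overlapping analyses
# live in a separate <spanGrp>, outside the text they annotate.
base = ET.fromstring(
    f'<body xmlns="{TEI}"><p>'
    '<w xml:id="w1">one</w> <w xml:id="w2">two</w> '
    '<w xml:id="w3">three</w></p>'
    '<spanGrp type="phrase">'
    '<span from="#w1" to="#w2"/>'   # covers w1-w2
    '<span from="#w2" to="#w3"/>'   # overlaps the first: w2-w3
    '</spanGrp></body>'
)

def resolve(span, words):
    """Return the word tokens covered by a stand-off <span>."""
    ids = [w.get(XML_ID) for w in words]
    start = ids.index(span.get("from").lstrip("#"))
    end = ids.index(span.get("to").lstrip("#"))
    return [words[i].text for i in range(start, end + 1)]

words = base.findall(f".//{{{TEI}}}w")
spans = base.findall(f".//{{{TEI}}}span")
print([resolve(s, words) for s in spans])
```

Encoding the same two phrases inline would require one <phr> element to cross the boundary of the other, which a well-formed XML document cannot express.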

Bibliography
  • Boot, Peter. “Towards a TEI-Based Encoding Scheme for the Annotation of Parallel Texts.” Literary and Linguistic Computing 24.3 (2009): 347–361.
  • Dipper, Stefanie. “XML-Based Stand-off Representation and Exploitation of Multi-Level Linguistic Annotation.” <http://www.linguistics.ruhr-uni-bochum.de/~dipper/papers/xmltage05.pdf>
  • Ide, Nancy, and Keith Suderman. “GrAF: A Graph-Based Format for Linguistic Annotations.” Proceedings of the Linguistic Annotation Workshop. Stroudsburg, PA, USA: Association for Computational Linguistics, 2007. 1–8. <http://www.cs.vassar.edu/~ide/papers/LAW.pdf>
  • Pose, Javier, Patrice Lopez and Laurent Romary. “A Generic Formalism for Encoding Stand-off annotations in TEI”. 2014. <hal-01061548>

HASHIMOTO, Yuta (Kyoto University, Japan); KANO, Yasuyuki (Kyoto University, Japan); OHMURA, Junzo (Bukkyo University, Japan)

Keywords: visualization, Japanese texts, seismology, GIS

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

Studying historical earthquakes is one of the most important research topics in seismology, since it provides a valuable foundation for understanding the mechanisms of past seismic phenomena and thus for predicting potential future seismic hazards. Because instrumental recordings are not available for earthquakes that took place before the 20th century, studies of those earthquakes have to rely mostly on written records such as public documents produced by the governments of the day or private diaries and letters. Since 2011 a group of seismologists and humanities scholars, including the authors, has conducted a weekly seminar at Kyoto University to study such earthquake records preserved in Japanese historical archives. So far we have transcribed several historical documents written in the Edo period (1603-1868). Our goal is to make these historical records a useful resource for future seismic research and disaster prevention. A mere transcription, however, cannot serve as a primary data source for seismic research, because it has to be interpreted by a human; it needs to be converted into data that can be analyzed by mathematical models or computer programs. Our current attempt is to encode our transcriptions in a machine-readable format with TEI. Once the information in the texts (such as location names and dates) is properly encoded, we can analyze it numerically and even use it to create temporal and geographical visualizations using GIS technologies, which will make it much easier to grasp the locations and scales of the earthquakes described in the texts. In our presentation, we will demonstrate how we encode the Japanese earthquake records with TEI, and will also demonstrate a few computer programs that we are developing for their visualization.
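The kind of processing such encoding enables can be sketched roughly as follows; the TEI fragment, the gazetteer and the coordinates below are invented for illustration, and the project's actual markup and tooling may well differ:

```python
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"

# Toy TEI fragment: a date and a place name tagged in a transcription.
tei = ET.fromstring(
    f'<p xmlns="{TEI}">On <date when="1855-11-11">that day</date> '
    'strong shaking was felt in <placeName>Edo</placeName>.</p>'
)

# Hypothetical gazetteer mapping place names to (lat, lon);
# a real project would use an authority file.
GAZETTEER = {"Edo": (35.68, 139.77)}

when = tei.find(f".//{{{TEI}}}date").get("when")
place = tei.find(f".//{{{TEI}}}placeName").text
event = {"date": when, "place": place, "coords": GAZETTEER[place]}
print(event)  # a record ready to be plotted on a map or timeline
```

Once every dated and localised observation in the transcriptions is reduced to records of this shape, plotting them with GIS tools becomes a routine step rather than a manual reading exercise.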

BURGHART, Marjorie (UMR 5648 CNRS)

Keywords: Critical editions, tools

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The TEI Critical Edition Toolbox is a simple tool, based on TEI Boilerplate, offering an easy visualisation of TEI XML critical editions encoded with the parallel segmentation method. It especially targets the needs of people working on natively digital editions. Its main purpose is to provide editors with an easy way of visualising their ongoing work before it is finalised, and also to perform automatic quality checks on their encoding. Tools like Diple or the Versioning Machine are very useful for finished editions, but they may not be well adapted to ongoing work. For instance, an ongoing edition is likely to contain only <app/> elements with only <rdg/> children, or a mix of <app/> elements with only <rdg/> children and others with both <lem/> and <rdg/> children. Proposing a visualisation for such encoding is not easy, because there is no base text (yet).

The Toolbox lets you:

  • Check your encoding: display your edition while it is still in the making and check the consistency of your encoding. For instance, you can check which apparatus entries do not list all the witnesses (if you are using a positive apparatus) or which mistakenly list the same witness twice, highlight apparatus entries listing a particular witness, etc.
  • Display your text according to a specific witness: display the text of a particular witness of your edition, using the apparatus entries you have created.
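A consistency check of the first kind could be sketched as follows, assuming a positive apparatus in parallel segmentation. The witness sigla and readings are invented, and the Toolbox's actual implementation will differ; this only illustrates the principle of the check:

```python
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"

# In a positive apparatus, every <app> should account for all witnesses.
WITNESSES = {"#A", "#B", "#C"}

tei = ET.fromstring(
    f'<p xmlns="{TEI}">Lorem '
    '<app><rdg wit="#A #B">ipsum</rdg><rdg wit="#C">ypsum</rdg></app> '
    '<app><rdg wit="#A">dolor</rdg><rdg wit="#B">dolour</rdg></app>'  # #C missing
    '</p>'
)

incomplete = []
for n, app in enumerate(tei.iter(f"{{{TEI}}}app"), start=1):
    cited = set()
    for rdg in app.iter(f"{{{TEI}}}rdg"):
        cited.update(rdg.get("wit", "").split())
    if cited != WITNESSES:
        incomplete.append((n, sorted(WITNESSES - cited)))
print(incomplete)  # apparatus entries that do not list every witness
```

Run over a whole edition in progress, a report like this points the editor straight at the entries that still need collating, long before a base text exists.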

This poster will introduce the TEI Critical Edition Toolbox and discuss the different ways in which its development could be continued.

SCHMIDT, Gleb (Saint Petersburg Institute of History, Russian Federation)

Keywords: synoptic edition, critical edition, TXM, TEI

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The aim of the present poster is to demonstrate an example of a synoptic critical edition of a Medieval Latin text of the late 11th century. The edition was prepared as part of a Master's thesis project at the French University College (Saint Petersburg, Russia). The project was supported by the Zeno Karl Schindler Foundation (Switzerland), and the major part of the work was carried out at the ENS de Lyon under the supervision of Dr. A. Lavrentiev. The edition rests on two fundamental principles. The first was the creation of a machine-readable text combining traits of different edition types: a critical edition (representation of all readings attested in the tradition), a synoptic edition (representation of the facsimile text of the base manuscript), and a corpus edition (presence of a morphological and syntactic description of the text). The second was the use of the simplest tools: MS Office, the Oxygen XML editor, and TXM. In the first stage of the work, the transcription and the collation of the text were carried out. The different readings were noted in the most evident and simple way, as footnotes in MS Office. In addition to the information on readings, the footnotes contained remarks on the sources used by the author of the text, as well as scholarly commentary, distinguished using the internal syntax of the document. In the second stage we produced a TEI XML encoded text in which notes of various types received different attributes, and all quotations, names and termini were described. In the last stage we used TXM with macros to tokenize and lemmatize the text and create a synoptic edition. Editing the default CSS and XSLT stylesheets then allowed us to format the footnotes, create two groups of notes (apparatus criticus, apparatus fontium), and thus produce a classical critical edition.

CAPELLI, Laurent (CCSD, France); FARHI, Laurence (Inria, France); ROMARY, Laurent (Inria, France; Institut für Deutsche Sprache und Linguistik, Allemagne)

Keywords: publication repository, bibliographic meta-data, repurposing

  • Session: Posters
  • Date: 2015-10-29
  • Time: 15:00 – 16:00 & 16:30 – 17:30
  • Room: Atrium (Erato Building)

The HAL open archive platform (hal.archives-ouvertes.fr), developed and administered by the CCSD, was originally intended for the dissemination of research-level scientific articles from French and foreign teaching and research institutions, and from public or private laboratories. Each resource, represented by a grid of metadata depending on its typology (article, conference paper, book, thesis, report, research data, etc.), is described in an XML pivot format defined as a customisation of the TEI Guidelines. More precisely, this format describes every metadata exchange within or with HAL as a three-level document covering the transaction (teiHeader), the metadata specific to a HAL item (biblFull), and the metadata of that item's source (biblStruct, e.g. the metadata of the published article). It is this format that then allows HAL, via XSL transformations, to offer the "standard" publication formats such as BibTeX, Dublin Core and EndNote. Note also that the XML format for deposit in HAL via the SWORD protocol rests on the same model in a simplified version. The HAL search API (api.archives-ouvertes.fr/search/) likewise serves its resources in TEI. Tools such as a researcher's CV (cv.archives-ouvertes.fr) and the publication lists of research units offered by the CCSD thus rest on a TEI extraction of the resources of the HAL archive. Other institutions using HAL have also developed tools based on HAL's TEI export: Inria, for instance, generates the list of annual publications for its activity report from this same format. This work is part of a broader dynamic among French publishing infrastructures (Istex, Revues.org), all of which use the TEI as the reference format for the interoperability of their data.
This durable and open format is also available to everyone (the TEI schema used in HAL can be found at http://api.archives-ouvertes.fr/documents/aofr.xsd), so contribute to HAL and propose your own tools.