What Interoperability for Computer Vision Labels? The Challenge of Annotation Standards
Béatrice Joyeux-Prunel (UNIGE)
Mathieu Aubry (ENPC-Paristech)
Béatrice Joyeux-Prunel and Mathieu Aubry: Introduction
Mathieu Aubry will give a quick overview of his past and ongoing projects that could be used to establish links between artworks and historical documents, including repeated pattern discovery in artwork collections, fine alignment of artworks, document image segmentation, historical watermark recognition, generic clustering, and scientific illustration propagation analysis.
He is a Computer Vision researcher at École des Ponts ParisTech, where he leads the EnHerit project on developing Computer Vision tools specifically for Digital Humanities applications. He earned his PhD in Computer Science from the ENS in 2015 and spent a year as a visiting researcher at UC Berkeley.
Béatrice Joyeux-Prunel will briefly present VISUAL CONTAGIONS (SNF, 2021-2025), a project on how images have contributed to globalization; she will discuss the project's infrastructure and the issue of an interoperable classification of visual content.
She is a full Professor at the University of Geneva in Switzerland, where she holds the chair in Digital Humanities. Originally a historian of modern and contemporary art, she leads Artl@s, a platform that encourages computational approaches to the globalization of art and images and publishes global sources for that purpose (BasArt, exhibition catalogues worldwide since the 19th c.), as well as Postdigital (on digital cultures and contemporary art). She also leads the IMAGO Centre at the École Normale Supérieure, a “European Jean Monnet Excellence Center” dedicated to teaching, research and creation on the circulation of images in Europe (www.imago.ens.fr).
Jean-Philippe Moreux, Bibliothèque nationale de France: Sharing Annotations with IIIF: From Collections to Researchers
Jean-Philippe Moreux is the Gallica scientific advisor at the Bibliothèque nationale de France (BnF). He works across the BnF's heritage digitization, digital mediation, digital humanities, and AI programs, and participates in national and international research projects on these topics. He also chairs the CENL "AI for Libraries" network group. Prior to that, he was the BnF's expert on OCR and digital text formats, an IT R&D engineer and project manager, and before that a science editor and consultant in the publishing industry.
Matthew Lincoln, Carnegie Mellon University Libraries: Match Assessment and Descriptive Tagging in the Carnegie Mellon University Photo Archive.
I will present our observations on a prototype computer vision project to manage close-match assessment and descriptive tagging in the Carnegie Mellon University photo archive (https://doi.org/10.1184/R1/12791807). The presentation will focus on the questions raised when integrating computer vision and match detection with core collections management systems. How can collecting institutions prepare their workflows and data infrastructure for the new layers of data produced by such research projects?
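One way to picture such a "new layer of data" is a match-assessment record that lives alongside, rather than inside, the core catalogue: it links two catalogue items, keeps the machine's score and provenance, and leaves room for a human verdict. The sketch below is purely illustrative; the field names and identifiers are assumptions, not the CMU project's actual schema.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MatchAssessment:
    """One machine-proposed match between two catalogued photographs.

    Stored as a separate data layer so the core collections management
    records stay untouched and the layer can be regenerated or audited.
    All field names here are hypothetical.
    """
    source_id: str        # accession number of the query photograph
    candidate_id: str     # accession number of the proposed match
    model_score: float    # similarity score from the vision model
    model_version: str    # which model/run produced the score
    human_verdict: Optional[str] = None  # e.g. "close match", "not a match"
    reviewer: Optional[str] = None       # who confirmed or rejected it

# A reviewer confirms one machine-proposed pair (IDs are invented):
record = MatchAssessment("1986.01.234", "1986.01.567", 0.93, "match-proto-v1")
record.human_verdict = "close match"
record.reviewer = "archivist-a"
print(asdict(record)["human_verdict"])  # close match
```

Keeping the model version and reviewer on each record makes the layer auditable: scores can be recomputed with a newer model without overwriting the human judgements already collected.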
Dr. Matthew Lincoln is the Collections Information Architect at Carnegie Mellon University Libraries, where he focuses on making library and archives collections tractable for data-driven research. He earned his PhD in Art History at the University of Maryland, College Park, and has formerly held positions at the National Gallery of Art, the Getty Research Institute, and as technical lead of The Programming Historian. His recent publications include The Index of Digital Humanities Abstracts and "Tangled Metaphors: Network Thinking and Network Analysis in the History of Art," in The Routledge Companion to Digital Humanities and Art History.
Stuart James, Istituto Italiano di Tecnologia: Recognising Objects at Not So Short Distance
One of the goals of the MEMEX project is to build a Knowledge Base of Cultural Heritage objects, many of which have not been meticulously digitised and some of which do not even have a publicly available image associated with them. We therefore focus on developing algorithms that exploit scene-level labels of objects to help with the localisation and retrieval problem. Although preliminary, this work raises important questions about the hierarchical nature of objects, parts and patches in understanding a scene, whether in camera images or in art.
Stuart James is a Researcher (Assistant Professor) in Computer Vision at the Istituto Italiano di Tecnologia (IIT). His research focuses on Visual Reasoning to understand the layout of visual content, from iconography (e.g. sketches) to 3D scene understanding, and its implications for methods of interaction. He is involved in the coordination and implementation of the MEMEX EU H2020 project on increasing social inclusion through Cultural Heritage. Stuart previously held postdoc positions at IIT, University College London (UCL) and the University of Surrey, and continues to hold an honorary position at UCL and UCL Digital Humanities. He was awarded his PhD by the University of Surrey in 2015. He collaborates actively across disciplines, including organising the Vision for Art workshop held in conjunction with ECCV.
Fabian Offert, University of California, Santa Barbara: Standardizing Feature Vectors
Image features automatically extracted by deep neural networks are increasingly common in digital art history: they are often a prerequisite for meaningful clustering, and more generally for any operationalization of "similarity" that goes beyond purely syntactic aspects of an image corpus. Current best practices, however, all but prohibit interoperability: feature vectors are incessantly re-computed, and slightly different but functionally similar model architectures break compatibility at every level. How can we standardize feature vectors in digital art history?
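One possible direction for such a standard is to never share a bare vector: each feature vector travels with the provenance needed to reproduce and compare it (model, weights release, layer, preprocessing recipe). The sketch below is a minimal illustration of that idea, not a proposed standard; all field names and the preprocessing string are assumptions.

```python
import hashlib
import struct

def package_features(vector, model_name, weights_id, layer, preprocessing):
    """Bundle a feature vector with the provenance needed to reuse it.

    The metadata schema is hypothetical; the point is that the vector
    is meaningless without a record of how it was produced.
    """
    values = [float(v) for v in vector]
    payload = struct.pack(f"<{len(values)}f", *values)  # float32 bytes
    meta = {
        "model": model_name,             # e.g. "resnet50"
        "weights": weights_id,           # e.g. "IMAGENET1K_V2"
        "layer": layer,                  # e.g. "avgpool"
        "preprocessing": preprocessing,  # resize/crop/normalisation recipe
        "dim": len(values),
        "checksum": hashlib.sha256(payload).hexdigest(),
    }
    return {"meta": meta, "vector": values}

def compatible(record_a, record_b):
    """Two vectors are comparable only if their provenance matches."""
    keys = ("model", "weights", "layer", "preprocessing", "dim")
    return all(record_a["meta"][k] == record_b["meta"][k] for k in keys)

# Usage: two vectors from the same (invented) pipeline are comparable...
prep = "resize256-centercrop224-imagenet-norm"
a = package_features([1.0, 0.0, 0.5, 0.25], "resnet50", "IMAGENET1K_V2", "avgpool", prep)
b = package_features([0.0, 0.0, 0.0, 0.0], "resnet50", "IMAGENET1K_V2", "avgpool", prep)
# ...but a vector from a different checkpoint of the same architecture is not.
c = package_features([1.0, 1.0, 1.0, 1.0], "resnet50", "IMAGENET1K_V1", "avgpool", prep)
print(compatible(a, b), compatible(a, c))  # True False
```

The last check is the crux of the interoperability problem described above: two "functionally similar" models produce incompatible embedding spaces, so any shared format must make that incompatibility explicit rather than silent.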
Fabian Offert is Assistant Professor in History and Theory of Digital Humanities at the University of California, Santa Barbara. His research and teaching focuses on the epistemology of artificial intelligence, and its intersection with the arts. Before joining the faculty at UCSB, Fabian was Postdoctoral Researcher in the research project "Synthetic Images as a Means of Knowledge Production" (DFG SPP 2172) at Friedrich Alexander University Erlangen, and Affiliated Researcher in the Artificial Intelligence and Media Philosophy Research Group at Karlsruhe University of Arts and Design. His research was supported by fellowships from the German National Academic Foundation and the Regents of the University of California. He was also a visiting scholar at the University of California, Berkeley. Previously, he worked for a number of German cultural institutions, including ZKM | Center for Art and Media Karlsruhe, Goethe-Institut New York, and Ruhrtriennale Festival of the Arts. His interdisciplinary projects have been supported by grants from Kulturstiftung des Bundes, Kunststiftung NRW, and the French Ministry of Culture, among others.
Leonardo Impett, Durham University: Copies and Curating
Current UK copyright law stems from the so-called Hogarth Act (the Engraving Copyright Act of 1734), which William Hogarth was instrumental in creating (in order to maximise his profits). In our project "Hacking Copyright" (UT Sydney; Yale; Copenhagen BS; Durham), we seek to use computer vision to outline the networks of copies, modifications and knock-offs in the milieu of Hogarth (including his own borrowings). The first step will be the creation of an appropriate training dataset, the initial plans for which will be presented at the meeting.
Leonardo Impett is assistant professor of Computer Science at Durham University. In 2020 he finished his PhD, supervised by Sabine Süsstrunk and Franco Moretti, on distant reading and computer vision for the history of art. He has been DH Scientist at the Bibliotheca Hertziana (Max Planck), DH Fellow at Villa I Tatti (Harvard), and Fellow and Visiting Scholar at CDH (Cambridge). He is currently an Associate of Cambridge Digital Humanities, an Associate Fellow of the Zurich Centre for Digital Visual Studies, and an Associate Researcher at the Orpheus Institute for Artistic Research.
2 February 2021