Photographic archives are the storehouses of humankind's visual memory, and the documents they hold constitute a cultural testimony of incalculable value. Their main mission is to preserve these assets and to disseminate them to society at large. On average, 90% of the documents stored and managed in historical archives are photographs of enormous cultural and historical value: snapshots of our society's past that let us see the day-to-day life of our ancestors.
Deeparchive is a project aiming at direct technology transfer in two key areas of photographic archives' activity: cataloguing and access provision. While the archivist community was an early adopter of new information technologies, photographic archives still rely on manual annotation of their assets and on traditional keyword-based search engines with several well-known limitations. With this project we propose to leverage recent advances in Machine Learning and Computer Vision research to: (1) create new tools that help archivists classify, annotate, and index new data; and (2) move photo archive access from the keyword-based to the semantic search paradigm.
We follow a Software as a Service (SaaS) model, in which the same underlying technology provides differently flavoured services to archivists and to the general public. Our trained deep learning models for labeling, dating, captioning, and semantic search of historical archive images are accessible by simply querying a REST API.
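As a rough illustration of what querying such a REST API could look like, here is a minimal Python sketch that uploads an image for automatic annotation. The endpoint URL (`api.deeparchive.io/v1/annotate`), the JSON field names, and the response schema are all hypothetical assumptions for illustration, not the documented API; only the 2 MB upload limit comes from the demo description.

```python
# Minimal sketch of a client for an image-annotation REST API.
# NOTE: the endpoint URL, request fields, and response schema below are
# hypothetical -- consult the actual Deeparchive API documentation.
import base64
import json
from urllib import request

API_URL = "https://api.deeparchive.io/v1/annotate"  # hypothetical endpoint
MAX_BYTES = 2 * 1024 * 1024  # 2 MB upload limit from the demo description


def build_payload(image_bytes: bytes) -> bytes:
    """Encode the image as base64 inside a JSON body (assumed format)."""
    return json.dumps(
        {"image": base64.b64encode(image_bytes).decode("ascii")}
    ).encode("utf-8")


def annotate(image_path: str) -> dict:
    """POST an image and return the parsed JSON response, assumed to
    contain predicted labels, an estimated date, and a caption."""
    with open(image_path, "rb") as f:
        data = f.read()
    if len(data) > MAX_BYTES:
        raise ValueError("image exceeds the 2 MB upload limit")
    req = request.Request(
        API_URL,
        data=build_payload(data),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A client like this could then call `annotate("photo.jpg")` and read fields such as labels or a caption from the returned dictionary, depending on what the real API exposes.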
Upload an image (max. size 2 MB) to our demonstration system for automatic annotation and explore similar images from the public Europeana photography collection. In particular, this demo uses the collection of the Ajuntament de Girona (under a CC license).
Best viewed at a screen size of 1440 × 900 pixels or larger.
If you are interested in our services as a potential customer or partner, please contact us at info@deeparchive.io
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 712949 (TECNIOspring PLUS) and from the Agency for Business Competitiveness of the Government of Catalonia.
Principal Researcher: Lluis Gomez-Bigorda (Computer Vision Center - Universitat Autonoma de Barcelona)
Scientific Advisors: Andrew Bagdanov (MICC - University of Florence) / Dimosthenis Karatzas (Computer Vision Center - Universitat Autonoma de Barcelona)