Some occasional thoughts, writings, and musings from me. Photos, things I learned, half-baked ideas...
Call for Responsible AI Case Studies
Our Responsible AI team is soliciting proposals for case studies that explore ethical challenges in Artificial Intelligence (AI) projects and tools in libraries and archives. 500-word proposals are due January 20, 2023. Honoraria are available for selected case studies.
Coalition for Networked Information (Online), Session with Leila Sterman, 21 March 2022.
The image classifier is in agreement about this treachery. This is a coffeepot.
As part of my research informatics work, I’ve been prototyping how we can use #machinelearning to create metadata or connect it to scholarship and research processes. This image classifier uses #transferlearning based on the ImageNet database of 14,197,122 images (http://image-net.org) and samples a few @msulibrary digital collection images. Use the link below and refresh to cycle through the predictions.
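The core of a transfer-learning classifier like this can be sketched in plain Python: a pretrained backbone emits raw scores over the ImageNet classes, which are softmaxed into probabilities and mapped to labels. The label list and scores below are invented for illustration; the actual prototype uses TensorFlow and the full ImageNet vocabulary.

```python
import math

# Tiny stand-in for the ImageNet label vocabulary (the real one has 1,000 classes).
LABELS = ["coffeepot", "teapot", "pitcher", "vase"]

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_predictions(logits, k=3):
    """Pair each label with its probability and return the k most confident."""
    probs = softmax(logits)
    ranked = sorted(zip(LABELS, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical backbone output for one digital-collection image.
scores = [4.2, 2.1, 0.5, -1.0]
for label, prob in top_predictions(scores):
    print(f"{label}: {prob:.2%}")
```

Refreshing the demo page just reruns this last step on a different sampled image.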
Opening René Magritte jokes aside… this prototyping has been instructive for understanding the limitations of #machinelearning image classification. More importantly, it gives me faith that the results can be improved with additional models. The #TensorFlow library I’m using here reports the confidence it has in each prediction. Moreover, I can train and serve an additional model to help it. There is a path to improvement.
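That confidence score is what makes the "additional model" path workable: accept the general model's prediction when it is confident, and defer to a collection-specific model (or a human reviewer) when it is not. A minimal sketch of that routing logic, with made-up names and thresholds rather than the prototype's actual code:

```python
# Illustrative confidence threshold; in practice this would be tuned on the collection.
CONFIDENCE_THRESHOLD = 0.80

def toy_domain_model(general_label):
    """Stand-in for a second model trained on local digital-collection metadata."""
    corrections = {"coffeepot": "ceramic vessel (local collection)"}
    return corrections.get(general_label, general_label)

def route_prediction(prediction, fallback_model):
    """Keep a confident general-model prediction; otherwise defer to the fallback."""
    label, confidence = prediction
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": label, "source": "general-model"}
    # Below threshold: ask the additional, domain-tuned model instead.
    return {"label": fallback_model(label), "source": "domain-model"}

print(route_prediction(("coffeepot", 0.95), toy_domain_model))
print(route_prediction(("coffeepot", 0.40), toy_domain_model))
```

The same routing could send low-confidence cases to a metadata specialist instead of a second model.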
Too often, I’ve seen these limitations put forward as reasons not to pursue #machinelearning in cultural heritage settings, or not to worry about how it will work there. A refrain of “Oh, that dumb machine can’t figure out what this is!” But the scale and the opportunity to refine our metadata and classification routines within these tools are too hard for me to ignore. That scale also gives me pause: if we get classification and description wrong, we not only impact discovery, we might also label someone’s humanity incorrectly. See the image below, where Native American context is not present in the prediction.