Abstract
Modern AI image classifiers have made impressive advances in recent years, but their performance often appears strange or violates users' expectations. This suggests that humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence. This mismatch presents an obstacle to appropriate human-AI interaction. To delineate the mismatch, I examine known properties of human classification and compare them to those of image classifier systems. Based on this examination, I offer three strategies for system design that can address the mismatch between human and AI classification: explainable AI, novel methods for training users, and new algorithms that match human cognition.
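The paper does not include code, but as a concrete illustration of the first strategy (explainable AI), here is a minimal sketch of occlusion-based sensitivity analysis: slide a gray patch across an image and record how much the classifier's confidence drops at each position, producing a heatmap of the regions the model relies on. The `classify` function, patch size, and toy classifier below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def occlusion_sensitivity(image, classify, target_class, patch=16, stride=16):
    """Slide a gray patch over the image and record how much the
    classifier's confidence in `target_class` drops at each location.
    High values mark regions the model relies on for its decision."""
    h, w = image.shape[:2]
    baseline = classify(image)[target_class]
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # gray occluding patch
            heatmap[i, j] = baseline - classify(occluded)[target_class]
    return heatmap

# Stand-in classifier for the sketch: confidence depends only on the
# upper-left quadrant, so the heatmap should light up there.
def toy_classify(img):
    score = img[:32, :32].mean()
    return np.array([1 - score, score])

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[:32, :32] = 1.0
    print(occlusion_sensitivity(img, toy_classify, target_class=1))
```

An overlay like this heatmap is one way a system can show users *what* the classifier actually attends to, rather than leaving them to assume it attends to the same features a person would.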
Summary and Conclusions
Overall, human classification differs from AI systems in many ways. Humans using these systems may incorrectly assume they operate the way humans do, which I refer to as the cognitive anthropomorphism of AI. Because of this mismatch, humans may reject technology that is useful unless systems are designed to acknowledge the mismatch and work against it. This can be done in a number of ways, but all involve some level of interactive design to improve joint human-machine performance. Thus, the comparison between human and machine classification described here may offer a number of leverage points for improving human-AI interaction.
References
- arXiv: https://arxiv.org/abs/2002.03024
- PDF: https://arxiv.org/ftp/arxiv/papers/2002/2002.03024.pdf