Why Can't AI Understand Images as Man Does?
Keywords: AI, image, analogy, understanding, knowledge, Other, Ego
AI can identify images, but it cannot understand them as man does. The problem in understanding iconic signs is analogy, which cannot be clearly operationalized. Nothing guarantees signification by analogy, because it is neither the necessary effect of a cause, as with indexical signs, nor the obligatory consequence of a rule, as with symbols (words). Yet analogy is also fundamental to the human condition, because our Ego implies the presence of the Other. Like the understanding of images, the understanding of the Other implies analogy: he is a self like me, but another self than myself, that is, an analogous self. Thus you can understand the Other's activities and actions, and even the messages he communicates, because you interpret them as if they were about yourself. Unlike the human being, who is an existent, in AI essence precedes existence. Even if the algorithms of the analogy process were infinitely perfected, that analogy would still miss the interpretation that comes from the life of the one who exists. AI knows the digital, man understands the analog; AI understands from knowledge, man knows from understanding.
Copyright (c) 2020 The Authors & LUMEN Publishing House
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.