First, apologies: this is probably going to be a rather short and freeform blog post because, as I'm sure is the case with many of my peers, all my mental capacities are running on empty. So, bear with me as I try to make sense of some of the more complicated stuff we have covered thus far, digitization and AI in particular.
Basic Digitization: Breaking the World into Binaries
I found Walter Ong's parallel breakdown of logos and early digitization really fascinating. In class I was thinking of it in terms of analysis vs. synthesis, which I like to further define as deconstruction and reconstruction. I'm thinking that the world exists as an unimaginably large, interconnected, and synthesized being that, by experiencing it, we analyze or break apart into understandable, but ultimately distanced-from-reality, bits (or digits) of information, which are then interpreted to make meaning... That said, this sorta sounds like Platonism, which is troubling, and, thanks to this course (which, ultimately, is for the better), will always be troubling. Nevertheless, I think the creation of knowledge is basically the deconstruction and reconstruction of information—subjectively perceived or received from others (copies of copies)—ideologies, and the products of biology (emotions primarily) into strange, complicated, mutable, living, fluid, and necessarily discursive enigmas (adjectives!).
More on point: digitization, as I understand it, is a similar form of this deconstruction and reconstruction (we analyze to synthesize) of information. My problem is with the notion that consciousness, a thing we (as far as I know) don't really fully understand, can be broken easily into a series of binary inputs—yes or no—and recreated in something that transcends biology—something that isn't beholden to hormones and chemistry or unmanageable entropy and eventual death. We have computers that are writing and rewriting their own code in something like the way a human brain does, but I'd argue that, because a computer's "consciousness" is built out of a series of binaries (of answers) responding to the world it is perceiving, AI reflects western logic rather than human consciousness. It is fallacious to view humanity's relationship to the world as a series of yes-or-no questions and answers. How we act in the world can be seen as such—will I do this or that? Yes, I will—but that view ignores the fact that most of our existence is spent thinking, not about what we are perceiving in this exact moment, but about fictional and abstract possibilities, the past, possible futures, why we had or hadn't done this or that: all of which are questions that rarely have clear, yes-or-no answers. We are arrogant to think that human consciousness could ever be as clear and organized as binary. Also, it seems like an idea that is pragmatic to a problematic degree (P.S. when I'm feeling extra critical, I tend to think of pragmatism as the turning of capitalism into a philosophy). I'd argue there is more to living than acting and the consequences of those actions.
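Since this all leans on the idea of collapsing something continuous into binary inputs, here's a minimal toy sketch of what that collapse actually looks like in code. This is my own illustration (the function name and the 3-bit choice are mine, not from any reading): a smoothly varying "analog" quantity gets forced into one of a handful of discrete codes, and everything between the codes is simply lost.

```python
# A toy illustration of digitization: a continuous value is collapsed into a
# fixed number of yes/no decisions (bits), discarding everything in between.
def digitize(value, low, high, bits):
    """Map a continuous value in [low, high] to an n-bit binary string."""
    levels = 2 ** bits
    # Clamp to the representable range, then pick one of 2**bits levels.
    clamped = min(max(value, low), high)
    index = min(int((clamped - low) / (high - low) * levels), levels - 1)
    return format(index, f"0{bits}b")

print(digitize(0.5, 0.0, 1.0, 3))   # -> "100"
print(digitize(0.49, 0.0, 1.0, 3))  # -> "011": nearby values land on different codes
```

The point of the sketch is the loss: 0.49 and 0.5 are nearly identical experiences, but the 3-bit world insists they are categorically different answers.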
So, AI, as I understand it, does not reflect human consciousness comprehensively (I might be arguing an already generally accepted point here), but consciousness based on western logic, or logos (the linear organization of inputs into binaries), and consequently only reflects the small bit of our consciousness that analyzes to simplify. I think a world wherein our consciousness is made up of as many unanswerable questions as simple answers, and where we create meaning instead of receiving it—like a spotty radio antenna—would be a whole lot more enjoyable and interesting anyway. So...terribly flawed human consciousness...1 point...terrifyingly efficient Artificial Intelligence...0 points.
Welp, this is all sorta cynical, sorta beautiful, and sorta terrifying—I should probably spend some time clarifying some of the bits (or digits) in my head—but hey, this is my soapbox.