Aporia – what we are missing in many conversations/debates/arguments on the web

Aporia – what is it? Even though I have a BA in Rhetoric, I still don’t have a satisfactory answer, and that is perhaps apropos of the topic of aporia itself. From Wikipedia (the best I could find):

Plato’s early dialogues are often called his ‘aporetic’ (Greek: ἀπορητικός) dialogues because they typically end in aporia. In such a dialogue, Socrates questions his interlocutor about the nature or definition of a concept, for example virtue or courage. Socrates then, through elenctic testing, shows his interlocutor that his answer is unsatisfactory. After a number of such failed attempts, the interlocutor admits he is in aporia about the examined concept, concluding that he does not know what it is. In Plato’s Meno (84a-c), Socrates describes the purgative effect of reducing someone to aporia: it shows someone who merely thought he knew something that he does not in fact know it and instills in him a desire to investigate it.

If we accept this, then aporia is the state in which an interlocutor recognizes that their perception, which they thought to be complete, is incomplete, and acknowledges a need to examine the topic further from another perspective. (Aporia can also be used to bait someone into a false admission, but that is not the context that is relevant here.)

Aporia is, essentially, the graceful way to end a debate that has reached an ideological impasse. I don’t have time to pull examples, but this is the common exit for many of Socrates’ conversants. “Purgative” is especially relevant, as it suggests that the conversant can purge the prior belief and start anew; in that sense, aporia is a liberating state.

Also, aporia, unlike checkmate, is a temporary status. It is a realization that you do not have the ideological framework to convince your opponent at the time, but require a timeout to collect your thoughts and reconsider the topic from another angle. (That doesn’t preclude a reversal in opinion, by the way.) In the fire/return-fire nature of comment boards (including Facebook), there is no time for timeouts.

In my personal (and limited) experience on comment boards, aporia has become tantamount to acknowledging defeat or weakness, but that is obviously shortsighted. Having the last word rarely proves you are right, just last. Getting in the parting shot when an argument reaches an impasse is no substitute for acknowledging a path forward; that’s what opposing governments or political parties do, and we can see how well that solves our problems.

My wife and I often admit aporia on topics we debate, both because we are in two different fields (communication and education) and lack sufficient knowledge overlap to prove the other person wrong, and because we have to live with each other (and each other’s imperfect knowledge). Unfortunately, on the web, we experience a different type of interaction.

I think it’s clear what comment thread I am talking about and who I side with (if not, Google me), but I wanted to raise an issue that we could all think about regardless of the topic.

Clever Tweetbots

Earlier in the week, I played this Radiolab segment "Clever Bots" for my students in my summer science fiction course. We were discussing artificial intelligence after reading Neuromancer by William Gibson, and the segment discusses robots that approach what Alan Turing described as the threshold for intelligent machines: the ability of a machine to converse with a human such that the human cannot distinguish it from another human around 30% of the time (the "Turing test").

I was discussing language games and computer programming with Nicole earlier today. I was telling her about the interview, in the above segment, with Sherry Turkle, a professor at MIT, concerning a program named ELIZA: a language game program ca. 1966 that used natural language processing (NLP) to mimic the role of a therapist practicing Rogerian psychotherapy (a form of talk therapy), only a little too closely for its creator’s comfort. Reportage on the program speculated that people would go to phone-booth-like installations to receive therapy rather than see a human psychiatrist. The creator, Joseph Weizenbaum, was greatly disturbed by the artificiality of this type of interaction.
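For the curious, the trick behind ELIZA is simpler than “therapy” suggests: it matches the user’s sentence against keyword patterns and reflects it back as a question, the way a Rogerian therapist keeps turning your statements back on you. Here is a minimal sketch of that pattern-and-reflection idea in Python; the rules below are invented for illustration (Weizenbaum’s original ran on a much larger scripted rule set), so treat it as a toy, not a reconstruction.

```python
import random
import re

# Keyword patterns and canned responses, invented for this sketch.
# The original "DOCTOR" script had a much larger, ranked rule set.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please, go on.", "How does that make you feel?"]),
]

# Pronoun reflection, so "my argument" comes back as "your argument".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(group) for group in match.groups()))

print(respond("I feel stuck in my argument."))
# e.g. "Why do you feel stuck in your argument?"
```

Even a toy like this shows why Weizenbaum was unsettled: the program understands nothing, yet the reflected questions feel attentive enough that people confided in it anyway.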

When I saw James Schirmer, a professor of English and prolific tweeter, playing with the app That can be my next tweet, I had to give it a shot. Apparently it is a kind of language game that searches your past Twitter posts and assembles fragments of each post, based on their parts of speech (presumably using NLP), into a semi-coherent and incredibly hilarious melange of random babblings (one guess at how it might do this is sketched at the end of this post). Only about one third of the tweets make any sense, but here are some of the funnier ones the app generated and I posted:

And my favorite, which I admittedly modified slightly by omitting a few random letters and characters at the end:
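A footnote on how a toy like That can be my next tweet might work: the app doesn’t document its method, and it may well do something cleverer with parts of speech, but you can get the same kind of semi-coherent babble from a simple Markov chain over your old tweets. Here is a minimal Python sketch under that assumption; the sample tweets are placeholders, not anyone’s real Twitter archive.

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Record, for every word in the old tweets, which words have followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def babble(chain, max_words=20):
    """Walk the chain from a random starting word to assemble a fake tweet."""
    word = random.choice(list(chain))
    fake_tweet = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        fake_tweet.append(word)
    return " ".join(fake_tweet)

# Placeholder tweets standing in for a real Twitter archive.
past_tweets = [
    "grading papers and dreaming of summer",
    "my summer science fiction course starts this week",
    "dreaming of clever bots writing my next tweet",
]
print(babble(build_chain(past_tweets)))
# e.g. "dreaming of summer science fiction course starts this week"
```

The point of the sketch is how little “intelligence” is required: the chain only remembers which word has followed which, and randomness does the rest, which is roughly why only a third of the results make any sense.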