I recently listened to a podcast in which one of my favorite podcasters spent five minutes discussing a comment that podcaster had made on Twitter. The complaint was that the person to whom the comment was directed (a comment the podcaster admitted had been a mistake) had responded with a screenshot of the comment rather than with a “quote tweet.” The podcaster’s point was that he could have responded to a “quote tweet” by admitting the comment was wrong and apologizing for it, but could not respond to the screenshot. (My reaction was relief and self-congratulation that I never became a twit on Twitter.)
That such an inconsequential incident, such a tempest in a twitpot, could assume such significance, if only for a short time, is, frankly, distressing. It leads me to recommend Dr. Charles Johnson’s post at Psychology Today Blogs, in which he takes a look at how our metastasized “social” media has monopolized our attention and distorted our discourse, and at what we can do about it. Here’s a bit of what he has to say:
Machine learning algorithms don’t need ill intent or even a simple desire to maximize profit for them to have destructive effects. Instruct an algorithm to attract the maximum number of eyeballs (which is what people most often want them to do) and content that is ever more addictive and divisive becomes the natural result. Addiction is the best way to assure attention and divisive content is particularly habit-forming. Over the long term, content that actually benefits us stands little chance in this context.