Upcoming Events:
Thursday, 22 August, 1 pm
Book Talk, “Why Books?”, Fitchburg Community Center, 5510 Lacy Rd., Fitchburg, Wis.
Thursday, 19 September, 6:30 pm
Book Talk, “Why Books, and Why This Book?”, Oregon Public Library, 200 N. Alpine Parkway, Oregon, Wis.
Antisocial media
By David Benjamin
“Your ad isn’t approved because it doesn’t comply with our Advertising Policies. Your ad was stopped because it was determined to be related to politics or an issue of national importance.”
— Facebook
MADISON, Wis. — Recently, Facebook encouraged me to “boost” one of my “Weekly Screeds.” A half-hour later, it spiked that very essay — inaccurately referring to it as an “ad” — because it was deemed an objectionable excursion into “an issue of national importance.”
However, I’m oddly reassured by this digital spasm of civic vigilance. If the watchdogs of social media can stop me in my rhetorical tracks, there’s hope they can clamp down on other — bigger — offenders against community standards.
I’m especially encouraged by the apparently sincere intention of the social media czars — who regularly ban my “boosts” — to crack down hard on folks like the rabid white nationalist who slaughtered fifty people in two New Zealand mosques. Evidently, outfits like Facebook, Instagram, Twitter, YouTube and LiveLeak were shaken by their unwitting complicity in the mowing down of dozens of Muslim worshippers while they were kneeling and praying in the house of God. These “social media” conduits, as it turns out, permitted — without the sort of review they apply to me — the posting of the killer’s deranged 74-page declaration of racial and religious war against the innocent and defenseless. The “artificial intelligence” algorithms and human watchdogs whose job is to sniff out mass murderers and nip their schemes in the bud somehow failed to spot the Christchurch killer — even after he used these media to announce his plans — and could not prevent millions of downloads of the massacre video recorded on the gunman’s GoPro helmet-cam. This was a big “oops.”
Of course, it is not the business of social media’s oligarchs to regulate the sort of paranoid bigotry that stockpiles ordnance and turns churches, mosques, concerts, kindergartens and dancefloors into piles of corpses and oceans of blood. After all, they have to make a profit.
But Facebook, Twitter, YouTube and all of social media really want to do better. And they know how! Every day, Facebook fields thousands, millions — billions! — of outbursts like my Weekly Screed. Facebook’s algorithms know me like Boeing knows autopilot. They also know my tastes in music, my friends in France, my favorite yogurt and all about that girl from Monona Grove High I dated (and stupidly broke up with) in tenth grade. Knowing me so well, Facebook’s AI algorithms know exactly what to do with me when I wax political and drop the forbidden word “Trump.” Facebook stifles my boost, cuts me off, shuts me down and protects the innocent from my seditious ravings.
Facebook, Twitter, YouTube, etc. have solved the easy problem of protecting us from stuff that requires no protection. They can spot a bare nipple, an exposed penis or the word “Chinaman” in a split second and pixelate that sucker to Kingdom Come before you can say “Zuckerberg.”
The problem is that while there are thousands like me spouting opinions that might offend, and millions uttering gratuitous slurs online, there are relatively few actual maniacs with automatic weapons crazy enough to forsake their keyboards, charge out the door, drive to a nursery school and spray the playground with a thousand rounds of .45-caliber ammo.
Facebook told CNBC that its AI “requires a large amount of content to learn patterns that would help it detect harmful content, which is difficult to do in the case of relatively rare mass shootings.”
To this, Guy Rosen, Facebook’s vice-president of integrity, added that “the shooter’s video did not trigger Facebook’s automatic detection systems because its artificial intelligence did not have enough training to recognize that type of video.”
I’m reminded of the words of Justice Louis Brandeis (concurring in Whitney v. California, a case that ironically restricted First Amendment rights), when he said that “the remedy to be applied is more speech, not enforced silence.”
The problem Rosen explained and Brandeis foresaw is that our current generation of algorithms doesn’t have enough data to recognize the sort of human malignancy that any smart child can spot before he or she is old enough to recite the alphabet.
To tell right from wrong, AI needs to see a hell of a lot more wrong.
We need to flood Facebook with hateful manifestoes.
We need Twitter to explode with racist, anti-Semitic, misogynist, Islamophobic, homophobic, xenophobic, sociopathic, homicidal tweetstorms. We need to clog its pejorative pipes with kikes, dykes, coons, bitches, Japs, kaffirs, Hunyoks, ragheads and redskins, ’til even the dimmest, dullest algorithm deduces that these aren’t just naughty words but provocations to mayhem.
We need to watch more body-armored white men with legally acquired machine guns, live and in color, shooting throngs of infidel women and children with such speed that they cannot run, cannot duck and can’t even rise from their knees before they die. If we allow enough atrocity and gore, the algorithms will finally get the drift. They need massive data because they have no conscience, no moral guide, no commandments, golden rule or holy scripture. They have neither priest nor rabbi nor imam. AI knows no God, nor will it ever.
If we ever hope to enlighten the digital gatekeepers of Facebook, Twitter, YouTube and all their click-crazed human parasites, we need to feed them loathing and bloodshed, carnage, fear and death in gigabytes, terabytes, petabytes. We need Big Data. We need, it seems, a holocaust, streaming online — in “real time” — as a tutorial in evil.
The irony of teaching a moral lesson to our machines of “artificial intelligence” will come when they’re wised up enough to figure out that we haven’t learned it ourselves.