Regarding the Great Tumblr Porn Ban of 2018 -- new details indicate that the situation is even more "This is Tumblr" than I thought.

To begin with: A while ago there was a story, an amusing story, about how a neural network was tagging all pictures of grassy fields as "sheep" because it had only seen pictures of sheep in grassy fields. 

It appears that the same sort of technology is being used to screen erotic content. And it's a very simple thing -- according to the Twitter user @an_gremlin, it's the open_nsfw program: a single network with two output classes, one scoring how NSFW an image looks and one scoring how SFW it looks. So if you post pr0n in a grassy field, the grass might feed the SFW score enough to boost the overall verdict up to "don't flag".
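Mechanically, a two-class classifier makes a single trade-off: it produces a score for each class and only flags when the NSFW side wins by enough. Here's a minimal sketch of that trade-off -- the logit values are invented toy numbers, not anything from the real open_nsfw model:

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flag_decision(nsfw_logit, sfw_logit, threshold=0.5):
    """Flag an image only when the NSFW probability clears the threshold."""
    nsfw_p, _sfw_p = softmax([nsfw_logit, sfw_logit])
    return ("flag" if nsfw_p >= threshold else "allow", nsfw_p)

# Hypothetical scores: explicit content alone pushes the NSFW side high.
print(flag_decision(nsfw_logit=3.0, sfw_logit=0.5))  # flagged
# Add strongly SFW-looking content (a grassy field, an owl) and the SFW
# logit rises; the very same explicit content now slips under the bar.
print(flag_decision(nsfw_logit=3.0, sfw_logit=3.5))  # allowed
```

The explicit content never changed between the two calls -- only the innocent scenery around it did, which is exactly the grassy-field failure mode.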

Gremlin then gives a concrete example of how easily the program can be subverted: posting a picture of an owl next to a man's bare chest is enough to fool the flagbot.

As Tumblr user Crazy-pages puts it, an open-source one-directional two-category tag-trained neural network will never have enough sophistication to do what has been asked of it -- which means that when Tumblr staff told users what content would be safe, they were flat-out lying.

Crazy-pages also notes that the software is highly vulnerable to counter-neural strategies, saying,

"I bet you before the end of the month someone hooks up their own open source one layer bicategory neural network which puts an imperceptible (to humans) layer of patterned static over arbitrary images, and trains it by having it bot-post static-ed images to Tumblr and reinforcing based on whether the images are labeled nsfw or sfw. Seriously, within a month someone will have an input-output machine which can turn any image ‘sfw’ in Tumblr’s eyes."
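The loop Crazy-pages describes can be sketched without any neural network at all: perturb, query the black box, keep whatever the filter rewards. Everything below is hypothetical -- `mock_filter` is a toy stand-in for Tumblr's classifier (it even leaks a confidence score, which a real endpoint might not), and the "images" are just flat lists of pixel values:

```python
import random

def mock_filter(image):
    # Toy stand-in for the black-box filter: scores an "image" (a flat
    # list of pixel values in [0, 1]) by its mean brightness. The
    # attacker never sees this code, only what it returns.
    score = sum(image) / len(image)
    return ("nsfw" if score > 0.5 else "sfw", score)

def train_static(image, query, budget=0.04, steps=2000):
    """Greedily grow a faint static overlay (every pixel shifted by at
    most +/- budget) until the filter's verdict flips to 'sfw'.

    Illustrative only: the quoted attack trains a network to emit the
    overlay, but the feedback loop is the same -- perturb, query,
    reinforce the nudges that the filter rewards."""
    rng = random.Random(0)
    static = [0.0] * len(image)

    def apply(overlay):
        return [min(1.0, max(0.0, p + s)) for p, s in zip(image, overlay)]

    _, best = query(apply(static))
    for _ in range(steps):
        i = rng.randrange(len(image))            # nudge one random pixel
        trial = list(static)
        trial[i] = max(-budget, min(budget, trial[i] + rng.choice([-0.02, 0.02])))
        verdict, score = query(apply(trial))
        if score < best:                         # keep nudges that help
            static, best = trial, score
            if verdict == "sfw":
                break
    return static

image = [0.52] * 50      # toy "image": mean 0.52, so it starts out "nsfw"
overlay = train_static(image, mock_filter)
perturbed = [min(1.0, max(0.0, p + s)) for p, s in zip(image, overlay)]
print(mock_filter(perturbed)[0])
```

The overlay never shifts any pixel by more than 0.04 -- imperceptible to a human at real image scales -- yet the verdict flips. Swap the greedy search for a trained generator network and the mock filter for live posts, and you have the machine Crazy-pages is predicting.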

This is Tumblr programming at its wildest: the site-design equivalent of a Bethesda hack job, only with far greater consequences.

There was a time when Tumblr had enough staff that this sort of cheap-ass, half-assed programming was inexcusable. Nowadays they've been losing enough experienced programmers that the remainder may be forgiven for such mistakes; after all, they are working under greater pressure than before, with fewer people than before.

It's possible that whoever was given the job of coming up with a pr0n filter decided to pick the easiest tool to implement because they knew their bosses wouldn't know enough about neural networks to raise any immediate objections. Tumblr was bought by Yahoo, after all -- a company not exactly known for being on the leading edge of internet technology. This particular filter seems like it works just well enough to fool a roomful of hidebound bosses at a presentation.

No need to tell them it's a mockup. If they don't notice, that's their fault, and we're getting laid off next month anyway...