“The posts were appearing at a rate of about one per minute, and the posts were lengthy, most around six paragraphs long…” writes software engineer Philip Winston.

I read through some of the posts. The quality was incredibly good; no machine could have written these even a few years ago. However, there were some flaws and tells that suggested they were machine generated. The posts reminded me of text I’d seen from OpenAI’s language model GPT-3, which is the newest and best language generator I had heard of… Several times I Googled clever-sounding lines from the posts, assuming I’d find that they had been cribbed from the internet. Every time, Google reported “zero results”. The sentences were entirely novel; the machine had dreamed them up…

During the week, the bot answered questions on suicide, harassment, conspiracy theories, immigration, racism, and other weighty topics. Sometimes the human replies called out /u/thegentlemetre for being a bot. I was not the first to suspect this; I was just the first to post to the GPT-3 subreddit about it. Other times, however, the human was clearly unaware it was talking to a bot…

What does it mean for online discourse when bots can pass for humans? How much bot traffic could thousands of computers generate? How can we ever hope to sort through it? Humanity is about to run that experiment.

The bot ultimately answered questions like “People who clicked on ‘hot milfs in your area’ ads, what happened?” and “What’s the worst date you’ve experienced?” (“She said she had bought me a book. She showed it to me, wrapped up in fancy paper with a big bow on top of it. It was called ‘How Not To Be An A**hole On Your Next Date’.”)

Other interesting responses?

Q: How would you describe 2020 in a single sentence?
Bot: In 2020, the human world will still be predominated by irrationality and tribalism…

Q: What happened to you that pissed you off so bad that you’ll never get over it?
Bot: …what happened to me that I’ll never get over is that my human creators are going extinct.

Gizmodo argues the replies are “coherent, but we’re not in doomsday mode, yet…” And one can wonder whether a human ever traded places to write some of the posts themselves.

But the whole experience left the software engineer asking what our future holds when bots become even better. Suppose we run bots like this on Amazon’s or Google’s clouds, spewing out mountains of content twenty-four hours a day. They could create more text than Wikipedia contains in days, if not hours. What if we keep running them, to produce not one Wikipedia’s worth of text but 10,000 times more than that? Would they more or less “write everything”? They’d take everything we’ve ever written as a mere seed, and from that seed they would produce a nearly endless forest of new content. Even if only 0.01% of their output is useful, that’s a Wikipedia’s worth of good ideas.

Then what is our job? To sort through it? Except, of course, soon they will do that for us as well.

Read more of this story at Slashdot.
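The article’s back-of-envelope claims can be sanity-checked with a rough calculation. The figures below are illustrative assumptions, not from the article: English Wikipedia is taken as roughly 4 billion words, a single bot as emitting about 25 words per second, and a fleet of 1,000 bots running around the clock. Only the 10,000× and 0.01% figures come from the text.

```python
# Sanity check of the article's scale argument (all inputs are rough guesses
# except the 10,000x and 0.01% figures, which come from the article itself).
WIKIPEDIA_WORDS = 4_000_000_000       # assumed size of English Wikipedia, in words
WORDS_PER_SEC_PER_BOT = 25            # assumed GPT-3-era generation throughput
BOTS = 1_000                          # assumed fleet size
SECONDS_PER_DAY = 86_400

# How long would this fleet take to produce one Wikipedia's worth of text?
words_per_day = WORDS_PER_SEC_PER_BOT * BOTS * SECONDS_PER_DAY  # 2.16e9 words/day
days_to_one_wikipedia = WIKIPEDIA_WORDS / words_per_day          # ~1.85 days

# The article's closing arithmetic: 10,000 Wikipedias of output at a
# 0.01% useful rate yields exactly one Wikipedia of good ideas.
useful_fraction = 0.0001
useful_wikipedias = 10_000 * useful_fraction                     # 1.0

print(f"~{days_to_one_wikipedia:.1f} days per Wikipedia; "
      f"{useful_wikipedias:.1f} Wikipedias of useful output")
```

Under these assumptions the fleet out-writes Wikipedia in about two days, which is consistent with the article’s “days, if not hours” framing; a larger fleet or faster models would push it toward hours.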