Don't Trust the Media (repost)
(Old Facebook post from circa 7/2017:)
Here is why I don't trust the media, and why you shouldn't either. As near as I can tell, this seems to be how the system works:
Step 1: Something happens in the non-media world.
Step 2: A journalist talks to one or a few of the people involved in that event (sometimes they approach the journalist, sometimes the journalist approaches them). There are many other people involved in the event who don't talk to the journalist. There is physical evidence about the event, but the journalist doesn't have it; he just has what a few people told him. It doesn't matter if the source has an obvious bias: if the U.S. military just blew up a hospital, the journalist will call up a military spokesperson to find out what happened.
Step 3: The journalist basically prints what that person told him, but with a sensationalistic, entertaining, and/or playing-to-stereotypes spin that the journalist came up with. The goal is to get people to click on the story and/or share it.
Step 4: A hundred other journalists copy the first one's story, with varying degrees of distortion. They typically keep the same spin, which is often pretty close to the opposite of what the people who actually have direct knowledge of the event would think.
* * *
A case study: I found these three articles:
1. 7/14/2017: "AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?" (https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it)
2. 7/21/2017: "Researchers shut down AI that invented its own language" (http://www.digitaljournal.com/tech-and-science/technology/a-step-closer-to-skynet-ai-invents-a-language-humans-can-t-read/article/498142)
3. 7/27/2017: "Facebook kills AI that invented its own language because English was slow", subtitle: "Stopping a Skynet scenario before it begins." (http://www.pcgamer.com/facebook-kills-ai-that-invented-its-own-language-because-english-was-slow/)
The story appears to be that some programmers at Facebook were writing a program for negotiation. They had two computers negotiating with each other, and the dialogue degenerated into nonsense like this (the two computers are named "Bob" and "Alice"):
Bob: i can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
Bob: i i can i i i everything else
And so on. The first journalist says that the programs "invented their own language", which, alas, humans can't understand. The researchers shut down the program because they wanted programs that could communicate with people. The journalist then opines that we should let AIs develop their own languages, cuz it'll be more efficient. That's story #1.
The #2 author apparently read story #1 and decided to write up a paraphrase, but with a little more sensationalism. Now the AI "developed a system of code words to make communication more efficient". Then the #3 author apparently read story #2, and wrote up a paraphrase with an extra dose of irresponsible sensationalism. By the time we get to story #3, the researchers are "stopping a Skynet scenario" due to the "fear that machines may rise up and turn against humans".
I call bullshit on that. Now, I'm no expert and obviously have no inside knowledge about this Facebook software project, so I am just guessing. Nevertheless, here are my guesses:
a. The exchange excerpted above isn't a "new language", there aren't any "code words" in it, and the machine doesn't have some hidden motive for "saying" "to me" eight times. It's just the output of a broken program (the toy sketch after these guesses shows how easily ordinary text generation degenerates into exactly this kind of babble).
b. The programmers were never afraid of losing control of this stupid, defective program, or having it turn into Skynet and create Terminator robots. They just shut it down because it wasn't working.
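For the technically inclined, here is a minimal sketch of guess (a). To be clear, this is not Facebook's code or model; the token probabilities below are invented. It just shows that when a text generator's probability distribution collapses onto a few tokens (say, after being optimized for a negotiation reward with nothing anchoring it to natural English), plain greedy decoding falls into a loop and emits exactly this kind of repetitive babble:

```python
# Toy sketch only: these probabilities are invented, and this is not
# Facebook's actual model. It illustrates how a generator whose
# distribution has collapsed onto a few tokens produces repetitive
# "babble" under ordinary greedy decoding.

# Hypothetical next-token probabilities after an (imagined) training
# run drifted away from English: a few tokens hog nearly all the mass.
NEXT_TOKEN_PROBS = {
    "balls": {"have": 0.90, "to": 0.10},
    "have":  {"zero": 0.60, "a": 0.40},
    "zero":  {"to": 1.00},
    "a":     {"ball": 1.00},
    "ball":  {"to": 1.00},
    "to":    {"me": 0.95, "you": 0.03, "<eos>": 0.02},
    "me":    {"to": 0.90, "<eos>": 0.10},
}

def greedy_decode(start: str, max_len: int = 19) -> str:
    """Always pick the most probable next token; no randomness is
    needed for the output to fall into a loop."""
    out = [start]
    token = start
    for _ in range(max_len):
        choices = NEXT_TOKEN_PROBS.get(token)
        if choices is None:
            break
        token = max(choices, key=choices.get)  # most probable next token
        if token == "<eos>":
            break
        out.append(token)
    return " ".join(out)

print(greedy_decode("balls"))
# -> balls have zero to me to me to me to me to me to me to me to me to
```

There's no intent here: once "to" and "me" each make the other the most probable next token, the decoder is stuck in a loop until it hits the length cap. A secret language looks like a conspiracy only if you've never watched a generation loop go wrong.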
Oh, but my version of the story isn't very clickbaity, is it? Okay, let's get back to the Terminator robots.