Thoughts on “We elected Trump because of Facebook” argument

Summary: Nope. That’s way overstating the impact of social networking sites, and way understating the intelligence of the people who use them. The real problem: these sites generate little connection or empathy between disparate users.

***

We don’t live in a world where we are exposed solely to the information cocoon of our Facebook friends, unless you never leave your house, turn on the TV, or talk to other human beings. For instance, I doubt very much that people widely believed that Hillary Clinton was running a pedophile ring or that the Pope endorsed Donald Trump. If anything, so-called “fake news stories” (read “lies”) nudge us in the same fashion as the National Enquirer, a publication no one believes (except for Trump); if you don’t like the person to begin with, you maybe like him or her fractionally less after reading “fake news.”

“But,” you say, “the BuzzFeed story said that anywhere from 20 to 38 percent of news on Facebook was ‘fake’ in one way or another.” Agreed, but here’s a rebuttal: people get hundreds of emails a year advertising “Canadian Pharmaceuticals” and “Sexy Asian Singles In Your Area” and “Why Global Warming is Fake,” but I doubt many are seriously investigating these possibilities. Just because someone clicks on something doesn’t mean they accept it as the truth, and it doesn’t mean they’re a rube who can’t tell the difference between bullshit and reality.

Facebook has a couple of problems on its plate right now. Problem one goes like this: they’re showing us things we want to see in order to get us to click on more shit, at the expense of showing us things we should probably know but maybe don’t want to see. That problem is bad, but not as bad as problem two: we aren’t learning very much about each other as a result of social networking sites, which suggests these sites build little to no consensus or empathy. Rather than shit-kick FB for failing to manually disambiguate so-called “fake news” (we were all screaming for algorithms to control Trending Topics six months ago), we should think about the real problems posed by social networking sites, mainly that they aren’t doing a very good job of connecting us as a country.

Problem one

The underlying technological issue behind problem one presents itself if, by chance or choice, you click on a “fake news” story (or any other link, for that matter). Upon returning to your feed, you will be presented with the lamentable “People Also Shared” option that force-feeds you more of the same (which you presumably click on). That leads to the inevitable worry: “If you see an argument enough, it starts to look true.” That’s a problem, but it’s a social problem not inherent to Facebook.

Rather than talk about an article on FB, we’re all more apt to fall into the “spiral of silence”: we use the site to post news articles that represent what we think to an audience of our choosing (FB friends), but we don’t particularly want to have a debate. Nor do most people actively seek out their ideological opposites to get their opinions. Instead, if something might cause offense or even trigger a negative comment, we self-censor. Hence, we end up reading a lot of repetitive things that possibly influence our thinking, but we don’t share our opinions (e.g. the so-called “white silence”). In that way, problem one can cause a chilling effect, but I don’t see Facebook fixing that one by adding a self-flagellation icon for staying silent on a social issue, so I’ll defer it to the end of this post, where I talk about hard things to fix.

What about the “if you see it enough it becomes true” argument? The underlying problem is that many of us are untrained or under-equipped to critically evaluate information we encounter on the internet. When we search, we look at the first page of results (at best), and if they conform to our assumptions, we accept them, despite knowing that search results = algorithm + harvested personal data. When we look at product reviews, we go for the “most helpful,” ignoring potential manipulations from strategies like “sock puppeting” or “astroturfing” (phony reviews or comments made by the person hawking the product). We miss a lot of information that could otherwise be very useful. Also, how hard we search = how much the answer is worth to us + how much time we have.

As a side note, I have observed one activity that transforms an average citizen into the grittiest of investigative reporters: saving ten bucks on a hotel. Maybe we should fine people ten bucks for re-posting a “fake news” article (sadly, it would put The Onion out of business, although by the above logic that may be a valuable service, especially for those confused over whether Kim Jong Un is the sexiest man alive).

I saw a suggestion that Facebook should discontinue the news feed and change the formatting on bogus stories, presumably to make them look “faker” than “real news” stories. I’ll be damned if I can tell the difference between the fake celebrity magazines and the real ones at the grocery store, but I just assume they’re all horseshit and proceed about my business.

No, it’s not Facebook’s job to search out “fake news” and reformat it. Nor is it their job to teach us to critically evaluate an article entitled “The Satanic Connection Hillary Clinton Doesn’t Want Anyone to Talk About” (actual website, by the way). It’s up to our educators to do a better job training the next generation to be skeptical about what they read, especially when it’s only what they want to hear. We’ve got a lot of work to do, but in my experience as an educator, young folks are doing a lot better than the OG in terms of filtering out the bullshit on the internet.

Problem two

Problem two is the underlying social and technical problem that will consume the next decade: How do we get people to connect to people outside of their own social circle, and how do we teach people to voice their opinions even when they are unpopular? I’m not sure you can change a platform like Facebook to make that happen, but I have some suggestions:

  • Optionally suggest a friend connection with one random person per month. You don’t need to force it on people, but put in a suggestion other than Mark Zuckerberg (I mean, Jesus, how many friends do you want, Mark?).
  • Promote some random posts into the news feed. Maybe even make them bulletproof by stripping the name of the person posting. Just let people see what others are thinking.
  • Instead of promoting narcissistic behaviors (selfies, FB Live, etc.) through the design of the platform, find a way to use it to build connections and empathy, maybe even hooking people up who want to debate issues and adding a moderation feature.

No single social networking site can do everything, so you can’t just kluge together a bunch of other features, but you can promote more inclusive behaviors among members instead of endlessly remixing content inside their own personal information cocoons. Isolation leads to polarization, and polarization leads to a loss of empathy. A path to the dark side that is.

***

To conclude, I’m usually the first to kick the shit out of Facebook for everything, but let’s stop it with the whole “Facebook cost Hillary the election.” Like our misplaced faith in polling and statistics and our inability to connect and empathize with our fellow citizens, Facebook is one problem among many.

The IRB and emotional manipulation

The talk about the now-famous Facebook study on emotional contagion got me thinking about the role of institutional review boards (IRBs) and our responsibilities to participants in a study. I’m going to share a story here about an IRB-approved study I participated in some years back as an undergraduate. I’m not trying to get anyone in trouble, and I’m not really bothered by the experience now, but I share it because it illustrates a point: even IRB-approved research, when poorly designed, can and does screw up and cause emotional impacts that the researchers cannot fully understand or predict.

The National Institutes of Health describes the responsibility of the researcher as minimizing harm and maximizing the benefits of research for the participants; this is a direct result of the Tuskegee syphilis experiment, in which participants were not told for decades that their syphilis infections could be treated (at very low cost) so that researchers could observe the long-term impact of the disease.

One of the big arguments I’ve heard is that the participants in the Facebook study were not given the option of informed consent: they didn’t know the risks of the study (the researchers probably didn’t have a total handle on those either), and they couldn’t opt out. I just read an excellent analysis of the FB study by danah boyd on the difference between obtaining approval from an IRB and actually thinking critically as a researcher about the ethical impacts your research will have.

Informed consent does not mean that a study is without risk or emotional impact to a participant. Those risks should be anticipated and mitigated, but as my anecdote will demonstrate, that sometimes doesn’t happen like it should.

When I was about 19 years old, I was enrolled in a typical intro to Psych class at my university. As part of the learning experience about experiments (and as a way to drum up volunteers), I was required to participate in something like six hours of experiments. You didn’t actually have to do the experiments, but you had to show up and opt out on the informed consent document to get credit. I can’t remember if there was an alternative if you didn’t want to go at all (maybe write a paper), but nevertheless I had a positive attitude and felt like I could help researchers solve important problems if I participated. Most of the studies were just multiple choice quizzes or writing answers to timed questions. One study was a cooperative brainstorming task that I did in front of a one-way mirror. Nothing too outlandish.

Then I participated in a study that was not so pleasant. I showed up to the study room, where I was told that I would be watching videos with a group of two other students and would be asked, via a form, how I felt about the actions of different characters in those videos. The films they showed were all of people getting verbally ridiculed, then getting angry and beating someone up. I think they were all Hollywood motion pictures (one was definitely Dazed and Confused), but I can’t remember. The questions on the form were about whether I thought the person was justified in attacking someone.

During the screening, a confederate (unknown to me at the time) offered me some unwrapped candy out of a bag. We’ll call him Person A. I politely refused Person A because I thought it was really weird to eat unwrapped candy from someone I didn’t know, and I’m not a big candy person to begin with. After the experiment was over, another confederate (again, unknown to me) stopped me in the hallway while I was walking out. We’ll call him Person B. Person B pointed my attention to a disc on the table that Person A had ostensibly forgotten. The label on the disc said “Final Paper.” I asked Person B if he knew Person A, and he said no. He then told me that Person A had said he would be at a meeting in the basement of the building I was in. I told Person B that I would take the disc down to him, and Person B followed me into the elevator. I probably should have been more suspicious of all of this, but I figured that once I walked out of the lab room into the hallway, the experiment was over.

On the elevator, Person B asked me if I would pledge to donate to his AIDS walk charity. I didn’t have two nickels to rub together in college, but I said I would since I felt bad. I put my donation on the form and Person B got out at the ground floor.

Wait for it, it gets even weirder from this point on.

I got to the basement and I couldn’t find the room, so I asked a custodian who was mopping the floors if he knew where it was. He told me, in broken English, that the room didn’t exist as far as he knew. Afterwards, I thought I would walk around one more time just to be sure. It turns out I had just passed the room and not noticed it since the lights were out and no one was there. There was a note on the door that said the meeting had been moved to a room on the top floor of the building. I was a bit angry that I had to go back up to the 12th floor (or whatever it was), but I got back on the elevator.

When I got out of the elevator and walked to the room, Person B was waiting for me around a corner in the hallway. At first, I couldn’t figure out what was going on, and I was a bit disoriented. Person B told me that everything I had been doing for the last 10 minutes or so after I left the lab room was an experiment. Apparently the candy Person A offered me was somehow related to the violent movies, the disc left by Person A and whether I returned it was related to his offering me candy, my willingness to pledge to Person B’s AIDS walk was related to his encouraging me to return the disc to Person A, and moving the meeting to the top floor was testing how far I would go to return the disc. All this was in addition to my answering the questionnaire.

However, I wasn’t simply told all of this in my “debriefing”; I had to ask whether some of the scenarios were part of the experiment. I asked about the custodian, and Person B said, “What custodian?” I asked if the disc really did belong to Person A, and he said, “You can just give that to me.” I went back and forth with him a couple of times to make sure, because at that point I really couldn’t sort out all the different components of the experiment.

I would say I’m no more paranoid than your average person, but I was extremely uncomfortable in that moment. I was given more consent documents to sign by Person B, which I did because I wanted to leave as soon as possible. Also, even though I was told (probably repeatedly) that my participation wouldn’t affect my course grade, the experimental design was confusing to the point where I didn’t know if I could opt out at that point or even how many experiments this counted for towards the course requirement.

Looking back years later, I was probably naive in thinking that the events after I left the lab room were separate from the experiment, but I let myself be fooled in the moment because I’m naturally trusting and wanted to help a fellow student out if I could. The experiment played on my disposition in that regard.

But wait, it gets better yet.

I asked Person B if the experiment was over now, and he said “Oh yeah, we’re all done here if you want to take off,” or something casual to that effect. As I left the building I paused in the lobby to cue up my CD player (dating myself here). Whether by design, or by unfortunate accident, Person B happened to exit the building right after me, and even walked in the same direction for two blocks. I know this because I kept looking over my shoulder to see when he would leave.

When I got back to my apartment, I would say that I was significantly emotionally affected. I kept replaying the events of the experiment in my mind. I started wondering if the custodian was a secret confederate, and whether his broken-English explanation that the room didn’t exist was a test of my attitude toward an ESL speaker. I also wondered how Person B knew to wait for me on the top floor for my debriefing. What if I had just said “F**k it” and left the building with the disc? I concluded that there could have been someone silently observing from the darkened room I stood next to. What if I had accidentally discovered that observer? What would the emotional impact have been of discovering someone surveilling me from the shadows? As stupid as it sounds now, I even thought about the custodian calling up on a walkie-talkie, saying something like “the package is en route.”

I had a strong sense that even though Person B told me the experiment was over, it was still going on in some capacity. Later, before I took the final exam in the course, the professor told us that the course itself was an experiment on college learners and asked us to sign informed consent documents. This was minutes before the exam started!!! As you might guess, having participated in this bizarre experiment, my suspicions about the experiment never ending were heightened at the worst possible moment: right before I had to take a two-hour multiple-choice exam.

I probably could have complained about the study, but I didn’t really want that kind of attention. Whether or not a complaint could have actually affected my grade, I perceived negative repercussions associated with making a formal one. I was compromised as a participant both during and after the secondary experiments outside the lab room, because I didn’t feel like I could opt out.

This study was approved by my university’s IRB.

My point in sharing this is not to disparage human subjects research or the IRB system. I’ve come to think that the experiment was probably much more rigorous on paper but was executed poorly. It’s possible there was a more structured debriefing protocol that was not followed in my case. Nevertheless, the fact that this study received IRB approval doesn’t by itself mean that it should have been done, for a few reasons:

  1. The experimental design was shit. Embedding so many sub-experiments in the primary experiment meant, ultimately, that you couldn’t infer a damn thing from any of my actions past (I would say) the point where I agreed to return the disc. Even that action was primarily due to my empathizing with Person A about losing a term paper, and had nothing to do with any candy offers.
  2. Debriefing would be so complicated that you have to wonder why they grouped all these sub-experiments together in the first place. I should have been made to understand the totality of the experiment and its ending conditions clearly before I was allowed to walk out of the room (or at the very least given something to read that contained that information). I definitely should not have been debriefed by a confederate, someone who had knowingly deceived me during the experiment.
  3. The conditions have to be maintained so carefully that the experiment becomes an incredibly complex machine that achieves very little. Having the confederate/debriefer/whoever the hell he was walk out of the building and follow me was idiotic. Person B should have gone out another exit or even waited ten minutes before leaving.

Even though I don’t technically think my rights as a participant were violated, and I’m not significantly affected by the experience now (other than having a funny story to tell at parties), it was seriously disconcerting at the time. I was made to feel unsure of my privacy at the university for at least a couple of months. I felt observed, and it was a feeling that took some time to get over.

As it relates to the Facebook study, I can totally empathize with people feeling like they were toyed with, and being told the effects were minimal does not do much to dispel that feeling. The reason we obtain informed consent and avoid using the word “subjects” is exactly to remove the detachment that makes researchers feel like those people are the other, the thing to be manipulated and run through a maze. We’re careful to distinguish that we manipulate conditions and observe responses, but it’s naive to think that you can design an experiment with such minimal impact that the participants don’t need to be informed or debriefed.

Emotional impact, even if it’s negative, is just a part of an experiment. Some of those other experiments where I answered multiple choice surveys repeatedly asked strange questions, like “Do you ever feel like the television is talking to you?” or “Do you ever feel like your limbs are detached from your body?” I wasn’t disturbed by the questions, but they were strange enough that I wanted to know why I was being asked them. Most of the time I got a debriefing statement (I think the test I mentioned was for schizophrenia). I was exposed to a slight emotional impact, but it may have helped doctors better diagnose and treat someone with serious problems. I think most people, if the impact truly is small and they are aware of the type and duration of the experiment, have no problem participating if it can help someone who needs it.

The IRB is supposed to help us define how to run human subjects research responsibly, but, as boyd suggests, we all need to think more about the actual execution of the research and what responsibilities (outside of just legal and IRB) we have to participants.

Facebook and other social networking sites shouldn’t stop doing research or publishing it, but they need to be more forthcoming to users. I don’t think informed consent is always the answer, but FB could have had a press conference where they clearly explained what they had done, why they did it, and what contribution it made to society and our understanding of human behavior. They should have sent a notification to all of their users, even those who didn’t participate. boyd even goes so far as to suggest that users should have a hand in determining what types of research Facebook does, but as we learned from the final site governance vote ever, that is probably just a fantasy.

Finding social networking site accounts for a list of organizations

I’m working on collecting data for my dissertation right now, and one major problem I ran into was finding organizations on Twitter and Facebook. I have heard from more than one person who has a list of organizations (say, the top non-profit organizations or the Fortune 500) and wants to build a collector to gather those organizations’ tweets, but doesn’t have their usernames. Twitter lists are great for finding lots of accounts, but they have two major problems: 1) the list you need may not exist, and 2) the accuracy and currency of any list that does exist depend wholly on its curator. If you are concerned with getting the most accurate sampling of a group of organizations on social networking sites, chances are you have to make your own list.

I first encountered this problem when I was compiling Twitter lists of members of the U.S. House of Representatives and U.S. Senate in the 113th Congress. At the CaSM Lab, we use these lists to collect tweets authored by and directed at members of Congress (MOCs). To compile the lists, I had to do a Google search with the name of the MOC plus the words “Congress” and “Twitter.” While adding these terms (usually) weeded out people who coincidentally had the same name as MOCs, it did not weed out MOCs’ official Congressional information pages or well-meaning websites like TweetCongress. Even after a focused search, I still had to scan the results, verify an account, and copy the URL or username.

For my dissertation, I am pulling from an initial list of 2,720 non-profit organizations that potentially have SNS accounts. Manually performing a search and extracting a potential URL for each organization would take far too long. Since the task involves some degree of human intelligence, paying someone to perform the searches and find URLs would seem to be the only option; since this is a dissertation, however, I have approximately no funds allocated for that. I also wanted a method for finding URLs that works on a variety of projects, so that I don’t have to pay someone every time I need to make a new list.

I had some previous experience with Ruby and the Watir gem, so I chose that route for automating the search task. Watir is a Ruby library that lets you automate a web browser: you can pass information to a website form and monitor the results. It also has some limited scraping abilities, which is perfect for extracting structured information such as search results or tables.
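
Here’s a minimal sketch of what that kind of automation looks like (illustrative only, not my actual script: the query is a made-up example, and Google’s result markup changes often enough that a real script needs a smarter selector than “every link on the page”):

```ruby
require 'watir' # gem install watir
require 'cgi'

# Launch a real browser for Watir to drive.
browser = Watir::Browser.new :chrome

# Return the href of every link on the first results page for a query.
def search_result_urls(browser, query)
  browser.goto "https://www.google.com/search?q=#{CGI.escape(query)}"
  browser.links.map(&:href).compact
end

# A made-up query in the style of my MOC searches.
urls = search_result_urls(browser, 'Jane Smith Congress Twitter')
puts urls.take(5)

browser.close
```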

My initial script grabbed the first three URLs from a Google search indiscriminately, but that caused a couple of problems. First, for organizations whose own website takes up several of the top Google results, relevant social networking site URLs risk being crowded out of the three collected (a problem of recall: the instrument misses accounts that do exist). Second, the script returned lots of URLs from third-party non-profit information sites that had dummy entries for the organizations I searched for (similar to the TweetCongress problem). These non-relevant URLs lowered the instrument’s precision (the share of returned URLs that are actually relevant).

Unfortunately, since I wanted to start the Twitter collector immediately, I was still stuck doing a large amount of manual searching and scanning of results when collecting Twitter URLs for my study. When it came time to collect Facebook URLs, I decided to return to the search script and fix these problems.

I recently finished a revised script (available on Github) that returns the first ten URLs for a given search term whenever the URL matches a predetermined string; the core logic is sketched below. To increase the instrument’s recall, I expanded the number of URLs it collects from three to ten (the number of results on the first page of a Google search). To increase the instrument’s precision, I changed the script to collect only URLs containing a given string (e.g. “facebook.com”). These changes greatly increased my confidence that when the script returns zero URLs for an organization, there are no social networking site accounts associated with that organization.
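
Here is that core logic (again, a simplified illustration rather than the actual Github script; the CSV file names are placeholders, and the link harvesting is cruder than what the real script does):

```ruby
require 'watir'
require 'cgi'
require 'csv'

FILTER      = 'facebook.com' # keep only URLs pointing at this domain
MAX_RESULTS = 10             # roughly one page of Google results

browser = Watir::Browser.new :chrome

# Search for an organization and keep up to `limit` URLs that match `filter`.
def matching_urls(browser, org_name, filter, limit)
  browser.goto "https://www.google.com/search?q=#{CGI.escape(org_name)}"
  browser.links.map(&:href).compact
         .select { |url| url.include?(filter) }
         .uniq
         .take(limit)
end

# Placeholder input file: one organization name per row.
orgs = CSV.read('organizations.csv').flatten

CSV.open('facebook_urls.csv', 'w') do |out|
  orgs.each do |org|
    urls = matching_urls(browser, org, FILTER, MAX_RESULTS)
    out << [org, *urls] # a row with no URLs suggests no account exists
    sleep rand(5..10)   # pause between searches
  end
end

browser.close
```

Filtering on the URL string rather than on page content is what does the precision work here: anything that isn’t on the target domain never makes it into the output.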

While this script doesn’t replace the need for human verification, it does eliminate the tedious process of performing initial searches and picking through the results to find a potential URL. There is certainly a chance that I’m missing a few accounts by using automation but, as I learned when searching for MOCs, human fatigue is just as likely to produce a false negative as automation is.

Feel free to try the script out, and if you do, please let me know how it works for your searches. It’s pretty versatile and can be adapted to almost any search task where you need to find URLs for a list of people or organizations. Also, although I haven’t done so, I’m sure it could be modified to work on Ubuntu or as part of a Rails app. Its only real limitation is that memory constraints slow it down after about 1,000 searches (a problem I don’t have time to investigate right now).

Also, if you are looking for some introductory help on using Watir to automate a web browser, I have a tag on Diigo with links to some helpful resources.

The Facebook Site Governance Vote: why/how should I vote?

This is not a question asked by me personally, as I’ve already cast my ballot. I wanted to discuss some of the basic issues raised by this governance vote for the benefit of those yet to vote.

First of all, to get acquainted with the changes to the Statement of Rights and Responsibilities (SRR), the document you are a party to, and Facebook’s Data Use Policy, which governs their use of your personal data, read this simplified but accurate L.A. Times piece.

Below are some to-the-point observations on what’s at stake and my reflections on the Facebook voting process.

Are you being disenfranchised by the new policy?

I don’t think so. If the new SRR and policies go into effect, there will be no more referendum-style votes (there were two others prior to this one). These votes have always been “advisory” in that they did not “bind” the company to a specific course of action. The reason: the threshold for binding results is 30% of the total site membership, which at roughly a billion members comes to approximately 300 million persons. As of today, when I voted, there were roughly 350,000 votes total, which means about 299.65 million more persons would have to vote in the next four days to make the resolution “binding” (whatever that actually means).

In essence, you can’t be disenfranchised if you never had the opportunity for your vote to count in a meaningful way in the first place. At least that is my opinion.

What is the deal with the frantic copyright disclaimer posts that people are posting to their walls?

They are a hoax. In the U.S., copyright vests in the author the moment a work is created. Facebook even says as much on their website.

When you post to Facebook, you grant them a license to use and display that content on Facebook.com, according to the SRR document. If you want to sell that photo you uploaded of your Thanksgiving turkey to The New Yorker, you may do so unencumbered! Copyright rules do not change under the proposed SRR.

If you violate someone else’s copyright by posting content illegally to Facebook, that is a different story and they have the power to remove that content (and you have the right to an appeal under the SRR).

Notification of Voting

Twitter is my bread and butter, so I shunt all Facebook-related emails out of my inbox and into a folder, where they stay for many months. I would expect Facebook to put a banner at the top of the site when you log in for important things such as this (as Wikipedia does), but their subdued notifications probably passed a lot of people by. Perhaps that is why, on a site with one billion members, only a tiny fraction of one percent actually votes in these elections. We know that researchers motivated people to vote in the 2010 U.S. midterm election with a simple intervention on Facebook (the “I voted” button and counter), so it’s odd that Facebook can’t get out the vote among its own users.

Presentation of issues

When I went to vote earlier today, I expected to vote Yes or No on simply worded phrases explaining to me what the changes were in these proposed documents. For example, we voted on a constitutional amendment in Illinois this past election, and the wording was as follows:

If you believe the Illinois Constitution should be amended to require a three-fifths majority vote in order to increase a benefit under any public pension or retirement system, you should vote YES on the question. If you believe the Illinois Constitution should not be amended to require a three-fifths majority vote in order to increase a benefit under any public pension or retirement system, you should vote NO on the question. Three-fifths of those voting on the question or a majority of those voting in the election must vote “YES” in order for the amendment to become effective on January 9, 2013.

It’s not the best, but it clearly explains what the consequences of your vote will be.

When I went to review the issues for the Facebook election, there were four links to Very Long Documents: two for the old SRR and Data Use Policy, and two for the proposed SRR and Data Use Policy. As far as I could tell, there was no document telling you what the differences between them were or what would change based on your vote. Even the language on the ballot was vague:

Which documents should govern the Facebook site?

  • Proposed Documents: The proposed SRR and Data Use Policy
  • Existing Documents: The current SRR and Data Use Policy

So how should I vote?

I see this more as a referendum on the way these policy-change ballots are handled. It’s hard to vote intelligently when you don’t understand the issues at hand (or to vote at all if you don’t know you’re supposed to). I personally voted against the new documents not because I am strongly opposed to the changes (as I understand them), but because I do not approve of the process for making myself heard by the governing body of this site, because I am not satisfied with how my past efforts at expressing my opinions were handled, and because I would like the opportunity to keep expressing them through the existing comment/ballot system when future changes are proposed.

If I got anything wrong or you have specific language on the differences between the documents, I encourage you to leave a comment.