Google, Yahoo!, Target, and Facebook all engage in marketing research. They analyze metrics in order to provide targeted ads for their customers. For example, after a summer of attending multiple baby showers and buying items on registries, I started getting free samples of baby formula in the mail. After using my preferred customer card at the grocery store, I received coupons for items I am likely to buy. Similarly, Facebook filters the posts displayed in your News Feed based on your interests, the number of comments, and your frequency of interaction, and it selects ads based on your activity.
Recently, the Proceedings of the National Academy of Sciences published a research paper authored by Adam Kramer of Facebook, Inc. and Jamie Guillory and Jeffrey Hancock of Cornell University (the paper was edited by Susan Fiske of Princeton). The authors investigated the emotional responses of 689,003 randomly selected Facebook users by changing what was displayed in their News Feeds. Using word-counting and analysis software, they filtered out “negative” posts for one set of users and “positive” posts for another, so that each set saw predominantly positive or predominantly negative posts. They also had two control groups to account for the statistical differences between negative and positive posts; one control saw neutral posts containing neither distinctly positive nor negative content. They examined the experimental groups during the prior week to ensure that they did not already differ in emotional expression. The experiment took place during the week of January 11-18, 2012.
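To make the mechanism concrete, here is a minimal sketch of word-count sentiment filtering in the spirit of the approach described above. The word lists, function names, and the simple majority rule are invented for illustration; they are not the actual software or dictionaries the researchers used.

```python
# Illustrative sketch only: toy emotion-word lists, not the real dictionaries
# used in the study.
POSITIVE = {"happy", "great", "love", "wonderful", "fun"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "angry"}

def classify(post: str) -> str:
    """Label a post by counting the emotion words it contains."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filter_feed(posts: list[str], suppress: str) -> list[str]:
    """Return the feed with posts of the given sentiment removed."""
    return [p for p in posts if classify(p) != suppress]

feed = ["What a wonderful day!", "I hate Mondays.", "Meeting at noon."]
print(filter_feed(feed, suppress="negative"))
# → ['What a wonderful day!', 'Meeting at noon.']
```

The point of the sketch is how little machinery is required: a user never sees an error or a gap, only a feed whose emotional mix has been quietly shifted.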
Their results showed a small but statistically significant correlation between the emotional content of the News Feed and the experimental groups’ own posts. People who viewed fewer negative messages tended to use more positive words in their status updates, and those who viewed fewer positive messages tended to use more negative words. Interestingly, people in the group that viewed emotionally neutral posts used fewer emotive words in their status updates and wrote fewer words overall.
I talked with a marketing expert from a large digital agency to understand the business ethics perspective. Digital agencies use metrics and data to make better products for their clients, but, as I learned, they are very careful with their data and place a high priority on customer expectations.
He said that marketing research is typically done through surveys or focus groups, in which people agree to participate. It is true that, on the technical side, companies can look at trends in user activity, but the key is not to manipulate the user in any way, because 1) it skews the data, and 2) it is deceptive. When companies like Google conduct research analyses, they do not want to change their algorithm, because that changes the kind of data they are collecting. Google indicates which search results are paid placements at the top of the list, and it filters “bad” content, such as child pornography.
From a business ethics perspective, the important point is customer (or user) expectations. Facebook users know that the News Feed is filtered and that ads are targeted based on user responses, searches, and interests. Facebook crossed a line when it manipulated the end product without users knowing, because Facebook was no longer providing the expected service.
Let’s take an example from another widely used, free service. People set up a Gmail account with the expectation that Gmail functions to send and receive email. Gmail recently started filtering inbox mail into categories such as “Primary,” “Social,” and “Promotions.” What if, for one week, Gmail decided to show only “positive” or only “negative” mail in your “Primary” tab, to see how that affects the emotional tone of your correspondence? Your mail would still be delivered to your Gmail account, but only certain messages would appear in the “Primary” tab. Gmail would have changed how it filters your email without your knowledge, for the purpose of seeing whether it changes your output. Since this could change the content of the emails you send, it could be considered tampering with email correspondence.
Let’s look at a second example. Customers place a certain trust in a product, whether they paid for it or not: they trust that the product will actually do what it promises. If you download a free weather application, you expect it to give you weather information. You don’t expect the app to access other data on your phone and transmit it to someone else without your knowledge. The promoted use of the app was weather, but its behind-the-scenes use was something different. Usually this kind of thing is referred to as “spyware.”
As the ethical inquiries continue, an important question will be whether Facebook’s experiment is more analogous to the hypothetical Gmail example and the spyware example, or to conventional marketing research.