Government Surveillance

We have already discussed in this course the concerns we all share about overreach in data collection (for example, during the weeks on privacy and algorithms). But so far those discussions have mostly been limited to how internet giants collect our data and use it to craft our online experiences. Moving toward government surveillance, programs like the NSA's PRISM collected "live information, photos, video chats and data from social networks" (West 2013, point 2) from individuals regardless of their level of suspicious behavior. For the average person, this may feel like an uncomfortable intrusion, but most people are busy and harmless, and this is not the hill they're willing to die on. Richards (2013) gives us a detailed explanation of this kind of monitoring, as well as the not-so-slippery slope of consequences it can bring. The consequence that struck me the most is Richards's discussion of sorting and discrimination. I believe this already happens. Ensuring people of color are held one step back is certainly nothing new in this country. Government data collection and interference could ensure people of color don't get access to the same information online (see also: net neutrality [!!!]), or that people of privilege get entirely separate news when they open their phones (not personalized echo chambers, but actual government media appearing differently to different demographics). The government could also affect who has access to their own technological equipment. We know the government can hack phones to record; is it so far-fetched to think it could also disable our cameras during, say, a riot, so that no one could capture police footage? Countries have done similar things during uprisings, blocking YouTube or "the internet" as a whole so footage could not be shared. What if this were only done to certain demographics?

On an international level, West (2013) discusses the distrust caused by the United States' surveillance of its allies (points 10, 12). Richards argues that surveillance at its core inherently "affects the power dynamic between the watcher and the watched" (p. 1953). There can never be a friendly situation in which one side monitors the other and things are still considered equal. Surveillance always means the watcher knows more about the watched than the watched knows in return, and that becomes a breeding ground for distrust and paranoia. In Germany especially, government surveillance of any kind is highly scrutinized and publicly frowned upon, because in former East Germany the Stasi ran one of the largest networks of citizen informants any government has ever created. Germans who lived through that era (and, I would argue, younger Germans, simply by learning from their parents) remember the wrath of the Stasi and the consequences of real government surveillance meant to "keep the peace." I believe the overreach by the U.S., while perhaps not an issue we see a lot of literature on at the moment, will not soon be forgotten and will forever affect our relationship with the country.

Crowdsourcing: Comments

https://jatdx311.wordpress.com/2017/11/12/crowdsourcing-a-misidentified-man-in-boston-bombing/comment-page-1/#comment-55

https://julygrace715.wordpress.com/2017/11/13/crowdsourcing-mcdonalds-crowdsourced-burgers/comment-page-1/#comment-64

https://newmediademocracyfall2017.wordpress.com/2017/11/13/crowdsourcing-mapping-the-brain/comment-page-1/#comment-50

 

Mobile Internet

Both Wijetunga's 2014 study in Sri Lanka and Baird and Hartter's 2017 study in northern Tanzania examine mobile phone, internet, and app use among people from varying demographic groups. In both studies, literacy (both linguistic and technological) was cited as a barrier to making full use of a phone's features. For example, Wijetunga mentions that young people in the "underprivileged" group were not able to use computer-mediated communication in the same ways as their "privileged" counterparts because they lacked computing literacy and, in most cases, the English literacy needed to read instructions. By comparison, Baird and Hartter discuss many more aspects of livelihood affected by mobile phone use (such as caring for livestock, marketplace trade, predicting weather, and disadvantages such as crime), but they also identify technological and linguistic literacy as barriers to using the full capabilities of a handset. Both articles discuss how phones strengthen or harm social relationships depending on how, and how frequently, the phones are used. The participants in the Wijetunga study (the privileged group) used social networking and messaging on phones to communicate, while the participants in the Baird and Hartter study used phones to communicate within and among their neighborhoods, tribes, and networks (though they also commented on the disadvantages of such communication, like decreasing attendance at community meetings).

 

A worthwhile study to further examine the usage and utility of mobile phones and mobile computing would test cell phone and mobile internet use among elderly users across the U.S. Older people in the U.S. are often simply assumed to be less technologically literate, but I wonder what older people are actually using phones for and how much phones affect their lives and relationships. For example, most of my older family members now have Facebook and/or Instagram and access them regularly from their phones. I wonder, in accordance with Uses and Gratifications theory, what benefits older people specifically feel this adds to their lives and relationships. The study would answer two questions: (1) What mobile apps or programs are most used by senior citizens (65+) who own a mobile phone? and (2) What benefits do those most-used apps provide? It would be a highly informative study, because the elderly are often overlooked in technology research on the assumption that most of them don't use or understand technology as well as younger users. However, the 65+ age range is still a large demographic by numbers, and knowing how and why they use mobile apps would give developers more insight into what kinds of apps should be developed or marketed for the elderly in the future. If it were better known which benefits senior citizens appreciate (for example, maybe they prefer Skype because they can easily video chat with grandchildren, which strengthens relationships), more could be done to encourage them to use technology, become more technologically literate, and continue closing the digital divide.

Digital Outlaws

Coming from communications backgrounds, many of us are somewhat familiar with the concept of “framing,” or the mental and physical structures we use to present, process, and interpret information. Media present information through “frames” to achieve a desired result. We, in turn, process every bit of information (a smile, text, fashion choice, social situation) through our frames of reference and perspective, which have been built up through a lifetime of enculturation and experience.
Collective action framing attempts to analyze and describe how social movements build and operate within their own frames (how they view themselves and their struggle in relation to the world), as well as how external frames are impressed upon movements and portray them to the public. These internal and external frames are often at odds. Take, for example, the "Take a Knee" movement among NFL players. The players say they are expressing themselves in order to highlight issues of police brutality, but the news often frames the movement as disrespectful to the American military. How individual viewers interpret the movement largely depends on the framing structures they already hold, which vary widely across demographics. In his article, Söderberg argues that hackers and hacktivists relate their call (or their being called) to action to the labor movements of the post-industrial era. He states that hackers "reinterpret [the labor movement's narrative] to give meaning to their own existence and the struggles in which they are immersed."

When John (2013) refers to the shift "from scarcity to abundance" in terms of how sharing (the word and the concept) has evolved, he is referring to the shift from pre- to post-industrial society. Before the industrial revolution, our largely agricultural and rural society lived such that every member of the family had to work the land in order to eat and survive. The labor was divvied up by "sharing" it; everyone pitched in. Relationships were often cultivated around what one could DO for the family (e.g., having lots of kids for free labor). As industrialization modernized the workforce and people moved into cities, relationships came to be built on the interpersonal skills one acquired in business and social settings. In a city, physical labor is minimal: we don't grow our own food or sew our own clothes; we go shopping. This initiated a restructuring of social relationships that instead valued conversation, intellect, and emotional "sharing." When resources are scarce, the labor involved in attaining them is "shared" in the division or distribution sense, so that everyone might survive. When daily resources are abundant, we focus mental energy on "sharing" as communication, so that others around us might (presumably) enjoy life and our company more. This shift in society, and in the meaning of the word "sharing," is an integral part of John's historical and contextual analysis of "file sharing."

Foreign Policy and the Internet: Brazil

Daniel O'Maley, in his article "How Brazil Crowdsourced a Landmark Law," discusses the lead-up to, reasoning behind, components of, and outcomes of Brazil's user-generated Marco Civil da Internet, or Civil Rights Framework for the Internet. This landmark legislation, passed in 2014, is essentially a Brazilian Internet Bill of Rights. The law safeguards users' privacy against corporate interests, mandates judicial review for the removal of material, and upholds net neutrality. But what is most interesting is how the law came to be. It was essentially a crowdsourced project: starting in 2009, each version of the bill was posted on a website for the public to debate, sent back to lawyers for legal revisions, and then returned to the public forum. The bill gained particular public support after the 2013 revelations that the U.S. NSA was collecting information and data on Brazilians, including then-President Rousseff. Rousseff signed the final version into law at the 2014 NetMundial summit, where it was heralded worldwide as a landmark piece of legislation, not only for its progressive content but for the innovative, participatory way it came to be.

This case supports Shirky's (2011) "environmental" view of social media and internet use. The internet forum in which anyone could participate (though, at times, only a few people did, possibly due to lack of interest, awareness, subject knowledge, access, or leisure time) created a public sphere that asked citizens to take part in their own law-making process. According to Shirky, the public sphere is slow to change, and access to conversation matters more for change than access to information. In the case of Brazil, which absolutely experiences more than its fair share of political turmoil (an impeachment in 2016 and another seemingly on the verge every few months) and public outrage and protest (over everything from public transportation prices to mega-stadiums), this Internet Bill of Rights brought active citizens together into their own public sphere to converse and decide their own fate.

Comor and Bean's discussion of the shortcomings of America's use of "engagement" in public diplomacy can be applied to O'Maley's references to the NSA's collection of Dilma Rousseff's communication data. Brazil and the United States have long had strong diplomatic ties. Learning about the spying scandal in 2013 deeply strained that relationship of trust, and I feel it is at best a good example of the failures of engagement policies, and at worst an example of outright deception. This breach of trust pushed Rousseff to implement the Internet Bill of Rights shortly afterward, seeing the legislation as a form of "repudiation of the use of the Internet for citizen surveillance and international espionage," as had been committed by the United States (O'Maley).

 

Viral Online Media: “Love Has No Labels”

In 2015, the Ad Council launched a campaign called "Love Has No Labels." In its first video, which went viral, participants stood behind a large X-ray-style screen so that the gathered crowd could see only their skeletons. The skeletons dance, hug, or kiss behind the screen, and then the participants poke their heads out from either side to reveal that they are a couple of the same gender, of different ethnicities, of different physical abilities, and so on. The video aims to show that under our skin we are all the same, and that love has no physical boundaries.

According to Berger and Milkman (2012), people are more likely to share media content that evokes positive feelings than content that evokes negative ones. This stands to reason, as people enjoy sharing content on social media that makes everyone feel good. While Berger and Milkman find that high-arousal negative content gets shared more than sad content, positive and awe-inspiring content was the most shared. The video also fits their finding that content evoking stronger activation or arousal is more likely to go viral. In this case, the emotion evoked is awe, as we are surprised and inspired by the couples who bravely declare their love in front of a crowd; the story is part of the emotional draw. As Berger and Milkman also discuss, people share content as a reflection of their egos and their self-presentation online. This video is a good example of that, as people likely want to be seen as unprejudiced, open-minded, and tolerant of everyone and every kind of love in society. Especially because this video went viral before marriage equality was national law, sharing it was a heart-warming way to share your stance on the topic without it feeling overtly political.

While the video has over 58 million views on YouTube, I couldn't find definitive numbers for how often it was liked, shared, and retweeted on Facebook and Twitter to compare with Alhabash and McAlister's research. But in line with their findings, it stands to reason that it was "liked" on Facebook more often than it was shared, since liking an item is a less involved way of expressing support for a concept like marriage equality.

According to Peretti's number one rule, "have a heart," this video definitely tugs at the viewer's heartstrings. The beautiful, powerful chorus of "Same Love" by Macklemore, Ryan Lewis, and Mary Lambert plays over the video while couples of mixed race, religion, ability, and age show their love for their friends or partners on stage, and the whole piece speaks to our adoration of family, bravery, and universal love. As for Peretti's other points, especially those toward the end of his list, they felt to me more like instructions on how to create clickbait than how to create viral media. While "Love Has No Labels" was also part of a marketing campaign, sponsored by Unilever, Coca-Cola, and other brands, ultimately it was social commentary meant to inspire and spread love and awareness, which I will take over a regular commercial any time. Peretti's list, by contrast, reads like a set of ways for budding web content creators to get clicks and go viral in order to build internet fame for themselves, which I can't say I have much appreciation for.

 

Algorithms

Algorithms are, at their most basic, sets of rules or mathematical procedures programmed into online services that try to give us the results we're looking for, usually based on what we have done before and on what other internet users have done before us. So, if I need a recipe for "chocolate chip cookies," the results Google gives me will likely be ordered by how many people have used each result before me and found it helpful. But algorithms aren't perfect. For example, if Taylor Swift suddenly had a hit song called "Chocolate Chip Cookies" and it got 80 million YouTube hits overnight, Google would probably feed me those results first.
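
To make that concrete, here is a minimal sketch in Python of popularity-based ranking (purely illustrative and assumed, not Google's actual system): results with more past clicks rise to the top, so an overnight spike in clicks for one result reorders everything.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    clicks: int  # how many users clicked this result in the past

def rank_by_popularity(results):
    # Order results by past popularity, most-clicked first.
    return sorted(results, key=lambda r: r.clicks, reverse=True)

search_results = [
    Result("Classic chocolate chip cookie recipe", clicks=120_000),
    Result("Food-blog cookie troubleshooting guide", clicks=45_000),
    Result("'Chocolate Chip Cookies' music video", clicks=80_000_000),  # the hypothetical overnight hit
]

for r in rank_by_popularity(search_results):
    print(r.title)  # the music video now prints first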

One place I know algorithms affect my life is my Netflix account. Most people are aware of, and fine with, the fact that Netflix recommends more shows and movies based on what they have watched or rated with a thumbs up or thumbs down. What is newer is that Netflix now attaches a percentage to each recommendation, showing how strongly it thinks you will like it. But the algorithm is still imperfect at best. If you watch something for even a few minutes and don't like it, it still factors into your next recommendations unless you explicitly give it a thumbs down. This is probably most evident with stand-up comedy. I watch a lot of stand-up and willingly try new comedians, but if I'm not laughing within a few minutes, I can tell it's not for me. By then, though, Netflix is already recommending similar comedians, recommendations I'm not likely to trust. Another issue is letting someone else use your Netflix account and having their taste skew your entire recommendations list.
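
As a rough, assumed illustration of that behavior (a toy sketch, not Netflix's real recommender), the following Python snippet treats anything in the watch history as evidence of taste unless it has been explicitly thumbed down, which is exactly why a few minutes of an unenjoyed special keeps shaping the suggestions.

def recommend(catalog, history, thumbs_down):
    # Collect genres from everything watched; a brief, unenjoyed watch still
    # counts toward future recommendations unless it was explicitly disliked.
    watched_titles = set()
    liked_genres = set()
    for title, genres in history:
        watched_titles.add(title)
        if title not in thumbs_down:
            liked_genres.update(genres)
    # Score unwatched titles by how many genres they share with the history.
    scored = []
    for title, genres in catalog:
        if title not in watched_titles:
            overlap = len(liked_genres & set(genres))
            if overlap > 0:
                scored.append((overlap, title))
    return [title for _, title in sorted(scored, reverse=True)]

history = [("Comedian A: Live", {"stand-up"}), ("Some Drama Series", {"drama"})]
catalog = [("Comedian B Special", {"stand-up"}), ("Nature Documentary", {"documentary"})]

print(recommend(catalog, history, thumbs_down=set()))                # still pushes more stand-up
print(recommend(catalog, history, thumbs_down={"Comedian A: Live"}))  # the thumbs down removes that influence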

 

I wouldn't say I can "relate" to this example, but the scene in Crawford's article that resonated with me most was the one in which, after the Boston Marathon bombing, an innocent man was misidentified as the bomber and became the target of an online manhunt. The Reddit community had to come to terms with the fact that it had started a witch hunt. This is a dark, dark consequence of algorithms, the internet, and democracy, and I think it also relates to our discussions of terror groups, echo chambers, and fragmentation. If it appears that everyone else on the internet agrees with an idea, you are much more likely to believe it is true, without any supporting evidence beyond the agreement of that one community. A few weeks ago, Chelsea mentioned an episode of the show "Black Mirror" in which a serial killer asks people on the internet to vote for who he should kill by putting a name on a hashtag, and the most retweeted hashtag "wins." I feel like the Reddit witch hunt is not so far off from that dystopian concept. Given the current political climate and the demonization of immigrants and minorities, I really fear something like this taking off quickly and having truly horrendous consequences.

Privacy on Tinder: Swipe Right to Learn More.

I’ve been in an open, toxic, love-hate relationship with Tinder since sometime in 2014. As if the world of dating weren’t awful enough in real life, why not add wading through the thigh-high swamp water of the internet to it?
Tinder only fairly recently started advertising to users. My ads started with upcoming movies, featuring a handsome leading man as if he were a user (What? Channing Tatum is in my area? Swipe right!). Now they showcase an item on the profile: like these boots? Swipe right for a code for 10% off! I must sadly agree with Taddicken's hypothesis and say that users like me accept ads like this because the social relevance of online dating trumps the desire for more privacy in one's dating life. Users like me likely fall into the category of people predisposed to some degree of self-disclosure. To try dating apps at all, one must accept a certain amount of self-disclosure and vulnerability in what one shares in order to get matches and dates. However, most users also want to retain some degree of privacy and anonymity, as the murky waters of dating can become hazardous fairly quickly, especially for women. Finding the balance between trust, disclosure, and privacy is incredibly difficult in this sort of environment.

 

As Fuchs discusses, most literature dealing with online privacy puts the user on the hook for disclosing information: any consequences are the fault of the user for oversharing and not protecting themselves. We see this when women are harassed online and then blamed simply for existing on the internet in the first place. Don't like getting harassed? Don't use the service. I agree with Fuchs's assessment that this view ignores the societal need for information technology. But the information shared on Tinder, no matter how personal or intimate, is bought and sold just as readily as your Facebook interests (possibly even more so, considering its personal nature). The fact is, online dating has become a normal and accepted way for millions to meet and find relationships. We've put our romantic lives online, now for the profit of advertisers.

 

Adding this very personal and intimate information into the mix of online behavioral tracking that Kovacs discusses puts users in a pretty scary situation. Having the most personal things people might discuss in Tinder conversations used as fuel for advertising back at them on Facebook oversteps the bounds of what I feel comfortable with as a user. Ultimately, Tinder matches are unpredictable strangers, and someone can quickly lead a conversation toward a topic I don't want to discuss. In another example, my Tinder matches often show up in my Facebook "People You May Know." This really makes me uneasy, because a Tinder match is far from a Facebook friend (supporting Taddicken's discussion of the levels and dimensions of disclosure). My Facebook, after more than 10 years with the service, is like my own home. I'm careful who I let in.

 

Nonetheless, I continue swiping for the time being, because, as Taddicken emphasizes, our behaviors don’t always match our awareness of risk.