A Technologically Deterministic Crash: The Case of Flight MH370

On Saturday, March 8th, Malaysia Airlines flight MH370 departed at 12:41 a.m. local time and was due to land in Beijing at 6:30 a.m. the same day. The flight was carrying 227 passengers and 12 crew members. Twenty days later, the only thing we can say with certainty about this flight, obvious as it may sound, is that the airplane went missing and its whereabouts remain unknown. Yet the puzzling question on everyone’s mind has been left unanswered: how could an aircraft like the Boeing 777-200ER simply vanish off the face of the Earth?

The motivation behind such a disquieting question lies in the trust and reliance we place in Boeing’s Triple Seven. The aircraft is built with state-of-the-art science and technology and, according to aviation specialists, is considered one of the world’s safest jetliners, with a near-perfect safety record. The 777 has transponders, sensors and communication equipment that, even when not triggered manually, still send data periodically and automatically. Mohan Ranganathan, an aviation safety consultant who serves on India’s Civil Aviation Safety Advisory Committee, said it was “very, very rare” for an aircraft to lose contact completely without any previous indication of problems: “The 777 is a very safe aircraft – I’m surprised” (The Guardian, 2014). The situation becomes even more intriguing given that the last known location of the airplane was the Strait of Malacca, which, along with the route MH370 flew and was scheduled to fly, is one of the busiest and most heavily radar-monitored airways.

Knowing that the event was so heavily surrounded by technology adds to our frustration: how could the best technology out there have failed us? It is not surprising that people turn to technology looking for answers. We expect technology to give us a single cause with a single effect, and to be predictive. Since the pieces of technology – the airplane or any debris – have not been found, no answers could be given, and because our society is so hung up on this nonexistent precise science, we pressure authorities for definitive answers. The Malaysian authorities, seeking a scientific answer and trying to look progressive, released a statement declaring that everyone on that flight is dead, based on a complicated and confusing mathematical analysis of satellite data. Unfortunately, the family members of the MH370 passengers received this discomforting answer through a text message on their mobile phones.

I’m not here to discuss the various theories that might explain the plane’s disappearance, or to say whether the passengers are alive or not. I’m trying to stress that our technologically deterministic hunger has led us into situations of absurdity, discomfort and frustration, as the MH370 case shows. Such a mindset made us look to a mathematical formula for the answer to a very complex social situation. The answer given by the Malaysian authorities is causing international and political tensions, since China is demanding that Malaysia hand over all relevant satellite data and analysis on the missing plane. If these frictions continue, they could compromise the efforts of the international search team, since nations unhappy with the way things are being handled could leave it.

Up to this point, the MH370 case is a clear example of technological determinism, to the point of being presented in “Introduction to Social Informatics” lectures alongside WIRED Magazine statements. It is too soon to draw any hasty conclusions, but in this case we can already notice a suspension of ethical judgment and the unintended consequences caused by “naïve science”. From now on, I hope the passengers’ families find greater comfort and that the authorities involved in this case become less technologically deterministic, even if society demands that they be so. As David E. Nye (2006) stated: technologies do not drive change; they are the product of cultural choices, and their use often has unintended consequences.

The Guardian (March 8th, 2014). “Malaysia Airlines: experts surprised at disappearance of ‘very safe’ Boeing 777”. Retrieved from: http://www.theguardian.com/world/2014/mar/08/malaysia-airlines-experts-surprised-at-disappearance-of-very-safe-boeing-777

Nye, D. E. (2006). Technology matters: Questions to live with (pp. 194-198). Cambridge, MA: MIT Press.

Digital Divide Research as a Practice of Big Data

Big Data seems to be the buzzword of the moment and the solution to all of society’s problems. We often hear of studies involving great amounts of data aggregated from Twitter, Facebook and so on. I truly believe these studies are valuable; they take snapshots of scenes, capture interesting moments at a specific time and give us an overall idea of a problem.

boyd and Crawford (2012) define big data as “a cultural, technological, and scholarly phenomenon that rests on the interplay of: (1) Technology: maximizing computation power and algorithmic accuracy to gather, analyze, link, and compare large data sets. (2) Analysis: drawing on large data sets to identify patterns in order to make economic, social, technical, and legal claims. (3) Mythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.” (p. 663)

Big Data is usually thought of as big numbers, the big N approached quantitatively. These numbers are generated from data produced by people who are online and constantly talking, sharing, posting, tweeting and “liking” things. But what about the people who do these things infrequently, or not at all? If we take Big Data and extend it to those experiencing digital inequalities, we would be imposing a colonial practice in which the voices of those constantly online obscure the voices of those who are not. These voices often clash across contexts, since they are rooted in social tensions and differences of power.

So, how can Big Data tell us the story of the people who are on the “wrong” side of the digital divide?

Mary L. Gray (2011) makes the case that Critical Ethnography is a practice of Big Data. She invites us to think of Big Data not solely in terms of numbers and quantitative approaches, but also as a practice able to balance the value of ethnographic significance and statistical significance. Big Data work is usually deeply concerned with amassing as many numbers as possible in order to gain some sort of reliability and statistical strength: the more data you can get, the more reliable the information is taken to be.

Qualitative work is often seen as too specific to tell us anything, but Gray argues the opposite: qualitative approaches tell us something different; they give us a different perspective on the story. Ethnographic significance should be integrated as a complement to, and in collaboration with, statistical significance, so that we get something transformatively different.

I agree with Gray. In an earlier post here on the Social Informatics Blog (Digital Divide Research: one myth, problem and challenge), I make the case that digital divide research should move beyond statistical charts, censuses and Big Data, and go into the field to tell us about the context of those who are not on the internet, or not as often, due to digital inequalities.

Big Data was the reason I ended up going to the slum of Gurigica in Vitoria, Brazil. According to the census, locals make very little use of the LAN houses and Telecentros inside the community. But if it weren’t for my ethnographic research, I would never have known that this was because the activities of the local drug cartel did not allow residents to circulate freely on the streets. Critical Ethnography is therefore a powerful tool for approaching the issues of the digital divide and for contextualizing the notions that Big Data gives us.

References (I highly recommend Gray’s video):

boyd, d., & Crawford, K. (2012). Critical questions for Big Data. Information, Communication & Society, 15(5), 662-679.

Gray, M. L. (2011). Anthropology as BIG DATA: Making the case for ethnography as a critical dimension in media and technology studies. http://research.microsoft.com/apps/video/default.aspx?id=155639

Digital Divide Research: one myth, problem and challenge.

The Myth: the digital divide has a small literature. Almost every book or paper on the topic says this. I used to believe that not enough work had been done on the digital divide, until I started studying for my qualifying exam. Fortunately or unfortunately, I found out that the literature is actually very large. The problem is that digital divide research is spread across all kinds of disciplines: ICT4D, Community Informatics, HCI, Social Informatics, Sociology and Communication Studies. In fact, the literature is not even new; it goes back to when academics were studying the diffusion of telephones and televisions.

The Problem: quantitative approaches are used to answer the wrong questions. Much of the research on the digital divide is quantitative, relying on data collected by the International Telecommunication Union, the World Bank and other agencies. These studies identify a digital gap and try to correlate it with some social, economic or political factor. For example, in a cross-country study, Luis Andres argues, based on his quantitative analysis, that bridging the digital gap requires liberalizing the telecommunications market to promote competition among internet providers. I agree, but Brazil has had such a free market for about 15 years and still has a vast digital divide; liberalization is clearly not the issue there, so something else must be keeping the divide wide. What I’m trying to say is that fully understanding the digital divide and proposing meaningful solutions requires local, context-based research. Whether that research is quantitative or qualitative is not an argument I want to get into, but we need to recognize that each country has its own set of policies and its people have different cultural backgrounds, so solutions need to be tailored rather than based on general, automated analyses.

The Challenge: how to talk to policymakers? Policymakers dealing with the digital divide tend to have a technologically deterministic perspective. They focus on single factors, such as “access”, because such factors are convenient and easy to measure. These simple measures can be used to influence public opinion, since lay people can relate to them. Policymakers also need to justify the allocation of resources, which is easier to do when they can create benchmarks (Barzilai-Nahon, 2006). So policymakers are hung up on numbers; how can we show them that subjective factors such as education and training can be of far greater value in promoting digital inclusion than pure access? I don’t want to blame policymakers for approaching the digital divide quantitatively, but I’d like to leave this challenge for us, digital divide scholars: to find a way to start conversations with people who can only see numbers.

References
Barzilai-Nahon, K. (2006). Gaps and bits: Conceptualizing measurements for digital divide/s. The Information Society, 22, 269-278.

Digital Inclusion in Brazil: a Social Informatics epistemological problematique

Brazil is currently the world’s fifth largest country, both by geographical area and by population. It is the world’s eighth largest economy by nominal GDP and one of the fastest growing major economies (World Bank, 2011). Given such outstanding macro indicators, it is sobering to look closely at Brazilian society, which is characterized by an abysmal gap between rich and poor. The marginalized poor are deprived not only of decent services for their basic needs but also of access to technology. About 47% of the Brazilian population has never used a computer, and 66% has never had access to the Internet. Of those who have had some sort of access to the Internet, 64% never received formal training in how to use it (CGI, 2006), which highlights the need for critical education and consciousness in its use.

The Brazilian government has been trying to fight this digital divide by introducing digital inclusion programs meant to socially include the marginalized population. Before moving on, I would like to revisit these terms, since they have different meanings but are often used as if they were the same. Digital divide refers to inequalities between groups in terms of access to and use of digital technologies; it is usually concerned with access statistics and can help us identify where the problem is situated. Digital inclusion refers to the process of democratizing access to digital technologies so that the digitally marginalized are inserted into the information society. For digital inclusion, access is not enough; the process should be concerned with empowering the marginalized and teaching them how to appropriate digital technologies.

Digital inclusion policies in Brazil take a technologically deterministic approach, in which policymakers are mainly concerned with giving the poorer classes access to technology. Issues such as empowerment and the appropriation of technology do not seem to be among their priorities. In 2005 the Brazilian government invested over $400 million in various programs, equipment, infrastructure and tools to give the poor population access to technology. The government was mostly concerned with lowering the price of computers and pushing them into people’s homes rather than providing social programs built around technology (Rebelo, 2005; “Info Plantao”, 2007).

Currently, the Brazilian government has two main strategies to promote digital inclusion: LAN houses and Telecentros. LAN houses are establishments where, as in a cybercafé, people pay to use a computer with Internet access on a local area network (LAN). According to the Brazilian Internet Steering Committee, LAN houses account for almost 50% of Internet access in Brazil, and in poor areas for 82% of accesses (“O Globo”, 2009). Although LAN houses are privately owned businesses, the government provides several credit lines and low-interest loans to expand the number of facilities, especially in poor areas. Telecentros are facilities where the general public can use computers for free; the computers are equipped with a variety of software and connected to the Internet, and computer classes are offered to the population throughout the year to fight the digital divide. Some Telecentro programs are run by the government and others by the private sector. Telecentros are usually implemented in low-income areas.

Because digital inclusion programs are relatively new in Brazil, and indeed in the rest of the world, little substantive research or theoretical literature exists on effective ways to measure the change brought about by providing access to ICTs (O’Neil, 2002). One reason for this gap is the flawed methodological approach of policymakers, who are mostly fixated on hard numbers and statistics. The “problematique” of digital inclusion should be approached with qualitative methods, which work well for exploratory studies in new fields, for monitoring their progress, and for offering a holistic view of a dynamic situation (Patton, 1990). In this way, digital inclusion research can build on Social Informatics, which provides theoretical tools that can assist researchers in considering and understanding the social factors influencing ICT use (Kling, 2000).

The topic of digital inclusion has not been fully explored through the lens of Social Informatics. Many analyses of digital inclusion policies have been done, but studies of users’ behavior, culture and attitudes towards digital technology are almost nonexistent. No one can yet say whether digital inclusion leads to social inclusion, because previous studies try to tackle that question in terms of numbers and, as I have already argued, it cannot be answered quantitatively. Digital inclusion has been my main research interest, and as a Social Informatics PhD student, my goal is to ethnographically explore the actual digital inclusion units (LAN houses and Telecentros), talk to people and understand their culture in order to answer these questions properly.

References

World Bank (2011, April 15). World Development Indicators database. Retrieved from http://go.worldbank.org/I358WVLTT0

CGI (2006, May 30). Survey on the Use of Information and Communication Technologies in Brazil: e-Government Indicators – Households and Enterprises. Retrieved from http://www.cetic.br/palestras

Info Plantao (2007). Retrieved December 7, 2011, from http://info.abril.com.br/aberto/infonews/082007/08082007-17.shl

Kling, R. (2000). Learning about information technologies and social change: the contribution of social informatics. The Information Society. 16(3), 217-232.

O Globo (2009). Retrieved December 7, 2011, from http://oglobo.globo.com/blogs/cat/posts/2009/06/19/lan-houses-caminho-da-responsabilidade-social-197174.asp

O’Neil, D. (2002). Assessing community informatics: a review of methodological approaches for evaluating community networks and community technology centers. Internet Research, 12(1), 76-102.

Patton, M. (1990). Qualitative evaluation and research methods. Beverly Hills, CA: Sage.

Rebelo, P. (2005, May 12). Inclusão digital: o que é e a quem se destina? Webinsider. Retrieved from http://webinsider.uol.com.br/2005/05/12/inclusao-digital-o-que-e-e-a-quem-se-destina/

Social Informatics, 9/11 and ICJS: an opportunity for research

From: Social Informatics: Principles, Theory, and Practices

(Sawyer and Tyworth)

We see integrated criminal justice systems (ICJS) as one area that presents a significant opportunity for social informaticists to both develop theory and contribute to practice. E-Government, or digital governance, is both an emerging area of scholarship and a fast evolving phenomenon in society. This is particularly true for issues of law enforcement and national defense where there is increasing pressure to computerize or modernize existing information and communication technology (ICT) given the recent attention to international terrorism (National Commission on Terrorist Attacks upon the United States, 2004). And, for at least the United States, it may be that there is no other area where the consequences of adhering to the deterministic view of ICT are as potentially catastrophic. In spite of these risks, the deterministic model continues to be advocated.

For example, in his article on improving intelligence analysis systems, Strickland (2004) focused exclusively on technological change as the solution to the problems of information sharing among agencies. Strickland identifies data disintegration, problems in analytical methodology, and technological obsolescence as the primary areas of concern. Yet, as Richard Shelby noted in his addendum to the report of the Senate Select Committee investigating pre- and post-9/11 intelligence (Shelby, 2002):

The CIA’s chronic failure, before September 11, to share with other agencies the names of known Al-Qa’ida terrorists who it knew to be in the country allowed at least two such terrorists the opportunity to live, move, and prepare for the attacks without hindrance from the very federal officials whose job it is to find them. Sadly, the CIA seems to have concluded that the maintenance of its information monopoly was more important than stopping terrorists from entering or operating within the United States.

Though Senator Shelby’s language is polemic, the message is clear: without significant changes to the organizational cultures, simply implementing new technological systems or updating existing ones will in many instances fail to achieve policy goals. It is exactly this type of problem for which social informatics theory is particularly applicable. An e-Government policy area directly related to the issue of intelligence sharing is the problem of integrating information systems among law enforcement and criminal justice agencies. Prior to, but especially after 9/11, there has been a significant movement within government to integrate ICT across law enforcement and criminal justice agency boundaries in order to facilitate cross-agency communication and information sharing. See for example (General Accountability Office, 2003).

Criminal justice information systems have historically been developed in an ad hoc manner, tailored to the needs of the particular agency, and with minimal support resources (either fiscal or expertise) (Dunworth, 2000, 2005; Sawyer, Tapia, Pesheck, & Davenport, 2004). As a result federal and state governments have begun the process of trying to develop and implement integrated criminal justice systems that allow agencies to share information across organizational boundaries. Examples of such systems are Pennsylvania’s Justice Network (JNet), the Washington D.C. metro area’s Capital Wireless Integration Network (CapWIN), and the San Diego region’s Automated Regional Justice Information System (ARJIS) among others.

We find ICJSs to be ideal opportunities to conduct social informatics research for three reasons. First, law enforcement is a socially complex domain comprised of and embedded in multiple social institutions (Sawyer, Tapia, Pesheck, & Davenport, 2004). Such institutions include organizational practice and culture, societal norms and values, and regulatory requirements. Second, law enforcement agencies have long been adopters of ICT, to the point where ICT are now so ubiquitous that they are viewed as integral to policing (Hoey, 1998). This remains true in spite of a decidedly mixed record of success (Baird & Barksdale, 2003; Bureau of Justice Assistance, 2002). Third, the historical practice of ad hoc and siloed systems development suggests that law enforcement is an area where new systems development approaches are needed.

Revisiting the term and categories of Social Networking Sites

Throughout the S604 Online Social Networking Sites lectures, we have been reading articles that attempt to define the term social networking site (SNS). Even though social networking sites became popular around 2002 and 2003 (boyd & Ellison, 2007), concerns and issues about people socializing on computer networks and the Internet go back to the 1990s. In 1996, Wellman et al. raised issues that are still relevant to today’s social networking sites, but at that time they were referring to computer networks in general.

Wellman et al. argue that when computer networks connect people as well as machines, they become social networks. The authors call these computer-supported social networks (CSSNs), which can sustain strong, intermediate and weak ties that supply information and promote social support in both specialized and broadly based relationships. CSSNs foster virtual communities that are commonly partial and narrowly focused. Communication within these networks was done mostly through electronic mail and computerized conferencing, usually text-based and asynchronous.

According to the authors, CSSNs “accomplish a wide variety of cooperative work, connecting workers within and between organizations who are often physically dispersed”. Like any other social setting, CSSNs have developed their own social norms and structures, which can limit as well as facilitate social control. CSSNs “have strong societal implications, fostering situations that combine global connectivity, the fragmentation of solidarities, the de-emphasis of local organizations (in the neighborhood and workplace), and the increased importance of home bases”.

It is interesting to note that, even though Wellman et al.’s definition dates from 1996, it is very close to the definition of social networking sites given by boyd and Ellison (2007). The main difference is that in 1996 the “social network” comprised the entire computer network, given its simplicity, whereas nowadays it is “only” a website. Wellman et al. do not talk specifically about impression management, but from their article we can infer that it was accomplished through a person’s language and writing style, as well as through their signature, which served as their profile.

The authors’ narrative gives the impression that the concept of CSSNs and their development are not technologically deterministic. In other words, technology (CSSNs) does not change society; it only affords possibilities for change. CSSNs are social institutions that should not be studied in isolation but as integrated into everyday life. This gives sociologists wonderful opportunities to research CSSNs and to help develop social systems, not just study them after the fact. As William Buxton once said, “the computer science is easy, the sociology is hard.”

As the years go by, social networking sites become more and more sophisticated, as well as more deeply embedded in our daily lives. Nowadays, almost every new website seems to be some sort of SNS, oriented towards connecting people, getting discussions going and sharing them with as many people as possible through Facebook, Twitter, etc. This SNS trend makes the definition of social networking sites seem somewhat fuzzy and uncertain. boyd and Ellison (2007) were the first to attempt to define SNSs as we know them today. Their study was conducted in 2006, when the use of SNSs (e.g., MySpace, Facebook, Orkut) was just reaching a fast pace of adoption.

Since “an Internet year is like a dog year, changing approximately seven times faster than normal human time” (Wellman, 2001), a lot has changed regarding social networking sites. The definition proposed by boyd and Ellison can still be considered valid, but re-reading it now, it seems too broad and vague, encompassing almost every new tool, since, as mentioned above, most of them are SNS-oriented.

Kaplan and Haenlein (2010) analyze this SNS trend on the Internet and come up with a more current and organized definition of social networking sites, as well as of other media oriented towards socializing. They first characterize three terms that are often confused by managers and academic researchers: Social Media, Web 2.0 and User Generated Content. Their take on these terms is:

  • User Generated Content (UGC) can be seen as the sum of all ways in which people make use of Social Media. It is usually applied to describe the various forms of media content that are publicly available and created by end-users (e.g., Wikipedia, blogs);
  • Web 2.0 represents the ideological and technological foundation; it is the platform for the evolution of Social Media (e.g., Adobe Flash, RSS, AJAX);
  • Social Media is a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0 and allow the creation and exchange of User Generated Content (e.g., YouTube, Facebook, Twitter).

With those terms defined, Kaplan and Haenlein go on to detail six different types of Social Media:

  • Collaborative projects: these enable “the joint and simultaneous creation of content by many end-users and are, in this sense, probably the most democratic manifestation of UGC.” The main idea underlying collaborative projects is that the joint effort of many actors leads to a better outcome than any actor could achieve individually (e.g., Wikipedia).
  • Blogs: these “represent the earliest form of Social Media” and “are special types of websites that usually display date-stamped entries in reverse chronological order (OECD, 2007)”.
  • Content communities: their main objective is the “sharing of media content between users. Content communities exist for a wide range of different media types, including text (e.g., BookCrossing, via which 750,000+ people from over 130 countries share books), photos (e.g., Flickr), videos (e.g., YouTube), and PowerPoint presentations (e.g., Slideshare)”.
  • Social networking sites: applications that enable users to connect by creating personal information profiles, inviting friends and colleagues to have access to those profiles, and sending e-mails and instant messages to one another. These personal profiles can include any type of information, including photos, video, audio files, and blogs (e.g., Facebook).
  • Virtual game worlds: “platforms that replicate a three-dimensional environment in which users can appear in the form of personalized avatars and interact with each other as they would in real life” (e.g., World of Warcraft).
  • Virtual social worlds: these “allow inhabitants to choose their behavior more freely and essentially live a virtual life similar to their real life. The users appear in the form of avatars and interact in a three-dimensional virtual environment; however, in this realm, there are no rules restricting the range of possible interactions, except for basic physical laws such as gravity” (e.g., Second Life).

Although Kaplan and Haenlein’s definitions and categories, from 2010, seem accurate and current, a new category already appears to be on the rise: Mobile Social Media. This category encompasses not only social apps (social applications for mobile devices) but also the ability for users to follow their friends around the world (e.g., Google Latitude). This new mobile social network is a lot like the CSSNs proposed by Wellman et al., but more sophisticated. It seems that mobile is taking control of the Social Media locomotive, so let’s hop on and see where it takes us.

Main Articles:

  • Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of social media.
  • Utz, S. (2000). Social information processing in MUDs: The development of friendships in virtual worlds.
  • Wellman, B., Salaff, J., Dimitrova, D., Garton, L., Gulia, M., & Haythornthwaite, C. (1996). Computer networks as social networks: Collaborative work, telework, and virtual community.

Cited Articles:

  • boyd, d. m., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), article 11.
  • Wellman, B. (2001). Computer networks as social networks. Science, 293, 2031–2034.