Big Data seems to be the buzzword of the moment and the solution to all of society’s problems. We often hear of studies built on great amounts of data aggregated from Twitter, Facebook, and so on. I truly believe these studies are valuable; they take snapshots of scenes, point us to interesting moments at specific times, and give us an overall idea of a problem.
boyd and Crawford (2012) define big data as “a cultural, technological, and scholarly phenomenon that rests on the interplay of: (1) Technology: maximizing computation power and algorithmic accuracy to gather, analyze, link, and compare large data sets. (2) Analysis: drawing on large data sets to identify patterns in order to make economic, social, technical, and legal claims. (3) Mythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.” (p. 663)
Big Data is usually thought of as big numbers, the big N approached quantitatively. These numbers are generated from the data people produce; people who are online and constantly talking, sharing, posting, tweeting and “liking” things. But what about the people who are not doing this frequently, or not doing it at all? If we take Big Data and extend it to those experiencing digital inequalities, we would be imposing a colonial practice in which the voices of those who are constantly online obscure the voices of those who are not. These voices often clash across different contexts because they are rooted in social tensions and differences of power.
So, how can Big Data tell us the story of the people that are on the “wrong” side of the digital divide?
Mary L. Gray (2011) makes the case that Critical Ethnography is a practice of Big Data. She invites us to think of Big Data not solely as numbers and quantitative approaches, but also as a practice that can balance the value of ethnographic significance against that of statistical significance. Big Data work is usually deeply concerned with amassing as many numbers as possible in order to achieve some sort of reliability and statistical strength: the more you can get, the more reliable the information is assumed to be.
Qualitative work is often dismissed as too specific to tell us anything, but Gray argues the opposite: qualitative approaches tell us something different; they give us another perspective on the story. Ethnographic significance should be integrated as a complement to, and in collaboration with, statistical significance, so that we can get something transformatively different.
I agree with Gray; in an earlier post here on the Social Informatics Blog (Digital Divide Research: one myth, problem and challenge) I made the case that Digital Divide research should move beyond statistical charts, censuses, and Big Data, and go into the field to tell us about the context of those who are not on the internet, or not on it as often, due to digital inequalities.
Big Data was the reason I ended up going to the slum of Gurigica in Vitoria, Brazil. According to the census, the locals have very low levels of access to the LAN Houses and Telecentros inside the community. But if it weren’t for my ethnographic research, I would never have known that this was happening because of the activities of the drug cartel, which did not allow residents to circulate freely on the streets. Critical Ethnography is therefore a powerful tool for approaching the issues of the Digital Divide and contextualizing the notions that Big Data gives us.
References (I highly recommend Gray’s video):
boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662-679.
Gray, M. L. (2011). Anthropology as BIG DATA: Making the case for ethnography as a critical dimension in media and technology studies. http://research.microsoft.com/apps/video/default.aspx?id=155639
I have always enjoyed fixing computers. This is not because of the challenges presented by the process of computer repair (although there is a certain amount of enjoyment to be found there as well) but because it is interesting to hear how people feel about their computers, both in their normal functioning and in their malfunctioning. There seemed to be a near-infinite number of ways people had come up with to make sense of the functioning (or malfunctioning) of these machines. I came to think of these quirky approaches to grappling with the black box of computational devices as little rituals. Cultural anthropologist Victor Turner describes rituals as symbolic actions, grouping them alongside other forms of symbolic action such as social drama and metaphor (4). However, I did not have a concrete definition of what a technological ritual was; I just knew it when I saw it.
Fundamental to these forms of symbolic action is the idea that rituals are activities that occur in the material world but have some importance beyond their material qualities. Metaphor has become an important aid for users in understanding the otherwise complex functioning of digital devices (e.g. 1). Digital technology also has its share of social drama: the Facebook relationship status is one way to solidify a romantic engagement between two people. Even ritual itself has been discussed in the context of computation. One study has examined how “ritualized interactions often play a major role in the performance and experience of the art or performance work” (2), while another has looked at how ritual activities could be used to make virtual characters seem more like real characters (3). However, art performances carry a kind of lofty ambition, and giving virtual characters rituals focuses on representing people in order to make them easier to interact with. I wonder how looking at the more everyday practices of people as they relate to technology could lead to a better understanding of both people and the technology they use. As an example of how to look at technological interactions in terms of ritual, I point to Merlin Mann’s Inbox Zero.
It is common to hear people complain about having too much email. It takes a lot of time to sort through all of one’s messages, it causes problems with missed communication, and it can make people feel overwhelmed by the amount of information they are receiving. As an answer to this problem, Merlin Mann describes Inbox Zero (http://inboxzero.com/), a way of handling email overload. At one level, this is a prescription of simple actions: sorting, removing, and addressing the demands presented in a person’s inbox. However, it is also a set of small actions that, in combination, hold a certain higher personal and social value. The empty inbox described by the process’s name not only reduces distractions when new email comes in, it also serves as a symbol of technological well-adjustment. It is social in the sense that the person’s relations to others are kept in check. The material of Inbox Zero is an empty inbox; its meaning is control of technology in a way that also incorporates interactions with other people.
This idea of ritual, as it pertains to technology, is still quite rough. However, as HCI has focused more on experiences and the designing thereof, the kind of duality of meaning that comes from ritual acts may prove to be a valuable way of understanding the relationships between the form and function of artifacts and the meanings that people ascribe to them. Looking at interactions as rituals may point to better understandings of digital artifacts and the people who interact with them.
1. Blackwell, A. F. (2006). The reification of metaphor as a design tool. ACM Transactions on Computer-Human Interaction (TOCHI), 13(4), 490-530.
2. Loke, L., Khut, G. P., & Kocaballi, A. B. (2012, June). Bodily experience and imagination: Designing ritual interactions for participatory live-art contexts. In Proceedings of the Designing Interactive Systems Conference (pp. 779-788). ACM.
3. Mascarenhas, S., Dias, J., Afonso, N., Enz, S., & Paiva, A. (2009, May). Using rituals to express cultural differences in synthetic characters. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems (Vol. 1).
4. Turner, V. W. (1975). Dramas, fields, and metaphors: Symbolic action in human society. Cornell University Press.
Designers tend to approach ideas from a certain bias, which may require some explanation. While design is focused on the process of creating artifacts, it is rarely a straightforward endeavor. Of particular importance is the accountability that comes from creating a new artifact, the ethics of design, so to speak. In the most general and common sense, the impetus is to solve a problem, and the solution is assessed on the basis of its efficacy. This can be thought of as the function of a particular design: what it does as a means of resolving a problem. The designer, in the ideal circumstance, builds that function into the artifact. In addition to this functional aspect, there is also a process of changing and reframing problems [see Nelson for more clarity on this]. This procedure carries with it yet another aspect of evaluation: the framing of the problem is judged on how well it captures some previously unconsidered aspect that is nonetheless integral to resolving the problem. To put this more simply, a design can fail procedurally due to improper problem framing, regardless of how well it functions, or it can fail functionally, regardless of how well the procedure of framing the problem goes. Either of these failings has implications for the designer. A failure of functionality indicts the designer on charges of poor craftsmanship, while a failure of procedure points to general ineptitude. The inverse is equally true: merit is given for functional and novel approaches.
While there are a number of good and bad designs in the world, this topic has been covered considerably, and so the nature of such evaluation will not be addressed here. The preceding is presented in the hope of identifying how a designer is ethically tied to the success or failure of an artifact. If this is taken as true, then what happens in the grey areas? If the two ends of the spectrum refer back to the designer, is it not reasonable that the middle ground has a similar effect? The situation above becomes socially relevant when one considers Winner’s argument that artifacts can have politics [Winner]. Those politics become built into the artifact both procedurally and functionally, both with implications for the designer. In the case of Winner’s examples, Moses’s bridges are problematic due to their function: their function is limited by the way they were made. Alternately, the tomato harvester suffers from a procedural issue, namely that the framing of the problem showed greater concern for efficiency and cost-effectiveness than for the economic and ecological consequences of mechanization. In both cases, Winner’s description seems to fit well within a model of accountability as prescribed by design. But let’s suppose a situation where the decisions are not quite so clear. As an example of such a situation, consider this Pennsylvania polling station.
In a Philadelphia polling station in the 2012 election, one of the booths had a problem with candidate selection. When the space on the screen occupied by Barack Obama’s name was pressed, the box for Mitt Romney would be checked. Now, in a situation similar to Moses’s bridges, it could be imagined that this machine was designed with the specific intent of favoring a particular candidate. This would be a functional aspect, in that the artifact’s functioning had a specific bias. But let us suppose that the first inclination of the person who posted the video (going into “troubleshoot mode”) is correct and the problem is a malfunction rather than a deliberate decision. It seems reasonable that a touchscreen could break, particularly if used repeatedly (as would be the case at a polling station). Then it would seem that the accountability falls upon the individual who chose that particular touchscreen, making the issue procedural: rooted in a concern for cost over functional robustness. This need not imply any political orientation with regard to Romney and Obama, but it certainly represents a political statement nonetheless. However, suppose that such was not the case. Suppose, rather, that the reduced size of one option’s button was the result of a contextual issue. A power surge, a component broken during shipping, or any number of events that had happened to that specific machine could be at fault. In such a case, what would be the ethical standing of the designer? Would the complexities of the context have produced a newly emergent political stance without an actor behind it, or is there an implication at the level of deciding to use such a machine in the first place?
If that sounds somewhat far-fetched, consider the 2010 “Flash Crash.” Sommerville et al. describe how a $4.1 billion block sale that was “executed with uncommon urgency” resulted in a “complex pattern of interactions between high-frequency algorithmic trading systems… that buy and sell blocks of financial instruments on incredibly short timescales” [Sommerville]. The systems employed had functioned together well, until that context arose. But when that context DID arise, roughly $800 billion disappeared [ibid]. As in the final hypothetical situation regarding the voting booth, it becomes difficult to consider the ethical position of the designer(s). Both describe systems of systems (the algorithms in the market and the technological parts of the voting machine). Both also describe situations where the final result is emergent, as opposed to deliberately created. Risatti makes a distinction between function and emergent application, which he calls use (Risatti). It would seem that these issues fall more under the latter than the former, and, because use is not constructed into the artifact in the way that function is, that the designer is somewhat free from blame. After all, designers cannot be expected to predict the future, can they?
As a somewhat unsettling conclusion to this case study, what happens when the model of accountability defined by function and procedure becomes less common? It is becoming more difficult to consider any one technology in isolation. Phones sync to computers that sync to bank accounts; information is stored in a cloud where multiple people, from multiple devices, can access it. Systems of technology are moving toward systems of systems of technology. As this continues, the chances for emergence also increase. Buried in this complex scenario is a question as lucid and cutting as the one Winner poses: if artifacts have politics, do systems have politics as well? It seems evident that the answer is a resounding “yes.” However, that answer only leads to a more worrisome question. If systems have politics, who is accountable for those politics?
Nelson, H. and Stolterman, E. (2012) The Design Way: Intentional Change in an Unpredictable World. 2nd ed. MIT Press.
Winner, L. (1986) Do Artifacts Have Politics? The Whale and the Reactor: A Search for Limits in an Age of High Technology. U. Chicago Press: 19-39.
Sommerville, I., Cliff, D., Calinescu, R., Keen, J., Kelly, T., Kwiatkowska, M., McDermid, J., and Paige, R. (2012) Large-Scale Complex IT Systems. Communications of the ACM 55(7): 71-77.
Risatti, H. (2007) A Theory of Craft: Function and Aesthetic Expression. U. North Carolina Press.
In the New York Times today there is an article about Google X, the top-secret lab for big ideas at Google. According to the article, the future being imagined here is “a place where your refrigerator could be connected to the Internet, so it could order groceries when they ran low. Your dinner plate could post to a social network what you’re eating. Your robot could go to the office while you stay home in your pajamas. And you could, perhaps, take an elevator to outer space.”
This is indeed a compelling vision… maybe. Am I the only one who finds this future a little underwhelming, maybe even problematic and dysfunctional? For one thing, aren’t there already enough what-I-had-for-lunch tweets without plates getting in on the action? And what if the plate (because of course it has artificial intelligence) decides to chime in with some commentary: ‘pizza leftovers again?! @John’sMom are you seeing this?’.
And while staying at home in pajamas does sound pretty attractive, how does sending your robot into the office help? Does it make typing noises at your computer so people think you’re there? Does it go to meetings for you? Does it make decisions for you? What if it messes up? Could you really relax at home in your pajamas knowing that your robot might create a huge mess (bureaucratic or physical) that you will need to clean up? What if your robot knows how you really feel about your coworker and gets into a fight with your coworker’s robot? Could your robot be fired? Could your robot get you fired? Could it get promoted? Who would be held responsible for its actions: you, the robot, the robot’s designer? Would the robot have a moral compass, and if so, whose? Would everyone send their robots in for them, so the workplace would be entirely robots? Would it be all the same to them if the lights and heat were shut off to save electricity? Would there be robot unions to protest this mistreatment?
And then there’s the grocery-ordering refrigerator. This seems to be one of the most common images of a digital future of pervasive computing, no doubt inspired by a moment of watching the last few drops of milk drip onto still-dry cereal and thinking ‘man, I wish the refrigerator could have just taken care of that.’ But what kind of groceries would it order? It stands to reason that a digital refrigerator might need to deal in SKUs, which would make it easy to order more frozen pizza but maybe more difficult to order ‘the best-looking local in-season fruit’. Also, what infrastructure would this require? In addition to the refrigerator, the ordering system would need to be in place on the grocery store end, as well as maybe a delivery service. It’s hard to imagine smaller markets being able to invest in this, and vendors at the local farmers’ market would be out of the loop entirely. This would undoubtedly be unproblematic for many people, but it is significant that these biases could be encoded in technical systems that could encourage already-existing (unhealthy) habits to become even more entrenched.
As Langdon Winner has argued, technologies shape forms of life: technology design is ultimately about choosing ways of living, of ordering the world around us and our activities in it. While geeky technophiles tend to do a pretty good job of dreaming up some very cool and labor-saving technologies, they are less good at envisioning the forms of life that they might institute.
This is where more nuanced and critical approaches like Social Informatics might be useful. As scholars who study the social dimensions of technologies, we are used to teasing apart their various social, cultural, philosophical, historical, political, and ethical aspects, and looking at them critically. These aspects are just as important as, if not more important than, technical feasibility, yet they are discussed far less frequently (if at all) during technology development and assessment. Maybe one of the reasons for this is that our existing critical approaches focus on technologies that already exist, not ones that have yet to be implemented.
But why should geeks working at big corporations with deep pockets be the ones who get to decide what our (digital) future should look like? What sorts of futures might Social Informatics scholars envision? And as we’re imagining futures, could we also maybe move past our own laziness to consider how we might build a future with less inequality and more justice, less stress and more health, less poverty and excess and more true wealth and happiness?
All of these may sound like unattainable goals. But imagining a future in which they are true would be a first step toward making them a reality. And I would take that over a ‘smart’ refrigerator any day.