Last fall, I did something that, in retrospect, feels like the height of masochism: I enrolled in a standard, first-term grad-level microeconomics course. My colleagues in the iSchool program probably thought I was crazy. Heck, I thought I was crazy. You might imagine that, looking back, I would say that I’m glad I took it. I will say that, but not for the reasons you might think.
But perhaps I should introduce myself. I’m a second-year PhD student interested in studying ICT and development. Although I studied econ as an undergrad, multiple decades ago (eek!), it really wasn’t my intent to study it at the grad level. But I now find myself gravitating back in that direction in my work.
Why pedagogy matters
So, let’s return to this entry-level econ course. Frankly, pedagogy in econ seems to be stuck in about 1950. The idea of students co-creating a learning environment would be laughed at. Come in, the instructor writes some stuff on the board, you take notes, later on you take an exam. Questions are rare. Feedback as to whether anyone is learning anything is non-existent. (The TA, not the instructor, gave me the understanding I needed to pass that course.) Colleagues at other institutions tell me that this is not a phenomenon specific to any one place. Some econ professors are better teachers than others, but in general the standard seems quite low.
This is a real shame, because sometimes the discipline of economics feels more like a fortress than a silo. Another reason for this, closely related, is a historical emphasis on a certain kind of mathematics: generating airtight proofs of theoretical propositions while historically marginalizing empirical findings. I tend to think of this as characterizing a new science trying to prove itself, indeed trying to prove all of social science, in the gung-ho positivistic mid-1800s. Alas, econ seems to still live in that world, where credibility is established by being mathy and theoretical.
It’s fine to value this sort of analysis, but it tends to shut out valuable contributions from those who don’t think as abstractly. That’s also too bad, because there’s a lot of really valuable cross-disciplinary work that could benefit from incorporating economic models; this work could not only be improved, but could also give back to economics by enhancing those models. But it doesn’t happen.
Why economics matters
So what? At its root, economics is the study of how humans respond (usually rationally) to incentives. ICTD frankly makes no sense without a consideration of incentives. Will some new policy intervention have the desired effect, or won’t it? Will technology improve people’s lives, or will it hurt them? These aren’t easy questions, but they’re impossible questions if we have no basis to predict how individuals will respond to changes in their capabilities. (And really, isn’t the promise of ICT supposed to be about broadening capabilities? Or if you prefer the contrarian view, to argue ICT on balance has a negative effect, don’t you still need to address changing capabilities?)
To be sure, economics also involves a particular toolkit, oriented around models tested through quantitative analysis. It’s not always the best toolkit for this job of predicting behavior, and it might never be the only relevant toolkit. Anthropology, sociology, political science, psychology — all of these have an immense role to play in ICTD, and we need interdisciplinary ICTD scholars who can speak the different languages of their own respective disciplines and of the interdisciplinary field of ICTD. (The same need is no doubt there for fluency in more interdisciplinary fields like communications, international studies, education, women’s studies, and indigenous studies.)
I don’t really know all that much about econ, relative to, you know, actual economists. Frankly, I err on the side of being too brash in my criticisms of it. In giving my outsider’s impressions of the field, I’ve probably made assertions that are highly debatable. If any economists are reading, I’d like to have that debate.
The good news is, I do see a lot of change in the field from my undergrad days. Development econ seems to be much more about generating smaller theoretical models that help us understand empirically observable reality, rather than fitting everything into canonical theories come hell or high water. This quarter, our iSchool has started cosponsoring (with the econ department and the public policy school) a series of seminars on development econ that’s attracted interdisciplinary involvement. So I’m really optimistic that different fields are starting to talk to each other.
Was I a masochist to press on with my micro class? Probably. Will I use those specific theories discussed, or be called upon to derive proofs? Maybe, maybe not. But you can’t enter the conversation if you don’t speak the language. ICTD needs more folks learning the language.
During the last year of my undergraduate education, I (Shad) had my first encounter with the video games are(n’t) Art debate. While there was certainly a lot of passion surrounding the argument, the logic was somewhat lacking. One side seemed to center on the fact that Grand Theft Auto: Vice City (which had been released earlier that year) contained vestiges of the stylistic aesthetic of the 1980s and presented a compelling point for social engagement with a distinct cultural setting from recent history. Alternatively, the opposing side argued that these elements were simply superficial, and that the game’s message, at least in terms of any artistic merit, did not represent a real cultural statement to the degree required by the title of Art. However, these arguments centered on the games themselves, as if Art were a property that is intrinsically part of some artifacts and intrinsically not part of others. In short, the argument had missed the social connections that surround Art evaluation: the relationship between the concepts of Art, the Art World (composed of critics and consumers of Art who ascribe a cultural value as well as a monetary value to Art objects), and the objets d’art themselves. At the time, I felt as if the current courses of debate were never going to result in any kind of conception of video games as Art, and that it would be a while before the discourse would develop to a point where video games could be spoken of as Art.
Ten years have passed since that point. At CHI 2013, the opening plenary was presented by Paola Antonelli, Senior Curator of Architecture & Design and Director of Research & Development at MoMA. Her presentation focused on exhibitions at MoMA that have looked at video games as Art and, more broadly, on the importance of the relationship of design to art (and vice versa). It seems that, at least in practice if not in theory, my question from a decade earlier has been partially answered: video games are beginning to be treated as Art is treated. Even if design, as an applied art, acts only as an indicator or close relative of Art rather than a true member of the club, video games, as designed experiences, can equally be treated as art and appreciated in a manner similar to Art.
Art and art theory have had a history of relevance to HCI, as is especially evident in the ACM SIGGRAPH Digital Arts Community (http://siggrapharts.ning.com/) and in example topics including (but not limited to) the convergence of the goals of Art and HCI (e.g., Sengers and Csikszentmihályi, 2003; Blythe et al., 2013), collaboration (e.g., Adamczyk et al., 2007; England, 2012), and creativity support (e.g., Morris and Secretan, 2009; Kerne et al., 2013). While not a comprehensive list, these examples suggest that there is some connection between the aims of HCI and Art, that there are challenges in connecting the two (both in terms of aims and in terms of what is considered valuable in a piece of art), and that supporting art is one possible goal for interaction design. As an overarching theme, there are elements of Art that are important to the practice of HCI and the creation of technology in general, but there are both practical issues, such as the economics of art and the concerns of would-be collaborators, and theoretical issues, such as the density of art theory. Supporting creativity makes a convenient bridge point because it is a concept of equal importance to art and to HCI.
Returning to the consideration of video games as art: from the side of technological design, there is some convergence between the two, and this has warranted looking at art as a means of understanding interaction design. As partial confirmation, it would seem that the Art World (of which MoMA is certainly a part) has an interest in looking at some of the results of interaction design, including video games. So, for both stakeholders in the discourse, there is a benefit to treating video games like Art. But again I return to the question of the discourse surrounding video games, which I believe leads straight to the questions of why and how to study them.
So why should it matter that video games are beginning to be considered as art is considered? First, it means that there may be even greater cause to take video games seriously. Not just Games With a Purpose or games that are explicitly made to embody a political statement (such as the excellent games created by Lucas Pope (http://dukope.com/)), but video games in general. Previous work has already started to look at video games from an ethnographic standpoint (see Boellstorff et al., 2012, as well as the individual works of all its authors) and through more quantitative approaches that look at data taken from play (e.g., Yee et al., 2011). There have also been calls for a much more in-depth study of games as a source of “social rationality,” taking a more critical stance toward their content (Grimes and Feenberg, 2009). As a continuation of this trend, it seems that the way games are observed, as both an aspect of social engagement and a reflection of society in general, needs to change. As more artistic elements become prevalent in a greater number of games, it will be important to understand how these elements developed in a historical sense. Even in games that do not attempt to challenge the norms and folkways of virtual worlds, as players become more aware of video games as art, their performances within those games may very well change with respect to this perception. Looking at virtual worlds as an indicator of social phenomena, then, not only admits a number of different approaches but seems to demand them to varying degrees. Employing the tactics of art theory and new media alongside ethnographic investigations and analysis of data traces may very well result in new understandings not only of games, but also of society and art.
It is an exciting time for the study of games. While they now have an increasing number of different meanings to different people, the fact that they have importance is becoming more difficult to ignore. However, along with the increased potential of game studies, there is also a necessity to broaden the approaches used to study virtual worlds.
Adamczyk, P. D., Hamilton, K., Twidale, M. B., & Bailey, B. P. (2007). HCI and new media arts: Methodology and evaluation. In CHI ’07 Extended Abstracts, pp. 2813-2816.
Blythe, M., Briggs, J., Hook, J., Wright, P., & Olivier, P. (2013). Unlimited editions: Three approaches to the dissemination and display of digital art. In Proc. CHI ’13, pp. 139-148.
Boellstorff, T., Nardi, B., Pearce, C., & Taylor, T. L. (2012). Ethnography and Virtual Worlds: A Handbook of Method. Princeton University Press.
England, D. (2012). Digital art and interaction: Lessons in collaboration. In Proc. CHI ’12, pp. 703-712.
Grimes, S. M., & Feenberg, A. (2009). Rationalizing play: A critical theory of digital gaming. The Information Society, 25(2), 105-118.
Kerne, A., Webb, A. M., Latulipe, C., Carroll, E., Drucker, S. M., Candy, L., & Höök, K. (2013). Evaluation methods for creativity support environments. In CHI ’13 Extended Abstracts, pp. 3295-3298.
Morris, D., & Secretan, J. (2009). Computational creativity support: Using algorithms and machine learning to help people be more creative. In CHI ’09 Extended Abstracts, pp. 4733-4736.
Sengers, P., & Csikszentmihályi, C. (2003). HCI and the arts: A conflicted convergence? In Proc. CHI ’03, pp. 876-877.
Yee, N., Ducheneaut, N., Yao, M., & Nelson, N. (2011). Do men heal more when in drag?: Conflicting identity cues between user and avatar. In Proc. CHI ’11, pp. 773-776.
By Grant Webb
Many people don’t automatically think of the human element when they think of technology, but people and technology can’t help but influence each other. This mutual influence, which forms the basis of the field of social informatics, can be seen in the way that we use technology and the way that technology shapes our daily lives. Social informatics involves the study of information and communication tools in cultural or institutional contexts. Specifically, it examines the social aspects of computerization and its role in social and organizational change as well as how social practices influence information technology.
One of the most important contexts for social informatics is healthcare. Historically, healthcare has been a paper-intensive industry, as practitioners kept printed copies of patient records and created written orders for tests and medications. Perhaps due to habit, or possibly due to mistrust of or unfamiliarity with computers, many healthcare professionals continued to rely on paper-based systems long after computerization gained wide acceptance and usage within the field.
One significant problem with paper-based systems is the lack of consistency in how records are filled out and maintained and how long they are stored. Individual doctors, nurses and other providers often have their own ways of recording notes and updating patient records, even when they hold the same job title within the same institution. Thus, records differ from doctor to doctor, nurse to nurse and facility to facility, which introduces inconsistency and fosters miscommunication. These differences can also lead to a variety of errors that can negatively affect patients.
In addition to the differences in the ways that individuals keep records, manual record-keeping typically introduces a significant amount of human error, which in turn increases medical errors. Medical errors range from relatively minor impacts, such as ordering unneeded diagnostic tests, to major impacts that can put a patient’s life at risk. When one provider’s personal record-keeping habits conflict with those of other providers, paper-based systems become detrimental to patients’ wellbeing. Discrepancies inherent in paper systems can also inhibit information sharing, collaboration and the expansion of collective knowledge.
As a result of various medical errors over the years, the federal government mandated that healthcare providers implement electronic health records by January 2014. This mandate, part of the HITECH Act of 2009, represents a drastic change for the healthcare field, made in an effort to reduce medical errors and streamline healthcare delivery, and it has broadened health informatics job offerings as a result. The electronic health record requirement has prompted many healthcare providers to abandon manual record-keeping practices. In turn, this increasing implementation of electronic health records has led to the rapid expansion of health informatics.
Health informatics combines information technology, health science information and patient data to enhance and support clinical care, health services, administration, research and education while helping to contain costs and increase efficiency. Health informatics relies heavily on healthcare information technologies, such as electronic health records, computerized physician order entry and decision support systems, but the implementation of these technologies is only as good as the people who use them. Management, clinicians and health information technology staff often assume that healthcare information technologies will deliver the results promised by vendors. As a result, they may unintentionally overlook the impact of interactions between new technologies and the existing sociotechnical environment. In the same manner, those who take for granted that technology will improve things may underestimate the contributions of clinical judgment and interaction with patients.
Healthcare providers are often quick to blame undesirable consequences and implementation failures on new technology. In reality, although technical issues are sometimes at the root of the problem, negative outcomes of healthcare information technology more often stem from the providers themselves due to differences between the new technology and the existing social and technical systems.
Health informatics can help pinpoint needed changes to existing sociotechnical arrangements, such as workflows, culture and technology, to minimize negative outcomes and maximize the benefits of healthcare information technology. These benefits include improved patient safety, increased positive patient outcomes and greater levels of efficiency.
Grant Webb is an SEO Specialist at Bisk Education
Questions about the practice of ethnographic research, both as a method and as an analytic way of knowing, have been a focus of my dissertation work. The new Ethnography and Virtual Worlds: A Handbook of Method by Boellstorff, Nardi, Pearce, and Taylor has been helpful for thinking through my own ethnographic experiences. Although the subjects of my research do not inhabit virtual worlds as defined within this handbook, the bulk of their interaction occurs through networked digital media. The handbook defines a virtual world as requiring the following traits: place, worldness, multi-user, persistence, and user embodiment (p. 7). The groups that I study construct a social world (Star and Clarke 2007) that exists offline and online across many different media platforms (for example, interaction happens in person, through text messaging, and across Twitter, YouTube, Facebook, and other online media), and as such they do not inhabit a particular virtual place. I have called this type of social engagement transmediated sociality (Terrell 2011).
While Boellstorff et al. encourage ethnographers of virtual worlds to follow their informants into contexts that extend beyond the in-world platform around which they are centralized (for instance, Second Life or World of Warcraft), whether online (blogs, message forums, Facebook) or offline (meetups and conferences), the ethnography of groups that are decentralized and spread across many online/offline spaces might differ in nuanced but meaningful ways.
Doing ethnographic research with groups that are highly transmediated has presented a number of different challenges. Participant observation, a key component of ethnographic research, can be particularly challenging in transmediated settings. In my experience, participant observation can happen in two different ways. First, one can attend, participate in, and observe events that are more formal and scheduled; in my work this is something like attending a wizard rock concert or a festival, which may be digitally mediated or in person. The second way is to just hang out: to be around to interact with others or to observe interactions and cultural production as they happen in mundane everyday interaction, without a scheduled event.
Learning, knowing, and deciding where to hang out seems to be the most difficult aspect of participant observation of transmediated groups, because one’s informants could be, and indeed are, hanging out in several different spaces all at once. As researchers we must struggle to define our field site. This never seems to be a simple task, even when our field site is apparently tied to a specific space; we must make choices about whom and what we include within our study. This is true for sites both virtual and non-virtual. While I recognize the difficulty in defining one’s field site, I wonder to what extent the transmediated nature of the groups that I study gives this struggle a new dimension.
In what ways is the lack of persistent placeness needed for the construction of a virtual world a challenge to the construction of the ethnographic field site? How does one decide where to hang out when the people she is studying could be interacting in several other mediated spaces? Are the challenges faced by the ethnographer of transmediated groups different than those faced by the ethnographer of virtual worlds where place is more strongly defined and more centrally located?
These are of course broad questions, but they are issues with which I struggle. I would love to hear your thoughts and experiences.
Boellstorff, T., Nardi, B., Pearce, C., & Taylor, T. L. (2012). Ethnography and Virtual Worlds: A Handbook of Method. Princeton, NJ: Princeton University Press.
Star, S. L., & Clarke, A. (2007). The social worlds framework: A theory/methods package. In Hackett, E., Amsterdamska, O., Lynch, M., & Wajcman, J. (Eds.), The Handbook of Science and Technology Studies (pp. 113-138). MIT Press.
Terrell, J. (2011). Transmediated magic: Sociality in wizard rock. In Proceedings of the International Conference on Information Technology: New Generations (ITNG 2011). IEEE.
Big Data seems to be the buzzword of the moment and the solution to all of society’s problems. We often hear of studies involving great amounts of data aggregated from Twitter, Facebook and so on. I truly believe these studies are good; they take snapshots of scenes, let us know of interesting moments at a specific time and give us an overall idea of the problem.
boyd and Crawford (2012) define big data as “a cultural, technological, and scholarly phenomenon that rests on the interplay of: (1) Technology: maximizing computation power and algorithmic accuracy to gather, analyze, link, and compare large data sets. (2) Analysis: drawing on large data sets to identify patterns in order to make economic, social, technical, and legal claims. (3) Mythology: the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy.” (p. 663)
Big Data is usually thought of as big numbers, the big N approached quantitatively. These numbers are generated from the data people produce; people who are online and constantly talking, sharing, posting, tweeting and “liking” things. But what about the people who are not doing these things frequently, or not doing them at all? If we take Big Data and extend it to those experiencing digital inequalities, we would be imposing a colonial practice in which the voices of those constantly online obscure the voices of those who are not. These voices often clash in different contexts, since they are rooted in social tensions and differences of power.
So, how can Big Data tell us the story of the people that are on the “wrong” side of the digital divide?
Mary L. Gray (2011) makes the case that Critical Ethnography is a practice of Big Data. She invites us to think of Big Data not solely in terms of numbers and quantitative approaches, but also as a practice that can balance the value of ethnographic significance and statistical significance. Big Data is usually deeply concerned with amassing as many numbers as possible in order to achieve some sort of reliability and statistical strength: the more you can get, the more reliable the information is.
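The statistical intuition behind that reliability claim can be made concrete with the standard error of a sample mean, which shrinks only with the square root of the sample size. This toy sketch is purely illustrative and not part of Gray’s argument:

```python
import math

def standard_error(std_dev: float, n: int) -> float:
    """Standard error of the sample mean: sigma / sqrt(n)."""
    return std_dev / math.sqrt(n)

# With the same underlying variability (std dev = 10), each
# hundred-fold increase in N shrinks the error only ten-fold.
for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}  standard error = {standard_error(10, n):.3f}")
```

The diminishing returns are suggestive: piling on more observations buys statistical strength, but no amount of N supplies the contextual understanding that ethnographic significance captures.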
Qualitative work is often seen as too specific to tell us anything, but Gray argues the opposite: qualitative approaches tell us something different; they give us a different perspective on the story. Ethnographic significance should be integrated as a complement to statistical significance, so that we can get something transformatively different.
I agree with Gray; in an earlier post here on the Social Informatics Blog (Digital Divide Research: one myth, problem and challenge), I made the case that digital divide research should move beyond statistical charts, censuses and Big Data, and go into the field to tell us about the context of those who are not on the internet, or not as often, due to digital inequalities.
Big Data was the reason I ended up going to the slum of Gurigica in Vitoria, Brazil. According to the census, the locals have very little access to the LAN Houses and Telecentros inside the community. But if it weren’t for my ethnographic research, I would never have known that this was happening because of the activities of the drug cartel, which didn’t allow them to circulate freely on the streets. Critical Ethnography is therefore a powerful tool for approaching the issues of the Digital Divide and contextualizing the notions that Big Data gives us.
References (I highly recommend Gray’s video):
boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662-679.
Gray, M. L. (2011). Anthropology as BIG DATA: Making the case for ethnography as a critical dimension in media and technology studies. http://research.microsoft.com/apps/video/default.aspx?id=155639