Monthly Archives: November 2012

On Building Social Robustness and Enduring Computing

As many of you know, I am now directing the Social Informatics (SI) Group in the School of Informatics and Computing (SoIC) at Indiana University Bloomington. The SI group is unusual within Informatics/Computer Science/Information Studies in that it has chosen to orient itself explicitly to the field of Science, Technology, and Society (STS, also referred to as Science and Technology Studies). I am also thinking about retirement in the next 3-5 years. Being in these situations has shaped the research agenda that follows.

My current research is framed generally within Socially Robust and Enduring Computing (SREC). SREC is based on the idea that developing a notion of social robustness, comparable to the technical notion of robustness in Computer Science, is a goal worth pursuing. I have developed SREC with colleagues in Trento, Italy.

My main research time commitment at the moment is a writing project on Value(s) with Maurizio Teli, a young researcher at the Foundation in Trento, where I spend a couple of months every year. My interest in this area grew out of efforts to identify the ways in which, and the extent to which, computing professionals are ethically responsible for the current economic and social crisis set off in finance. Maurizio's and my value(s) project continues this work on the crisis and is linked to the project of David Graeber in Debt: The First 5,000 Years, itself a work that builds on much of the recent anthropology of value. That is, we want to give a similar account of the ways in which value and values are, and should be, treated and thought about in the reproduction of current social formations. Such an account is made necessary by the way contemporary reproduction has become increasingly detached from prior industrial dynamics without yet establishing a new dynamic. In our view, establishing new dynamics for the reproduction of social formations requires identifying new values, new institutions for pursuing those values, and new means of measuring value, especially as it relates to the success or failure of establishing these new values and institutions. A major point we wish to make concerns the increasingly large role we see being played in these new dynamics by common pool resources, the focus of Elinor Ostrom, winner of the 2009 Nobel Prize in economics and, until she died last spring, a valued colleague here in Bloomington.

It is my hope that this writing will be paralleled by a research and demonstration project in Trentino on new systems, including information systems, for supporting the independent living of seniors. This Suitcase project will build on my previous work in disability studies and technology, as well as more general ethnography in this region. Another aspect of the Trento ethnography is an attempt to understand what has made the region relatively hospitable to Participatory Design. PD is the focus of what I hope and expect will be my last permanent contribution to the curriculum in the SoIC. In addition, I am working on another, related writing project, a text on Organizational Informatics, with Stefano De Paoli, another researcher. This text will incorporate much of the work behind my 2011 AAA paper in the business anthropology sessions as well as my current teaching, including my course on the Ethnography of Information.

A final area of research, this time in collaboration with two SoIC graduate students, Nic True and Shad Gross, is on Massively Multiplayer Online Games (MMOGs). In this work, we engage the current interest in Big Data, intending to show how some of the epistemological shortcomings in its standard approaches can be addressed when it is triangulated with ethnography. In our case, we argue that a preliminary ethnography of gaming can provide clearer direction regarding what we should be looking for in the automated analysis of large corpora of game play data. This work is directly related to the effort in the SoIC to create a professional master's degree in Big Data.

Presented in this way, it should be easy to see, as I said initially, the multiple ways in which this research agenda is a function of my current position. While I have participated in the AAA meetings and CASTAC occasionally since I went to Indiana in 2004, this occasional connection has not been enough to justify systematically orienting my research toward anthropology. Ironically, when I studied the careers of anthropologists interested in STS in the 1980s, I found a similar phenomenon: there were few, if any, examples of individuals who developed these interests while sustaining strong connections with academic anthropology. I should mention that my efforts to interest Indiana University Bloomington Department of Anthropology scholars in this type of work have borne little fruit.

I mention these things as a warning: interest in the anthropology of science, technology, and computing is not automatically, or even generally, a good way to build a career in anthropology. Working in and through vehicles like CASTAC should thus be understood as essential for anthropologists who wish to keep doing this kind of work.

This blog post can also be seen at: http://blog.castac.org/2012/11/on-building-social-robustness/

The Ethics of Design in Increasingly Complex Situations: The Case of a Broken Voting Machine

Designers tend to approach ideas from a certain bias, which may require some explanation. While design is focused on the process of creating artifacts, it is rarely a straightforward endeavor. Of particular importance is the accountability that comes from creating a new artifact, the ethics of design so to speak. In the most general and common sense, the impetus is to solve a problem, and the solution is assessed on the basis of its efficacy. This can be thought of as the function of a particular design: what it does as a means of resolving a problem. The designer, in the ideal circumstance, builds that function into the artifact. In addition to this functional aspect, there is also a process of changing and reframing problems [see Nelson and Stolterman for more clarity on this]. This procedure carries with it yet another aspect of evaluation: the framing of the problem is judged on the basis of how well it captures some aspect previously unconsidered that is nonetheless integral to resolving the problem. To put this more simply, a design can fail procedurally due to improper problem framing, regardless of how well it functions, or it can fail functionally, regardless of how well the procedure of framing the problem goes. The results of either of these failings have implications for the designer. A failure of functionality indicts the designer on charges of poor craftsmanship, while a failure of procedure points to general ineptitude. The inverse is equally true: merit is given for functional and novel approaches.

While there are a number of good and bad designs in the world, this topic has been covered considerably, and so the nature of such evaluation will not be addressed here. The preceding is presented in the hope of identifying how a designer is ethically tied to the success or failure of an artifact. If this is taken as true, then what happens in the grey areas? If the two ends of the spectrum refer back to the designer, is it not reasonable that the middle ground has a similar effect? The situation above becomes socially relevant when one considers Winner's argument that artifacts can have politics [Winner]. Those politics become built into the artifact both procedurally and functionally, both with implications for the designer. In the case of Winner's examples, Moses's bridges are problematic due to their function: their function is limited by the way they were made. Alternatively, the tomato harvester suffers from a procedural issue, namely that the framing of the problem showed greater concern for efficiency and cost-effectiveness than for the economic and ecological consequences of mechanization. In both cases, Winner's description seems to fit well within a model of accountability as prescribed by design. But let's suppose a situation where the decisions are not quite so clear. As an example of such a situation, consider this Pennsylvania polling station.

In a Philadelphia polling station in the 2012 election, one of the booths had a problem with candidate selection. When the space on the screen occupied by Barack Obama's name was touched, the box for Mitt Romney would be checked. Now, in a situation similar to Moses's bridges, it could be imagined that this machine was designed with the specific intent of favoring a particular candidate. This would be a functional aspect, in that the artifact's functioning had a specific bias. But let us suppose that the first inclination of the person who posted the video (going into "troubleshoot mode") is correct, and the problem is a malfunction rather than a deliberate decision. It seems reasonable that a touchscreen could break, particularly if used repeatedly (as would be the case at a polling station). Then the accountability would seem to fall upon the individual who chose that particular touchscreen, making the issue procedural: rooted in a concern for cost over functional robustness. This need not imply any political orientation with regard to Romney and Obama, but it certainly represents a political statement nonetheless. However, suppose that such was not the case. Suppose, rather, that the reduced size of one option's button was the result of a contextual issue. A power surge, a component broken during shipping, or any number of events that had happened to that specific machine could be at fault. In such a case, what would be the ethical standing of the designer? Would the complexities of the context have produced a newly emergent political stance without an actor behind it, or is there an implication at the level of deciding to use such a machine in the first place?
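To make the distinction between built-in bias and contextual malfunction concrete, here is a minimal sketch in Python. It is an illustration only, not the machine's actual software: the button layout, coordinates, and calibration offset are invented for the example. The point is that the selection logic contains no bias; a drift in the reported touch coordinates is enough to record a tap on one candidate as a vote for the other.

# Illustrative sketch only; not the actual voting machine's software.
# The layout and the calibration drift are hypothetical.

BUTTONS = {
    "Candidate A": (100, 200),   # vertical extent (top, bottom) of each on-screen button
    "Candidate B": (200, 300),
}

def select_candidate(touch_y, calibration_offset=0):
    """Return the candidate whose button contains the (possibly shifted) touch."""
    reported_y = touch_y + calibration_offset   # drift introduced by a damaged screen
    for candidate, (top, bottom) in BUTTONS.items():
        if top <= reported_y < bottom:
            return candidate
    return None

print(select_candidate(150))                         # correctly calibrated: "Candidate A"
print(select_candidate(150, calibration_offset=75))  # 75-pixel drift: "Candidate B"

On an undamaged screen, the same code records votes exactly as intended; the political outcome emerges from the hardware's condition, not from anything the designer wrote.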

If that sounds somewhat far-fetched, consider the 2010 "Flash Crash." Sommerville et al. describe how a $4.1 billion block sale that was "executed with uncommon urgency" resulted in a "complex pattern of interactions between high-frequency algorithmic trading systems… that buy and sell blocks of financial instruments on incredibly short timescales" [Sommerville]. The systems employed had functioned together well until that context arose. But when it did arise, roughly $800 billion disappeared [ibid]. As in the final hypothetical situation regarding the voting booth, it becomes difficult to consider the ethical position of the designer(s). Both describe systems of systems (the algorithms in the market and the technological parts of the voting machine). Both also describe situations where the final result is emergent, as opposed to deliberately created. Risatti makes a distinction between function and emergent application, or use (Risatti). It would seem that these issues fall more under the latter than the former and, because use is not constructed into the artifact in the way that function is, that the designer is somewhat free from blame. After all, designers cannot be expected to be capable of predicting the future, can they?
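To illustrate how such an outcome can emerge from the interaction of individually unremarkable designs, here is a toy simulation in Python. It is not a model of the actual trading systems Sommerville et al. describe; the agents, sensitivities, and price dynamics are invented for the example. Two momentum-following sellers each react to the price moves the other causes; neither is designed to crash the market, yet together they turn a single large sale into an accelerating decline.

# Toy simulation of emergent interaction; not a model of the 2010 event itself.

price = 100.0
history = [price]

def momentum_seller(history, sensitivity):
    """Sell into a falling market: order size grows with the most recent drop."""
    if len(history) < 2:
        return 0.0
    recent_drop = max(0.0, history[-2] - history[-1])
    return recent_drop * sensitivity   # units sold this tick

price -= 5.0                # the initial large, "uncommonly urgent" block sale
history.append(price)

for tick in range(6):
    # Each agent reacts to the last price move; their combined selling pushes
    # the price down further, which both agents then react to again.
    sell_pressure = momentum_seller(history, 0.8) + momentum_seller(history, 0.6)
    price -= sell_pressure * 0.9
    history.append(round(price, 2))

print(history)   # the decline accelerates even though no single agent intended it

The crash-like behavior lives in the interaction, not in either agent's code, which is exactly what makes the designer's ethical position hard to locate.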

As a somewhat unsettling conclusion to this case study, what happens when the model of accountability defined by function and procedure applies less and less often? It is becoming more difficult to consider any one technology in isolation. Phones sync to computers that sync to bank accounts; information is stored in a cloud where multiple people, from multiple devices, can access it. Systems of technology are moving toward systems of systems of technology. As this increases, the chances for emergence also increase. Buried in this complex scenario is a notion as lucid and cutting as the one Winner expresses: if artifacts have politics, do systems have politics as well? It seems evident that the answer is a resounding "yes." However, that answer only leads to a more worrisome question. If systems have politics, who is accountable for those politics?

Nelson, H. and Stolterman, E. (2012) The Design Way: Intentional Change in an Unpredictable World. 2nd ed. MIT Press.

Winner, L. (1986) Do Artifacts Have Politics? The Whale and the Reactor: A Search for Limits in an Age of High Technology. U. Chicago Press: 19-39.

Sommerville, I., Cliff, D., Calinescu, R., Keen, J., Kelly, T., Kwiatkowska, M., McDermid, J., and Paige, R. (2012) Large-Scale Complex IT Systems. Communications of the ACM 55(7): 71-77.

Risatti, H. (2007) A Theory of Craft and Aesthetic Expression. U. North Carolina Press.

Calling into question design’s ability to solve problems: a quick look at micromanagement technologies for low-wage service jobs

In academia, we often talk about technology becoming increasingly pervasive (or ubiquitous) in daily life, referring to technologies that move beyond the personal computer and are present in many locations. Technologists often herald this vision of pervasiveness as a positive change: more technology opens up new spaces in which design can solve problems. But while new pervasive technologies can address problems in more innovative ways, they create as many problems as they are purported to "solve." In the case examined here, new technologies do not so much solve problems as displace burdens from one set of people to another.

This article from the New York Times outlines the plight of retail and wholesale service workers (e.g., cashiers, cooks, stockers, etc.). Newly adopted time-management technologies micromanage workers' hours to such a degree that it impacts their non-work lives. From one perspective, these technologies solve employers' problems, such as creating new ways to deal with peak customer demand and getting the most out of workers in four-hour blocks. This may be beneficial for the employers, but in the process of creating efficiencies and responsiveness to economic pressures and trends, the new technologies have essentialized human beings as parts of algorithms. By examining what these new technologies are doing to low-wage service employees, we can see that this time-management software is not solving a problem; it is shifting a burden.

“We’re seeing more and more that the burden of market fluctuations is being shifted onto the workers, as opposed to the companies absorbing it themselves” – from the article

By using these neoliberal micromanagement technologies, employers gain access to a flexible, on-demand workforce without the responsibility (or cost) of officially placing individuals on call. In more skilled jobs, companies often have to pay for the privilege of keeping a person "on call" (meaning they can ask you to come in to work); that is not the case for these service workers, which indicates that the introduction of these new practices and technologies is also shifting workers' workplace expectations.
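For a concrete picture of the kind of algorithmic logic at issue, the following sketch in Python shows a naive demand-driven scheduler; the rules, names, and numbers are assumptions, not any vendor's actual software. Its only objective is the employer's: cover an hourly demand forecast with the fewest paid hours. Nothing in it weighs whether the resulting shifts are predictable, contiguous, or livable for the workers who receive them.

# Hypothetical sketch of a demand-driven scheduler; names and rules are invented.
# The objective is purely the employer's: cover the forecast with minimal paid hours.

forecast = {9: 2, 10: 3, 11: 5, 12: 6, 13: 4, 14: 2, 15: 2, 16: 4, 17: 5}  # staff needed per hour
workers = ["Ana", "Ben", "Chris", "Dee", "Eli", "Fran"]

def schedule(forecast, workers, shift_length=4):
    """Greedily start just enough short shifts to cover each hour's forecast."""
    coverage = {hour: 0 for hour in forecast}
    assignments = []
    pool = list(workers)
    for hour in sorted(forecast):
        while coverage[hour] < forecast[hour] and pool:
            worker = pool.pop(0)
            assignments.append((worker, hour, hour + shift_length))
            for h in range(hour, hour + shift_length):
                if h in coverage:
                    coverage[h] += 1
    return assignments

for worker, start, end in schedule(forecast, workers):
    print(f"{worker}: {start}:00-{end}:00")

Shift start times land wherever the forecast dictates, so workers learn their hours only as far ahead as the forecast exists: the shifted burden, in code form.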

This article leaves me with a few thoughts:

To be clear, I don't think burden shifting happens in every case of design, but it becomes likely where design enrolls multiple parties and stakeholders with unequal positions of power. In this scenario, employers and employees are both impacted by the novel micromanagement technology, but employees are made to bear the responsibility of being responsive to market pressures.

These new micromanagement technologies create new ways for employers to understand their workforce and efficiently allocate their human and non-human resources. They create different types of visibility and understanding of those resources, but we do not yet fully understand the potential impacts of these technologies and their accompanying practices on employers and employees. If anyone has links to relevant research on the impact of such technologies on low-wage service jobs, I would welcome the suggested readings.

As I've argued, designers and technologists are not always "solving problems" through their innovations; in their efforts to solve problems, they are also creating new problems by displacing and shifting burdens onto others. This leaves unanswered questions about how design might better account for shifting burdens and about the processes by which these shifts actually happen. It also presents a new occasion for design to create opportunities for these low-wage service workers. Prior research documents how rarely new technologies disrupt power structures, but it is not impossible. At the end of the article, the author points to workers' diminished power to collectively organize and form unions as part of why such technologies exist and why low-wage service jobs without much mobility may increasingly become the norm. This presents an opportunity for design to help low-wage service workers better understand how technology impacts their everyday working experiences, as well as to devise new methods for collectively organizing for better treatment, wages, and working conditions. That leaves open the questions: how can design change and improve low-wage service workers' situations? What kinds of new technologies, visibilities, practices, and norms would need to be established and/or supported to help low-wage service workers collectively produce action?

It is important to note that new micromanagement technologies relying on creative and novel forms of algorithmic thinking and data collection will continue to pervade the lives of low-wage service workers. This leaves open areas of research exploring the relationships among, and impacts of, these technologies, workers, and market forces.
