Research paper: Why CSCW applications fail
Jonathan Grudin’s 1988 paper “Why CSCW applications fail” is a classic in social computing and computer-supported cooperative work (CSCW). Its basic arguments are still highly relevant, so here’s my quick summary.
CSCW as a field is concerned with how computers support (or fail to support) humans in completing cooperative tasks. In 1988, when Grudin wrote this paper, “cooperative work” primarily referred to office settings, although contemporary CSCW research uses the term to refer to basically any group activity or collaborative endeavor. It’s a little depressing to notice how relevant Grudin’s critiques remain even for this broader definition of work.
Why cooperative work software applications fail
Grudin identifies three primary reasons that CSCW applications fail.
1. The disparity between who does the work and who gets the benefit
Implicitly, CSCW applications redistribute labor across a group. If the application requires some people to do additional work while the benefits accrue to others, adoption is likely to suffer.
I can think of dozens of corporate CSCW tools where using the software created an advantage for the organization (e.g. better visibility for my boss or my department) but no corresponding advantage for me. As a result, adoption had to be incentivized in other ways (e.g. “you must do this or you’re fired”), which usually only encourages bare-minimum use.
Tools that create mutual benefit (even if unevenly distributed) see better adoption, in my experience. For example, software development ticket-tracking tools provide the most benefit to product managers, but they also provide plenty of benefit for me as a software engineer because they create visibility into what my teammates are doing!
2. The breakdown of intuitive decision-making
A domain expert will often have an intuitive sense of whether a software tool will be useful in their domain. Grudin says:
Intuitions about what will be useful to people similar to ourselves are generally good.
But with CSCW applications, those intuitions go out the window. We need to consider whether an application will be useful not only for ourselves but for every other category of user.
Think here of annoyingly granular ticket systems: useful for the product manager, but cumbersome for the developer! Grudin points out that it cuts the other way too: decision-makers might not see the value in a tool that would benefit the whole organization (but not the particular manager in charge of procurement).
3. The underestimated difficulty of evaluating CSCW applications
Evaluating CSCW applications is harder than evaluating single-user applications.
It is relatively easy to bring a single user into a lab to be tested on the perceptual, cognitive, and motor variables that have been the focus for single-user applications. But it is difficult or impossible to create a group in the lab that will reflect the social, motivational, economic, and political factors that are central to group performance.
The problem of evaluation motivates a huge amount of human-computer interaction and social computing research. (It’s a problem I’ve spent lots of time on in my own research.) In my opinion, the challenge of evaluating the utility and usability of CSCW systems is the primary reason that HCI uses such a diverse set of methods.
Even now, 35+ years later, these critiques are astute. Designers (and adopters!) of CSCW applications would do well to keep them in mind. There are many ways to address these risks, but that will have to be the subject of a future post.
Here’s the paper’s citation:
Jonathan Grudin. 1988. Why CSCW applications fail: problems in the design and evaluation of organizational interfaces. In Proceedings of the 1988 ACM Conference on Computer-supported Cooperative Work (CSCW ’88). Association for Computing Machinery, New York, NY, USA, 85–93. https://doi.org/10.1145/62266.62273