In this post, I quote from and link to two interesting articles.

Communications researcher Fred Turner argues that we should think of information technologies as infrastructures, and then think of the political possibilities created by that infrastructure.

Here’s Turner in Logic (“Don’t Be Evil: Fred Turner on Utopias, Frontiers, and Brogrammers”):

Any whole-system approach doesn’t work. What I would recommend is not that we abandon technology, but that we deal with it as an integrated part of our world, and that we engage it the same way that we engage the highway system, the architecture that supports our buildings, or the way we organize hospitals.

The technologies that we’ve developed are infrastructures. We don’t have a language yet for infrastructure as politics. And enough magic still clings to the devices that people are very reluctant to start thinking about them as ordinary as tarmac.

But we need to start thinking about them as ordinary as tarmac. And we need to develop institutional settings for thinking about how we want to make our traffic laws. To the extent that technologies enable new collaborations and new communities, more power to them. But let’s be thoughtful about how they function.

What might “an institutional setting for thinking about how we want to govern the processes built on large language models” look like? I’m not sure, but Michael Muller’s arguments suggest that we might look for inspiration in the governance of human-animal relationships.

Here’s Michael Muller debating with Ben Shneiderman in “On AI Anthropomorphism”:

I suspect that we will need to break down our human / non-human binary into a dimension, or into multiple dimensions. In a conventional EuroWestern way of thinking, we already do this in our relationship to animals. … Elizabeth Phillips and colleagues (2016) have explored the deeper relationships that we have with some animals, with dogs being the primary example of a social presence.

…

Phillips et al. (2016) were using human-animal relationships to think about human-AI relationships. I think that, as with animals, there are degrees of sociality, or degrees of social presence, that may be applicable to computational things. I don’t think we know enough yet to foreclose these possibilities.

This post is based on an old tweet.