The fact that the Centrelink extortion/government fraud saga continues shows how far we have fallen (as detailed by Jason Wilson recently in an excellent Guardian article—if you’re not in Australia and you’re wondering what’s happened, there’s a good video below). It also sadly shows how far the public service has been dragged into the mud (this is not at all a criticism of the many good public servants who keep trying, despite it all). For this is a government and public service using the confected cover of “Fraud!” to, ironically, commit fraud themselves. They’re also very obviously extorting money from vulnerable people on a massive scale. Yet as well as the usual stage-managed propaganda about dole cheats (does anyone really care anymore for this kind of thing?), they’re travelling under the cover of “data”. In general, “data” has become a large part of a kind of overwhelming spectacle that justifies anything. I’d say it’s the perfect con, except that it’s far from perfect.
[general jetlagged rave follows—basic perhaps but the main points here are pretty basic]
My general take on this, for what it’s worth, links what’s happened with Centrelink to a lot of data use (including in universities, and in learning and teaching, mainly because these are now so heavily managed). The Centrelink scandal is an example/even a template of what’s happening everywhere. It’s also an exact counter-example to taking embodiment, situations and the immanence of the communicative event seriously when it comes to data (on this another time, but consider here Whitehead’s thinking of “data as the potential for feeling” [academia link—sorry]).
Consider:
* governments and management often, even usually, see data entirely through whatever reactionary and outmoded imaginaries they possess. They cannot admit that they don’t really begin to understand it except as some kind of extension of their control. They refuse the simple fact that data has its own (newer) realities that are not those of the last fifty years.
* governments (and other institutions), and their departments in the public service (or in institutions), are perfectly willing to decide that data can do what it cannot, and not to know or care about the details or the consequences, even at the most basic level. The risk is the snowballing of controls greater even than today’s, based on the worst possible mythologies and uses of technology.
* all these institutions, and also, crucially, the tech companies to which this is all outsourced (who really should and do know better), are perfectly happy to design systems that they KNOW are deeply flawed at the most basic levels, and that will seriously disadvantage people.
* corruption is rife throughout, but it’s legal corruption, because no one understands or wants to understand (and thus to produce relevant legislative controls), and because that would be the end of the reactionary love-in. Lots and I mean lots of money is changing hands via government departments ($93 million circulating through the Centrelink debacle?) with almost no accountability .. even a gigantic scandal may not end it—in fact, several of them, following hard on the heels of each other, haven’t ended it. Rather, they seem to embolden people, providing, as above, a template for what people can get away with.
* when things inevitably go horribly wrong, in some cases before the system has even been implemented (cf. the Census), the only response is denial/spin .. almost no one is ever responsible, at the highest levels of either the public service or government (see Stefano Harney on how algorithmic institutions mean that no one is ever actually able to be responsible—only the algorithm is—and all the fuss about “leadership” is only a fig leaf covering this up).
* all this is happening throughout many/most systems
* it has nothing to do with interesting or effective uses of data
* a lot of critiques of data and algorithms etc, as valuable as these are, assume data being *effectively* and expertly used, but that’s only one side of the problem. The other is that it’s possibly even worse, and more common, to find data completely misapplied in appallingly designed systems with horrible effects. We won’t, for example, have to wait until AI is somehow the equivalent of human intelligence, or beyond it (as the myth often goes), for AI to have profound effects on work, life, lives. In fact, it’s already happening, but in a completely different register. First up, a lot of “AI” is no better than a bad spellchecker (a lot is a lot better, but interestingly neither governments nor other institutions seem to want to know too much about that). Second up, we are currently guinea pigs, in a test of how much the political/organisational systems can get away with in deploying the likes of AI, even without expertise.
* So with events such as automation, AI and machine learning etc, it’s not that we should only begin worrying when some super AI controls everything (who knows, one day that might happen, or it might be happening now in pockets—as William Gibson says, the future’s here but not evenly distributed). It’s more that organisations are willing to deploy even the most clunky systems in the name of saving money, or more likely in the name of avoiding what is really going on (Tim Dunlop is very good on this), a “going on” that ironically involves a more sophisticated use of the very same tech they’re deploying so badly while refusing to know anything about it for real …
* part of this is just greed and cronyism and being determined to be as despicable as possible in the service of one’s own, or one’s group’s, interests, and in denial of one’s own complete out-of-touchness, as a politician or perhaps a manager (the T person fits the bill here perfectly once again—”white life” seems sometimes defined by this enhanced take-up of what can only be called stupid control, which reminds me of Sianne Ngai’s stuplimity, which is undoubtedly at the core of things here too … though I know there are other aspects to white life, more “expert” .. and again Stefano and Fred Moten are really good on all this). But part of it is probably that, as Dylan Hendricks notes, ‘The world became too complex for this generation of governments.’ It’s time perhaps that we all admitted this and moved on … if such a thing is possible. Before even more damage is done.
* It would be lovely to see a public service freed from so much political interference and forced brutality towards the public they are supposed to serve. I know many people are still trying in difficult circumstances and more kudos to them for that.
* It would also be great to think that thinking about/with data was going forward rather than backward (and yes, it’s true that the future is distributed unevenly, as there are a great many higher-level uses of data—the way this relates to the likes of the Centrelink scandal is another interesting question). There are so many great thinkers on this (from Antoinette Rouvroy to Matteo Pasquinelli, Kate Crawford, Luciana Parisi, Benjamin Bratton, McKenzie Wark and Tiziana Terranova; Audrey Watters is great on education, as are Greg Thompson and Ian Cook; see also this piece on Centrelink and data ethics by Ellen Broad—and so many others). Yet still, we get Centrelink and Mr T., and really rubbish data uses in the service of the worst kinds of aims. That “still” needs to be thought a little more. There’s a great piece by Lauren Berlant today, not quite on data, but in a way great on that “still”. Also pretty great is this tweet by @ArtVolumeOne.
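To make the “deeply flawed at the most basic levels” point above concrete: the widely reported flaw in the Centrelink system was income averaging—annual ATO income spread evenly across 26 fortnights and compared against what a person actually declared each fortnight while on benefits. A minimal sketch of that logic (not Centrelink’s actual code; the 50% taper rate and the figures are purely illustrative assumptions):

```python
FORTNIGHTS_PER_YEAR = 26

def false_debt_from_averaging(annual_income, declared_per_fortnight, taper=0.5):
    """Flawed assumption: income was earned evenly across the year.

    The averaged fortnightly figure is compared against each fortnight's
    declared income; any excess is (wrongly) treated as undeclared income
    that should have reduced the benefit at the taper rate.
    """
    avg = annual_income / FORTNIGHTS_PER_YEAR
    return sum(max(avg - declared, 0.0) * taper
               for declared in declared_per_fortnight)

# Worked example: a casual worker earns $2,000/fortnight for half the year,
# then declares $0 while on benefits for the other 13 fortnights.
annual = 2000.0 * 13            # $26,000, earned entirely OUTSIDE the benefit period
declared = [0.0] * 13           # fortnights on benefits, genuinely no income
print(false_debt_from_averaging(annual, declared))  # → 6500.0
```

Averaging sees a phantom $1,000 in every benefit fortnight and raises a $6,500 “debt” against someone who declared their income accurately—the error is structural, visible before a single line of code is written.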