Text

“Picture a system that makes decisions with huge impacts on a person’s prospects – even decisions of life and death. Imagine that system is complex and opaque: it sorts people into winners and losers, but the criteria by which it does so are never made clear. Those being assessed do not know what data the system has gathered about them, or with what data theirs is being compared. And no one is willing to take responsibility for the system’s decisions – everyone claims to be fulfilling their own cog-like function.

This is the vision offered to us by Franz Kafka in his 1915 novel, The Trial. In that book, Kafka tells a parodic tale of an encounter with the apparatus of an indifferent bureaucracy. The protagonist, Josef K, does not know why he has been arrested, or what the evidence against him is; no one is willing to take responsibility for the decision, or to give him a proper account of how the system works. And it ends gloomily, with Josef K utterly defeated, resigning himself to his fate.

Fast forward 100 years and artificial intelligence and data-driven computer systems are frequently portrayed in a similar way by their critics: increasingly consequential, yet opaque and unaccountable. This is not a coincidence. There is a direct link between the trials of Josef K and the ethical and political questions raised by artificial intelligence. Contrary to the hype, this technology has not appeared fully formed in the past couple of years. As the historian Jonnie Penn has recently pointed out, it has a long history, one that is deeply entwined with state and corporate power. AI systems were developed largely to further the interests of their funders: governments, military and big business.

Most importantly, the models of decision-making that these systems sought to automate were taken directly from these bureaucracies. The two great pioneers of machine intelligence, Alan Turing and John von Neumann, both developed their prototypes in the crucible of the second world war. Under von Neumann’s oversight, the very first task of the very first general-purpose computer, the Eniac, in 1946, was running computations for the hydrogen bomb.

In other words, the “intelligence” in “artificial intelligence” is not the intelligence of the human individual – not that of the composer, the care worker or the doctor – it is the systemic intelligence of the bureaucracy, of the machine that processes vast amounts of data about people’s lives, then categorises them, pigeonholes them, makes decisions about them, and puts them in their place. The problems of AI resemble those of the Kafkaesque state because they are a product of it. Josef K would immediately recognise the “computer says no” culture of our time”.

To save us from a Kafkaesque future, we must democratise AI | Stephen Cave https://www.theguardian.com/commentisfree/2019/jan/04/future-democratise-ai-artificial-intelligence-power

Video

On Afrofuturism

Photo
Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Mark Graham

Quote
"if the product is free, you are the training data"

— Dan McQuillan

Quote
"Making knowledge is not simply about making facts but about making worlds"

— Karen Barad, Meeting the Universe Halfway

Text

Barad’s diffractive methodology

“Barad’s diffractive methodology finds critical practice to contain both attentiveness to the detail of an argument (in order to do justice to it) as well as an uncanny proximity to that which we engage—a relation of entanglement which, even if tensile and complicated (entanglement involves simultaneous attraction and repulsion, as Barad points out (“Transmaterialities” 397); its constitutive capacity also involves cutting across or interrupting), necessarily implicates, reiterates, and transforms our ‘own’ positions, rendering them immanently dynamic, incomplete, co-authored, non-innocent, contaminated, and indebted. The ethical gesture of critique, par excellence, would then be to do justice to this relation without attempting to veil or repair its complicated, at times challenging and uncomfortable, suggestions, nor regulate or emend the shifts in theoretical and methodological perspective and practice that it calls through us to enact. It proposes a critical approach that neither sanctions nor censures, but rather accounts for indeterminacy.”

Karin Sellberg & Peta Hinton

Video

Tractor hacking: the farmers breaking big tech’s repair monopoly

Text

Climate Gentrification

wolfliving:

*Or, the smart money fleeing the danger zones. They’re gonna find out that there aren’t any safe ones, since the greenhouse rainbomb falls on the just and the unjust alike.

https://www.citylab.com/equity/2018/07/the-reality-of-climate-gentrification/564152/

It’s no surprise that a list of places most at risk from climate change and sea-level rise reads like a Who’s Who of global cities, since historically, many great cities have developed near oceans, natural harbors, or other bodies of water. Miami ranks first, New York comes second, and Tokyo, London, Shanghai, and Hong Kong all number among the top 20 at-risk cities in terms of total projected losses.

Cities in the less developed and more rapidly urbanizing parts of the world, such as Ho Chi Minh City and Mumbai, may experience even more substantial losses as a percentage of their total economic output. Looking out to 2050, annual losses from flooding related to climate change and sea-level rise could increase to more than $60 billion a year.

But global climate change poses another risk for cities: accelerated gentrification. That’s according to a new study by Jesse Keenan, Thomas Hill, and Anurag Gumber, all of Harvard University, that focuses on “climate gentrification.” While still emerging and not yet clearly defined, the theory of climate gentrification is based, the authors write, “on a simple proposition: [C]limate change impacts arguably make some property more or less valuable by virtue of its capacity to accommodate a certain density of human settlement and its associated infrastructure.” The implication is that such price volatility “is either a primary or a partial driver of the patterns of urban development that lead to displacement (and sometimes entrenchment) of existing populations consistent with conventional framings of gentrification.”

The study, published in Environmental Research Letters, advances a simple “elevation hypothesis,” arguing that real estate at higher elevations in cities at risk for climate change and sea-level rise appreciates at a higher rate than elsewhere. It focuses on Greater Miami (defined as Miami-Dade County), the area of the country and of the world most at risk from climate change. The authors track the differential in values, between 1971 and 2017, of properties at different levels of elevation and risk from sea-level rise (based on data from the U.S. Geological Survey), while controlling for other factors. They draw from data on more than 800,000 property sales (from the Miami-Dade County Property Appraiser’s Office), including information on property value, building size, year built, bed and bath counts, and tax-assessment values.
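
As a rough illustration of what testing the elevation hypothesis might look like in practice, here is a minimal Python sketch of a hedonic regression on property-sales data of this kind. The file name, column names, and the simple specification are assumptions for demonstration only; they are not the authors’ actual data or model.

# Illustrative sketch only: a toy hedonic regression in the spirit of the
# "elevation hypothesis". File name, column names, and specification are
# assumptions for demonstration -- not the study's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical export of Miami-Dade property sales, 1971-2017
sales = pd.read_csv("miami_dade_sales.csv")

# Regress log sale price on elevation interacted with sale year, controlling
# for the hedonic attributes the article mentions (size, age, beds, baths).
model = smf.ols(
    "np.log(price) ~ elevation_m * sale_year"
    " + building_sqft + year_built + beds + baths",
    data=sales,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# A positive elevation_m:sale_year coefficient would mean higher-elevation
# properties appreciated faster over time, consistent with the hypothesis.
print(model.params["elevation_m:sale_year"])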

The study finds considerable evidence of climate gentrification, and for the elevation hypothesis in particular. Properties at high elevations have experienced rising values, while those at lower elevations have declined in value….

Text

Tim Morton on Meditation

“[H]ow does meditation look on the ground, in practice, “where the rubber meets the road” to use the awful bureaucratic phrase? One is allowing one’s thoughts to exist, without trying to delete them. Thus one is allowing oneself to be inconsistent: the mind is making some effort towards mindfulness, yet there are also thoughts occurring that distract the mind. In higher forms of meditation, the practice has less effort. One is simply allowing whatever happens to happen, no matter what the thought is. Some kind of commitment is required, a commitment not to adjust what is happening. This non-adjusting allows beings to resound in all their contradictory plenitude. Since all phenomena radiate from the nature of mind or from Atman (and so forth, depending on which school of thought one is following), all is purified in advance within the larger space of freedom. Purified here means left in its natural state, which is open and vivid. There thus arises what in Mahamudra and Dzogchen is called non-meditation. This non-meditation is different from not meditating, and also different from meditating. It is simply coexisting with what is.”

Text

Parisi: Post-truth computational machine

“Post-truth politics is the art of relying on affective predispositions or reactions already known or expressed to stage old beliefs as though they were new. Algorithms are said to capitalize on these predispositions or reactions recorded as random data traces left when we choose this or that music track, this or that pair of shorts, this or that movie streaming website. In other words, the post-truth computation machine does not follow its own internal, binary logic of either/or, but follows instead whatever logic we leave enclosed within our random selections. To the extent that post-truth politics has a computational machine, then, this machine is no longer digital, because it is no longer concerned with verifying and explaining problems. The logic of this machine has instead gone meta-digital because it is no longer concerned with the correlation between truths or ideas on the one hand, and proofs or facts on the other, but is instead overcome by a new level of automated communication enabled by the algorithmic quantification of affects. The meta-digital machine of post-truth politics belongs to an automated regime of communication designed to endlessly explore isolated and iterated behaviors we might call conducts. These are agencies or action patterns that are discrete or consistent enough to be recognized by machine intelligence. Post-truth political machinery employs a heuristic testing of responses interested in recording how conducts evolve, change, adapt, and revolt. This is not simply a statistical calculation of probabilities following this or that trend in data usage, but involves an utter indifference towards the data retrieved and transmitted insofar as these only serve as a background.”

Luciana Parisi, Reprogramming Decisionism

https://www.e-flux.com/journal/85/155472/reprogramming-decisionism/