As an update to the last post, Limn has published a fascinating political history of data mining methods:
The history of statistical methods has always been plagued by a tension between the aims of pure knowledge and social criticism on the one hand, and practical application in the fields of social governance or commerce on the other. This being said, Benzécri’s data analysis and more recent methods of data mining cover the entire spectrum, from the most radical criticism up to and including political and commercial endeavors. It is also another and more serious way to pose that naïve question of the 1970s: is correspondence analysis leftist or rightist?
Alain Desrosières, Mapping the Social World: From Aggregates to Individuals
The unintentional hilarity of well-meaning scientism:
“Even in one-shot interactions, humans are not as selfish as theory suggests,” write physicist-sociologist Dirk Helbing and colleagues. “A large body of experimental and field evidence indicates that people genuinely care about each other.”
Gets me all the time.
A few more words on Ray Kurzweil.
His whole teleological (if not chiliastic) argument for the inevitable coming of the singularity rests on his idiosyncratic interpretation of Moore’s Law, the often-cited 1965 observation by engineer Gordon E. Moore that semiconductor capacity doubles roughly every two years, a regularity that has translated into an exponential increase in computing power over the last few decades.
In Kurzweil’s hands this observation turns into the Law of Accelerating Returns, positing an unstoppable exponential increase in technological ‘progress.’ Graphs of exponential functions make for compelling visual arguments, and it comes as no surprise, then, that even as distinguished a sociologist as Anthony Giddens would refer to Kurzweil’s futurological vision when talking about humanity’s challenges in the 21st century.
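Just to make the arithmetic behind those graphs explicit, here is a minimal sketch in Python. It assumes nothing more than the doubling rule itself; the 1971 base year and the baseline of one arbitrary unit are illustrative choices, not real transistor counts.

```python
# Illustrative only: capacity doubling every two years from an arbitrary
# baseline of 1 unit in 1971. Real transistor counts differ; the point is
# the shape of the curve, not the particular numbers.
BASE_YEAR = 1971
DOUBLING_PERIOD = 2  # years per doubling

def capacity(year: int, base: float = 1.0) -> float:
    """Capacity in a given year, assuming one doubling every two years."""
    return base * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD)

for year in range(BASE_YEAR, 2022, 10):
    print(year, f"{capacity(year):,.0f}x")
```

Under this rule capacity grows by a factor of 32 per decade (five doublings), which is why any such curve plotted on a linear axis looks like a wall, and why the resulting picture lends itself so readily to rhetorical use.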
Interestingly, one of the most substantive engagements with Moore’s Law comes from the most unexpected of places, namely critical accounting theory (wherein ‘critical’ stands for asking what people in organisations actually do rather than accepting the normative ideals taught in business schools).
In their 2007 paper on capital budgeting in the US semiconductor industry, Peter Miller and Ted O’Leary effectively argue that Moore’s Law needs to be understood as performative. Rather than expressing some sort of natural tendency of semiconductor development, Moore’s Law partakes in the construction of the reality it purports to describe.
In the 1980s the US was lagging behind Japan in the semiconductor field. Moore’s Law formulated an imperative, spelt out and operationalised within technology roadmaps, that guided a concerted effort by government agencies, research institutions and private businesses, each of which had to make budgeting decisions according to their specific rationalities. As a ‘mediating instrument,’ the Law linked science and economy by
shaping the fundamental expectations of an entire set of industries about increases in the power and complexity of semiconductor devices, and the timing of these increases.
Simply put, the Law acted as an argument that made certain capital allocation decisions appear more plausible than others, in the context of the American interest in maintaining geopolitical leadership through technological advantage. Moore’s Law, then, was as much a description of technological change as it was operative in facilitating this very change, as the effect of a historically specific, complex assemblage of material and human agencies. The current trajectory of technological change is thus not destiny but remains open to a reconfiguration of the agencies that bring it about.
While this might not come as much of a surprise to those who have followed the recent pragmatic and performative turns in social theory, Miller and O’Leary’s detailed analysis provides a powerful antidote to the impoverished account offered by Kurzweil and his followers. To Kurzweil, exponentially increasing computing power is nothing less than an expression of a general evolutionary tendency wherein the increasing fitness of organisms (itself a gross oversimplification) is lumped together with technological change, leading eventually to the supersession of organism-based intelligence by artificial intelligence. His is a crudely biologistic determinism that absolves us of the responsibility to question the politico-economic dimension of current technological developments and completely neglects the historical contingencies affecting science and technology as social phenomena.
But I guess it helps fill the coffers of his church.
Philosopher of mind Colin McGinn reviews Ray Kurzweil’s new book on the prospects of building an artificial brain (and thus artificial intelligence):
Here then is my overall assessment of this book: interesting in places, fairly readable, moderately informative, but wildly overstated.
That is a rather polite assessment. But even more interesting is what McGinn has to say about the computational metaphor in contemporary neuroscience:
Even in sober neuroscience textbooks we are routinely told that bits of the brain “process information,” “send signals,” and “receive messages”—as if this were as uncontroversial as electrical and chemical processes occurring in the brain. We need to scrutinize such talk with care. Why exactly is it thought that the brain can be described in these ways? It is a collection of biological cells like any bodily organ, much like the liver or the heart, which are not apt to be described in informational terms.
Which is precisely what the work of Lily Kay and Katherine Hayles implied all along: we are not walking information processors but fleshy, embodied beings. The computational metaphor is nothing but a metaphor, and it may even turn out to hinder a better understanding of how the brain is related to the mind. Deal with it, transhumanists.