Blog

Work, Technology, and How We Understand Human Dignity

Certain technologies generate a degree of anxiety about the relative status of human beings or about what exactly makes human beings “special”–call it post-humanist angst, if you like.

Of course, not all technologies generate this sort of angst. When it first appeared, the airplane was greeted with awe and a little battiness (consider alti-man). But it did not result in any widespread fears about the nature and status of human beings. The seemingly obvious reason for this is that flying is not an ability that has ever defined what it means to be a human being.

It seems, then, that anxiety about new technologies is sometimes entangled with shifting assumptions about the nature or dignity of humanity. In other words, the fear that machines, computers, or robots might displace human beings may or may not materialize, but it does tell us something about how human nature is understood.

Is it that new technologies disturb existing, tacit beliefs about what it means to be a human, or is it the case that these beliefs arise in response to a new perceived threat posed by technology? It is hard to say, but some sort of dialectical relationship is involved.

A few examples come to mind, and they track closely to the evolution of labor in Western societies.

During the early modern period, perhaps owing something to the Reformation’s insistence on the dignity of secular work, the worth of a human being gets anchored to their labor, most of which is, at this point in history, manual labor. The dignity of the manual laborer is later challenged by mechanization during the 18th and 19th centuries, and this results in a series of protest movements, most famously that of the Luddites.

Eventually, a new consensus emerges around the dignity of factory work, and this is, in turn, challenged by the advent of new forms of robotic and computerized labor in the mid-twentieth century.

Enter the so-called knowledge worker, whose short-lived ascendancy is presently threatened by advances in computers and AI.

This latter development helps explain our present fascination with creativity. It’s been over a decade since Richard Florida published The Rise of the Creative Class, but interest in and pontificating about creativity continues apace. What I’m suggesting is that this fixation on creativity is another recalibration of what constitutes valuable, dignified labor, which is also, less obviously perhaps, what is taken to constitute the value and dignity of the person. Manual labor and factory jobs give way to knowledge work, which now surrenders to creative work. As they say, nice work if you can get it.

Interestingly, each re-configuration not only elevated a new form of labor, but it also devalued the form of labor being displaced. Manual labor, factory work, even knowledge work, once accorded dignity and respect, are each reframed as tedious, servile, monotonous, and degrading just as they are being replaced. If a machine can do it, it suddenly becomes sub-human work.

It’s also worth noting how displaced forms of work seem to re-emerge and regain their dignity in certain circles. I’m presently thinking of Matthew Crawford’s defense of manual labor and the trades. Consider as well this lecture by Richard Sennett, “The Decline of the Skills Society.”

It’s not hard to find these rhetorical dynamics at play in the countless discussions of technology, labor, and what human beings are for that are presently unfolding. Take as just one example this excerpt from the recent New Yorker profile of the venture capitalist Marc Andreessen (emphasis mine):

Global unemployment is rising, too—this seems to be the first industrial revolution that wipes out more jobs than it creates. One 2013 paper argues that forty-seven per cent of all American jobs are destined to be automated. Andreessen argues that his firm’s entire portfolio is creating jobs, and that such companies as Udacity (which offers low-cost, online “nanodegrees” in programming) and Honor (which aims to provide better and better-paid in-home care for the elderly) bring us closer to a future in which everyone will either be doing more interesting work or be kicking back and painting sunsets. But when I brought up the raft of data suggesting that intra-country inequality is in fact increasing, even as it decreases when averaged across the globe—America’s wealth gap is the widest it’s been since the government began measuring it—Andreessen rerouted the conversation, saying that such gaps were “a skills problem,” and that as robots ate the old, boring jobs humanity should simply retool. “My response to Larry Summers, when he says that people are like horses, they have only their manual labor to offer”—he threw up his hands. “That is such a dark and dim and dystopian view of humanity I can hardly stand it!”

As always, it is important to ask a series of questions: Who’s selling what? Who stands to profit? Whose interests are being served? Etc. With those considerations in mind, it is telling that leisure has suddenly and conveniently re-emerged as a goal of human existence. Previous fears about technologically driven unemployment have ordinarily been met by assurances that different and better jobs would emerge. It appears that pretense is being dropped in favor of vague promises of a future of jobless leisure. So, it seems we’ve come full circle to classical estimations of work and leisure: all work is for chumps and slaves. You may be losing your job, but don’t worry, work is for losers anyway.

To sum up: Some time ago, identity and a sense of self-worth got hitched to labor and productivity. Consequently, each new technological displacement of human work appears to those being displaced as an affront to their dignity as human beings. Those advancing new technologies that displace human labor do so by demeaning existing work as beneath our humanity and promising more humane work as a consequence of technological change. While this is sometimes true–some work that human beings have been forced to perform has been inhuman–deployed as a universal truth, it is little more than rhetorical cover for a significantly more complex and ambivalent reality.

_____________________

An earlier version of this post appeared at The Frailest Thing.

Latour on Products and Processes

From Bruno Latour’s We Have Never Been Modern:

“The moderns confused products with processes. They believed that the production of bureaucratic rationalization presupposed rational bureaucrats; that the production of universal science depended on universalist scientists; that the production of effective technologies led to the effectiveness of engineers; that the production of abstraction was itself abstract; that the production of formalism was itself formal …. The words ‘science’, ‘technology’, ‘organization’, ‘economy’, ‘abstraction’, ‘formalism’, and ‘universality’, designate many real effects that we must indeed respect and for which we have to account. But in no case do they designate the causes of these same effects. These words are good nouns, but they make lousy adjectives and terrible adverbs. Science does not produce itself scientifically any more than technology produces itself technologically or economy economically.”

 

Shannon Vallor on Technology and the Virtues

In recent posts, we’ve commented on Peter-Paul Verbeek’s work on the ethics of technology. Verbeek commends an approach to the ethics of technology focused on what he terms technological mediation. “Artifacts are morally charged,” Verbeek notes; “they mediate moral decisions, shape moral subjects, and play an important role in moral agency.” Additionally, he suggests that the tradition of virtue ethics offers important resources for our ethical reflection about technology.

Later this year, Shannon Vallor, a philosopher at Santa Clara University, will publish Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. It promises to be a valuable contribution to the field with interesting parallels to Verbeek’s work. Here is an excerpt from the Introduction cited in a blog post reporting on Vallor’s book:

“No ethical framework can cut through the general constraints of technosocial opacity. The contingencies that obscure a clear vision of even the next few decades of technological and scientific development are simply far too numerous to resolve—in fact, given accelerating physical, geopolitical, and cultural changes in our present environment, these contingencies and their obscuring effects are likely to multiply rather than diminish. What this book offers is not an ethical solution to technosocial opacity, but an ethical strategy for cultivating the type of moral character that can aid us in coping, and even flourishing, under such challenging conditions.”

Here’s more from the post:

“In her recent work, Vallor focuses primarily on recent advances in robotics, artificial intelligence, new (social) media, and bio-enhancement. As many have noted, advances in all of those fields are happening quickly—often faster than laws and even social norms can develop around them. Rather than simply noting (or bemoaning) that fact, Vallor asks us, first, to recognize that technologies are not ‘value-neutral’:  that every technology presupposes a vision of what the ‘good life’ is. (Technologists are often reluctant to acknowledge this—or simply assume that their vision of the ‘good life’ is widely shared.) She then points out that technological advances have unpredictable and open-ended consequences, and that current technologies pose ‘new global problems of collective human action,’ many of which have implications for the future of human flourishing on this planet.

“With this background in mind, Vallor draws on virtue ethics to ask, ‘What sort of people will deal well with the challenges posed by emerging technology? What qualities will they need in order to flourish?’ In other words, what are the virtues demanded by a rapidly changing world—the virtues that would allow people to anticipate challenges, perhaps meet them before they arrive, or at least respond best to them when they do?”

These are precisely the sorts of questions we should be asking. We look forward to Vallor’s exploration of these matters. It promises to be wise and helpful.

Practice and Counterpractice

From Albert Borgmann’s Power Failure:

“… for a long time to come technology will constitute the common rule of life. The Christian reaction to that rule should not be rejection but restraint … But since technology as a way of life is so pervasive, so well entrenched, and so concealed in its quotidianity, Christians must meet the rule of technology with a deliberate and regular counterpractice.

Therefore, a radical theology of technology must finally become a practical theology, one that first makes room and then makes way for a Christian practice. Here we must consider again the ancient senses of theology, the senses that extend from reflection to prayer.  We must also recover the ascetic tradition of practice and discipline and ask how the ascesis of being still and solitary in meditation is related to the practice of being communally engaged in the breaking of the bread. The passage through technology discloses a new or an ancient splendor in ascesis.  There is no duress or denial in ascetic Christianity. On the contrary, liberating us from the indolence and shallowness of technology, it opens to us the festive engagement with life.”

The “rule of technology” engraves itself on us by shaping the routines and habits of daily life, which is why it is at once pervasive and unnoticed. In other words, it is not enough merely to desire or will to live well with technology. Borgmann’s crucial insight, for Christians and non-Christians alike, is the necessity of deploying deliberate and regular counterpractices that embody and instantiate an alternative form of life.

Virtue and Technology

“Questioning is the piety of thought,” or so Martin Heidegger would have us believe. It is with that line that he closed his famous essay, “The Question Concerning Technology.” Indeed, the right question or a new question can lead our thinking to fresh insights and deeper reflections.

With regard to the ethics of technology, we typically ask, “What should I or should I not do with this technology?” and thus focus our attention on our actions. In this, we follow the lead of the two dominant modern ethical traditions: the deontological tradition stemming from Immanuel Kant, on the one hand, and the consequentialist tradition, closely associated with Bentham and Mill, on the other. In the case of both traditions, a particular sort of moral subject or person is in view—an autonomous and rational individual who acts freely and in accord with the dictates of reason.

In the Kantian tradition, the individual, having decided upon the right course of action through the right use of their reason, is duty-bound so to act, regardless of consequences. In the consequentialist tradition, the individual rationally calculates which action will yield the greatest degree of happiness, variously understood, and acts accordingly.

If technology comes into play in such reasoning by such a person, it is strictly as an instrument of the individual will. The question, again, is simply, “What should I do or not do with it?” We ascertain the answer either by determining the dictates of subjective reasoning or by calculating the objective consequences of an action; the latter approach is perhaps more appealing for its resonance with the ethos of technique.

We might conclude, then, that the popular instrumentalist view of technology—a view which takes technology to be a mere tool, a morally neutral instrument of a sovereign will—is the natural posture of the sort of individual or moral subject that modernity yields. It is unlikely to occur to such an individual that technology is not only a tool with which moral and immoral actions are performed but also an instrument of moral formation, informing and shaping the moral subject.

It is not that the instrumentalist posture is of no value, of course. On the contrary, it raises important questions that ought to be considered and investigated. The problem is that this approach is incomplete and too easily co-opted by the very realities that it seeks to judge. It is, on its own, ultimately inadequate to the task because it takes as its starting point a deficient understanding of the human person.

There is, however, another older approach to ethics that may help us fill out the picture and take into account other important aspects of our relation to technology: the tradition of virtue ethics in both its classical and medieval manifestations.

In Moralizing Technology, Peter-Paul Verbeek comments on some of the advantages of virtue ethics. To begin with, virtue ethics does not ask, “What am I to do?” Rather, it asks, in Verbeek’s formulation, “What is the good life?” We might also add a related question that virtue ethics raises:  “What sort of person do I want to be?” This is a question that Verbeek also considers, taking his cues from the later work of Michel Foucault.

The question of the good life, Verbeek adds,

“does not depart from a separation of subject and object but from the interwoven character of both. A good life, after all, is shaped not only on the basis of human decisions but also on the basis of the world in which it plays itself out (de Vries 1999). The way we live is determined not only by moral decision making but also by manifold practices that connect us to the material world in which we live. This makes ethics not a matter of isolated subjects but, rather, of connections between humans and the world in which they live.”

Virtue ethics, with its concern for habits, practices, and communities of moral formation, illuminates the various ways technologies impinge upon our moral lives. For example, a technologically mediated action that, taken on its own and in isolation, may be judged morally right or indifferent may appear in a different light when considered as one instance of a habit-forming practice that shapes our disposition and character.

Moreover, virtue ethics, which predates the advent of modernity, does not necessarily assume the sovereign individual as its point of departure. For this reason, it is more amenable to the ethics of technological mediation elaborated by Verbeek. Verbeek argues for “the distributed character of moral agency,” distributed, that is, among the subject and the various technological artifacts that mediate the subject’s perception of and action in the world.

At the very least, asking the sorts of questions raised within a virtue ethics framework fills out our picture of technology’s ethical consequences.

In Susanna Clarke’s delightful novel, Jonathan Strange & Mr. Norrell, a fantastical story cast in realist guise about two magicians recovering the lost tradition of English magic in the context of the Napoleonic Wars, one of the main characters, Strange, has the following exchange with the Duke of Wellington:

“Can a magician kill a man by magic?” Lord Wellington asked Strange. Strange frowned. He seemed to dislike the question. “I suppose a magician might,” he admitted, “but a gentleman never would.”

Strange’s response is instructive and the context of magic more apropos than might be apparent. Technology, like magic, empowers the will, and it raises the sort of question that Wellington asks: can such and such be done?

Not only does Strange’s response make the ethical dimension paramount; he also approaches the ethical question as a virtue ethicist. He does not run consequentialist calculations, nor does he query the deliberations of a supposedly universal reason. Rather, he frames the empowerment afforded him by magic with a consideration of the kind of person he aspires to be, and he subjects his will to this larger project of moral formation. In so doing, he gives us a good model for how we might think about the empowerments afforded us by technology.

As Verbeek, reflecting on the aptness of the word subject, puts it, “The moral subject is not an autonomous subject; rather, it is the outcome of active subjection” [emphasis his]. It is, paradoxically, this kind of subjection that can ground the relative freedom with which we might relate to technology.

Moralizing Technology: A Social Media Test Case

Where do we look when we’re looking for the ethical implications of technology? A few would say that we look at the technological artifact itself. Many more would counter that the only place to look for matters of ethical concern is to the human subject. As noted in an earlier post, the philosopher of technology Peter-Paul Verbeek argues that there is another, perhaps more important place for us to look: the point of mediation, that is, the point where the artifact and human subjectivity come together to create effects that cannot be located in either the artifact or the subject taken alone.

Verbeek would have us consider the ethical implications of how technologies shape our perception of the world and our action into the world. Take the following test case, for example.

In a witty and engaging post at The Atlantic, Robinson Meyer assigned each of the seven (+2) deadly sins to a corresponding social network. Tinder, for example, gets paired with Lust, LinkedIn with Greed, Twitter with Wrath, and, most astutely, Tumblr with Acedia. Meyer mixed in some allusions to Dante, and the end result was a light-hearted discussion that nonetheless landed a few punches.

In response, Bethany Keeley-Jonker questions the usefulness of Meyer’s essay. While appreciating the invocation of explicitly moral language, Keeley-Jonker finds that the focus on technology, in this case social media platforms, is misleading.

In her view, as I read her post, moral blame and praise can only ever be assigned to people. One thing she appreciates about Meyer’s essay, for instance, is that “it locates our problems where they’ve always been: in people.” “Why the fixation, then,” she wonders, “on the ways our worst impulses show up in social media?”

She goes on to explain her reservations this way:

“I am not so sure that Facebook increases our desire for approval so much as it broadcasts it. That broadcasting element is the second reason I think people worry a lot about social media. Folks have engaged in the same kinds of bad behavior for centuries, but in the past it wasn’t so easy to search, archive and share your vices with a few hundred of your friends, family and acquaintances.”

Recalling Verbeek’s discussion, we recognize in Keeley-Jonker’s analysis an instrumentalist approach that appears to take the technology in question to be a morally neutral tool. The ethical dimension exists entirely on the side of human subjectivity. The behavior is historically constant; in this case, social media just exposes to public view what would’ve been going on in any case.

Consider one more of Keeley-Jonker’s examples:

“Plenty of pixels have been spilled over the way Pinterest sparks envy (and Instagram, for that matter), but I’ve also seen it spark connection and sharing. I’ve seen it reproduce something that’s happened between women for decades or centuries in low-tech ways: here’s that recipe I was telling you about; here’s how I made this thing; here’s where I bought that thing; here’s the secret to chocolate chip cookies.”

Same old activity, new way of doing it. The technology, on this view, leaves the activity essentially unchanged. There is a surface similarity, certainly, in the same way that we might say a hurricane is not unlike a cool breeze.

Of course, we do not want to suggest that a social media platform can itself be guilty of a vice; that would be silly. Nor is it the case that moral responsibility does not attach to the human subject. But is this all that can be said about the matter? Is it really misleading to consider the role of social media when talking about virtue and vice? What if, following Verbeek’s lead, we focus our attention on the point of mediation? How, for example, does each of these platforms mediate our perception?

Verbeek turns to the work of philosopher Don Ihde for some analytic tools and categories. Among the many ways humans might relate to technology, Ihde notes two relations of “mediation.” The first of these he calls “embodiment relations” in which the tools are incorporated by the user and the world is experienced through the tool (think of the blind man’s stick). The second he calls a “hermeneutic relation.” Verbeek explains:

“In this relation, technologies provide access to reality not because they are ‘incorporated,’ but because they provide a representation of reality, which requires interpretation [….] Ihde shows that technologies, when mediating our sensory relationship with reality, transform what we perceive. According to Ihde, the transformation of perception always has the structure of amplification and reduction.”

Verbeek gives us the example of looking at a tree through an infrared camera: most of what we see when we look at a tree unaided is “reduced,” but the heat signature of the tree is “amplified,” and the tree’s health may be better assessed. Ihde calls this capacity of a tool to transform our perception “technological intentionality.” In other words, the technology directs and guides our perception and our attention. It says to us, “Look at this here, not that over there” or “Look at this thing in this way.” This function is not morally irrelevant, especially when you consider that this effect is not contained within the digital platform but spills out into our experience of the world.

What, then, if we consider social media platforms not merely as new tools that let us do old things in different ways, but as new ways of perceiving that fundamentally alter what it is that we perceive and how we relate to it? In the case of social media, we might say that what we ordinarily perceive are things like our own self reflected back to us, other people, and human relationships. Perhaps it is in the nature of the unique architecture of each of these platforms to activate certain vices precisely because of how they alter our perception.* Is there something about what each platform allows us to present about ourselves or how each platform manipulates our attention that is especially conducive to a particular vice?

Again, it is true that apart from a human subject there would be no vice to speak of, but it would be misleading to say that the platform was wholly irrelevant, innocent even, of the vice it helps to generate. We might do well, then, to distinguish between an ever-present latent capacity for vice (or virtue) and the technological mediations that potentially activate the vice or, to stick with the moral vocabulary, constitute a field of temptation where there was none before.

And we have not yet addressed how the platforms might be conceived of as engines of habit formation–generating addiction by design, to borrow Natasha Dow Schüll’s apt formulation–and thus incubators of moral character.

The first of Melvin Kranzberg’s useful laws of technology states, “Technology is neither good nor bad; nor is it neutral.” Let us conclude with a corollary: “Technology is neither moral nor immoral; nor is it morally neutral.”

___________________________________

*A recent post by Alan Jacobs provides an illustration of this dynamic from an earlier era and its own emerging media landscape. Of Martin Luther and Thomas More, Jacobs writes, “To put this in theological terms, one might say that neither More nor Luther can see his dialectical opponent as his neighbor — and therefore neither understands that even in long-distance epistolary debate one is obligated to love his neighbor as himself” (emphasis mine).

Ethics of Technological Mediation

Early on in Moralizing Technology: Understanding and Designing the Morality of Things (2011), Peter-Paul Verbeek briefly outlines the emergence of the field known as “ethics of technology.” “In its early days,” Verbeek notes, “ethical approaches to technology took the form of critique (cf. Swierstra 1997). Rather than addressing specific ethical problems related to actual technological developments, ethical reflection on technology consisted in criticizing the phenomenon of ‘Technology’ itself.” Here we might think of Heidegger, critical theory, or Jacques Ellul.

In time, “ethics of technology” emerged “seeking increased understanding of and contact with actual technological practices and developments,” and soon a host of sub-fields appeared: biomedical ethics, ethics of information technology, ethics of nanotechnology, engineering ethics, ethics of design, etc.

This approach remains “merely instrumentalist.” “The central focus of ethics,” on this view, “is to make sure that technology does not have detrimental effects in the human realm and that human beings control the technological realm in morally justifiable ways.” It is not that these considerations are unimportant–quite the contrary–but Verbeek believes that this approach “does not yet go far enough.”

Verbeek explains the problem:

“What remains out of sight in this externalist approach is the fundamental intertwining of these two domains [the human and the technological]. The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”

Observing that technologies mediate both perception (how we register the world) and action (how we act into the world), Verbeek elaborates a theory of technological mediation, built upon a postphenomenological approach to technology pioneered by Don Ihde. Rather than focus exclusively on either the artifact “out there,” the technological object, or the will “in here,” the human subject, Verbeek invites us to focus ethical attention on the constitution of both the perceived object and the subject’s intention in the act of technological mediation. In other words, how technology shapes perception and action is also of ethical consequence.

As Verbeek rightly insists, “Artifacts are morally charged; they mediate moral decisions, shape moral subjects, and play an important role in moral agency.”

Truth and Trust in the Age of Algorithms

A great deal has been written in the last few days about how Facebook determined which stories appeared in its “Trending” feature. The controversy began when Gizmodo published a story claiming to reveal an anti-conservative bias among the site’s “news curators”:

Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.

Several former Facebook “news curators,” as they were known internally, also told Gizmodo that they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module.

Naturally, the story generated not a little consternation among conservatives. Indeed, a Republican senator, John Thune, was quick to call for a congressional investigation.

Subsequently, leaked documents revealed that Facebook’s “Trending” feature was heavily curated by human editors:

[…] the documents show that the company relies heavily on the intervention of a small editorial team to determine what makes its “trending module” headlines – the list of news topics that shows up on the side of the browser window on Facebook’s desktop version. The company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson, Missouri, in users’ feeds.

The guidelines show human intervention – and therefore editorial decisions – at almost every stage of Facebook’s trending news operation […]

The whole affair is far from inconsequential: Facebook is visited by over one billion people daily and is now widely regarded as “the biggest news distributor on the planet.” In her running commentary on Twitter, Zeynep Tufekci wrote, “My criticism is this: Facebook is now among the world’s most important gatekeepers, and it has to own that role. It’s not an afterthought.”

Along with the irritation expressed by conservatives, others have criticized Facebook for presenting its Trending stories as the products of a neutral and impersonal computational process. As the Guardian noted:

“The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked and your location,” says a page devoted to the question “How does Facebook determine what topics are trending?”

No mention there of the human curators, and this brings us closer to what may be the critical issue: our expectations of algorithms.

First, we should note that the word algorithm is itself part of the problem. In his thoughtful discussion of the Facebook story, Navneet Alang called the algorithm the “organizing principle” of our age. Precisely because the term carries so much weight, we ought to be careful in our use of it; it does both too much and too little. As Tufekci tweeted, “I *do* wish there were a better term than algorithm to mean ‘complex and opaque computation of consequence’. Language does what it does.”

Second, Rob Horning is almost certainly right in claiming that “Facebook is invested in the idea that truth depends on scale, and the size of their network gives them privileged access to the truth.” To borrow a phrase from Kate Crawford, Facebook wants to be the dominant force in a “data driven regime of ‘truth.’”

Third, it is apparent that in its striving to be the dominant player in the “data driven regime of truth,” Facebook is answering a widely felt desire. “Because they are mathematical formulas,” Alang observed, “we often feel that algorithms are more objective than people.” “Facebook’s aim,” Alang added, “appears to have been to eventually replace its humans with smarter formulas.”

We want to believe that Algorithms + Big Data = Truth. We have, in other words, displaced the old Enlightenment faith in neutral, objective Reason, which was to guide democratic deliberation in the public sphere, onto the “algorithms” that structure our digital public sphere.
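It may help to make the object of this faith concrete. What follows is a toy sketch, not Facebook’s actual computation, which is proprietary; the signals simply echo the factors named in Facebook’s own explanation quoted above (engagement, timeliness, Pages you’ve liked, location), and every name, weight, and constant is our own invention for the purpose of illustration.

```python
# A toy "trending" score, invented purely for illustration. The signals
# echo those Facebook names publicly (engagement, timeliness, Pages
# you've liked, location); the weights and constants are arbitrary
# choices of ours, not anything Facebook has disclosed.

from dataclasses import dataclass
import math
import time


@dataclass
class Story:
    shares: int            # engagement signal
    comments: int          # engagement signal
    published_at: float    # Unix timestamp of publication
    from_liked_page: bool  # does the story come from a Page the user liked?
    local_to_user: bool    # is the story local to the user's region?


def trending_score(story: Story, now: float | None = None) -> float:
    """Score a story for a hypothetical 'trending' module."""
    now = time.time() if now is None else now
    age_hours = max((now - story.published_at) / 3600.0, 0.0)

    engagement = math.log1p(story.shares + 2 * story.comments)
    timeliness = math.exp(-age_hours / 6.0)  # story "cools" over ~6 hours
    affinity = 1.5 if story.from_liked_page else 1.0
    locality = 1.2 if story.local_to_user else 1.0

    # None of these constants is given by the data. Someone decided that
    # a comment counts twice as much as a share, that relevance decays
    # over six hours, that liked Pages earn a 50 percent boost.
    return engagement * timeliness * affinity * locality
```

Even in this trivial sketch, change one constant and a different set of stories “trends.” The human judgment behind the ranking has not been eliminated; it has been relocated into the code, where it is much harder to see.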

False hopes and subsequent frustrations with “algorithms,” then, reveal underlying technocratic aspirations: the longing for technology to do the work of politics. That longing may be understandable, given how frustrating, difficult, and sometimes even dangerous the work of politics can be, but it is misguided nonetheless.

Our desire for neutral, truth-revealing algorithms can also be framed as a symptom of a crisis of trust. If we cannot trust people or institutions composed of people, perhaps we can trust impersonal computational processes. Not surprisingly, we feel badly used upon discovering that behind the curtain of these processes are only more people. But the sooner these false hopes and technocratic dreams are dispelled, the better.

 

Promiscuous Tech Criticism

Early on in The Value of Convenience: A Genealogy of Technical Culture (1993), Thomas Tierney distinguishes his work from that of earlier critics who sought to reveal something of the essence of technology. “Rather, what I offer here,” Tierney explains, “is nothing more than a perspective on technical culture.” “This perspective,” he continues,

“treats technology as something which can be thought of along various lines, none of which is capable of revealing the heart of the matter of technology. It is only by approaching technology from various perspectives that one can begin to understand and, perhaps, resist it. And there is no reason for believing that after experiencing technology from various perspectives, one will be able to completely grasp it and utter a final word on the subject. So in regard to those interpretations which have been offered as revelations of the essence of technology, it is not so much that I find them wrong, but that I find they claim too much for their insights.”

This is a sensible approach to the work of understanding all that we group under the concept of technology, and it is a good summary of the approach that informs the work of CSET. We encourage and seek to practice a mode of tech criticism that borrows promiscuously from a wide array of disciplines and perspectives. We do so because the “various lines” by which we might approach technology converge on the human being. If technology is deeply entangled with our humanity and if it is inseparable from the countless ways our humanity is expressed and articulated, then we are as unlikely to find a singular, all-encompassing account of it as we are to find such an account of being human.

Remembering Jacques Ellul

Writing in Comment magazine in 2012, David Gill considered the legacy of Jacques Ellul, the French sociologist and theologian best known for his searching critiques of modern technology, most notably in The Technological Society.

Gill summarizes Ellul’s diagnosis of modern society this way:

In The Technological Society and many subsequent works, Jacques Ellul detailed the emergence and the universal, global, intensive, and extensive dominance of la technique [his term for the all-encompassing nature of technology as artifact, system, and mode of thought] in our civilization. Technology affects every aspect of our lives and every part of the world. It is the defining characteristic of the general milieu in which we live and think. It is not just that technological tools and machines are everywhere, he says, but that technological rationality dominates our every thought and activity (political, religious, therapeutic, artistic, sexual, and otherwise).

Elsewhere, Ellul described the sacred as “the unimpeachable, inviolable order to which man himself submits and which he uses as a grid to decode a disorderly, incomprehensible, incoherent world that he might get his bearings in it and act in it.” This was, in his view, precisely the function technology had assumed in modern society. Technology, Ellul believed, is “not merely an instrument, a means. It is a criterion of good and evil. It gives meaning to life. It brings promise. It is a reason for acting and it demands a commitment.”

In keeping with technology’s sacred status, Ellul wrote, “All criticism of it brings down impassioned, outraged, and excessive reactions.” “Whenever anyone suggests that technology presents certain disadvantages,” he added, “people rush to its defense …. One can call everything in our society into question (including God), but not technology.”

Some have accused Ellul of describing a situation so dire as to be inescapable and irredeemable. That was not quite the case. As Gill explains,

Ellul says we should “profane” and desacralize false gods and idols. Treat them as ordinary and profane, joke about them, ignore them, refuse to sing their praises or bow down to them, limit their presence and position in our life (and organization). Question them. There is a time to attack and mock a false or predatory god. To get free, we may need to taunt it, profane it, take its name in vain, commit sacrilege, and brashly break its commandments. Words must be accompanied by actions.

For more on the life and work of Ellul, including a discussion of the values embedded in technology, read the rest of Gill’s essay.