Technology and the Emergence of “Computational Parasites”

Modern technology now allows the earth to support more than eight billion inhabitants, many of whom enjoy the myriad conveniences of modern life. Yet new digital technologies — so lucrative to inventors and entrepreneurs, and so alluring to users — often create problems and side effects that society as a whole must address.

We see this, for example, when social media support the spread of misinformation, disinformation, and hatred, while letting users live in an information bubble that shields them from ideas that might challenge their own point of view. Tech companies extract a profit, while leaving society with serious new challenges that others must work to address.

Many recent technological advances leave similar problems for society, putting our social systems at risk and imposing external costs as the resultant problems demand the government’s time and attention. These “computational parasites” allow some entrepreneurs to profit from disruptive change, even when their new algorithms come at a net cost to society as a whole. The overall risk increases as new technologies emerge with increasing rapidity, and as entrepreneurs keep trying to restructure the social and economic system with little consideration of the broader consequences of their new technologies. Governments must develop better ways to control and limit the emergence of such parasitic technologies, and I propose some first steps toward the design of new regulatory strategies.

 

Examples of Such Parasitic Technologies

Bitcoin provides a good example of a dubious, dangerous new kind of “digital progress.” At one level, the algorithms supporting Bitcoin are very clever and appear quite secure (ownership of tokens is represented by long cryptographic strings, and transactions are approved and registered on the blockchain only after verification by other systems spread around the world). Bitcoin has spurred the development of thousands of other cryptocurrencies (most of them very obscure), and cryptocurrency markets have attracted many eager participants — speculators, dreamers, criminals, and people who hope that Bitcoin will protect their life savings from future devaluation of their nation’s currency.
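To make these mechanisms concrete, here is a deliberately simplified sketch of the two ideas just mentioned: a ledger of transactions chained together by cryptographic hashes, and a proof-of-work hurdle that must be cleared before a new block is accepted. This toy example (in Python, with invented names and a trivially low difficulty) illustrates the general technique only; it is not the actual Bitcoin protocol.

```python
# Toy hash-chained ledger with proof-of-work. A simplified sketch,
# not the real Bitcoin protocol; names and parameters are invented.
import hashlib
import json

DIFFICULTY = 4  # number of leading zeros required in a block hash


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def mine_block(prev_hash: str, transactions: list) -> dict:
    """Search for a nonce that makes the block hash clear the difficulty bar."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        if block_hash(block).startswith("0" * DIFFICULTY):
            return block  # other nodes can verify this cheaply
        nonce += 1  # the expensive search: this is what consumes electricity


# Build a tiny two-block chain.
genesis = mine_block("0" * 64, [{"from": "coinbase", "to": "alice", "amount": 50}])
block2 = mine_block(block_hash(genesis), [{"from": "alice", "to": "bob", "amount": 10}])

# Verification is cheap: recompute the hashes and check the chain links.
assert block2["prev"] == block_hash(genesis)
assert block_hash(block2).startswith("0" * DIFFICULTY)
print("chain verified; mining nonce found after", block2["nonce"], "attempts")
```

Note where the electricity goes: verifying a block takes a single hash, but finding the nonce takes thousands of attempts by design, and that search is repeated by mining systems around the world.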

Some individuals, of course, have made fortunes by creating new cryptocurrencies, or by getting in (and out) of these markets at the right time. Yet there’s no reason to believe that Bitcoin and other cryptocurrencies serve any net useful role for society as a whole. Indeed, the use of Bitcoin introduces a fundamentally new kind of tension into our financial and social systems: Users may distrust fiat currencies, yet their own actions impose significant demands on society, because such cryptocurrencies require the continued attention of government regulators, lobbyists, and the legal system. Seen from this larger, system-level perspective, Bitcoin and other cryptocurrencies behave as computational parasites, extracting resources — money, time, attention, and the electricity needed for mining — from a larger social system.

Similar problems arise with high-speed trading in financial markets, and with the proliferation of new financial instruments. Some ETFs (exchange-traded funds) are useful to individuals making long-term investments, but many others are just set up to offer new ways of betting on fluctuations in the financial markets. Huge amounts of money move back and forth (often in milliseconds) in a set of zero-sum games that I discussed in an earlier essay on problems with the financial system. 

The betting leads to a kind of information arms race, where the winners are those with better algorithms, faster computers, more information about the markets, and a bigger pot of money to wager. Those who are rich enough to invest in hedge funds, and who have full-time financial advisors, can extract money from fluctuations in stock and bond markets — taking money from anyone who temporarily happens to be on the wrong end of the trade. They can “short the market” during times of crisis, often making immense profits while others are at risk of losing their homes and their life savings.
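To see why such trading amounts to a zero-sum game, consider a minimal simulation (a sketch with invented parameters, not a model of any real market) in which a fast trader observes each price move one tick before a slow trader and always takes the profitable side of the trade:

```python
# Minimal sketch of the "information arms race": a fast trader sees each
# price move a tick early, so every dollar it wins comes out of the slow
# trader's pocket. Agents and parameters are invented for illustration.
import random

random.seed(42)
fast_pnl, slow_pnl = 0.0, 0.0
price = 100.0

for _ in range(10_000):
    delta = random.gauss(0, 0.05)  # the next price move
    # The slow trader quotes at the current price; the fast trader, having
    # already observed `delta`, buys before a rise and sells before a drop.
    if delta > 0:
        fast_pnl += delta          # bought low from the slow trader
        slow_pnl -= delta
    else:
        fast_pnl += -delta         # sold high to the slow trader
        slow_pnl -= -delta
    price += delta

print(f"fast trader P&L: {fast_pnl:+.2f}")
print(f"slow trader P&L: {slow_pnl:+.2f}")
print(f"zero-sum check:  {fast_pnl + slow_pnl:+.2f}")
```

Every run ends with the two profit-and-loss figures summing to exactly zero: whatever the faster, better-informed trader wins, the slower trader loses.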

 

Impacts on the Rest of Society

These changes in the financial system — with cryptocurrencies, high-speed trading, and an endless stream of new financial derivatives — have (quite obviously) been made possible by recent developments in computer science and computer technology. Yet these trends in the financial system must also be understood in a larger social and political context: These new “money machines” can operate as they do only because of the political power and influence afforded to an elite, wealthy class in modern Western society.

Educational privilege plays a role, as trading firms tend to hire some of the most talented students coming out of schools like Harvard, Princeton, Yale, MIT, Stanford, and Caltech. And financial privilege helps ensure that the games are allowed to go on. Wealthy investors can hire lobbyists and lawyers, and can also make massive contributions to political campaigns — helping to ensure that politicians do not impose any fundamental restraints on this free-wheeling financial system.

Perhaps the examples above would leave nothing for us to worry about if — hypothetically — speculators, scam artists, and financial elites could extract money from the system without affecting everyone else in society. Yet these new schemes take money from others without offering anything in return. They are “profitable” only because of the way in which they can attach themselves to some larger “host” and then suck money out of the rest of the social, financial, and political system. 

Thus, as described in Zeke Faux’s amazing book Number Go Up, many people lost their life savings when duped by scam artists or when placing ill-advised bets in the cryptocurrency markets.

And, although (perhaps) less obviously unsavory, financial elites in the U.S. have constructed a system that allows them to profit while ignoring the ways in which their activities may affect society as a whole: 1) public funds are spent to regulate these complex financial markets; 2) money is, in effect, steadily siphoned out of the accounts of less sophisticated investors; 3) banks are bailed out when they get into trouble (as in the financial crisis of 2007-2009); and 4) the rich use their political power to fight for special tax cuts and loopholes. And, echoing a concern raised by Peter Turchin in End Times, financial elites have created such powerful “wealth pumps” (extracting money from the social system so rapidly) that they risk immiserating much of the rest of society and setting the stage for social upheaval.

These are extremely serious concerns, and they show some of the ways in which parasitic computational processes can pose a fundamental risk to society as a whole. Yet — as I explain below — innovations in financial markets are just part of a much broader set of dangerous trends in modern society. There are many other ways in which granting everyone the freedom to create new algorithms (designed to restructure our social, political, and economic systems) can impose indirect costs on those who have no interest in, and no use for, these new computational schemas.

 

The Risk Posed by Parasitic “Money Machines”

In principle, government should help protect society from any problems caused by such parasitic processes (protecting the majority against a minority who just happen to have more computers and better programming skills). Yet it’s hard for governments to act effectively in these new realms when they already face a crisis of complexity and have trouble fulfilling their part of the social contract. And, as discussed below, it becomes ever harder for governments to detect and address such challenges as the rate of change accelerates and as these computational schemes become ever more complex.

These new computational technologies — as with recent developments in AI — may prove far more disruptive than earlier stages of industrial progress, which simply offered new, physically embodied goods and services. With AI, even those developing the technology admit that they don’t know whether it will usher in some bold, beautiful new era for humanity or lead to the extinction of the human species. Nonetheless, developers press ahead at breakneck speed, hoping that society will find some way to handle any resultant problems and will clean up any mess they make.

Of course, many technological advances have had real utility for society (as with the Internet, cell phones, biotechnology, the personal computer, word processors, spreadsheets, and video conferencing). Yet it’s often unclear when, or whether, new computational systems will provide a net long-term benefit to society. The rate of change and the increasing complexity of these new “digital technologies” often mean that society has neither the time nor the ability to analyze the potential risks and rewards.

No one individual or group is at fault, but the structure of our social system creates immense pressure to quickly implement and test every possible computational idea that might allow entrepreneurs and venture capitalists to find new ways of making money. The impetus for change comes in part from the huge supply of new graduates in computer science (a major that has become increasingly popular in recent years). Many of these newly minted graduates start dreaming of the “next big thing,” thinking of endless new ways in which they might “disrupt” existing systems. In the pursuit of their dreams, some are willing (in the words of Mark Zuckerberg) to “move fast and break things,” and their efforts are supported by venture capitalists and existing tech companies who — collectively — have hundreds of billions of dollars to invest in such efforts.

There are so many talented programmers, and so much money in the pockets of investors, and such pressure to move quickly, that there’s little time or incentive for new companies to think carefully about the long-term social impact of any of the new technologies they are developing.

Life in the San Francisco area, and my part-time work within the healthcare sector of the VC community, give me a little glimpse of what’s on the horizon. There is, for example, active work on the development of human-machine hybrids (allowing direct, reciprocal connections between the brain and computers). Other groups have been developing decentralized autonomous organizations/corporations (which would lie outside current regulatory guidelines, since there would be no CEO and no board, just a network of people and machines connected by blockchain). There are also serious attempts to develop AI systems that will be far more intelligent than people, to endow robots with consciousness, to design “digital doubles” that expand the power of the individual, and to create “virtual selves” that will survive the physical death of the individual (or allow the creation of a digital simulacrum of someone who’s already dead).

Given all the pressure driving technological elites to find new ways of making money, and given their willingness to change some of the most fundamental aspects of society as they do this, we need some way to protect ourselves in cases where innovation may impose real costs (of time, attention, or money) on the rest of society. 

 

Dealing with Parasitic Algorithms

Let me be clear: I do not believe that every new “disruptive innovation” will have net negative consequences for society, and I’m not trying to place blame for the problems that arise. (Most people are simply acting as you’d expect human beings to act: seeking more power for themselves.)

However, I do believe that disruptive algorithmic innovation poses serious risks for society. There have been innovations (such as cryptocurrencies) that serve no net purpose for society as a whole, and ideas under discussion in high-tech circles show the pressure to develop new technologies without any real thought about their potential long-term impact on society. Problems associated with these new technologies may eventually get the kind of careful scrutiny they deserve, but even then they consume the government’s time, attention, and resources. Society needs better ways of detecting, regulating, and eliminating such computational parasites.

This will not be easy, but I propose several approaches that should help, and I hope that others can build on the framework offered here:

#1). It will be important to have better ways of modeling complex human systems, perhaps via agent-based programming (in which programmers try to model how individuals make choices, and then let myriad individual “agents” act and interact to see how the larger social/economic system might behave); a minimal sketch of such a model appears after this list. Even if the first models work only at a qualitative level, they should provide a way of exploring different assumptions and possibilities, and thus stimulate further thought and analysis. Elaboration and refinement of such models should eventually help society find better ways of regulating algorithmic innovations that benefit the inventors and entrepreneurs but end up imposing substantive new costs on everyone else.

#2). Even as better models are being developed, society would benefit from a more open, honest discussion of the potential costs involved in introducing new algorithms and restructuring the social system in the name of “progress.” We cannot afford to blindly follow the advice of those with billions of dollars at stake in the venture capital markets. The nature and meaning of progress is nowhere near as simple or obvious as Marc Andreessen’s “Techno-Optimist Manifesto” suggests.

#3). Although it may be hard to “tap the brakes” when money speaks so loudly in Washington, the government needs better ways of evaluating how new schemas proposed by the technological elite may affect the rest of society. The government should also make it clear to companies and their customers that acceptance of new computational schemas (that attempt to restructure society and the economy) is only provisional, since it is not always possible to identify parasitic processes in the moment when they are first introduced. Regulations and restrictions may need to be added as we start to see the side effects and get feedback from those affected by the new developments. 

#4). Minimizing the proliferation of computational parasites is so important that the government must provide funding (just as it has done in biomedical research) for outside groups working to explore these problems. Given the multiple layers of system-level complexity involved, it takes time and effort to analyze the implications of new (computational) ideas about ways to restructure our social and economic systems. And, if and when problems are discovered, it takes time to develop arguments clear enough and powerful enough to challenge the claims of the well-paid lobbyists working for the billionaires who benefit from the continued proliferation of complexity.

#5). The United States will need a new government agency to help analyze and address the problems posed by these new computational parasites. Companies introducing new computational schemas should be required to file “algorithmic impact statements.” Like the environmental impact statements required by other agencies, these would provide a basis for analysis and discussion of the external costs likely to arise from a new technology.
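To illustrate proposal #1 above, here is a minimal agent-based sketch (in Python, with payoff numbers that are pure assumptions chosen for illustration): each agent chooses between a “productive” strategy and a “speculative” one, earns a payoff that depends on how crowded the speculative strategy has become, and then imitates a better-performing peer.

```python
# Minimal agent-based model: agents choose "productive" or "speculative"
# strategies and imitate successful peers. All payoff numbers below are
# invented assumptions; the point is the qualitative dynamics.
import random

random.seed(1)
N_AGENTS, ROUNDS = 1_000, 50
PRODUCTIVE_PAYOFF = 1.0


def speculative_payoff(share: float) -> float:
    # Assumption: speculation pays well while rare, poorly once crowded.
    return 2.0 * (1.0 - share)


strategies = ["productive"] * N_AGENTS
for agent in random.sample(range(N_AGENTS), 50):
    strategies[agent] = "speculative"  # seed a small speculative minority

for round_ in range(ROUNDS):
    share = strategies.count("speculative") / N_AGENTS
    payoffs = [PRODUCTIVE_PAYOFF if s == "productive" else speculative_payoff(share)
               for s in strategies]
    # Each agent compares itself to a random peer and copies the winner.
    new_strategies = strategies[:]
    for i in range(N_AGENTS):
        j = random.randrange(N_AGENTS)
        if payoffs[j] > payoffs[i]:
            new_strategies[i] = strategies[j]
    strategies = new_strategies
    if round_ % 10 == 0:
        print(f"round {round_:2d}: speculative share = {share:.2%}")
```

Even a toy model like this lets one watch a parasitic strategy spread through a population until its returns are diluted by crowding. More elaborate versions could add regulators as agents, heterogeneous wealth, or network structure; qualitative insight of this kind is exactly what such modeling tools could offer policymakers.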

None of this will be easy, but we must start now. Changes in this realm are so shocking, and so different from earlier technological advances, that we cannot afford to blindly accept each “gift horse” that high tech now offers. With governments already reeling under a crisis of complexity (alongside all the other challenges of the Anthropocene), society needs better ways to ask questions, to push back, and to control the type and rate of change introduced in the name of “technological progress.” As if trying to keep a ship upright in a storm, society must maintain a social and economic system that citizens and leaders have some chance to understand and control.