Perfect Storm: Big Data-Surveillance-AI

Peter_Robertson
8 min read · Sep 23, 2017

Artificial Intelligence (AI) has hit some important benchmarks. In some fields it is able not just to eclipse the average human but to beat world masters at pursuits such as chess and Go. This appears to have happened overnight, but in reality it comes on top of fifty years of alternating optimism and despair. For example, I recall opinions from the early 1980s discouraging careers in computer programming, based on the view that computers would very soon be programming themselves. How's that working out? In reality, compelling general AI is still a ways off, but there are plenty of specific tasks that AIs can be trained to do, and this is where we are seeing fairly breathtaking progress right now. AI's impact on the future of work for the world's people is real and imminent. That, however, is not the particular threat that we deal with here.

So, AI, what is it?

Let's start with a scope. Per Wikipedia, "…the problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects." These capabilities are seen as evidence of 'intelligence'. What about consciousness? If the popular notion of AI includes a tacitly held notion of consciousness, then maybe the following from 'The Terminator' has something to do with it: "The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th." The machine/network has become self-aware, identifies humans as the enemy and proceeds with an extermination plan it conceived autonomously. Is this possible? My point is that it doesn't matter whether such extreme AI is close, or even possible. AI, in its current forms, is dangerous right now and is set to become immensely more dangerous over the coming decades, even without achieving sentience.

If you want to delve into AI state of the art, here is a really good article on Medium.

Why is AI dangerous?

Firstly, to be clear, we’re grouping all ‘levels’ of AI into the threat space here. As indicated, AI doesn’t need to be super smart, and certainly not conscious, to pose a threat. Massive armies of adaptive, semi-autonomous agents are already scouring the Internet to collect, collate, process and digest information for a variety of purposes.

What’s on the Internet? We are. Our very selves, in various ways, are spattered all over the Internet. Effectively, we’ve lost control of our identities. When data is collected about us, algorithms analyse that data and draw conclusions, then attach these conclusions to some system’s representation of us, along with the actual information harvested. A machine ‘attests’ to a version of our reality and we have no say in that attestation. In this way, we can be served ‘targeted ads’, or, if we’re in Afghanistan, and the machine finds our conduct suspicious, we might be served a drone. The latter case, as dire as it is, doesn’t make much impression in the West. I suggest that our inner Western voice, if we even notice it, is saying something like, “nobody I know got droned, if you get droned, you must have been up to no good, hey, 9–11, right?”

If you are unmoved by the plight of ordinary people in far-off lands, and you feel safe, secure and unconcerned as a citizen of some modern western democracy, I'd like to pose a question to you: can you be certain that you, your children and your grandchildren will always live under a safe administration that respects your freedom and general civil rights? Really? Perhaps you should take a look at what happens to civil rights in a time of war, or when a wealthy, modern nation state fails, e.g. Venezuela. Civil societies can collapse quickly and spectacularly. My point is that we have no guarantees of a continuing, stable, civil situation. We certainly can't rely on the good will or integrity of individuals to ensure it. Consider that outlook now in the context of emerging technology.

I'm suggesting that together big data, surveillance and AI are forming the ideal infrastructure for future repressive and dictatorial regimes. Whoever is prepared to use the AI toolset without restraint, and has free rein over a population's personal data, will have incredible power. They will know everything about their opponents and will have a basis to reward 'good' behaviour and punish 'bad' behaviour throughout the general population. China does the latter today with a huge army of people across social media. China is also very interested in AI and is investing heavily, and Vladimir Putin recently stated that "whichever country leads the way in AI research will come to dominate global affairs". What emerges here is that nations have become as obsessed with monitoring and controlling their own populations as they are with playing brinkmanship at the international level. In fact, the threats attributed to other nations form the central justification for domestic abuses of civil rights and privacy across the developed world today. We are told we must give up the freedoms we assume sacrosanct in order to protect those very freedoms, and this contradiction is rarely confronted.

We have ample historical dystopias to refer to, and some regimes achieved extraordinary capacity in surveillance and data management with only the technologies available during the second world war and subsequent cold war. It is sobering to consider what our present might be like if any of these past regimes had possessed even a fraction of current AI capabilities, let alone those that lie in the near future. It is also very important to acknowledge that there was nothing special or unique about the individuals who led nations into these disasters. The same types of people are present throughout all of our societies today, and if they gain control of the capabilities currently being developed, then we have serious cause to worry.

How can we stop AI?

We can’t. AI represents one of the next logical steps in human achievement. It is already here in a preliminary way and it can’t be stopped. Any attempts at regulation will create strategic and marketplace distortions, benefiting elites and quite possibly putting some nations at risk as they fall behind.

If we can’t stop AI, what then?

Remember, we’ve identified big data, surveillance, and AI together as the ingredients of a very dangerous future. If we can’t stop AI, what about the other two bits, big data and surveillance? AI, without huge troves of data to harvest, can’t do much. If I weren’t in the dataset, and couldn’t be surveilled, then I couldn’t be data farmed, and as an individual, my hazards would be much reduced. Clearly we can’t just shut down big data and surveillance, but can we opt out and still participate fully as citizens?

How do we opt out of big data and surveillance?

Given that western leaders have largely already embraced intrusive surveillance of their own citizens, and that big data has enormous commercial impetus, we have to take responsibility for these matters ourselves, individually. Neither government nor business is going to help here; both are addicted to big data surveillance and are in fact at the root of the problem. They certainly won't voluntarily moderate their conduct.

Today, using a combination of tools, we can reduce the chances of being surveilled and data mined on the WWW. It isn't easy or totally effective, nor is it a solution for the vast majority of the population. Being online for most of us involves risks at the hands of incompetents (Equifax), criminals (any number of shady hackers world-wide) and over-reaching governments (e.g. the member nations of the Five Eyes intelligence alliance). The big enabling factor for surveillance and big data is centralisation. Centralised databases hosted in server farms today contain millions of individuals' critical, personal information. The incentives for breaking into a large database are irresistible: for criminals there is the prospect of millions of credit cards that can be sold to other criminals; for governments, the intelligence payloads override any concerns that might otherwise be held for the dignity and privacy of their own citizens, or for the citizens and officials of other nations for that matter.

How do I escape centralised databases?

A number of projects are currently striving to answer this question. For the most part, they are built on, or are themselves, decentralised networks. Peer-to-peer (p2p) networks achieve this by removing all servers and centralised management and control from the architecture, which immediately yields some very important advantages. A truly p2p network has no single point of failure and no central attack surface, making it highly resistant to capture, censorship and widespread disruption. Large centralised data centres, by contrast, create bandwidth choke points, consume a lot of power, require cooling (more power) and offer great rewards to attackers: as mentioned, a single successful break-in to a large database nets a huge payload. Data centres also give various actors opportunities to 'sniff' the large, converging data feeds for valuable information and to track individuals. P2p decentralised networks, depending on their specific design, can also provide direct economic and performance advantages, often actually speeding up access to shared resources under heavy traffic, the opposite of what happens in centralised systems.
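To make the contrast with centralised databases concrete, here is a minimal, illustrative sketch of content addressing, the principle underlying p2p data networks such as IPFS: data is located by the hash of its content rather than by the location of a server, so any peer holding the data can serve it, and tampering is detectable by re-hashing. This is a toy model, not any project's real API; the in-memory dictionary stands in for a distributed hash table spread across peers.

```python
import hashlib

# A toy stand-in for the network's shared store. In a real p2p network,
# this mapping is distributed across many peers via a DHT, with no
# central server holding (or able to censor) the whole dataset.
store = {}

def put(data: bytes) -> str:
    """Store data under the hash of its content; return that address."""
    address = hashlib.sha256(data).hexdigest()
    store[address] = data
    return address

def get(address: str) -> bytes:
    """Retrieve data and verify it actually matches its address."""
    data = store[address]
    if hashlib.sha256(data).hexdigest() != address:
        raise ValueError("content does not match its address (tampered?)")
    return data

addr = put(b"hello, decentralised world")
assert get(addr) == b"hello, decentralised world"
```

Because the address is derived from the content itself, popular data can be fetched from whichever peer is nearest, which is why such networks can get faster, not slower, under heavy demand.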

Examples of p2p networks include IPFS (InterPlanetary File System), which is currently available, and The Safe Network, which is still in development but making great progress. It is still early for these new networks, but together they begin to demonstrate how we will have alternatives and choices in the near future about how we go online and how we protect our own identities and private information.

Conclusions

The future of civil societies is inextricably bound to the nature of the networks we use. Due to the architecture of the WWW, we are all vulnerable to constant surveillance and sporadic direct attacks. This vulnerability can be mitigated, but the methods required are far beyond the interest and ability of normal users.

Three particular aspects of technology combine most potently to threaten our civil democratic futures: big data, surveillance and AI. While many see AI as the overwhelming threat, without access to big data gathered through surveillance, AI poses a far less direct threat to human beings.

Projects are underway to offer us alternatives that can protect us from big data surveillance. These projects may prove critical in preserving and increasing civil rights, privacy and freedom for individuals around the world.
