Humanity in an AI-enabled world

Monica Spisar
4 min read · Jun 11, 2018


In studying AI, ML, and the general advance of technology — and observing the response to the hype surrounding it all — I’m compelled to ask whether we’re fully considering our own power to shape how technology influences us. Not from the perspective of regulating the technologies themselves, nor how they evolve (both important considerations), but from the perspective of how we, as humans, will evolve to mitigate the inherent risks.

We’re complicit in the development of AI — artificial intelligence — that will not only work for us, but also watch us and influence our decisions. We know that we’ll need to protect ourselves from the AI beings we’re building. But we’re building them anyway.

The promises of AI include automating unappealing rote tasks and crunching through mind-blowing quantities of data to find what we might be searching for. (Interestingly, we routinely stumble in defining what, exactly, we seek — another topic entirely.) AI works because it’s been trained to please humans; at the very least, this is true for narrow AI, which is designed to excel at a particular task but fails at fundamentally different ones. In narrow AI, algorithms (designed by humans) monitor their success (defined by humans) and practice on data (created by humans) until a desired performance metric (supplied by humans) is achieved. As long as AI is restricted to this pattern, the influence of human foibles (or, much worse, nefarious intent) is inevitable.
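To make that pattern concrete, here’s a minimal sketch of such a training loop in Python. Everything in it (the toy dataset, the single-parameter “model”, the 0.95 target) is an illustrative placeholder rather than any real system’s code; the comments flag where each human-supplied ingredient enters.

```python
import random

def load_labeled_examples(n=1000):
    # Data created (and labelled) by humans; here, a toy stand-in
    # where the "true" rule is simply x > 0.5.
    xs = [random.random() for _ in range(n)]
    return [(x, x > 0.5) for x in xs]

def accuracy(threshold, examples):
    # Success measure defined by humans.
    correct = sum((x > threshold) == label for x, label in examples)
    return correct / len(examples)

def train(examples, target=0.95):
    # Algorithm designed by humans: nudge one parameter and keep
    # practicing on the data until the human-supplied metric is met.
    threshold = 0.0
    while accuracy(threshold, examples) < target:
        threshold += 0.01
    return threshold

examples = load_labeled_examples()
model = train(examples)
print(f"learned threshold: {model:.2f}")
```

Production systems differ in scale and sophistication (neural networks, gradient descent, millions of examples), but the shape of the loop is the same: human data, a human-defined score, and a human-chosen stopping rule.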

In any case, we know AI will never be a panacea. It may even end up being overwhelmingly stifling and dehumanizing. We know the risks, though I’m not sure we really know them. We need not even look as far as proposed dystopias where AGI enables machines to overtake humanity. (The G in AGI, for general, transforms AI from narrow to not.) Challenges arise with the most basic AI support systems, such as those that determine who gets hired or who gets parole. Adjusting for known issues like bias provides a measure of comfort but, at the end of the day, the algorithms and training data are subject to human fallibility, and it’s impossible to know in advance whether the best efforts of fallible humans will suffice.

Given the complexity of the elements that feed into AI calculations — if only in terms of the ever-increasing number of nodes and edges in the relationship graph — a dystopian outcome might seem unavoidable. Yet consider the abilities of AI that are independent of direct human influence: its ‘intelligence’ reduces to impressive speed at shuffling bits to effect algebraic manipulation. Therein lies its power, and on that front humans cannot compete. But how might we shift our relationship to AI if we focused on amplifying our humanity instead? Encouragement to do this on an individual level abounds, but how will we make it systemic?

As currently structured, Western (and other) societies rely on humans adopting certain behaviours, many of which are machine-like and tend to minimize our humanity by, for example, limiting our individuality. Most of us adhere to constraining social norms and expectations without questioning the necessity of doing so. In acquiescing to the will of others, we diminish our humanity. In blindly complying with directions and orders, we suppress our humanity. And, ultimately, in submitting to the fallacy that leaders or societal structures absolve us of responsibility, we give away our humanity.

If that puts us at a disadvantage with respect to AI overlords, what options do we have? These behavioural expectations aren’t fundamental to society; they were baked into our societal systems to make them run efficiently rather than effectively — at minimal cost and marginal performance — and to generate significant benefit for some, not necessarily all, nor even most. The true purpose of the accommodations we’re asked to make is to maintain current societal structures and systems.

But these machine-like behaviours nudge us toward competing with computers on their territory — and we can’t win on that front. By ceasing to promote or reward those behaviours over, say, inquisitiveness, creativity, and compassion, we might have a shot at amplifying our humanity enough to stay a step ahead of AI — even when it graduates to AGI.

To get there, our societal structures and systems will need to adapt — or, rather, undergo drastic transformation: they’ll have to withstand humans actually being human. Given the tremendous compromises we’ve made, it’s hard to know where to begin. Initiatives toward systems that enhance participants’ agency seem headed in the right direction, and the ways in which education evolves will be influential. The recent shift toward self-directed learning in progressive grade schools and beyond is an encouraging start.

At a fundamental level, we’d be wise to do away with the myth that change is hard. Well, perhaps myth is a bit strong — but it’s a fact that change is as essential to human life as breathing. The energy wasted on resisting its flow might be directed to better purpose. Perhaps most importantly, we’d gain confidence in dealing with difference — a confidence which would diminish the fear from which so many of our human disputes and limitations sprout.

We’re also presented with an opportunity for optimism: to view this as the start of a potentially beautiful relationship. We’ve arrived at an impasse; change is imminent. And if humanity is to be preserved, it will need to flourish. My hope is that we will be the better for it.
