with Jeffrey Bleich
Tech-optimist? Tech-pessimist? Tech-pragmatist? …Where do you sit?
Every day, we read about mind-bending new technologies or innovative workplace practices on one hand, and stories of young people being taught mismatched skills or of significant workforce retrenchments on the other. So how do we separate the theatre from the threats and chart a path forward? What are the major considerations we need to be turning our attention to when it comes to the ‘nature’ of technology?
This was exactly the focus of my conversation when I interviewed former US Ambassador to Australia (and namesake of the Jeff Bleich Centre for Digital Technologies, Security and Governance) Jeffrey Bleich. Throughout our conversation, I was struck by his pragmatism, his awareness of the complex implications of technology and his willingness to act. This, I believe, is the substance we need from our leaders – courageous individuals who are prepared to take it in, take it on, and take the best of humanity with them.
How do we think, truly broadly about the digital ecosystem?… We haven’t really thought about the fact that digital technology is changing the way people live, breathe, work, eat and think about their lives …Thinking of it as an ecosystem that is particularly challenging to democracies.
Here are four of the big ideas from our conversation:
Conversation #1: AI has arrived. Ethical guidelines have not.
Scientists and AI experts agree that we are in a race against time: ethical guidelines must catch up with technology’s irreversible integration into our lives. In January 2019, Gartner reported that AI adoption tripled in the last year alone, with an estimated 37% of firms now implementing AI in some form. In a recent Deloitte survey, 76% of executives said they expected AI to “substantially transform” their companies within three years. Since 2017, more than two dozen national governments have released AI strategies or plans to develop ethics standards, policies, and regulations.
The problem? No two strategies are alike. While some principles may overlap, the meaning of issues such as ethics, privacy and bias shifts dramatically between countries and cultures. Major technology companies are ahead in the global race to develop ethical guidelines and AI governance teams. Facebook has partnered with the Technical University of Munich (TUM) to form the Institute for Ethics in Artificial Intelligence, with an initial investment of $7.5 million, while Amazon and the National Science Foundation recently earmarked $10 million for AI fairness research. Unless governments can get ahead of this rapid change, the rule-breakers will become the rule-makers.
In theory, global bodies such as the OECD have gathered support for overarching principles on AI. In practice, however, governments, corporations and the academic and scientific communities pursue AI strategies in isolation from one another. So how can we possibly figure out how they relate to us?
Jeffrey Bleich stressed the need for systems thinking – to understand that, from this point on, nothing happens in isolation. If we can agree on international laws governing space and the oceans, surely global AI ethical standards are achievable too?
Conversation #2: We need to see technology through a systems-thinking lens
Every single choice you make, every thought you have, every aspect of how you move through the world is being gathered in these devices and it’s being constantly updated in ways that can’t really be controlled by our current cyber security technologies… The fact that we’re not thinking in those terms, should concern all of us.
The next generation will need to see their world as a whole system – a seamless interface, perhaps. This is true of digital technology’s impact on our governance systems, our workplaces, our homes, and our own brains. We are hyper-connected and, with every passing day, less able to switch off. AI is becoming so integrated into our world that we lose sight of where it starts and stops. How many hours in your day are completely free from technological interruption or influence? Even when you sleep, your behaviour is likely being tracked simply because you didn’t update your phone settings. When you wake, do you take a moment to think, ‘How do I feel?’ Or do you reach for a device to find out? As we move through our days, do our choices arise internally, from a desire or an idea? Or are we behaving according to predictive (and prescriptive) behaviour models?
And what exactly is an AI? There is no precise, accepted definition. Is it machine learning? If so, how developed? In the movies, AI is a non-biological consciousness, usually portrayed as a single connected entity. But as we move incrementally along this journey, we as citizens struggle to understand what is going on behind the laboratory doors of big tech companies – let alone how our governments are regulating to protect us. If AI poses a threat, it moves behind a cover of normalised convenience.
Conversation #3: We need to adjust our governance. In our homes, workplaces and democracies.
When we think about governance, we think about what the government does… but governance is the set of norms that we all live by… some of it is training ourselves to be more aware of threats and to adjust our own behaviour.
Whether we think about privacy, security or surveillance, we need to understand that the changes technology brings run deep and, for now, largely run free. As we’re already seeing, digital technology has the potential to fundamentally shift our trust in each other, which gets to the core of relationships and the bedrock of our communities. This new domain requires a rethink of governance and of leadership throughout our communities.
In our conversation, Jeff discussed the urgency with which leaders need to be having more mature conversations. We need to be across AI, automation, mobility, blockchain and education. Leaders need to get out ahead of issues, developing policy for three to five years’ time, rather than arguing about 5G or autonomous vehicles as if we still have a choice.
Leadership is needed to ensure we don’t let the digital divide us. The poorest communities stand to lose the most from job losses, information exclusion and limited connectivity. In parallel, autocratic countries are finding technology a useful means of exerting more control over their citizens. Will we have a cold war in cyberspace? Is technology borderless? Or will it create new borders?
Conversation #4: Human agency is being impacted
I think humanity is going to be very different at the end of this century than it is right now. It’s a hard thing to contemplate but we will. We will be augmented by tools that we’ve developed… technologies that we’re already using, we are changing the way our brains are wired, the way we think about the world.
For most of us, in all our humanness at this moment in history, AI represents a mechanism by which our behaviours are grouped, sorted, targeted and modulated by data intelligence. Technology was born to enhance our lives and extend our impact, but as its influence on us grows, we find ourselves ring-fenced and judged by its learned assumptions. Who is dictating behaviour now?
Algorithms tend to move us iteratively toward our own extremes. Here’s an example Jeff gave in the interview: when we decide to watch a YouTube video, the experience becomes mediated by the algorithm, which shows us options for further videos to watch. When we click on something, the technology pigeon-holes us and begins to show us slightly more extreme versions of what it thinks we like. If you click on a dog, your next set of options will be different sets of dogs. If you click on a small dog, pretty soon you’ll be looking at handbag-sized pooches. In this way, the human brain is guided further down a path of its own bias. The internet (un)naturally tends towards extremes, with a capacity to fool humanity into the worst of itself.
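The narrowing loop Jeff describes can be caricatured in a few lines of Python. To be clear, this is an invented illustration, not YouTube’s actual system: the tiny catalogue, the `recommend` function and the user who always clicks the first suggestion are all assumptions made up for the sketch. Real recommenders are vastly more complex, but the feedback dynamic is the same: each click narrows what you see next.

```python
# Toy sketch of a recommendation feedback loop (purely illustrative).
# Each "click" maps to ever-narrower variants of the same interest.
CATALOG = {
    "dog": ["small dog", "big dog"],
    "small dog": ["handbag-sized dog", "small dog grooming"],
    "handbag-sized dog": ["handbag-sized dog"],  # nowhere narrower to go
}

def recommend(last_click):
    """Offer variants of whatever was clicked last, narrowing each time."""
    return CATALOG.get(last_click, [last_click])

def watch_session(first_click, steps=3):
    """Simulate a user who always clicks the first recommendation."""
    path = [first_click]
    for _ in range(steps):
        path.append(recommend(path[-1])[0])  # each click shapes the next batch
    return path

print(watch_session("dog"))
# A click on "dog" drifts to "small dog", then "handbag-sized dog",
# where the loop settles - an (artificially crisp) filter bubble.
```

The point of the sketch is that no single step is extreme; the drift comes from iterating a mild bias, which is exactly how a viewer ends up at handbag-sized pooches a few clicks after a generic dog video.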
Finally, here’s some food for thought from the inspiring Jeff Bleich that I’d love to challenge you to discuss with a friend or colleague: “At what point do we lose our wonderful, messy humanity, our story-telling, our mistakes, our illogical tears and become rational, predictable, superficial versions of ourselves?”