Cyber risks, identity and trustworthiness in the age of AI
In today's age of AI, our identities are inseparable from our online presence. As the digital landscape becomes increasingly interconnected, identity is emerging as a concern that affects individuals, businesses and society as a whole.
Kurt Sauer, CISO at DocuSign, discusses identity protection in depth with two of the world's leading experts on the topic: Sir Jeremy Fleming, former Director of GCHQ, and Carissa Véliz, author and Professor of Ethics and AI at the University of Oxford.
Sir Jeremy Fleming is the former director of GCHQ, a world-leading intelligence, cyber and security agency with a mission to keep the UK safe. Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, a Fellow at Hertford College at the University of Oxford, and the author of ‘Privacy Is Power’, an Economist Book of the Year. Her book examines how, as the data economy grows in power, data relates to power, knowledge, autonomy, identity and democracy.
Together, Kurt Sauer, Sir Jeremy Fleming and Carissa Véliz shed light on the significance of safeguarding identities, why individuals and organisations should take a digital-first approach to privacy and risk management, and the far-reaching impact this has on businesses.
Here are some highlights from the discussion:
How do you think identity has changed in recent years, and how might it change in the future?
Carissa explains that identity has always been important for establishing whom to trust and for cementing relationships, providing both trust and accountability. Carissa says, ”If you meet someone and they changed identity every day, changed their name and changed how they look, it would be very hard to form a relationship with that person because you could never know who they are exactly.”
Carissa goes on to explain that accountability is strongly tied to identity: if someone does something wrong, you need to know who they are to hold them accountable.
The internet brings some really interesting changes for the future because, currently, people can be anonymous. Anonymity helps people to speak out free from repercussions, but in the future, we need to consider how to balance anonymity and identity. Carissa says, “There is a tension between the things we value about identity and the things we value about anonymity”. She notes that most writing before the Renaissance was anonymous and might otherwise never have seen the light of day. “If we don’t allow people to have pseudonyms or masked identities in the future, we may miss out on great masterpieces of the future.”
Trust is one of the central tenets of identity. Jeremy suggests a system only works if individuals, the government and society believe in it, and that trust underpins even the most basic tasks of online life.
He explains that part of his previous role at one of the world’s leading intelligence agencies was keeping the country safe, which sometimes meant intruding on privacy when there was cause to do so. He says, “The principle is that if you undermine the laws on which our society is based, you lose the right to anonymity. The grounds for intrusion have to be in accord with the Human Rights Act. Doing so should be absolutely necessary, and you should only intrude as a last resort.”
Anonymity isn’t a right in itself, and there are circumstances in which a state is justified in attaching an identity to someone, though this is harder to do in the digital realm. Technology allows us to go beyond borders, but societies’ approaches to trust and identity differ worldwide. Technology also lets people feel less connected to their crimes and gives anonymity scale. Increasingly, people use others’ identities online to commit crimes.
How to protect privacy and anonymity?
People can be identified online in ways they wouldn’t be able to if they were writing in pen and ink. Will it be harder to protect privacy and anonymity in the future?
Jeremy says, “The cumulative collection of open source data in itself can be intrusive”. This is partly because technology allows connections to be made between datasets. A classic example is a supermarket loyalty card, which can benefit consumers even while collecting a great deal of their data. The loyalty scheme has access to your personal habits: it might learn that you are buying more wine or have stopped eating meat before you have shared those changes with your friends and family.
On the flip side, companies use data to make life easier for shoppers, offering better value and more convenience. There are difficult balances to be struck, and both businesses and governments have to think about their responsibilities to protect data.
In the UK and Europe, legislation is still based on the Human Rights Act. Jeremy believes some sort of international standard-setting body would be a good idea, but international compliance and regulation would be tough to achieve: if one country opts out, establishing international norms and standards becomes harder.
Jeremy says, “The advancement of technology has made it harder and harder for governments to intervene in a timely way. If you are working with a system where it takes 5 to 10 years to pass new legislation and compare that to the exponential growth in technology, including AI, it's clear that legislation for a particular part of technology won’t work. To protect individuals in the future, we need to be cleverer at spotting principles and then fast-tracking legislation if new technology like AI challenges those principles.”
Can we use AI to help protect identity?
Jeremy says absolutely; he is firmly on the side of the argument that technological advancement and AI are good for society and individuals. Policing AI is going to require good AI, and there is real potential provided we partner with the private sector. AI recognises patterns in data, can warn us about them, and can explain why something is a threat. It could also identify where particular data or a communication has come from. Jeremy suggests it’s not beyond the realms of imagination that AI could make us safer. When we talk to companies, we need to ensure they understand their responsibility to help us get there.
Carissa believes we can use AI to help us guard our identity. One part of the puzzle will be the use of pseudonyms and zero-knowledge proofs.
“The idea is that if you want to watch something online and need to be over 18, instead of sending your date of birth, which could lead to identity theft, you just answer the company’s request with a ‘Yes’ or ‘No’. There is no need to know anything else, give specific data, or unintentionally reveal your identity.”
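To illustrate the data-minimisation idea Carissa describes, a service can be built to ask a trusted identity holder a single yes/no question rather than collect the date of birth itself. The sketch below is a minimal Python illustration of that principle only; a real zero-knowledge proof would use cryptographic protocols, and the `IdentityProvider` name and interface here are hypothetical.

```python
from datetime import date

class IdentityProvider:
    """Hypothetical holder of the user's date of birth.

    The relying service never sees the birth date itself; it only
    receives the boolean answer to the predicate it asks about.
    """

    def __init__(self, date_of_birth: date):
        self._dob = date_of_birth  # kept private to the provider

    def is_over(self, years: int, today: date) -> bool:
        """Answer 'is the user at least `years` old?' with yes/no only."""
        threshold_birthday = self._dob.replace(year=self._dob.year + years)
        return today >= threshold_birthday

# The streaming service asks one yes/no question; no DOB is disclosed.
provider = IdentityProvider(date_of_birth=date(2000, 6, 15))
print(provider.is_over(18, today=date(2024, 1, 1)))  # True
```

The design point is that the interface exposes only the predicate the service actually needs, so a breach of the service's logs reveals a boolean, not a birth date.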
What can we proactively do to protect our digital identity?
Jeremy advises there are some fundamental things we all have to do as individuals to protect our data, including making sure we don’t use our pet's name or favourite basketball team as a password and making sure when we get that annoying prompt to upgrade our software, we do it. First, look after the fundamentals, but secondly, be curious about the topic and take the time to understand the advice out there.
Cyber Essentials guidance is easy to read and to act on. As an individual in a company, you can take responsibility for your data rather than leaving it all to the IT department. Think about how you are putting your identity out there, whether you can choose anonymity when you need it, and whether you understand the choices you are making when you release data.
Carissa agrees about taking care of the fundamentals and suggests we should also consider using password managers and privacy-friendly applications.
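Some of these fundamentals can be automated. As a small sketch (not a substitute for a dedicated password manager), Python's standard `secrets` module, which is designed for cryptographically strong randomness, can generate the kind of unpredictable passwords Jeremy recommends over pet names and favourite teams:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words: list[str], count: int = 4) -> str:
    """Generate a memorable passphrase by picking random words from a list."""
    return "-".join(secrets.choice(words) for _ in range(count))

# Example word list; a real passphrase would draw from a large curated list.
wordlist = ["correct", "horse", "battery", "staple", "orbit", "lantern"]
print(random_password())            # unpredictable on every run
print(random_passphrase(wordlist))  # e.g. four hyphen-joined words
```

Using `secrets` rather than the `random` module matters here: `random` is predictable and unsuitable for security-sensitive values.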
Kurt says this is an area in which no one can be a complete expert, so the best thing individuals can do is stay curious. Simple changes to processes, properly supporting and educating employees, and staying on top of trends can result in significant cybersecurity improvements. Here are five security trends your team should be aware of.