How Cybersecurity Has Changed in the Last Decade
The world was a different place in 2010: The global population had only just passed 7 billion (now up to 7.7), we still believed Lance Armstrong was clean, and Game of Thrones hadn’t even premiered on HBO. We looked at the world differently back then, because the lenses of our lives were different. Now, ten years later, how have those lenses changed? How has cybersecurity changed? And how might it continue to develop as we dive into the new “Roaring ’20s”? Here are just a few changes that have profoundly affected how we surf the net — and share (or don’t share) our data.
1. Social media has made personal data ubiquitous. In 2010, Facebook was growing – and Instagram was just coming on the scene. Since then, social media has morphed from taking up a slice of our lives to taking up the entire pie (or at least the entire crust). In 2010, reports of the average user’s time on social media were measured in hours per month; now, we measure in hours per day (2 hours and 16 minutes a day on average, to be precise). That prolific use has given us more than likes and follows; it’s made personal data the most valuable commodity on the market today. This abundance of data has changed the way hackers operate: where they once attacked networks and devised ways through firewalls, they’ve realized that people are far easier targets. Skimming personal data from the internet fuels more and more successful phishing schemes, because that information is simply out there for the taking. Some people take precautions to protect that data, and some people don’t, but nearly everyone has come to terms with the reality that if you want to engage in culture these days, you’ll probably be on social media – so your data is out there.
2. The Snowden Effect changed how we view data. In 2013, The Guardian broke the news that the NSA was spying on millions of Americans. Edward Snowden was eventually named as the whistleblower for this story, and whether you consider him a hero or a villain, the effect of his actions on American culture ran deep. Stories ran for months (and still run) on what he did and what it meant for the American people. People began to understand, many for the first time, how easy it is for personal information to be shared and spread without their knowledge. Trust in the government certainly went down, and many also learned just how valuable ‘metadata’ can be. In the words of former NSA and CIA director Michael Hayden, “we kill people based on metadata.” The gravity and risks of personal data breaches began to set in.
3. The first ransomworm was launched. In 2017, the ransomworm WannaCry spread to over 200,000 computers across 150 countries, taking computers hostage and demanding payments in Bitcoin to release them to their users. It was the first major example of a ransomworm – ransomware that spreads itself from machine to machine like a worm – and the largest and most widespread ransomware attack to date. This, like the Snowden effect, forced people to confront the risks they were facing. Data breaches were no longer distant problems faced by major corporations; this was an attack that affected personal computers, too. WannaCry introduced a new era of data risk – and forced users more than ever to sit up and pay attention.
4. Multi-factor authentication became much more common. Technically, multi-factor authentication (MFA) has been around for centuries – it is as simple as requiring multiple proven points of identity to gain access to something. Online MFA existed before 2010, of course, but it’s only in the last decade or so – after the many major data breaches – that experts began calling for more widespread use. Now, Google will require at least a second factor (two-factor authentication, or 2FA) if you log in from a new device, and many online sites are following suit. This is in part a response to the amount of data we have online (see #1), but also to the high-profile hacks and breaches that started putting the risk in perspective for many users (see #2 & #3). Protecting your data is much more effective with MFA, and these days it’s all but required for anyone serious about keeping their information private. (For the curious, a small sketch of how one common second factor works appears after this list.)
5. A generation grew up on the internet. Teenagers today were all born after the new millennium; the younger ones literally grew up with smartphones in their hands. They’ve never known a world without the internet; it is as much a part of the fabric of their lives as plumbing, cars, or television – sometimes even more so. Many people believe the younger generation is at risk because of this; they grew up with the internet, and so (we assume) are much too trusting. Their data has always been online, so perhaps they don’t really think about how to protect it. But recent studies reveal quite the opposite – in fact, teens today are far savvier about their internet privacy than older generations. 95% of teens have checked or changed their online privacy settings, compared to a mere 32.5% of those over 65. These teens have grown up on the internet, yes – but that is exactly why they’re much more aware of the risks. If the children are the future, perhaps our future looks more secure than we expected.
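As promised in #4, here is a minimal sketch of the math behind one familiar second factor: the six-digit, time-based one-time password (TOTP) that authenticator apps generate. It uses only Python’s standard library; the base32 secret shown is a common demo value, purely illustrative, not anything a real service would issue you.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238 style)."""
    # Decode the shared secret (base32, as most authenticator apps store it).
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int(time.time()) // interval
    # HMAC-SHA1 over the big-endian counter (the HOTP core from RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep the last `digits` decimal digits, zero-padded.
    return str(code % (10 ** digits)).zfill(digits)

# Illustrative demo secret: a server and a phone sharing it derive the same
# 6-digit code for the current 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server verifies a code by recomputing it with the same shared secret (usually allowing a window or two of clock drift). That is why the second factor helps: a phished password alone isn’t enough without the device that holds the secret.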
The last decade has brought a wave of change in cybersecurity. More data is online, more data is vulnerable – and more data is protected with new and growing security strategies. We’re learning as we go, and in the age of information, we’re learning to protect the data that so many actors (good and bad) want to access. When 2030 comes around, will we look back and see more of the same? New legislation? New forms of attack, and new social media sites? Probably. But if the younger generation is any indication, it looks like we’ll be ready to face each one head on.