OpenGov Expert Opinion
Balancing between accessibility and control: Data protection and managing insider threats
OpenGov speaks to Mr. Brandon Swafford, CTO, Data Protection and Insider Threat, Forcepoint
OpenGov spent some time with Mr. Brandon Swafford, CTO, Data Protection and Insider Threat, Forcepoint. Mr Swafford shared insights on topics including data sovereignty, data privacy and security, and data protection and managing insider threats.
Could you tell us more about your role as CTO of Data Protection and Insider Threat at Forcepoint?
The CTO role is relevant to Forcepoint in a couple of ways. The company is broken up into several business units: one covers our core technologies business in cloud security, one covers our protection firewalls, one covers global governments and one is data protection and insider threat. I'm the CTO of the business unit that houses our DLP tools and insider threat tools, and I coordinate with the other CTOs to ensure that the technology choices I make for my products don't conflict with theirs. I work a lot on understanding customer requirements and the trends in architecture and insider threat security issues, as well as working with our partner groups.
Forcepoint leverages partners internationally; I work with them for training purposes and products, and on getting their requirements for their services. So my job is a lot of different things, but right now I'm really focused on maintaining our international presence and opening it up, essentially. Only recently did the products that I work with become available outside of the United States. Previously, our insider threat tools were restricted for defence reasons. I came to Asia, the Middle East and Australia primarily because it's a whole new world, and I have to make sure that people understand how insider threat programmes are supposed to work, and the products that can ease them where possible.
What are your thoughts on data privacy and security, given your experiences both with the US government and in the private sector?
Those two things are very different. If we use the United States as an example, an employer in the US has pretty broad access to the people who work for them; in Asia, it's different in how they handle privacy. In the United States, honestly, if I were a business owner and I think about how much I know about my people, I have to ask myself: between myself, LinkedIn, Facebook and Google, who knows more? It's probably not me. Google and Facebook probably know way more about my employees than I ever will, so the pervasiveness of social media makes the idea of privacy a lot more difficult to understand.
So the privacy laws in the United States are different, and probably less mature compared to the EU. The EU, if you think about data privacy, has a more mature process, and things like GDPR are enforced a lot more harshly. So when I think about data privacy, I think about how I legally and ethically collect the data I need to have good outcomes without sacrificing security, and at the same time, how I think about processes that allow me to do broad-scope monitoring without violating a lot of these laws.
When you think about the competition between privacy and security, it's going to continue forever. In a lot of mature organisations, there's a lot of focus on maintaining good security, and only after the fact do they think about, "what happens to these people's data?" And if you're a company owner, maybe your source of income is data, and so one of the problems right now – if you think about the income that Google and Amazon make, they're making it off your personal data; they're making it by crafting shopping carts for you to buy from, based on your buying habits.
So the data that they have on you is actually pretty valuable. My world is complicated. I think about things like how I make sure I don't collect their banking information, or how I make sure I don't collect information about them talking to their doctor. Those are real issues I have to contend with. Even though the United States has a different approach to privacy, there are still things we have to be concerned with.
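The kind of collection filter described here – monitoring broadly while refusing to retain banking or medical content – can be sketched in a few lines. The patterns and event strings below are hypothetical illustrations, not a production-grade DLP detector:

```python
import re

# Illustrative patterns for data a DLP collector should NOT retain.
# Real products use far more sophisticated classifiers; this is a sketch.
EXCLUDE_PATTERNS = [
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),              # card-like numbers
    re.compile(r"\b(IBAN|account number|routing number)\b", re.IGNORECASE),  # banking terms
    re.compile(r"\bappointment with Dr\.", re.IGNORECASE),               # medical content
]

def should_collect(event_text: str) -> bool:
    """Return False if the event looks like banking or medical content."""
    return not any(p.search(event_text) for p in EXCLUDE_PATTERNS)

events = [
    "Quarterly sales figures attached",
    "My card 4111 1111 1111 1111 was charged twice",
    "Appointment with Dr. Lee confirmed for Tuesday",
]
collected = [e for e in events if should_collect(e)]
```

The point of the design is that the filter runs before storage, so the sensitive content is never collected at all rather than collected and deleted later.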
It’s about balancing accessibility versus control.
Both public and private sectors are migrating their data to cloud or possess some kind of hybrid cloud system. What are some of the biggest challenges or concerns in determining data sovereignty?
In my previous line of work, I dealt with data sovereignty with respect to the legal community. I did a lot of legal forensics investigation work where we had to be concerned with things like the EU Safe Harbour framework. If there's litigation taking place for a US company but the data's in Germany, you have to go to Germany to get the data, which can only reside there, so there are a lot of onward transfer and data sovereignty issues – who really owns it?
And depending on the country, you can claim that any particular email is private. There's a lot of variance, so data sovereignty is a really complicated problem. Every country handles it differently, so that's one issue. The issue with the cloud really is that the cloud is nothing magical; it's just somebody else's computer. The complexity, though, is that since these systems operate at a super-large scale, and they want to have redundancy and availability, companies like AWS and Google have to spread their data across huge swathes of data centres.
One of the problems we have to contend with is ensuring that when you think your data is going to reside in a particular country, it actually does – that there are controls in place so that when you say, "this data is going to reside here", it does. I think the cloud providers are being more and more diligent about making those controls available, so part of this is the cloud providers responding; the other part is customers not just blindly signing up.
So if companies truly want to comply with data sovereignty, it's really their responsibility to ensure, when they sign up for these services, that it's part of the conversation and that there are meaningful controls – both ones the providers are going to have and ones the companies are going to provide. For instance, if I'm a big bank, it's not just Microsoft's, Amazon's or Google's job to satisfy those requirements; it's also about how I deploy them, so that when I build these systems, I'm not building a situation where a region in the EU is set up to communicate with, and relies on, a system in the United States to do its work.
It's not AWS's or Google's job to make sure I set things up correctly; it's their job to deliver what they agree to. So part of it is that companies need to be educated to build those systems, and to build them in a compliant manner.
What are the implications in the variances of regulatory frameworks across borders, especially when some cloud services may occur beyond national geographical boundaries?
The question becomes what regulatory framework do you use – some people say the source of the data is relevant and some say it is the destination of the data. So I think when you talk about data movement between countries, it’s an issue of…sometimes the laws are very clear and it says that once the data enters the country that its destination is, it’s owned by the country and the laws apply to it. Does that mean that the people who sourced it (data) give up that right or are forced to give up that right? How does that transfer of ownership really happen?
For example, think about some of the countries in Asia, such as China, where data generated there can turn out not to be owned by its source – once the data enters the country, it's sometimes very difficult for it to leave, and ownership is retained by that destination country. That's not something I have to contend with directly that often, because most of what I do is in the United States and Europe. In Europe, it's relatively clear: you have data custodians, and the custodian of the data is typically the source country.
In your experience, how do you approach data protection and managing insider threats within organisations?
Insider threats come in three forms. The first form is malicious users, the ones that are truly intent on causing harm, or interested in protecting themselves and hiding. The second form is accidental insiders, people that make mistakes – they click a button they didn't mean to, and the outcome is the same. Maybe they accidentally sabotage systems; the outcome is roughly the same, but the mindset is different. For instance, if I open up Outlook and send an email, and it autocompletes an address when I don't need it to, maybe to a company that I didn't want the data to go to, and I just click send – I think everyone has done that at some point in their lives, and it's an accident. Then what happens?
So the question is, did I try to recall it? Did I tell the person to delete that email? How did I react to that? Did I even notice? And so understanding the mindset of the person and the reaction is critical to know the difference between accidental and malicious. That’s probably the hardest job that I have, it’s understanding the context and intent of the person.
And then the third category is called the compromised insider, which is a hybrid between cybersecurity and insider threat. What I mean by that is the way malware operates: it implants on a machine, compromises an account, and then starts to move laterally, exfiltrate data or accomplish its mission. It does that through a compromised account that is typically attributable to a person, using the accesses that person has.
So from an insider threat monitoring and analysis perspective, it still appears to be a person, but the way you attack that problem is very different. One way to look at it is that malicious and accidental people tend to operate in human time rather than machine time. Human time is minutes, hours, days, weeks. Machine time is seconds and milliseconds: these actions take place very quickly, move in a lot of places all at once – things that are out of the ordinary for the typical user. So understanding compromise is a different problem from the other two, because the mindset is largely irrelevant.
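The human-time versus machine-time distinction lends itself to a simple tempo heuristic. The following is a minimal sketch, assuming event timestamps in seconds and an illustrative one-second threshold (real analytics would model much richer features than inter-event gaps):

```python
from statistics import median

def classify_tempo(timestamps, machine_threshold_s=1.0):
    """Classify an activity burst as 'machine time' or 'human time'
    based on the median gap between consecutive events, in seconds."""
    if len(timestamps) < 2:
        return "human time"  # not enough events to judge tempo
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return "machine time" if median(gaps) < machine_threshold_s else "human time"

# A person clicking through files over minutes, versus malware
# touching many files within a fraction of a second.
human_session = [0, 42, 130, 305, 610]
compromised_session = [0.0, 0.05, 0.11, 0.16, 0.20]
```

A session flagged as machine time would then be routed to the cybersecurity side of the house, where the actor's mindset is irrelevant, rather than to an insider threat analyst.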
Now things get really complex when you think about malicious users using malware to exfiltrate data. When you have a person using malware to do that job, it's like inception: you have to think, "Here's an exfiltration activity that looks like malware – but was that malware implanted by someone in the company, or captured via a phishing email, or did it come through an attachment?" You have to start asking those questions and know how to react.
When it comes to behaviour, I think about three basic emotions for people: sad, angry and stressed. What's important there is that sad people want to hurt themselves, which means that, for whatever reason they are upset, they are going to be less secure. I should worry about them being an accidental insider. They're probably going to care less, typically, and they may also be likely to leave the company. And then I have other concerns, like angry people who want to hurt others, which means now it's a matter of malicious intent: "the company is hurting me and I want to hurt them back", "I deserved that promotion, I didn't get it, I'm going to take the data somewhere else"…that happens.
Stressed people want to escape; they want to stop whatever it is that's causing the problem. So stressed people tend to make irrational decisions out of a desire to get out of a corner. If there's too much work, maybe they do less work. If they're stressed because their contract is going to end and they need to get their next one, maybe they take the work they did at one company and give it to the next one to win a contract; they're worried and stressed. Those are the key emotional indicators that tend to come up.
What is cloud security and what role does behaviour analytics play in the area of cloud security?
Behaviour analytics takes on two forms in the cloud: machines and people. On the people side, cloud security behaviour analytics involves a lot of the same approaches I described earlier, so there are use cases such as someone trying to log in to the same account from two different places. People try to steal credentials and maybe get around two-factor authentication. When you talk about the cybersecurity side of cloud security, the reality of that world is that once you've compromised a cloud service, they're so big that you now have access to a huge swathe of people. Normally if you compromise one company, you just get one company, but if you compromise a cloud service provider or one of their applications, it's a vast number of people and companies. So the reality is that those are very important targets for most malicious actors, because the reward from compromising one is so high.
And the opportunity for them to capture lots of different types of information, across lots of different types of people and companies with access to lots of different things – it's really attractive. A lot of the behaviour analytics technologies are roughly the same, because the destination isn't quite as relevant; it doesn't matter whether I am accessing data in the cloud. The only real question is whether I have visibility, and there are some visibility problems in the cloud. For instance, on my own premises I can collect packet captures and see the network traffic really granularly in my own data centre. Once I go to the cloud, I lose visibility; I can't see some of the more intimate network traffic that happens between the different systems that I have.
There’s a little more risk because the visibility is different.
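The concurrent-login use case mentioned above – one account active from two different places at once – can be sketched as a simple rule over login events. The account names, locations and 30-minute window below are hypothetical illustrations:

```python
from datetime import datetime, timedelta

def concurrent_location_alerts(logins, window=timedelta(minutes=30)):
    """Flag accounts that log in from two different locations within `window`.
    `logins` is a list of (account, location, datetime) tuples."""
    alerts = []
    seen = {}  # account -> list of (location, timestamp)
    for account, location, ts in sorted(logins, key=lambda x: x[2]):
        for prev_loc, prev_ts in seen.get(account, []):
            if prev_loc != location and ts - prev_ts <= window:
                alerts.append((account, prev_loc, location))
        seen.setdefault(account, []).append((location, ts))
    return alerts

logins = [
    ("alice", "Singapore", datetime(2018, 5, 1, 9, 0)),
    ("alice", "Frankfurt", datetime(2018, 5, 1, 9, 10)),  # 10 minutes later, far away
    ("bob",   "New York",  datetime(2018, 5, 1, 9, 0)),
]
```

Production systems refine this into "impossible travel" checks by weighing the physical distance between locations against the elapsed time, but the underlying signal is the same.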