Mr. Bussiere started off with a brief overview of where Tenable sits in the cybersecurity ecosystem. “We are about providing you the vehicles by which you can get deep and thorough visibility into your network infrastructure to determine your risk. So that you can understand what risk your end-points might expose you to, what risk your network might expose you to, what risks you might be exposed to in terms of configuration problems,” he explained. For instance, he said Tenable could have told a customer all the places where they were potentially vulnerable to WannaCry, giving them visibility into where and what to fix.
He said that Tenable is basically concerned with the intersection of three elements: threats; vulnerabilities and configuration problems; and business. Tenable solutions seek to inform clients when a threat crossed with a vulnerability is exposing them to a significant amount of business risk.
To give an estimate of the scale of the challenge, Mr. Bussiere said, “You might want to do a scan of an environment that has a thousand end-points in it. And each of those end-points is going to have 20 or 30 different vulnerabilities. You do the math. You get 30,000 vulnerabilities you need to worry about. So, which ones are the most important for you to fix at a certain point in time? We give you the software, which firstly gives you the visibility into the environment, but then we help you filter out things that are not particularly important, lets you focus on the things that are really important, so that you can fix a controlled set at any given point in time.”
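The filtering Mr. Bussiere describes can be sketched as a simple prioritisation pass over scan findings. This is an illustrative sketch, not Tenable's actual logic; the `Finding` structure, the exploit flag, and the batch size are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str        # endpoint the finding was seen on
    cve: str         # vulnerability identifier
    cvss: float      # severity score, 0.0-10.0
    exploited: bool  # whether an exploit is known in the wild

def prioritise(findings, max_batch=50):
    """Return a small, ordered batch of the riskiest findings.

    Known-exploited issues sort first, then by severity, so the team
    can fix a controlled set at any given point in time instead of
    facing all 30,000 findings at once.
    """
    ranked = sorted(findings, key=lambda f: (f.exploited, f.cvss), reverse=True)
    return ranked[:max_batch]

scan = [
    Finding("db-01", "CVE-2017-0144", 8.1, True),   # the flaw exploited by WannaCry
    Finding("web-02", "CVE-2016-0001", 4.3, False), # hypothetical low-severity finding
    Finding("web-03", "CVE-2016-0002", 9.8, False), # hypothetical critical finding
]
top = prioritise(scan, max_batch=2)
```

A real product would weight many more signals (asset criticality, exposure, patch availability), but the principle of reducing the full finding set to a controlled work queue is the same.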
Trends in network security
Mr. Bussiere said that the vulnerabilities we are exposed to as interconnected devices proliferate at a very rapid rate pose one of the biggest security risks today. He highlighted the Mirai botnet, which brought down a significant chunk of the Internet in the US and Europe in October last year. It did so through an extraordinary 1.2 terabits-per-second DDoS (distributed denial of service) attack on the domain name service provider Dyn.
Describing the risk, Mr. Bussiere said, “The problem is you cannot really fix it. Because these devices are often hardcoded. For example, this Chinese manufacturer of cameras, which sells them under this whole bunch of different brands, they have a hard-coded telnet password, a hard-coded username, and that’s not going to get fixed.”
In this context, being able to identify these devices on the network is crucial. Tenable provides solutions that can do exactly that, discovering devices that expose the organisation to significant risk.
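Once such devices are discovered, one way to flag them is to compare the inventory against a table of known hardcoded factory credentials. A minimal sketch; the credential table and device records below are placeholders for illustration, not real vendor data.

```python
# Known hardcoded factory credentials, keyed by (vendor, model).
# These entries are hypothetical placeholders, not real vendor data.
DEFAULT_CREDS = {
    ("acme-cam", "ip-cam-1000"): [("root", "admin123"), ("admin", "admin")],
}

def flag_hardcoded(devices):
    """Return devices whose vendor/model pair appears in the
    default-credential table and therefore cannot be fixed by patching."""
    return [d for d in devices if (d["vendor"], d["model"]) in DEFAULT_CREDS]

inventory = [
    {"host": "10.0.0.12", "vendor": "acme-cam", "model": "ip-cam-1000"},
    {"host": "10.0.0.13", "vendor": "other",    "model": "switch-x"},
]
risky = flag_hardcoded(inventory)
```

Because the credentials are baked into firmware, the practical remediation is usually network segmentation or replacement rather than a patch, which is why discovery matters so much here.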
The second important trend is the shift towards cloud. Things that used to historically be hosted in the organisations’ own data centres are being moved to external clouds.
“Understanding the risk that that’s exposing me to is another significant security challenge. The cloud provider might do a great job of security for themselves, but how well have I configured my stuff?” Mr. Bussiere said. The cloud is a different environment altogether, and there is a learning curve to understand all the changes that the cloud brings about from a development perspective. As people are decoupled from physical machines, change becomes more constant and more violent than it has ever been before.
To take another example, organisations might be inadvertently exposed to a lot of risk because of Shadow IT. They might be using software-as-a-service (SaaS) applications that the IT or security department is not even aware of.
“All it takes is someone in department X to go off with their credit card, subscribe to something and all of a sudden all that private data is pushed out to the cloud. If you don’t even know what’s being pushed out, what services have been used, you can’t audit them and then if that service happens to have a breach, you are caught with your pants down,” Mr. Bussiere explained.
There is another cloud-related trend: the increasing use of elastic computing models. You introduce services when demand requires you to, and you take them away when the demand goes away. The cloud by its very nature encourages elastic computing practices because it is so easy to deploy things.
Mr. Bussiere said, “A good example of that would be an ecommerce store during the holiday season exposing more servers to the Internet to deal with the holiday crunch and then back down after that. So, you need to adapt your vulnerability management to compensate for the dynamic environments that you are kind of exposed to now.”
DevOps and non-traditional ‘assets’
Historically, vulnerability assessments have been run against laptops or servers. Now, however, they increasingly have to be run against assets such as web applications, which are not bound to any specific platform.
“We are changing our licensing model from a traditional IP perspective to now look at it from an asset perspective. Virtualisation is kind of driving this,” Mr. Bussiere said.
Then there is the increasing adoption of the DevOps methodology for IT development. Mr. Bussiere described DevOps as a combination of two traditionally siloed entities, development and operations. In contrast with the traditional waterfall model, this is a rapid continuous-integration and continuous-delivery model: very small changes are made to software on a continuous basis, as opposed to massive, infrequent changes in functionality. That also alters the vulnerability assessment process.
Mr. Bussiere said, “That means that we need to do something called DevSecOps, that is integrate security into the development process to ensure that when the container image is pushed into production, it passes a minimum security standard.”
Likewise, once the image is in production, it needs to be continuously assessed: the container image itself is static once built, but new vulnerabilities may be discovered in the libraries it contains. That means assessments have to run on an almost continuous basis after deployment, so that an image that becomes vulnerable can be identified and, if it exposes you to undue risk, blocked from further deployment.
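The DevSecOps gate Mr. Bussiere describes can be sketched as a pipeline step that fails when an image's scan results exceed a severity threshold. This is an illustrative sketch, assuming a simple (CVE, CVSS score) result format and a chosen threshold; it is not Tenable's actual API.

```python
def gate_image(scan_results, max_cvss=7.0):
    """Decide whether a container image may be pushed to production.

    scan_results: list of (cve_id, cvss_score) tuples from an image scan.
    Returns (allowed, blocking_issues); a failing gate would stop
    the deploy stage of the CI pipeline.
    """
    blocking = [(cve, score) for cve, score in scan_results if score >= max_cvss]
    return (len(blocking) == 0, blocking)

# Example scan of a candidate image (hypothetical findings):
ok, issues = gate_image([("CVE-2021-0001", 5.0), ("CVE-2021-0002", 9.8)])
```

The same check can be re-run periodically against images already in production, which covers the case where a library the image contains acquires a new CVE after the build.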
Discovery, discovery, discovery
In Mr. Bussiere’s view, the traditional models will continue to exist in parallel. “People are not going to get rid of all their assets, of all their data centres. Some things will still be maintained. It will be a hybrid model for the foreseeable future,” he said.
‘Discovery’ through continuous monitoring is the key to securing this hybrid environment. Step one for organisations is to ensure that they have the ability to discover things, as those things cross their infrastructure. This has to be done by proactively instrumenting the environment, so that they can look at the traffic and when they discover something, they can assess it for vulnerabilities and compliance problems.
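That first step, discovery through instrumentation, can be sketched as keeping an inventory of known assets and flagging anything new that shows up in observed traffic. A simplified sketch: a real sensor parses live packets rather than a list of host pairs, and the IP addresses below are illustrative.

```python
def discover(known_assets, observed_flows):
    """Flag hosts seen in traffic that are not in the asset inventory.

    observed_flows: iterable of (src_ip, dst_ip) pairs from a network tap.
    Returns the set of newly discovered hosts, which would then be
    queued for vulnerability and compliance assessment.
    """
    seen = {ip for flow in observed_flows for ip in flow}
    return seen - known_assets

inventory = {"10.0.0.1", "10.0.0.2"}
flows = [("10.0.0.1", "10.0.0.9"), ("10.0.0.2", "10.0.0.1")]
new_hosts = discover(inventory, flows)
```

Running this continuously, rather than as a periodic scan, is what lets an organisation catch assets as they cross the infrastructure instead of weeks later.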
An example of a lack of discovery exposing organisations to risk is the December 2015 Ukraine power grid cyberattack. The attackers were in the system for six months. There were opportunities to discover the problem, from detecting malware at the end-point to finding VPN traffic going to some weird IP address in Moscow. These opportunities were missed.
Continuous monitoring could also be the key to dealing with the human factor.
“People do stupid things all the time, including me. I have a vendor coming in and the vendor needs me to do something to my firewall temporarily so that they can do something. I poked a hole in my firewall and I never turned it back on. That’s a configuration issue. Through monitoring, you can find these things that people do that are stupid. We can help you to audit and monitor configuration changes. And when a configuration change is identified, you can go and check why that configuration change was made,” Mr. Bussiere said.
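Auditing configuration changes like the forgotten firewall hole can be sketched as diffing the current rule set against an approved baseline. Illustrative only; the rule strings and the simple set-based diff are assumptions made for the example.

```python
def config_drift(baseline_rules, current_rules):
    """Return rules added and removed relative to the approved baseline.

    Each rule is a string such as 'allow tcp any -> 10.0.0.5:22'.
    Any drift would trigger a review of why the change was made.
    """
    added = set(current_rules) - set(baseline_rules)
    removed = set(baseline_rules) - set(current_rules)
    return added, removed

baseline = {"deny all inbound", "allow tcp any -> 10.0.0.5:443"}
current = baseline | {"allow tcp any -> 10.0.0.5:22"}  # the vendor's 'temporary' hole
added, removed = config_drift(baseline, current)
```

A monitoring product would pull configurations on a schedule and alert on any drift, turning the "hole I forgot to close" from a silent risk into a reviewable event.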
He continued, “We are not going to be able to block the guy from clicking on that infected pdf. What we will do though is that we will discover that there is a vulnerable version of Acrobat on the endpoint, so that you can patch it in time.” The patch would render the exploit impotent.
So, compensating for human behaviour is about being proactive rather than reactive: continuously reducing the vulnerabilities the organisation is exposed to, and detecting non-compliant systems that may have been changed by aberrant user behaviour.