Round Hole, Square Peg: Legacy Remote Access in Today’s Environment

January 27th, 2016 by Mark Carrizosa

As security professionals, it’s our responsibility to maintain awareness of the goings-on within the security space. That might mean doing our own research and reading all the publications, or emails from friends, colleagues, and even family. For those who fancy a bit more organization, maybe it’s just crawling LinkedIn, Reddit, or Google Alerts feeds on a daily basis. In any case, you’re bound to come across a litany of articles about security vulnerabilities, strategies, and threats. The most recent collection of interesting tidbits are those relating to the backdoors identified in some of the top firewall technologies (Juniper, Fortinet, and possibly others). I found myself scrolling through my own LinkedIn feed while waiting on the tarmac, returning from a recent trip, when I received another email about the latest backdoor issue. Since I had nowhere else to be for the next few hours, I decided this was a good time to try to wrap my head around it all, and maybe discern more than just a little ammunition for the obligatory security small talk at the next industry event. So with the hum of jet engines and the faint glow of the overhead cabin lights, I went to work.

Edge firewall technology, and more specifically the VPN capability embedded within it, is certainly not new to us. It has been the de facto standard for over two decades, providing remote access for our users to our internal resources…be they applications, servers, or the management interfaces of the technology itself. With the identification of these backdoors, the immediate response from the industry is to fix them and make sure they do not recur. Regardless of the potential causes (state-sponsored actors, malicious actors, holes in product security testing, etc.), the bottom line is that these vulnerabilities must be remediated, and I absolutely agree that this is the correct approach. However, it occurred to me that I was starting to form a bit of tunnel vision and not looking at the larger (and much more impactful) picture: why are we still using this legacy technology to solve today’s remote access needs, when clearly it isn’t as effective (e.g., the numerous breaches related to remote access issues) as it should be?

Let’s take a step back for a minute and define VPN for the masses. In its purest form, a VPN (or Virtual Private Network) is a secure (encrypted) tunnel that provides remote access to internal resources as if the user were on the internal network. Sure, with the advent of SSL VPNs and virtual desktops, that meaning may be obscured just a bit, but for the purposes of this conversation, they also fit. These types of solutions have well-known issues (or challenges) to overcome in providing secure and cost-effective remote access. Whether it’s dealing with malware on the endpoints, split tunneling, over-granting of network access, or even hardware costs, they are not plug-and-play solutions; and that’s not even taking into account the product security of the solutions themselves…re: backdoors. However, with VPN technology being so entrenched in the majority of enterprises for such a long period of time, these challenges have become just part of the total cost of ownership (TCO) of utilizing these solutions. We capitulate by trying to work around these issues and implementing some form of compensating controls. When did we become so jaded?

As the IT landscape and workforce evolve to meet the needs of the new business model, certain factors are forcing us to rethink (or re-invent) how we provide access to our resources. Once upon a time, users and resources were contained within a “perimeter” of some sort, and any remote access was limited to a small group of individuals; access was provided more from an inside-out perspective. Now, as we move toward a more mobile workforce (work from home/BYOD/crowdsourcing), as well as migrations into cloud operating models (AWS/Azure/GCE), we must take into account that our users are likely not sitting on a connected internal network, and we must factor in how we provide access in this new outside-in paradigm. Trying to mold a legacy access approach into today’s business models doesn’t seem very efficient, nor does it address the inherent risks…much less solve the backdoor problem.

So what’s the solution?

First off, let’s assume that we are no longer bound by the capabilities of these hardware solutions; that alone will eliminate the risk of current and future backdoors. However, thinking strategically, we’ll need to do more than focus on backdoors. To really have a shot with a new model, we should consider the following:

  • With more than just employees accessing resources (i.e., consultants, third parties, etc.), is it really necessary to provide full network access when it’s likely that only a handful of resources are required? I think it’s reasonable to assume that a better model would be to eliminate network access altogether and focus on ensuring access is provided on a ‘least privilege’ basis to only those resources. Furthermore, if network access is not required, there’s no need to open up network access on the edge…essentially locking down the perimeter for inbound connections.
  • Can we be absolutely sure that 100% of the devices (and any installed software) that traverse the internal network are known, much less controlled? X.509 certificates, NAC, and MDM solutions can help, but they’re not foolproof. While I’ve seen organizations provide corporate devices to consultants and third parties, this can become very complex and costly to maintain. A solution that abstracts the device from the access is much more efficient and effectively reduces risk, considering the enormous amount of malware in the wild.
  • Even if you have built the “Fort Knox” of VPN solutions for your on-premises environment, does that translate into a proper solution for cloud models? Sure, with Bring Your Own License (BYOL) and marketplace offerings it is possible to use known technologies, but the architecture must change in order to work within these new operating platforms. And what about cost within these cloud models? Trying to secure a budget to essentially replicate the solution on a per-VPC basis can quickly spin out of control.
  • Assuming you’ve been able to build a hybrid solution that works well within both your on-premises environment and (for example) AWS, getting it to work across multiple cloud environments (Azure, GCE, etc.) would be extremely difficult to implement and a nightmare to manage.
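To make the first point concrete, here is a minimal sketch (in Python) of the difference between a network-level grant and a per-application, least-privilege policy. This is purely illustrative; the users, application names, and policy structure are all hypothetical, not any vendor’s implementation.

```python
# Illustrative sketch: deny-by-default, per-application access policy.
# A network-level VPN grant would expose every internal host to the user;
# this model exposes only the applications explicitly granted.
from dataclasses import dataclass, field


@dataclass
class AccessPolicy:
    # Map each identity to the specific applications it may reach;
    # no entry means no access at all (deny by default).
    app_grants: dict = field(default_factory=dict)

    def is_allowed(self, user: str, app: str) -> bool:
        return app in self.app_grants.get(user, set())


policy = AccessPolicy(app_grants={
    "alice@example.com": {"crm", "wiki"},      # employee: two apps
    "bob@consultant.example": {"ticketing"},   # third party: one app only
})

print(policy.is_allowed("bob@consultant.example", "ticketing"))  # True
print(policy.is_allowed("bob@consultant.example", "crm"))        # False
```

The consultant here can reach exactly one application and nothing else; there is no network route to over-grant in the first place, which is the “least privilege” property the bullet above describes.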

At Soha, our solution eliminates the need for clients and allows for simplified and rapid (or agile) deployment models, all while reducing your overall risk posture by essentially taking your apps off the Internet entirely. Want to learn more? Visit us to schedule a demo and see just how we’re re-inventing remote access.