All Posts in “Enterprise Solutions”

Shifting The Focus of Our Security Lens – By Brian Tobia

I was at a meeting recently with both the security and virtualization teams in the room, and they were having trouble connecting the security policies and objects that lived in each of their realms. A colleague of mine refers to this as the Rosetta Stone problem: the security team is usually speaking a different language than everyone else. What seems important to one team usually doesn't resonate with the other. The two then become disconnected, and one of the biggest advantages an IT team has, information sharing, can be completely lost.

So I came up with an analogy to try and help bridge the gap. Instead of looking at things in terms of IPS/IDS policy, firewall rules, vApps, or vDSes, let's think about attributes and behaviors of the one element that all teams share in common: the user. If we look at how, say, medical insurance policies are written, every trait about a person is considered; those traits form the core of the policy and also determine how much it costs (it always comes down to dollars, right?). What if we did the same thing for security policies? If each group or piece of infrastructure that we are trying to secure could communicate back elements about a user, we could combine these all together, so not only would we have a more comprehensive security policy, but we would also be speaking the same language.
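
To make the analogy a little more concrete, here's a minimal sketch of the idea. The attribute names and policy values are invented for illustration, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Attributes each team's infrastructure can report back about a user."""
    ad_groups: list[str]   # from the directory/security team
    location: str          # from the network team
    vm_tags: list[str]     # from the virtualization team

def build_policy(user: UserContext) -> dict:
    """Combine attributes from every realm into one security policy."""
    policy = {"firewall": "deny-all", "ids_profile": "standard"}
    if "Finance" in user.ad_groups:
        policy["ids_profile"] = "strict"         # sensitive group: tighter inspection
    if "pci" in user.vm_tags:
        policy["firewall"] = "pci-segment-only"  # workload tag narrows reachable paths
    return policy

print(build_policy(UserContext(["Finance"], "office", ["pci"])))
```

Each team keeps speaking in its own objects (groups, networks, VM tags), but the policy that comes out the other end is expressed in terms of the user everyone shares.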

In this model, security policies become much more dynamic, and rulesets that are active across devices become much more adaptive. You can move from having an environment-wide VDI policy for internal users to having a virtual machine whose policy and access level changes to fit each user as they log in or log off. This not only closes many of the gaps we have with current "Swiss cheese" firewall or security device policies, but it also locks down many communication paths that are most likely unprotected today to their most restrictive setting.

I mentioned information sharing before, and this is really where open standards and integration between all the security tools in an environment can play well together. The first advantage here is the ability to enforce consistent policy based on user identity across an entire infrastructure. The inputs can be things like Active Directory group, geographic location, login history, the nature of the access request, etc. All these ingredients can be combined, like a recipe, into something that dictates what the security policy should be. For example, the policy enforced when I'm sitting in an office accessing servers in the datacenter could be very different from the one enforced when I'm connecting from an airport in a country I've never traveled to.
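
Here's a rough sketch of that recipe idea; the policy tiers and attribute names are made up purely for illustration:

```python
def access_policy(location: str, visited_countries: set[str], country: str) -> str:
    """Pick a policy tier from contextual 'ingredients' about the request."""
    if location == "office":
        return "full-access"            # trusted, on-premises network
    if country not in visited_countries:
        return "mfa-plus-restricted"    # first login ever from this country
    return "standard-remote"

# Same user, very different policy depending on context.
print(access_policy("office", {"US"}, "US"))    # full-access
print(access_policy("airport", {"US"}, "BR"))   # mfa-plus-restricted
```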

The other big advantage of this user-centric approach to security is the increased information flow between solutions. If you think of all the security controls in your environment as a chain of services instead of individual pieces, information about what actions have been taken or what user identity attributes are present can be passed along that chain. This allows a device down the line, say Device C, to make a decision or modify policy based on outcomes that have already been produced by Devices A and B. Not only can each control be smarter by utilizing this additional information, but you also get a global view of security policy, with enforcement decisions that reflect the whole chain.
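
If you like code, a toy chain-of-responsibility sketch shows the idea. The device names, observations, and verdicts here are all hypothetical:

```python
class DeviceA:
    """First control in the chain: records what it observed."""
    def inspect(self, ctx: dict) -> dict:
        ctx.setdefault("observations", []).append("failed-logins")
        return ctx

class DeviceB:
    """Second control: adds identity attributes it knows about."""
    def inspect(self, ctx: dict) -> dict:
        ctx["user_group"] = "contractors"
        return ctx

class DeviceC:
    """Downstream control: decides using what A and B already produced."""
    def inspect(self, ctx: dict) -> dict:
        risky = "failed-logins" in ctx.get("observations", [])
        if risky and ctx.get("user_group") == "contractors":
            ctx["policy"] = "quarantine"  # tighten policy based on upstream outcomes
        return ctx

ctx: dict = {}
for control in (DeviceA(), DeviceB(), DeviceC()):
    ctx = control.inspect(ctx)  # each control enriches the shared context
print(ctx["policy"])            # quarantine
```

No single control had the full picture; the decision only falls out of the information passed along the chain.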

Now notice I didn't mention any product names…that was on purpose. We're still getting there within the ecosystem of solutions. Whether it's open source tools, open APIs, or just vendors working together on these integrations, I hope that shifting our viewpoint from being device-centric to focusing the magnifying glass on the actual user will result in better solution collaboration and wider adoption of newer security technologies. Additionally, if security teams end up less isolated, less often left out of the design process, and with reputations a little less tarnished, that wouldn't hurt either 🙂

Brian has been an IT professional for over 10 years in various customer-facing consultancy and technical roles. He specializes in virtualization, networking, and security technologies and holds various industry certifications, including VCAP5-DCA/DCD, VCP4/5, VCIX-NV, and CISSP. He has authored multiple courses on networking and security topics and is an active member of the industry community. Brian has also been nominated as a VMware vExpert for each of the past four years for his work within the VMware and partner communities. He currently works as a security and compliance specialist for the NSBU within VMware.

So Ya See Timmy

As promised, I will now talk about containers vs. microservices. UGH, ok, where to start… Maybe it's best if I do this as a dialog I recently had. The customer will be C, I will be M. Also, in case you haven't read these sorts of things before, <> will indicate internal dialog, or thought, in my brains.

C: “I am looking at Docker or VMware Photon to manage a bunch of web sites deployed in containers.”

M: <Ok I thought.>

C: “The web sites are currently deployed, and we want to migrate them off of a Unix server that they are sitting on today.”

M: <Hmmm, ok weird but if they rebuilt them …. >

C: “We just need to get them off the box as is today.”

M: <But but that’s not what containers ….. ok. >

Here is my problem: I am no good at keeping my mouth shut, like, no good at all. I keep repeating to myself, “Mike, just stay quiet and people won't think you are an ass.” But then I open my mouth and, well, words fall out.

M: “No sir, that's a bad use case for Docker or Photon. You see, a container is great if you have an application and want to deploy multiples of it and scale it out. It's not so great for existing web services; it would be better if you were to rebuild them and needed multiple instances.”

C: “Right, like we have a lot of web sites, plus containers provide isolation and security.”

I could actually feel my eye twitch a little here. You know, like the eyelid and the side of my face. Maybe it was a stroke?

M: “No, see, if you wanted to move them off of their existing hardware, a straight virtual migration would be good, or you could use code-release software to layer them onto a microkernel VM. But for what it sounds like you want, containers would be tricky. You see, you need a host OS (arguments can be made here that the OS can be virtualized or bare metal). Then you have your container technology, your containers, and some orchestration methodology that maps them together. Containers are way different from the virtual environments you are used to managing and deploying today.”

Ok, so at this point the conversation trailed into other things, and I won't bore you with those. I am just going to use my imagination to finish this conversation as I believe it would have gone.

C: “Yes, but I saw at VMworld …”

M: “I am not saying that container strategies are wrong, nor that you shouldn't invest time and energy into having one. Quite the contrary: I think there is a place for containers in environments where application management is difficult and the concept of microservices isn't possible to adopt. But containers, while they do provide another layer of abstraction, are not natively more secure. In fact, containers give the app dev or owner all the more control over the application they are packaging and deploying.”

C: “But it's isolated, so that means any vulnerabilities they expose in their container can't impact my infrastructure.”

M: “Have you ever watched Lassie?”

C: “What?”

M: “Lassie, you know, the dog that always saved people?”

C: “ … Yes”

M: “At the end of every episode Timmy, the boy who owned Lassie, learned a lesson in the form of a speech that his dad gave him. South Park uses this in all of their episodes, where Stan and Kyle reminisce on the lessons of the episode. We call this a ‘SoYaSeeTimmy’. The point is, no one learns the lesson while they are going through the adventure; they learn after the fact.”

C: <blank stare>

M: “So you see Timmy, mind if I call you Timmy? Good. So you see Timmy, you surely can run your web services in a container, or believe your container is actually not going to impact or open you up to security vulnerabilities. But just like when you fell down the well while trying to walk across a board like a balance beam, and Lassie came a-running, barking and spinning in circles to get me to follow her back to you, you will learn that just because you can doesn't mean you should.”

If I am incorrect or you feel differently, let's discuss it; I am still learning and could use a conversation on this that isn't in my head. 🙂


How are Engineered Solutions Supported?

This spawned from an internal conversation, so hopefully I don't cause too many issues with it. What the hell is an IT solution, and what should you expect of an IT solution from a vendor?

Is an IT solution just like a piece of hardware or software? Should it be treated and supported the same?

These are exactly the questions that are being asked by customers and by those of us evangelizing these solutions. If you have ever architected an IT design you know there is a lot to getting all of the moving parts working together. So how should we view these solutions?

From a business perspective, investing in an IT solution can be expensive, so we want to be sure that the proper expectations are set. The full set of expectations depends on the type of solution, so rather than try to cover all of them, let's focus on the Federation Enterprise Hybrid Cloud, an EMC, VMware, VCE, and Pivotal offering. The best way to look at this solution is to think of it as new building construction. Your business has decided it's ready for its own office space, the size of which warrants new construction. The business has set needs, square footage being the most likely initial defined requirement.

With those thoughts in mind, they shop for an architecture firm and a contractor to do the build. The architect starts to provide some input into power, cooling, and the number of floors, and breaks out the different use cases and specifics. Then the contracting firm comes in and does the build.

Once the construction is complete, the company takes ownership and moves in. From there they have full control over how furniture is placed and who sits where. Any work done in that building is dictated by the business.

But what happens when the business wants to change the layout of the building or modernize it? Well, they bring back in an architect or contractor and verify that the changes are within code, legal, and safe. Then they set to doing the work.

IT solutions like EHC are the same: the framework for the build is founded in sound architecture, but each is customized to meet customer requirements. While some things can be productized so that updates and changes can be controlled like moving furniture, it takes time to get there on a maturity cycle. Every solution has to mature before it reaches that level of commodity and utility.

Now your next question is going to be: what in the hell do you mean by that? Well, initially it means that as versions of EHC change and products are updated, we (EMC and you, the customer) need to make sure everything interoperates. In some instances that means professional services help to perform the upgrades, at some cost, because nothing is free. In others it just means validating against a compatibility or interoperability matrix.
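
As a toy illustration of that validation step (the matrix contents, component names, and version numbers below are invented, not EHC's actual support matrix):

```python
# Hypothetical interoperability matrix: solution release -> required component versions.
INTEROP_MATRIX = {
    "ehc-3.0": {"vsphere": "5.5", "vrealize": "6.0"},
    "ehc-3.1": {"vsphere": "6.0", "vrealize": "6.2"},
}

def validate(release: str, deployed: dict) -> list[str]:
    """Return the components that don't match the matrix for this release."""
    required = INTEROP_MATRIX[release]
    return [comp for comp, ver in required.items() if deployed.get(comp) != ver]

issues = validate("ehc-3.1", {"vsphere": "5.5", "vrealize": "6.2"})
print(issues)  # ['vsphere'] -> upgrade needed before moving to ehc-3.1
```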

For some, this becomes an anticipated expense, something that can be planned for in out-year budgets as the solution matures. For others it may be a showstopper, since a solution like this is meant to drive lower OPEX and CAPEX. Early adopters will always have these concerns, but it's important that we understand the support and upgrade cycles of such products and are all upfront about them, so we can better partner to build the right solution: the one that works to meet the business goals.