An insight into Hybrid Cloud with Mark Duffy from Dell EMC

We interviewed Mark Duffy, Director of Hybrid Cloud Engineering at Dell EMC. He works as a people and engineering lead within the Infrastructure Solutions Group, as part of the engineering team responsible for Cloud Solutions. Before joining EMC in 2012, Mark worked at Logicalis as a Solutions Architect Team Manager.

Dell EMC provides the essential infrastructure solutions for organizations to build their digital future, transform IT and protect their most important asset, information. Dell EMC enables enterprise customers’ IT and digital business transformation through trusted Hybrid Cloud and Big Data solutions, built upon a modern data centre infrastructure that incorporates industry-leading converged infrastructure, servers, storage, and cybersecurity technologies.

When Dell and EMC came together, their enterprise businesses were combined into the Infrastructure Solutions Group. This gives Dell EMC the end-to-end ability to offer Servers, Storage and Networking services, combined with the broader Dell Technologies portfolio, to deliver Converged and Hyper-converged technologies and solutions. Mark works in the part of the organization that delivers Hybrid Cloud based solutions.

Keep reading to find out more about Mark Duffy, Hybrid Cloud, his view on emerging technologies and the role of Network Engineers in the industry.

Why did you join Dell EMC?

I initially joined EMC as part of the Service Provider Engineering team. This team was building Cloud solutions for Service Providers, though at the time it was more focused on creating reference architectures and solutions. For example, we could take VMware and EMC components and put them together to create a backup-as-a-service solution, which Service Providers could then sell into the SME market. We were working with many EMEA Service Providers during this period, which is a very different market to the US service providers. Within my first six months at EMC I also began managing the US Engineering team.

I joined EMC because I wanted to be in back-end engineering, having spent 15 years in customer-facing roles. I worked at Logicalis for 12 years and, before that, spent 3 years at another smaller integrator. Before those roles I was working as a Team Lead in software development at a small insurance firm. The main attraction of joining EMC was that it was all around Converged Infrastructure, VMware and Cloud.

What do you love most about the company?
It is being at the cutting edge of everything. We typically try to be 6 to 12 months ahead of where people want to start implementing technology. In my time here we have done a lot, initially around virtualization and converged infrastructure, but I have also been responsible for delivering solutions based around OpenStack. We have been involved in OpenStack Clouds for the last 5 years.

Currently we are working very closely with VMware as part of the strategically aligned businesses in Dell Technologies.

What is the company culture like at Dell EMC?
The culture is excellent (#CultureCode) and very positive. As a company Dell is very people-focused. There is a set of key values that drives our culture, from how we treat the customer through to how we treat and are treated as employees. It is a very positive set of principles to live by.

When Dell and EMC merged, Dell took the good processes EMC had in place to the next level and made several cultural changes. For example, in the US you used to be able to work between Christmas and New Year’s Eve; Dell carried over its policy of shutting down during the holidays, so between Christmas and New Year everyone was off. The emails I got from the US that week were all automated replies, because the vast majority of people were out.

The company also continues to improve and invest in its employees. They have just announced a global EAP (Employee Assistance Programme) that’s all about putting the employees first, and it’s all very transparent regarding what they are doing.

When did you first hear about Hybrid Cloud?
Probably about two and a half years ago. We were doing Hybrid Cloud, but we weren’t calling it Hybrid Cloud back then. It’s one of those things that has taken off in the last 18 months, particularly with the rise of container technology.

What are the threats and weaknesses of Hybrid Cloud?
There are not that many restrictions from a security or hacking perspective. It’s more around the challenge of where you can put certain workloads so that they can be audited. Certainly, in Europe there are countries that insist certain data is held only within the country it should reside in; Germany is one of them. The challenge is making sure the data does not move out of the country, that it remains where it is. This is why you see providers building more local hubs. GDPR across the EU also adds to the challenges organizations face.

One of the challenges we see in Hybrid Cloud is provisioning capacity; even the big providers have issues getting the capacity they need locally or in-country. The other technical challenge is being able to control who spins up instances. Shadow IT is one of the most common occurrences: anyone with a credit card can start building out environments without the corporate IT department being aware. The IT department has to control this, be sure about what people are doing and where they are doing it, and control access from inside the company to someone’s external resources.

What are the strengths and opportunities?
The biggest strength is capacity on demand. With the cloud there are also providers who offer really rich toolkits that allow people to build applications quickly. Microsoft and AWS, for example, are making it very attractive for developers to go into their environments and use the tools to create applications quickly and efficiently, without having to build the toolsets from scratch and configure them; if they need a database, they can consume the database toolset immediately.

The flipside is that you need to make sure you are managing your costs when building out those environments, so you are not being charged for resources you are not using.

What do you like most about the industry?
It’s the fast pace of being at the leading edge of developing solutions to real business problems. This is one of the great things about working with VMware products over a number of years: figuring out how to provide disaster recovery for the 30 components that make up your Cloud Management Platform, and how you connect to the public cloud. You can fail over back and forth between multiple locations using these enterprise products.

Who do you see as the biggest innovators in Hybrid Cloud?  
AWS is probably the biggest. Then you have the tier 1 service providers beneath them that are either building their offerings on top or building out their own bespoke services. Microsoft with Azure is now making a lot of headway, as it has easy on-premise/off-premise migration strategies.

Who do you see as the biggest adopters?
There isn’t a specific industry. If a customer needs to build an on-premise cloud because they cannot move to the public cloud, they will look for an on-premise solution. These companies are across every single market vertical, including finance, retail and the public sector.

Where do you see the future of Hybrid Cloud going?  
It’s getting simpler. The biggest shift over the last couple of years has been around making it easier to manage and easier to lifecycle. You can see this in how the next-generation platform providers and companies like VMware are making their products easier to consume and easier to lifecycle.

One of the most important things is making sure you keep your environment current, not necessarily for new features but to protect against vulnerabilities. One thing that is steadily increasing, especially if you look at sites like The Register, is the number of vulnerabilities in products. If you look at vendors in general there are product vulnerabilities in firewalls, often from open source components used in those products. More recently we have had issues such as Spectre and Meltdown, which impact a wide range of enterprise hardware and software. Sometimes it’s easy to mitigate or work around them by putting hard-coded changes in (such as disabling a service); often you’re going to have to move to a new version of the product because that’s the only way the fix can be incorporated.

If you are in financial services, there are constant vulnerability scans and the need to explain to the financial regulators how you are going to mitigate the vulnerabilities that are discovered. It’s a constant process of being able to lifecycle your environment and deal with the new features and vulnerabilities that come out. Everyone is having the same problem.

Microsoft took the right approach with “Patch Tuesday”, which is a long-standing thing; they push out vulnerability fixes to Microsoft products on a fairly regular basis. For hardware vendors this is a bit more difficult, which is also one of the reasons you see a general trend towards software-defined: if it’s software-defined and running on commodity hardware, it’s easier to patch, and there are associated Capex and Opex benefits as well.

What implications does this have on modern day Network Engineers?
When I first did my Cisco exams, around 18 years ago, it was still all fairly black and white. You built access lists, you had routers and you had switches. You operated at that physical hardware networking layer and extended out into the WAN, and there was no awareness of what was actually running on top of the network. Then even a simple VMware environment changed that: you had to think about how you would move a VM from one server to another. Or you had a SAP system: how did you fail over that SAP system, with all of its underlying components, from one location to another? There was no awareness at the networking layer beyond an application sitting in a subnet or a VLAN. In my last years at Logicalis we saw the shift, first to virtualization and then into the infancy of the Cloud.

Also, your skillset as a Network Engineer now needs to go broader and further up the stack. If you are going to recruit someone to work in the virtualized networking space, they would have to come from a solid networking background and also have a good understanding of virtualization. As a result we have also seen a lot of people with really good networking backgrounds moving into security. We saw this at Logicalis: a lot of the people working in infrastructure security had originally come from a networking background.

Would it be better to be a generalist than an expert?
If you look at the Cisco curriculum you can go from traditional networking to branching off into Unified Communications, and there is now a lot more focus on going up the stack. Networking teams provide the physical infrastructure, but it can be the application teams that drive how the networks are constructed. The server teams of old came to own the virtualization environments as a result of the shift from server to hypervisor/host, so the storage and server teams tended to collapse together and become the virtualization teams.

As you virtualise the services an Enterprise IT system is built upon, why spend two weeks creating a network when you can now create a virtual network on the fly as part of the provisioning process? That’s the direction applications are moving in. If you want to make sure you are keeping your skills current, you need to be looking at the virtualized networking and virtualized security layers. Personally, that’s where I think your career is going to be moving forward.
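As a rough illustration of what “a virtual network on the fly” can look like in practice, here is a minimal Python sketch. The controller URL, endpoint path and payload fields are hypothetical stand-ins rather than any particular vendor’s API; platforms such as VMware NSX or OpenStack Neutron each expose their own equivalents.

```python
"""Minimal sketch: provisioning a virtual network on the fly.

Everything here -- the controller URL, the /networks endpoint and the
payload fields -- is a hypothetical stand-in for a real SDN/cloud API.
"""
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical
TOKEN = "REPLACE_WITH_API_TOKEN"

def provision_virtual_network(name: str, cidr: str, vlan_id: int) -> str:
    """Ask the controller for a new virtual network and return its ID."""
    resp = requests.post(
        f"{CONTROLLER}/networks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "cidr": cidr, "vlan_id": vlan_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    # The network exists as soon as the call returns -- minutes, not weeks.
    net_id = provision_virtual_network("app-tier-net", "10.20.30.0/24", 120)
    print(f"Provisioned virtual network {net_id}")
```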

What is your biggest difficulty when recruiting?
The biggest struggle was finding developers who understood Cloud and virtualized environments. Early on, a lot of developers came from pure Java backgrounds, and there are new languages that sit on top of these components. There is also understanding how to provision a workload, and what a workload even means. A workload is a collection of what were formerly applications; it’s not just deploying a Linux VM, it’s containerizing it and integrating it with numerous open source components.

It is about an organisation being able to translate its applications, as far as possible, into workloads that can be simply provisioned. So, when you are using your developers to write these scripts (for want of a better word) that take an application from one side to the other, they have to have an awareness of the environment and of how they develop their code.
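As a loose sketch of that translation, the following Python models a workload as a declarative spec that a provisioning script walks through. The WorkloadSpec format and the provision() helper are invented for illustration; real platforms express the same idea through their own formats, such as Kubernetes manifests or Terraform plans.

```python
"""Minimal sketch: an application described as a provisionable workload.

The WorkloadSpec format and provision() helper are invented for this
example; they are not a real product's API.
"""
from dataclasses import dataclass, field

@dataclass
class WorkloadSpec:
    """Declarative description of a workload: the app plus everything around it."""
    name: str
    container_image: str                               # the containerized application
    network_cidr: str                                  # network provisioned alongside it
    dependencies: list = field(default_factory=list)   # open source components it integrates

def provision(spec: WorkloadSpec) -> None:
    # A real pipeline would call the platform's API at each step;
    # here we just print the plan to show the shape of the process.
    print(f"create network {spec.network_cidr}")
    for dep in spec.dependencies:
        print(f"deploy dependency: {dep}")
    print(f"deploy container {spec.container_image} as '{spec.name}'")

if __name__ == "__main__":
    provision(WorkloadSpec(
        name="orders-service",
        container_image="registry.example.com/orders:1.4",
        network_cidr="10.40.0.0/24",
        dependencies=["postgres", "redis", "rabbitmq"],
    ))
```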

If you could give one piece of advice to a Network Engineer what would it be?
Start to complement your current skillset and understand what cloud means. Learning Python scripting is important as well. Having any kind of engineer or developer who can script in Python is pretty important to me; having a Network or Infrastructure Engineer who can script in Python is invaluable.
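To make that advice concrete, here is a small example of the kind of scripting a Network Engineer can put to work straight away. It uses only Python’s standard library ipaddress module; the address ranges and VLAN numbers are made up for the example.

```python
"""Small example: everyday network maths in Python.

Uses only the standard library; the addresses and VLAN numbers are
invented for illustration.
"""
import ipaddress

# Hypothetical site allocation to be split into per-VLAN subnets.
site = ipaddress.ip_network("10.50.0.0/22")

# Carve the /22 into /24s, one per VLAN.
subnets = list(site.subnets(new_prefix=24))
for vlan, net in enumerate(subnets, start=100):
    print(f"VLAN {vlan}: {net} ({net.num_addresses - 2} usable hosts)")

# Quick membership check, e.g. when validating a device config.
addr = ipaddress.ip_address("10.50.2.17")
print(addr, "belongs to", next(net for net in subnets if addr in net))
```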

Disclaimer: The answers to the questions are Mark Duffy’s personal view and not that of Dell EMC.