Employing SIEM in the Network Security Fight

SIEM systems are becoming effective tools in combating security threats. Here’s how these tools are helping organizations of all sizes.

A firewall, an intrusion prevention system (IPS), and endpoint protection software are security weapons that almost all organizations have in their armory to defend themselves against cybercriminals. But when it comes to bringing out the big guns to help protect the network and the corporate data assets stored on it, an increasing number of organizations of all sizes are turning to Security Information and Event Management (SIEM) systems.

That’s because SIEM systems offer functionality that goes beyond more traditional security devices. According to Gartner, a SIEM system’s key roles are to:

  • Collect security event logs and telemetry in real time for threat detection and compliance use cases.
  • Analyze telemetry in real time and over time to detect attacks and other activities of interest.
  • Investigate incidents to determine their potential severity and impact on a business.
  • Report on these activities.
  • Store relevant events and logs.

Automated Threat Responses

In the near to medium term, the extra functionality most likely to become commonly available is automated security response. Today, automated responses to detected threats are comparatively rare because of worries about the disruption a false positive could cause in a production environment. For that reason, automated responses tend to be used only by organizations that want to adopt the very highest security posture. In the future, however, automated responses are likely to become the norm as organizations face sophisticated attacks from cybercriminals using automated attack tools.

Artificial intelligence (AI) and machine learning capabilities are also likely to become increasingly important features of SIEM systems in the future, as they may enable automated responses that are faster, more appropriate, and less likely to cause unexpected disruption.

Also read: Managing Security Across MultiCloud Environments

SEM and SIM

Two important subsets of SIEM are security event management (SEM) and security information management (SIM). In general, SEM is concerned with real-time monitoring of logs and the correlation of events, while SIM involves data retention and the later analysis and reporting on log data and security records. This is often carried out as part of a forensic analysis to establish how a security breach occurred, which systems and data may have been compromised, and what changes need to be made to prevent a similar breach. Most modern SIEMs can be used to carry out both SEM and SIM.

SIEM for Medium-sized Companies

In the past, SIEM systems were only used by very large enterprises, but over the past few years they have become accessible to medium-sized organizations as well, according to Oliver Rochford, a cybersecurity expert and former research director at Gartner. He says one problem with SIEM systems is that in order to operate them, organizations need one or two people to oversee them 24/7. In most cases only large organizations have the security resources available to do this themselves, but a solution for medium-sized companies is to use a managed service, or to oversee the SIEM system during office hours and rely on a managed service to provide “out of hours” cover.

Threat Detection as a Driver to Adoption

Another reason that the appeal of SIEMs has broadened is that previously the main driver for adoption was compliance — an issue which is more likely to affect larger companies. While compliance is still an important factor, a bigger driver now is threat management, and specifically threat detection and response. Many new deployments are undertaken by organizations with limited security resources but requirements to improve monitoring and breach detection, often at the insistence of larger customers or business partners, according to Gartner.

“Look at ransomware – that’s a threat that mid-sized companies are very interested in detecting,” says Rochford. “Ransomware is typically very compact and then it connects to a C&C (command and control) center. So you may be able to detect a phishing email that delivers it, or its communication, or indicators of a compromise like new processes starting. A SIEM will allow you to centralize and review this information and maybe detect the ransomware.”

By the end of last year, the SIEM market was worth some $3.58 billion, up from $3.55 billion in 2019 according to Gartner. This is very similar to the value of the global network security firewall market, which was worth some $3.48 billion in 2020, according to Allied Market Research.

Also read: Combating the Rise of Ransomware-as-a-service (RaaS)

What SIEM Brings to the Network Security Fight

So what exactly can a SIEM system do to help organizations gain the upper hand against cybercriminals? Here are some of the most important ways that a SIEM system can help:

  • Ingestion and interpretation of logs from network hardware and software. A key differentiator of SIEM tools is the number and variety of log sources that they can connect to out of the box for data aggregation purposes. Although it is usually possible to build a connector to an individual device or application, this can be costly and time-consuming, and therefore impractical for more than a handful of log sources. Certain vendors, such as Splunk, are notable for the large number of applications that they can ingest data from.
  • Ability to connect to regularly updated threat intelligence feeds. Many companies only make use of the feed(s) included with the SIEM product or service they buy, but commercial feeds from third parties and open source threat intelligence feeds are also available. These can be valuable because research shows that their contents do not overlap to a high degree, and the more information a SIEM has about security threats the more likely it is to detect them.
  • Correlation and Analytics. This is the bread and butter of SIEM technology, and it involves tying together different occurrences reported in logs to spot the indications of a compromise — for example, a port scan followed by user access to certain types of data, or user entity behavior that can indicate an internal threat (see the sketch after this list).
  • Advanced Profiling. All SIEMs carry out correlation and analysis, but advanced profiling is less common (although it is becoming increasingly prevalent). It works by establishing baseline or “normal” behavior for a number of characteristics on a network. It then carries out behavioral analytics to spot deviations from the norm.
  • Providing alerts. Perhaps the most important feature of a SIEM tool is the ability to use the features described above to alert security staff quickly about possible security incidents. Alerts can be displayed on a centralized dashboard (see below) or provided in a number of other ways including via automated emails or text messages.
  • Data presentation. An important function of a SIEM is to make the interpretation of data from multiple sources easier by presenting it in the form of easily comprehensible graphics on a security dashboard display.
  • Compliance. SIEM technology is commonly used to collate events and logs and to generate compliance reports to meet specific compliance requirements, eliminating tedious, costly and time-consuming manual processes. Some offer integration with the Unified Compliance Framework, enabling a “collect once, comply with many” approach to compliance reports.
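
To make the correlation idea concrete, here is a minimal sketch in Python of the kind of rule a SIEM might evaluate. It is not any vendor’s rule language: the event fields, the 30-minute window, and the scan-then-sensitive-read pairing are all assumptions chosen for illustration.

    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=30)  # how long a port scan stays "interesting"

    # Normalized events as a SIEM might see them after log ingestion; the
    # field names and values here are invented for the example.
    events = [
        {"time": datetime(2021, 8, 27, 9, 0), "type": "port_scan", "host": "10.0.0.5"},
        {"time": datetime(2021, 8, 27, 9, 12), "type": "sensitive_read",
         "host": "10.0.0.5", "user": "jsmith"},
    ]

    def correlate(events):
        """Alert when sensitive data is read on a host that was recently scanned."""
        last_scan = {}  # host -> time of most recent port scan
        for event in sorted(events, key=lambda e: e["time"]):
            if event["type"] == "port_scan":
                last_scan[event["host"]] = event["time"]
            elif event["type"] == "sensitive_read":
                scanned = last_scan.get(event["host"])
                if scanned and event["time"] - scanned <= WINDOW:
                    yield (f"ALERT: sensitive read on {event['host']} "
                           f"by {event.get('user')} soon after a port scan")

    for alert in correlate(events):
        print(alert)

A production SIEM would, of course, evaluate thousands of such rules against streaming, normalized log data rather than a small in-memory list.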

Read next: Best Data Governance Tools for Enterprise 2021

Containers are Shaping IoT Development

As the proliferation of IoT devices continues, containers are proving beneficial in today’s cloud architectures.

Containers have been around since the 1970s in one form or another, but in 2013 they exploded onto the IT landscape thanks to the launch of Docker. Since then, this “virtualization lite” technology has proliferated in corporate data centers and in the cloud, and now it’s hard to imagine how much of today’s cloud architecture could be built without them.

Such are the benefits of containers that it was perhaps predictable that they would become so popular. Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production, up from less than 30% in 2020.

But what was unexpected was the fact that containers would make such a good fit for large scale IoT deployments, and how they would, therefore, play a large role in the shaping of IoT development. To understand why, let’s take a closer look at what containers are, and what they do.

How Containers Work

Container technology is similar, or at least related, to virtualization technology. But one of the fundamental differences is that a container is far smaller, or “lightweight.” One reason for this is that while a virtual machine simulates a complete server, with an operating system and one or more applications installed, a container is just a runtime environment for an application. That means it includes the application, plus all the dependencies, binaries and configuration files needed to run it, but not an entire operating system. Instead, it uses the operating system of the computer that is hosting the container, and shares this operating system with all the other containers on the system.
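
One way to see this sharing in practice is to start a tiny container and ask which kernel it is running on. The sketch below is a minimal example that assumes the Docker SDK for Python (pip install docker) and a locally running Docker daemon.

    import docker  # Docker SDK for Python; assumes a local Docker daemon

    client = docker.from_env()

    # alpine is only a few megabytes: an application userland and its
    # dependencies, but no kernel of its own.
    output = client.containers.run("alpine:latest", "uname -a", remove=True)

    # The container reports the host machine's kernel, because containers
    # share the operating system of the computer that hosts them.
    print(output.decode().strip())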

Also read: Using Digital Twins to Push IoT

Lighter Than Virtual Machines

Because containers are so lightweight and don’t contain an entire operating system, they are highly portable, and many more containers than virtual machines can be run on a single host.

This in turn makes it practical to split applications into many different microservices, each running in its own container. Splitting up applications into separate services has a number of benefits, including making each microservice independent from the others. That means that they can be modified or updated without having a knock-on effect on all the others. 

Containers and IoT Development 

IoT devices, usually sensors, tend to be deployed “in the field”, often in large numbers. They usually collect large amounts of data, and that data has to be sent somewhere. The devices also need to be controlled, so control messages, as well as firmware updates, have to reach the devices in the field.

Communication problems

The obvious place from which to process the data and to control the devices is the cloud, but there’s a very big problem with this approach, as the people behind many early IoT deployments discovered to their cost. The problem is that it is hard to communicate reliably with large numbers of IoT devices from the cloud, because this involves some combination of SIMs, networking hardware, physical leased lines, broadband connections, cellular towers, and other networking technology. The chances of all of this working reliably for the majority of the time are vanishingly small.

Another problem is that where IoT data is needed in real time, sending it all to the cloud is impractical because of the latency that is necessarily introduced. The answer has been to place the data collection and processing functions, and the IoT device control functions, as close to the IoT devices themselves as possible, at the edge of the network.

So what we have ended up with is a scenario where an organization might have a large number of IoT devices deployed in many different locations, with the need to collect the data, analyze it, and also control these IoT devices from a number of different edge locations.

IoT microservices

What’s immediately obvious is that these different functions could be offered from a single application, but they could more usefully be offered as discrete microservices: perhaps one for data collection, one for data processing, one for sending data back to the cloud for storage, one for controlling the IoT devices, one for updating their firmware to add new functionality or to close security flaws, and so on. There are a number of benefits of this type of approach, if each microservice is placed in its own container at the edge.
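
To give a rough sense of how small such a microservice can be, here is a minimal sketch of the data-collection service. Flask is used purely as an example framework, and the /readings endpoint and payload shape are hypothetical.

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    buffered = []  # in-memory buffer; a real service would batch to storage or cloud

    @app.route("/readings", methods=["POST"])
    def collect():
        # e.g. {"sensor": "t-101", "temp_c": 21.4} pushed by a device or gateway
        reading = request.get_json(force=True)
        buffered.append(reading)
        return jsonify(status="accepted", buffered=len(buffered)), 202

    if __name__ == "__main__":
        # Read-only with respect to the devices: the service accepts pushed
        # data and needs no privileges to reach back into the sensors.
        app.run(host="0.0.0.0", port=8080)

Packaged in its own container image, a service like this can be versioned, secured, and updated independently of its siblings.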

Also read: SD-WAN is Important for an IoT and AI Future

Container Benefits in IoT Architecture

One major benefit is added security. That’s because certain functions, such as data collection, could be classified as “read-only”, so the container tasked with this would need minimal privileges when it comes to its interaction with the IoT devices. By contrast, a container tasked with firmware updating would need more access privileges to the IoT devices.

Not only is this partitioning of functions consistent with security best practices, but it also makes it easier to secure the more privileged containers. That’s because they are smaller than a monolithic application, so their attack surfaces will be smaller, and also because, by their very nature, containers are independent of each other. That means (in theory, at least) that developers can update a container offering, say, data collection services, without worrying about how that will affect the container offering a firmware updating function.

The second benefit of containerization in IoT deployments is that container management systems make it very easy to check new versions of a container out of a repository and push them out to all the edge locations simultaneously. This is particularly important if a vulnerability is discovered in a container running at all the edge locations: the affected container can quickly be updated without affecting the other containers, and because containers are lightweight, the fix can easily be pushed out to every edge location.

The Future of IoT and Containers

The use of containers has accelerated rapidly since 2013, and the same is true of IoT connections: 2020 marked the turning point where, for the first time, there were more IoT devices connected to networks than non-IoT devices such as smartphones, computers, and laptops, according to IoT Analytics.

What’s clear is that many of these billions of devices aren’t making connections to servers in the cloud. As the number of IoT deployments continues to rise, these new IoT devices will increasingly be connected to compute resources located closer by, and these resources will offer an increasingly varied and sophisticated array of IoT control and data handling functions — all running as microservices packaged in containers. 

Read next: The Growing Value of Enterprise Architects

Transitioning to a SASE Architecture

The security benefits of a SASE architecture are many. Read on to learn why implementing this framework is more important than ever.

Most organizations’ security setup is no longer fit for purpose. If that sounds too extreme, then at the very least it’s fair to say that anyone starting from scratch in most organizations would probably not design the security architecture in the way that it is currently implemented. Instead, they would probably design something which looks a lot like a Secure Access Service Edge (SASE) architecture. 

That’s because most enterprises have a centralized security function, with security hardware running in the data center guarding the perimeter of the corporate network and monitoring the traffic flowing in and out of it. And that’s fine for organizations that are largely centralized, with users accessing data and applications over the corporate WAN. They may have branch offices, but these will either consume security services offered by the data center, or they may have their own branch office security appliance as well. 

And that’s not to mention the huge number of people who are now working remotely due to the coronavirus pandemic, and who may continue to do so indefinitely. Many of these people may be accessing cloud applications most of the time, but even so they have to connect to their organization’s data center via a VPN before their traffic can get to the cloud services they want to access. 

However, the proportion of traffic from branch offices which is ultimately destined for the internet rather than the corporate data center has increased from 20% to over 80%, according to Juniper Networks. So sending it to the data center first, to go through a security stack, is definitely suboptimal for a number of reasons:

  • This results in a huge amount of traffic moving over the WAN between branch offices and the data center when it could otherwise go straight out onto the internet from branch offices. This has an impact on WAN bandwidth costs.
  • Some traffic, such as Office 365 data, does not need to go through the full security stack, so sending it over the WAN is a waste of resources. 
  • Sending data to a centralized security function can have a significant impact on performance, both of the WAN and of cloud applications.

What is SASE?

Before discussing transitioning to a SASE architecture, it’s worth mentioning what is meant by the term. Essentially, it’s an architecture which sees the provision of security services, including SD-WAN, secure web gateway, cloud access security broker, zero trust network access, and even firewall-as-a-service, at the cloud edge.

Also read: The Home SD-WAN and SASE Markets are Rapidly Expanding

Why Transition to a SASE Architecture?

One of the biggest benefits of a SASE architecture is that security services are available where they are needed, not just at a single chosen point (the data center). That means that branch office and remote workers accessing services in the cloud can have their data sanitized as it travels to and from these cloud services, without it having to make a diversion to the corporate data center first. That has important implications for latency, which can be particularly important with real-time applications such as Zoom video conferencing.

Another important benefit is that, as a cloud-based service, SASE security can be scaled almost infinitely in the same way that servers and storage resources can be scaled in the cloud using services offered by the likes of AWS or Microsoft’s Azure. 

Lower cost, less complex

A SASE architecture is also likely to be far less costly than a traditional security setup. Certainly it costs less to operate per unit of secured data, and since it can be scaled up and down to meet an organization’s needs over time, enterprises don’t have to pay to maintain a scale of infrastructure that they don’t need.

There’s a fourth important benefit which is often overlooked, and that’s simplicity: a SASE architecture makes security far easier to manage and maintain. That’s in part because there is no need for IT staff to spend time applying updates and patches to appliances, install new hardware, or replace it from time to time. There is also no need for staff to manage equipment at branch offices either remotely or by making site visits.

It’s also easier to manage and configure a SASE architecture because the whole SASE security stack can be managed as a single cloud-based application. This may be possible to a greater or lesser extent with existing data-center based solutions, but it’s doubtful if this can ever be as integrated as is the case with a SASE solution. 

Also read: Remote Work Could Boost SASE, Slow SD-WAN

Intelligent management

There’s one final and very compelling reason to consider transitioning to a SASE architecture, and that’s related to the complexity of the many modern cyber threats. Managing security, spotting threats, distinguishing between suspicious and legitimate traffic, understanding security logs, and preventing or stopping cyber attacks are tasks that are almost too much for a human — or a team of humans — to cope with. For that reason it’s likely that in the future many security systems will have to use machine learning (ML) and artificial intelligence (AI) to keep up. 

The good news is that a SASE architecture is an ideal foundation on which to build a network secured with the help of AI and ML, because all of the data is right there in the cloud, where it can be processed by cloud-based analytics systems.

Time to Transition

For all of these reasons, the time has come for many organizations to consider a transition to SASE — or some cloud-based security architecture that is very close to it. If they don’t, they risk being left with a security setup which, in a world of cloud applications and remote workers, really isn’t fit for purpose any more.

Read next: Effective Cloud Migration Strategies for Enterprise Networks

Bringing Hyperautomation to ITOps

As more organizations turn to hyperautomation in their ITOps, the toolset is becoming more sophisticated and integrated.

The hyperautomation market is already worth $481.6 billion, according to Gartner, and it’s set to rocket to nearly $600 billion by the end of the year. More to the point, all organizations that want to win will have to go all in on it, according to Gartner’s Fabrizio Biscotti. “Hyperautomation,” he says, “has shifted from an option to a condition of survival.”

To underscore this, Gartner expects organizations that successfully introduce hyperautomation to their ITOps, along with redesigning their operational processes, will be able to reduce their operational costs by as much as 30%.

What is Hyperautomation?

Hyperautomation is a term first coined by Gartner a couple of years ago, and the research house defines it as an approach that enables organizations to rapidly identify, vet, and automate as many processes as possible using technologies such as robotic process automation (RPA), low-code application platforms (LCAP), artificial intelligence (AI), and virtual assistants.

Automation, as opposed to hyperautomation, is largely carried out using a few relatively simple tools. By contrast, organizations getting started with hyperautomation in earnest need to adopt a wide selection of separate complex tools, and in the past these have had very little integration between them. 

Hyperautomation Platforms for ITOps

What’s beginning to change, as increasing numbers of organizations turn to hyperautomation in their ITOps, is that the toolset is becoming more sophisticated and, crucially, far more integrated. “Vendors are developing integrated offerings that combine technologies like RPA, LCAP and business process management into one, packaged, tool,” explains Cathy Tornbohm, another Gartner analyst.

Thus we are beginning to see the emergence of hyperautomation platforms, which the research house describes like this: “Hyperautomation today involves a combination of tools, including robotic process automation (RPA), intelligent business management software (iBPMS) and AI, with a goal of increasingly AI-driven decision making.”

This also includes tools that provide visibility to map business activities, automate and manage content ingestion, orchestrate work across multiple systems, and provide complex rule engines, according to Gartner. 

Read more: The Growing Relevance of Hyperautomation in ITOps

Simplifying Transactions

One particular area that is expected to be in high demand is the field of technologies that can support IT departments looking to hyperautomate staff-facing interactions. More sophisticated than a simple chatbot, these sorts of interactions could involve things like staff members requesting access to an in-house application or to a cloud-based resource or service.

In order to achieve this, the hyperautomated process will need access to technologies which can automate “content ingestion”. This will involve chatbot technologies such as conversational AI and natural language processing (NLP), but also optical character recognition, signature verification, document ingestion, and other parts of the process.

Another important technology enables the robotic execution of actions, or playbooks of actions, that mimic a human’s actions during a transaction, such as processing the staff member’s request, usually by driving an application’s UI.
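
As a toy illustration of what a playbook of actions might look like, here is a minimal sketch in Python. It is not any RPA product’s API: the step names and the access-request logic are invented for the example, and a real bot would drive an application’s UI rather than call local functions.

    # Each step mimics one thing a human would do while processing an access
    # request; the step names and approval logic are invented for the example.
    def verify_identity(ctx):
        ctx["verified"] = ctx["employee_id"].startswith("E")

    def check_entitlement(ctx):
        ctx["entitled"] = ctx["verified"] and ctx["resource"] in {"crm", "helpdesk"}

    def grant_access(ctx):
        # A real RPA bot would drive the target application's UI here.
        ctx["granted"] = ctx["entitled"]

    PLAYBOOK = [verify_identity, check_entitlement, grant_access]

    def run_playbook(request):
        ctx = dict(request)  # working state shared by every step
        for step in PLAYBOOK:
            step(ctx)
        return ctx

    print(run_playbook({"employee_id": "E1042", "resource": "crm"}))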

Digital IT Staff

Robotic process automation is complex, but in itself it is not particularly cutting edge. However, what will really make hyperautomation a star in the field of ITOps is the combination of RPA with intelligence — that is, artificial intelligence.

By combining these two technologies, it will be possible to create digital IT staff to take some of the pressure off existing (human) IT staff, many of whom are overworked due to the worldwide shortage of staff with specialist IT skills in many areas.

These “digital staff” will be able to help by taking on many of the most repetitive IT tasks, but increasingly they will also be able to handle vital specialist tasks such as detecting and reacting to security incidents. They will be able to connect to different security applications, operate with structured and unstructured data from device logs and help tickets, analyze the data that they have access to, and then make decisions and take actions.

Hyperautomation’s Ripple

The beauty of this approach is that in doing so they will also discover new processes that are ripe for hyperautomation, so that the scale of hyperautomation can expand. That’s the promise, at least. 

And that’s important because the point of hyperautomation in ITOps is not just to reduce operational costs significantly, but also to discover where it is possible to redesign and improve other operational processes.

It’s only when organizations both hyperautomate and improve their existing processes that the massive cost savings in ITOps and elsewhere in the organization can be achieved. 

Read next: Data Center Automation Will Enable the Next Phase of Digital Transformation

Effective Cloud Migration Strategies for Enterprise Networks

For companies planning to move their operations to the cloud, here is what to consider to set up a clear migration plan.

The number of workloads running in the cloud has exploded in the last few years, and the coronavirus pandemic is set to drive this figure even higher. In 2017, cloud workloads represented 86% of all workloads worldwide, according to Statista, and this figure is set to grow to over 90% by the end of the year.

Migration Planning

For companies still planning to move their operations to the cloud, what’s needed is a clear migration plan. This involves establishing the reasons for moving applications to the cloud, determining which applications and their dependencies will benefit from being moved to the cloud (or replaced with cloud-native applications), deciding which cloud to move to, and then working out the likely resources in the cloud that will be needed and the cost of these cloud resources.

Network Resources

Another area that needs considering is the likely network resources that will be needed to support users, working in corporate offices or remotely, while they access the applications and data that are moved to the cloud.

This is important because a migration to the cloud will likely lead to a significant increase in WAN traffic as data is moved to and from the cloud, although LAN traffic will not necessarily fall significantly. That means it may be necessary to make arrangements to increase the effective bandwidth of WAN connections to the cloud, either by increasing physical links or by using various WAN optimization techniques and (probably) hardware. Adding a level of redundancy may also be a prudent course of action.

Network Security

Organizations also may have to consider changing the way that they manage network security when employees are accessing applications in the cloud remotely or from within the corporate network. That may well involve using a SASE solution, with network security controls provided as a service from different access points outside the corporate network.

Also read: Taking the Unified Threat Management Approach to Network Security

Migration Strategies

When it comes to individual applications, or, more realistically, groups of interdependent applications and their data, what’s needed are different migration strategies depending on their particular attributes and requirements.

In general, organizations need to pick from one of six different migration strategies, known as the six Rs of cloud migration: Retiring, Retaining, Rehosting, Replatforming, Repurchasing, and Refactoring/Re-architecting.

Retiring

The simplest way to handle the migration of an application is simply to get rid of it. During the assessments needed to establish whether an application is suitable for migration to the cloud it is likely that some applications which are no longer needed will be surfaced. These applications can simply be retired, providing a handy monetary saving which can be set against the one-off migration costs for other applications.

Retaining 

Another simple way to handle migration is not to migrate at all, but rather to leave the application where it is currently running in the data center. There are a number of reasons why this might be appropriate: 

  • The cost of migrating an application to the cloud may be too high
  • It may be worth waiting some years until the hardware it is running on has depreciated
  • It may need to remain in the data center for other reasons such as performance, security, or regulatory requirements. 

Rehosting 

Sometimes known as “lift and shift”, this forklift solution involves moving physical servers (and virtual servers) onto an IaaS platform which directly mimics the setup in the data center, including servers, storage, and networking infrastructure. Rehosting is popular with conservative or risk-averse organizations, or ones that want to make an initial move to the cloud before starting to rearchitect their operations significantly.

Replatforming  

This strategy is often used where large organizations have legacy systems of many different types that are too complex simply to lift and shift. Instead, various adjustments and accommodations have to be made so that the systems can be run on virtual machines in the cloud. Although this can be costly, it provides an opportunity to move such systems to the cloud without too much difficulty, while being able to take advantage of cloud benefits, such as lower costs and better security.

Repurchasing 

Instead of adapting existing applications to fit the cloud, another strategy is simply to abandon them and use something new that has been designed to operate in the cloud. This will frequently involve switching non-mission-critical functions, such as CRM or HR, to purpose-built SaaS platforms after moving the related data from existing on-premises applications.

Refactoring/Rearchitecting 

This last cloud migration strategy is the most complicated, but the one that is likely to yield the biggest benefits. Essentially it involves making significant changes or, more likely, rebuilding applications from the ground up to work as cloud-native applications or collections of microservices, often running in containers.

This kind of rebuild allows organizations to gain the full benefits of cloud scalability, redundancy, accessibility, and lower costs. However, it is also the most expensive to implement and it requires the most time, and therefore many organizations choose to refactor/rearchitect only after they have made an initial “lift and shift” migration to the cloud.

Read next: How Data Centers Must Evolve in the Cloud First Era

Using Digital Twins to Push IoT

Billions of dollars of savings are on the table for companies that use digital twins as part of IoT deployments.

Fifty years ago, NASA kept full-scale models of its space capsules close to hand to help it diagnose problems with real capsules out in space, and to help it come up with fixes to problems that it could communicate to the faraway astronauts. 

The idea of a digital twin is something very similar. However, instead of building a physical mock-up of a space capsule, or any other physical object for that matter, a digital twin involves making an accurate digital simulation of the object which exists only as computer code.

The Function of Digital Twins

In many ways a digital twin is similar to a virtual machine, in that it is a digital entity designed to mimic the workings of a real object. But while a virtual machine is designed to be used as an alternative to a real, physical computer, a digital twin is designed to be tested and experimented with, with the ultimate aim of applying any insights gained from it to its physical counterpart.

A good example of this is the use of a digital twin of a car and digital crash test dummies. Car designers can run a digital twin of a car and one or more dummies through many different types of crash scenarios and use the data from these crashes with the ultimate aim of making safety improvements to the real car’s design. 

Read more about Digital Twins

Digital Twins and IoT

Creating accurate digital twins of space capsules, cars, and crash test dummies is an incredibly complex business involving researching the physics of these items and then developing a precise mathematical model that describes them and their behavior.

But the good news for those involved in IoT is that the “things” in question are often relatively straightforward sensors, and these can be orders of magnitude simpler to model mathematically than something as complex as a space capsule or a car.

That means that creating digital twins of many types of IoT devices is relatively straightforward, quick, and crucially, inexpensive.

It’s also the case that many IoT environments consist of large numbers of simple identical devices, such as temperature or humidity sensors in containers, or GPS units in vehicles. That means that once a digital twin of a device has been created, simulations of large numbers of these devices working together can be created simply by making multiple copies of the digital twin and feeding them with data. This can be “artificial” data, or data that is received by existing physical devices.
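
Here is a minimal sketch of a digital twin for a simple temperature sensor, and of how cheaply it can be replicated into a simulated fleet. The offset-plus-noise model is a deliberately crude assumption; a real twin would encode the device’s measured behavior.

    import random

    class SensorTwin:
        """Toy twin of a temperature sensor: reading = truth + bias + noise."""

        def __init__(self, sensor_id, bias_c=0.0, noise_c=0.2):
            self.sensor_id = sensor_id
            self.bias_c = bias_c    # per-device calibration offset, degrees C
            self.noise_c = noise_c  # measurement noise, degrees C

        def read(self, true_temp_c):
            # What this device would report for a given real temperature.
            return true_temp_c + self.bias_c + random.gauss(0, self.noise_c)

    # Replicating the twin is just instantiation, which is effectively free.
    fleet = [SensorTwin(f"sensor-{i}", bias_c=random.uniform(-0.5, 0.5))
             for i in range(1000)]

    true_temp_c = 21.0  # "artificial" input; could equally be replayed field data
    reports = [twin.read(true_temp_c) for twin in fleet]
    print(f"fleet mean: {sum(reports) / len(reports):.2f} C")

Feeding the same fleet with replayed field data instead of an artificial constant turns it into exactly the kind of large-scale simulation described above.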

The implications of this for large scale IoT deployment and management are huge, as we shall see.

Also read: SD-WAN is Important for an IoT and AI Future

Multiple IoT Applications

At the very start of an IoT initiative, digital twins can be used as device prototypes to help fine-tune the precise design of the device itself, as well as its firmware, encryption systems, and other software.

Once this process is complete, digital twins can be used to help optimize the deployment of devices by testing how many devices are needed in practice, where they should be positioned, and how they should be connected through various networks to data collection hubs. 

Testing Updates and Changes

Once large numbers of devices are deployed, digital twins can also be used to test firmware and other software patches and updates before they are sent out over the air to their physical counterparts. This can be particularly useful when changes are made to the way that devices interact with each other, as large-scale simulations allow developers to see what the results will be before the patches and updates are deployed en masse.

Digital twins can also be used to help design and manage changes to network topology, and even where data is collected. For example, an organization may be collecting data from its IoT devices on servers in its own data center, but as the IoT network expands it may decide that data needs to be sent to the cloud for collection.

In this sort of case, digital twins can help predict when a changeover would be necessary, how performance might be impacted either positively or negatively, and what scale of cloud resources would be necessary to get a required level of data collection and processing performance. 

Predictive Twins

This sort of use of digital twins in IoT networks is what might be called “predictive twins”. Rather than using real data, a network of digital twins used as predictive twins could also be used to test the impact of different types of data flows, increased data traffic, and many other situations, to see what the impact on the IoT network would be, and what changes might be needed in the future. 

Digital twins could also be used as predictive twins in the sense that they could predict when maintenance or replacement of the physical counterparts might be necessary under many different usage scenarios.

The Value of Digital Twins

Digital twins are uniquely suited to IoT deployments because of the relative simplicity of IoT devices and the fact that digital twins can be replicated at little or no cost. Recognizing this, Gartner predicts that digital twins will exist for “billions of things” in the near future. 

The possible benefits of digital twins to organizations with IoT deployments are staggering, said David Cearley, a Gartner vice president. “Potentially billions of dollars of savings in maintenance repair and operation (MRO) and optimized IoT asset performance are on the table,” he concluded.

Read next: The Evolution of Data Centers Lies in Digital Transformation

IoT Faces New Cybersecurity Threats

As the proliferation of internet-connected devices continues apace, new enterprise security threats loom. Here is what to look out for.

Cybercriminals are always looking for ways to breach corporate networks and steal data, and Internet of Things (IoT) devices present them with a vast array of opportunities to do so. That’s because many IoT devices can be easily compromised. When these devices are connected to corporate networks they offer a potential way in. 

As a chilling example, in 2017 hackers were able to access a casino’s database of its biggest spending customers after gaining access to its computer network through a vulnerability in a thermostat attached to a fish tank. 

Even when hackers can’t jump straight from IoT devices to other corporate assets, IoT devices can be a huge cybersecurity threat. Many IoT devices collect and forward large amounts of data, and by intercepting this data cybercriminals may be able to garner information that they can exploit to successfully breach the network.

One reason that IoT devices are such tempting targets is that, quite simply, there are so many of them. Today there are an estimated 14 billion such devices, according to Statista, and this is projected to explode to about 31 billion in the next four years. Some of these devices will have been secured appropriately, but many will not. And, thanks to the rapidly increasing numbers, many organizations will struggle to manage them all securely.

Also read: SD-WAN is Important for an IoT and AI Future

Default Password Risk

One emerging security weakness is that many devices have hardcoded passwords as well as factory default usernames and passwords that are never changed. Last year, a hacker published a list of more than half a million servers, routers and IoT devices which were exposing their telnet port, along with their default logon credentials. 

Aside from offering the possibility for criminals to steal data or pivot to corporate systems, IoT devices compromised in this way may be incorporated into botnets. This type of security weakness can be avoided if manufacturers use a one-time password that has to be modified when the device is initially set up, or through the use of two-factor authentication.
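
As a toy device-side sketch of that one-time password approach, the hypothetical device below refuses to do useful work until its unique factory credential has been replaced. This is illustrative logic only, not any manufacturer’s implementation.

    import secrets

    class Device:
        def __init__(self):
            # A unique per-device factory credential (e.g. printed on a label),
            # rather than a hardcoded default shared across the product line.
            self.password = secrets.token_urlsafe(8)
            self.password_changed = False

        def set_password(self, old, new):
            if old == self.password and new != old:
                self.password = new
                self.password_changed = True

        def serve_telemetry(self):
            if not self.password_changed:
                raise PermissionError("factory credential must be changed first")
            return "ok: serving telemetry"

    device = Device()
    factory_credential = device.password  # would be supplied with the device
    device.set_password(factory_credential, "a-strong-unique-passphrase")
    print(device.serve_telemetry())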

Lack of Security Updates

Most manufacturers make efforts to ensure that their devices are secure when they are made and sold. However, as with any type of computer infrastructure, vulnerabilities in IoT devices are bound to emerge. That means that IoT devices get less secure as they age, and research by Unit 42 in 2020 found that 57% of IoT devices were already vulnerable to medium or high severity cyberattacks.

The obvious solution is for manufacturers of IoT devices to release regular firmware security updates. The problem, however, is how to ensure that these updates are installed in a timely fashion if they are not centrally managed. Manual installation creates a huge management headache for administrators, while automated updating at unexpected times could cause operational problems. 

The good news is that California and Oregon’s IoT cybersecurity laws, which came into effect at the start of 2020, require that manufacturers of IoT devices incorporate “reasonable security features” such as unique passwords and regular security updates. Other states are likely to follow suit in the future.

One further problem when it comes to security patching is the growing phenomenon of “shadow IoT” — internet-connected devices that an organization’s IT departments have not authorized and are unaware of, and which may never be updated. 

Also read: Best UTM Software of 2021

Data Leakage

After compromising IoT devices, cybercriminals will often examine the data traffic that they gain access to. Clearly this is of no value if the data is encrypted, but the evidence suggests that this is rarely the case. Palo Alto Networks’ Unit 42 report found that a staggering 98% of all IoT device traffic is unencrypted, potentially leaving highly confidential information exposed.

This statistic should be treated with some caution, however, as a relatively small number of devices could be generating a very high proportion of the total IoT traffic, and much of this could be fairly mundane data rather than confidential information. 

Nonetheless, it is clear that almost all IoT traffic is unencrypted, and it will be a major challenge for IT departments to rectify this situation in the short to medium term. 

Lack of IoT Management

Perhaps the biggest emerging IoT cybersecurity threat comes down to the lack of adequate management of IoT devices.

A survey carried out by ZK Research in 2020 found that up to 15% of all IoT devices are shadow IoT devices, and up to 20% of all devices run unsupported legacy operating systems such as Windows 7. Many of these devices connect back to corporate IT systems, presenting a clear cybersecurity risk. However, without visibility into these devices or the ability to update unsupported operating systems, there is little that network administrators can do about them. 

One option is to isolate (known) IoT devices and their back-end systems on VLANs, which are separate from other corporate systems. A better option may be to connect IoT devices to IoT data hubs and management systems hosted in the cloud by providers like IBM, Google, Microsoft, and AWS.

Looking to the Future

If the projections are correct and 15 billion new IoT devices are commissioned over the next four years, then this will present a potential bonanza for cybercriminals unless a great deal of work is done.

IT departments will have to ensure that IoT management systems are implemented more widely, shadow IoT devices are detected, and processes covering basic security measures such as changing default passwords, installing security patches and encrypting data-in-motion are put into place.

Read next: Best Practices for Securing Edge Networks

Democratizing IT for Rapid Digital Transformation

Digital transformation has been underway for some time. That’s because the rise of the internet over the last twenty years has enabled digital start-ups to disrupt almost the whole gamut of industries and services that exist in modern economies. It’s as a reaction to this disruption that many organizations of all sizes have embarked on digital transformation so that they can compete with these new, nimble disruptors.

When we talk about digital transformation, for the purposes of this article, we will actually be talking about two similar yet distinct concepts: true digital transformation, and also digital optimization. Here is the difference between the two. 

Optimization and Transformation

Digital optimization involves taking existing business models, practices and processes, and then applying digital technologies to make them more efficient. That could be something as simple as sending out invoices by email instead of printing them out and mailing them.

Digital transformation, by contrast, involves a complete rethinking of business models so that they can be rearchitected from the ground up to take advantage of digital technology. Often this results in a completely new way of doing business, rather than just digitally enabling the existing business. 

In this sense, digital transformation is a good illustration of economists Richard Lipsey and Kelvin Lancaster’s famous Theory of the Second Best. This states that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal. 

Put more simply, a complete re-architecting of a business to take advantage of digital technologies (transformation) may well be more effective than trying to modify the existing business by bolting on digital technologies where it seems to be advantageous to do so (optimization).  

The problem with digital transformation, rather than digital optimization, has always been that it can be very expensive and require a great deal of investment in digital technologies. That meant that very large businesses could embark on digital transformation, but medium-sized businesses often lacked the resources to rip up the playbook and start again.

The good news for these companies, and for consumers and customers, is that the democratization of IT means that there are now plenty of opportunities for companies of all sizes to carry out a digital transformation without the need for huge financial resources to do so. 

Also read: The Growing Value of Enterprise Architects

What is Democratization?

So what do we mean by the democratization of IT? It turns out that there are two related answers to that. 

Computing power available to all

The first is that IT resources have become so cheap, and provide so much bang per buck, that the cost of purchasing (or acquiring as a service) the necessary technology is no longer a barrier to digital transformation for many organizations. To get an idea of how far costs for compute resources have fallen, consider this: the fastest computer in the world in 1993 – the Numerical Wind Tunnel – was a thousand times less powerful than an iPad Pro, while an iPhone 8 is more powerful than all the computers that existed in the world when Neil Armstrong landed on the moon.

IT skills are not a must-have

The second is that whereas, in the past, digital technologies were the realm of an IT “priesthood” who were trained to install, manage and operate them, the barriers to entry into activities such as coding, data analytics, and artificial intelligence have now dropped considerably. That means people who may have general IT skills, or even people with almost no IT skills at all, can now get involved in those activities. 

No code developers

A perfect illustration of this is the trend towards citizen developers using low code or no code platforms. Citizen developers have business skills, but thanks to simple point and click interfaces and other user interface innovations they can create programs to fulfill specific business needs without the need for deep coding skills. Gartner predicts that 70% of new applications will be built using low code/no code platforms by 2025.

Another rather specific illustration of the democratization of IT is the availability of chatbots — often ones embedded with considerable natural language processing abilities — that can be configured by almost anyone to provide customer services and solutions to common problems.

AI for all

More generally, the democratization of IT means that AI in many different forms is available to all companies at very low cost for digital transformation and optimization purposes. Aside from chatbots, AI can be found in smart recommendation routines and website personalization, which are all designed to build customer relationships that are far deeper than can be achieved without these digital technologies. 

Analytics without data scientists

Finally, it is worth considering that many forms of digital transformation (and optimization) are predicated on the availability of digital data, and this has resulted in many organizations assembling vast data sets related to their activities. In the not so distant past this was a problem because storage costs were high, and making use of this data required vast compute resources and the skills required to carry out data analytics.

But thanks to low-cost storage and compute resources available on demand in the cloud, it’s now perfectly conceivable that decisions can be made at a business unit level to set up a cloud analytics system with very little budget. What’s more, thanks to AI advances such as natural language processing and 3D/VR representation, almost anyone can start to explore data and surface useful insights from that data. Big data analysis is no longer the preserve of data scientists.

Rapid Digital Transformation

The democratization of IT means that digital transformation projects no longer have to be extremely expensive and reliant on large numbers of people with specialist IT skills. That means that more organizations than ever are likely to embark on digital transformation projects in the near future and many will be able to complete them successfully, at a cost that would have been unimaginably low just a few years ago.

Read next: Data Center Automation Will Enable the Next Phase of Digital Transformation

The Growing Value of Enterprise Architects

The concept of an “enterprise architect” is becoming increasingly common as large and medium-sized enterprises look for ways to get ahead of their competitors and introduce new products and services. Yet there is still a great deal of confusion about what, exactly, an enterprise architect is and can be expected to do.

What is an Enterprise Architect?

So before looking at the value of enterprise architects, let’s take a look at how the role of enterprise architect is defined. According to research house Gartner, “enterprise architects support business and IT executives by identifying and analyzing business value derived from technology.”  To that extent, the role of enterprise architect is very much an IT/technology job.

But there are other definitions which imply that the job is more of a business role. For example, Wikipedia says: “enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies.” 

Another business-oriented definition states that: “enterprise architects help their organizations select, create and implement the right business- and technology-based platforms to support their business ecosystems.”

Finally, a more lyrical definition says that enterprise architecture is “the hopefully forgettable technological foundations that allow you to do your job.”

Also read: Networking 101: What is a DataOps Specialist?

Strategy Over Tactics

Clearly, then, enterprise architecture covers a broad range of activities, and it requires an understanding of both IT and the business it supports, the goals of that business, and the possibilities for innovation to achieve those goals. 

So perhaps it is helpful to think of the traditional role of an IT staff member to be tactical. That means ensuring that the right IT infrastructure and services are in place to enable the organization to do its business while also reacting to any changing business requirements by making modifications to the IT setup.

By contrast, an enterprise architect looks at the big picture when it comes to IT infrastructure and services. That means they are more strategic, understanding the business and future business needs and then deciding what the enterprise’s IT function needs to look like in the short, medium, and long term. 

That means that rather than deciding that certain servers need to be upgraded or adjusting cloud resources or services month by month to meet demand, an enterprise architect has to design entire operating models. That could involve combining data center computing, cloud services, monolithic applications, web services and anything else appropriate, to meet the current and future needs of the business.

Also read: 10 IT Networking Certifications to Get in 2021

Innovation is Key

But when it comes to explaining why the value of enterprise architects is increasingly being recognized, there’s a strong argument that it comes down to one word: innovation. 

That’s because digital businesses and those that have undergone digital transformation are constantly looking to gain a competitive advantage through technological innovation: by offering existing goods and services more efficiently or effectively through the use of technology, or by bringing to market innovative products and services enabled by the smart use of technology. 

“Enterprise architecture and technology innovation leaders must use the latest business and technology ideas to create new revenue streams, services and customer experiences,” explains Marcus Blosch, a Gartner analyst. The company estimates that this year 40% of enterprises will use enterprise architects to help ideate new business innovations made possible by emerging technologies.

Artificial Intelligence

One technology in particular is likely to provide a significant boost to the value of enterprise architects in the short term, and that technology is artificial intelligence, Gartner believes. 

Although AI has often looked like a solution in search of a problem, one key use for it is likely to be automating processes to reduce friction and improve business efficiency. Gartner predicts that, by next year, 50% of enterprise architecture programs will involve AI-enabled software for planning, governance, assurance, and IT asset management purposes.

Driving Innovation

By next year, Gartner predicts that fully 80% of digital businesses will be harnessing the business/IT collaboration enabled by enterprise architects to drive innovation in their markets. 

Ultimately, what’s driving the growing appreciation of enterprise architects is the desire for innovation, and the understanding that the enterprise architect is uniquely positioned to understand both the business and emerging technology. Armed with this understanding, enterprise architects can facilitate collaboration between business and IT departments, raising awareness of new technologies and the possibilities they offer for innovation.

Read next: Simplifying Data Management with Hybrid Networks

Virtualization vs. Containerization: What is the Difference? https://www.enterprisenetworkingplanet.com/data-center/virtualization-vs-containerization/ Tue, 18 May 2021 16:08:26 +0000

If you want to run an application, there are two ways of doing it: on a physical computer, or on an abstraction of a computer. The two most common forms of abstraction are virtual machines (VMs) and containers. But what’s the difference between these two forms of abstraction? 

To answer this question, let’s take a look at VMs and containers in more detail.

Server Virtualization

Server administrators have long had to deal with the fact that most servers are chronically underutilized. That’s because processors become more powerful every year, while the cost of resources such as RAM and disk storage continues to fall dramatically. As a result, many servers have the potential to run more than one application, but server admins are loath to do this for many reasons, including security, reliability, and scalability.

The solution, popularized by VMware, is server virtualization. This enables a single physical server, or virtualization host, to run multiple virtual machines, or VMs. Each VM has its own operating system (and these operating systems can be different), onto which an application can be installed. 

Since VMs are designed to be isolated from each other and from their virtualization host, security issues in one application should not be able to affect another application running in a separate VM. Equally, if one application crashes and requires a server reboot, its VM can be rebooted without affecting any other VMs. Unfortunately, it is occasionally possible for this isolation to break down, a phenomenon known as VM escape.

When it comes to scalability, virtualization helps because VMs are portable. For example, two applications might be running in VMs on a single virtualization host, but one of them might grow to need so many resources that the two VMs can no longer coexist on the same host.

Without virtualization the job of moving one of the applications to a new server would be a serious administrative task. But a VM exists as a computer file, so this file can easily be copied or moved over a network (or even via storage media) to a new virtualization host. And, in fact, features such as VMware’s vMotion and Microsoft’s Live Migration even allow VMs to be moved to new hosts while they are running (a process known as live migration), ensuring that there is no interruption to the services they provide. 
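To make this concrete, here is a minimal sketch of what a live migration can look like when driven programmatically, using the libvirt Python bindings against two KVM hosts. The host and VM names are hypothetical, and error handling is omitted:

    # A minimal live-migration sketch using the libvirt Python bindings
    # (pip install libvirt-python). Host and VM names are hypothetical.
    import libvirt

    # Open connections to the source and destination KVM hosts.
    src = libvirt.open("qemu+ssh://admin@host-a/system")
    dst = libvirt.open("qemu+ssh://admin@host-b/system")

    # Look up the running VM by name and migrate it without shutting it down.
    dom = src.lookupByName("web-vm")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print("web-vm is now running on host-b")
    src.close()
    dst.close()

The same idea underpins vMotion and Live Migration: because a VM’s state is ultimately just data, it can be streamed to another host while the guest keeps running.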

This has important implications for disaster recovery too. That’s because if a disaster strikes, virtual machines can be moved to a secondary site and, crucially, this secondary site does not need to mirror the primary site. Essentially all that is needed is a sufficient number of virtualization hosts at the secondary site.

In order to become a virtualization host, a physical server needs to run a piece of software called a hypervisor (sometimes known as a virtual machine monitor), which acts as a resource broker between the physical host and the VMs. Hypervisors come in two types: “bare metal” (Type 1) hypervisors, such as VMware’s ESXi and Microsoft’s Hyper-V, which run directly on the physical server, and “hosted” (Type 2) hypervisors, such as Oracle’s VirtualBox and VMware Workstation, which run as applications on top of an existing operating system. (Hyper-V is often assumed to be Type 2 because it is enabled from within Windows Server and Windows 10, but once enabled it runs directly on the hardware, making it a Type 1 hypervisor.)

The first hypervisors were developed by IBM in the 1960s, and today popular hypervisors include Hyper-V, ESXi, KVM, and Nutanix AHV.

Also read: Best Server Virtualization Software of 2021

Containers

Unlike a bare-metal virtualization host, a container host always runs a full operating system of its own, along with a container engine (which plays a role analogous to that of a hypervisor).

That’s because containers are not self-contained abstractions of computers in the way that VMs are. Instead, a container consists of a single application (or microservice) plus any other vital files it needs to run. It then makes use of the container host’s operating system kernel, binaries, and libraries in order to function; these shared files are exposed to containers as read-only. Other containers running on the same host likewise share the host’s kernel, binaries, and libraries.
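One way to see this kernel sharing in action is to ask a container which kernel it is running on; on a Linux Docker host it reports the host’s own kernel. Here is a small sketch using Docker’s Python SDK (the Alpine image is real; the comparison itself is just for illustration):

    # Demonstrating that a container sees the host's kernel.
    # Requires Docker and the Docker SDK for Python (pip install docker).
    import platform
    import docker

    client = docker.from_env()

    # Run "uname -r" inside a minimal Alpine container and capture the output.
    container_kernel = client.containers.run(
        "alpine:3", ["uname", "-r"], remove=True
    ).decode().strip()

    # Compare with the kernel release the host itself reports.
    print("host kernel:     ", platform.release())
    print("container kernel:", container_kernel)  # identical: the kernel is shared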

Because containers are far “lighter” than VMs and far quicker to start up, they are ideal for running microservices, which can be called into existence when demand scales up and taken down again when demand subsides. They can also be easily moved between public and private clouds and traditional data centers.
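The startup-speed difference is easy to measure. This rough sketch, again using Docker’s Python SDK, times a full start-run-exit cycle for a small container; on a warm host with the image already pulled, this typically completes in well under a second, while a VM must go through an entire OS boot (exact figures vary by machine and image):

    # Rough timing of a container's start-run-exit cycle (assumes the
    # alpine:3 image is already pulled, so no download time is included).
    import time
    import docker

    client = docker.from_env()

    start = time.perf_counter()
    client.containers.run("alpine:3", ["true"], remove=True)
    elapsed = time.perf_counter() - start

    print(f"container started, ran, and exited in {elapsed:.3f} seconds")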

By far the most popular container environment is Docker. Other notable container environments include rkt, Apache Mesos, LXC, containerd, Hyper-V Containers, and Windows Server Containers.

Dedicated operating systems such as Red Hat’s Fedora CoreOS have been built specifically for running containerized workloads securely and at scale.

Also read: The Growing Value of a Microservice Architecture

VMs vs. Containers

Since containers share their host’s operating system rather than having their own (in the way that VMs do), there are some important differences between containers and VMs:

  • Containers are far smaller or “lighter” than VMs, often consisting of a few megabytes rather than gigabytes, and require far fewer hardware resources. That means a single physical server can host far more containers than VMs. 
  • Containers can be started in seconds or even milliseconds. By contrast, VMs need to go through an entire boot process to start up. 
  • Since containers all share their host’s operating system, all applications have to run on the same operating system. By contrast, VMs running on a virtualization host can all run different operating systems (for example Linux, Unix, and Windows). 
  • When using containers, only the container host’s operating system needs to be patched and updated. With VMs, each VM’s operating system has to be patched and updated.
  • If a container crashes the container host’s operating system, then all containers running on that host will fail. 
  • A security vulnerability in a container host’s OS kernel will affect all the containers that it is hosting. 

How are VMs and Containers Used?

VMs are ideally suited to traditional resource-heavy, monolithic applications, especially as preparation for moving these applications to the cloud. 

Containers are more suited to hosting microservices used in web services, and to cases where scalability is important. When containers are used in this way, they are usually managed by a container orchestration system that automates application deployment, scaling, and management. These systems are often based on Kubernetes, an open source platform originally designed by Google and now maintained by the Cloud Native Computing Foundation.
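As a taste of what such orchestration looks like, here is a minimal sketch using the official Kubernetes Python client to scale a hypothetical “checkout” microservice up to five container replicas; the deployment name and namespace are assumptions for illustration:

    # Scaling a Deployment with the official Kubernetes Python client
    # (pip install kubernetes). Deployment name and namespace are hypothetical.
    from kubernetes import client, config

    # Load credentials from the local kubeconfig file (~/.kube/config).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Patch the "checkout" deployment's scale to five replicas.
    apps.patch_namespaced_deployment_scale(
        name="checkout",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )

In production, this kind of scaling is usually performed automatically by the orchestrator itself, for example via a horizontal autoscaler, rather than triggered by hand.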

Containers are also very useful for software developers, because an application built in a container on a laptop can be expected to behave the same way in a container in a production environment.

One more thing worth mentioning is that the benefits of containers and VMs can be enjoyed simultaneously. That’s because containers can run in VMs, allowing organizations to make use of existing virtualization infrastructure, such as VM management systems, to manage their containers as well. 

Read next: Transforming Networks: From Virtualization to Cloudification
