What are Virtual Network Assistants?

Companies are increasingly managing their LAN, WLAN, WAN, and SD-WAN with virtual network assistants (VNAs). VNAs are just one way that enterprises are harnessing the power of artificial intelligence (AI) and IT hyperautomation for network management.

A virtual network assistant is an AI-driven network monitoring tool that:

  • Gives network administrators insight into the network’s performance, such as bandwidth.
  • Helps with troubleshooting.
  • Indicates the health of the network’s software, such as virtual firewalls, virtual routers, and network management software.

VNAs are inseparable from broader trends in the networking space, and these trends give key vendors their competitive edge.

VNAs are Part of Broader IT Trends

The broader category of virtual assistants is experiencing explosive growth and is projected to reach a value of just under $51 billion by 2028. Though this category includes sectors like retail, virtual assistants for networking purposes are certainly part of this trend. The narrower market of network automation alone is forecast to reach a value of over $32 billion by 2027.

VNAs have emerged because of hyperautomation and the growing functionality of software-defined wide-area networks (SD-WAN). 

VNAs are born out of a broader trend towards hyperautomation. Hyperautomation describes enterprises’ increasing adoption of not only virtual network assistants but also robotic process automation, low-code application platforms, and AI. 

The difference between automation and hyperautomation lies in the breadth of tools companies utilize for an automated approach to managing their IT network. Vendors sometimes collect these tools under one roof in the form of a hyperautomation platform.

Also read: Best Network Automation Tools for 2022

In addition, VNAs have emerged as SD-WANs gain in popularity and accrue more AI capability. While SD-WAN vendors first combined network and security functionality, they are also adding AI capability, such as a VNA, to their product purview. Augmenting SD-WAN functionality enables network administrators to manage, secure, and automate their networks.

Why VNAs are Important for Networking

Virtual network assistants are just one AI-driven hyperautomation tool that stands at the intersection of automation and AI. While automation is often merely rules-based, VNAs take automation a step further because of their AI capabilities. They adapt, assist in decisions that benefit the network, and even learn how to make decisions on their own without needing human intervention.

VNAs are particularly useful for network security and troubleshooting.

Read more: What is AI for Networking?

Security

VNAs automate and streamline devices’ connection to the network. They scour device logs and data—whether structured or unstructured—to enforce and learn from established security policies for a device or group of devices. 

They learn, for example, by detecting deviations from the policy and alerting network engineers to such deviations. They are constantly learning from user behavior. 
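
To make this concrete, below is a minimal Python sketch of the kind of policy check a VNA automates: compare each device's logged connections against the security policy for its device group and alert on deviations. The policy, device groups, and log fields are all invented for illustration.

    # Hypothetical security policy: allowed destination ports per device group.
    POLICY = {"hvac-sensor": {443}, "badge-reader": {443, 8883}}

    def find_deviations(log_events, policy):
        """Return the log events that violate the device group's port policy."""
        return [e for e in log_events
                if e["dst_port"] not in policy.get(e["group"], set())]

    log_events = [
        {"device": "sensor-12", "group": "hvac-sensor", "dst_port": 443},
        {"device": "sensor-12", "group": "hvac-sensor", "dst_port": 23},  # telnet
    ]
    for event in find_deviations(log_events, POLICY):
        print(f"ALERT: {event['device']} contacted port {event['dst_port']}")

A real VNA layers machine learning on top of checks like this, refining what counts as normal for each device group as it observes user and device behavior.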

Read more: Best Network Security Software & Tools of 2022

Troubleshooting

Comparing historical data to real-time data, a VNA can learn from previous security incidents to anticipate and remediate future network glitches.

A VNA is capable of sifting through terabytes of data—from firmware, equipment, activity logs, and other indicators—to uncover a network problem. 

If external or internal end users have a subpar app experience, a VNA can assist that user through a conversational interface or other data sources. Taking in the data from a chat conversation as one of several sources of terabytes of data, the VNA automatically generates an IT ticket that, in turn, helps internal users, namely network administrators, diagnose and correct the issue. Even better, the VNA is capable of solving some problems all on its own.
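
As a rough illustration of that flow, the Python sketch below flags a real-time metric that deviates sharply from its history and builds a ticket record pairing the telemetry with the user's chat report. The metric names, threshold, and ticket fields are assumptions, not any vendor's actual API.

    import statistics

    def is_anomalous(history, current, threshold=3.0):
        """Flag a reading more than `threshold` standard deviations from history."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # guard against zero variance
        return abs(current - mean) / stdev > threshold

    def open_ticket(user_report, metric, value):
        # A real VNA would POST this to the ITSM system; here we just build it.
        return {"summary": f"Degraded app experience: {metric}={value}",
                "source": "vna-chat", "user_report": user_report}

    history = [22, 25, 21, 24, 23, 26, 22]  # last week's latency samples, ms
    current = 310                           # latency reported right now, ms
    if is_anomalous(history, current):
        print(open_ticket("Video calls keep freezing", "wan_latency_ms", current))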

Key Vendors and What Their VNAs Do

VNAs are available separately or as part of a hyperautomation AIOps platform. Several vendors offer network automation agents under the name virtual network assistant (Juniper), digital network assistant (Masergy), automated network assistant (Loni), or with no specific, catchy name at all (Moogsoft, Zif, and Broadcom).

Juniper’s Marvis VNA

Juniper’s Marvis Virtual Network Assistant—formerly known as Mist Virtual Network Assistant—is one of the most well-known VNAs on the market today and is used for enterprise WLANs, LANs, and WANs. 

The Marvis VNA is designed with network administrators in mind. It works in conjunction with Marvis Actions to monitor the health of the Juniper Mist network. It also harnesses the power of AI to sift through data and logs to generate insights about the root cause of network problems. Beyond diagnosis, however, Marvis corrects the issue itself—using Marvis Actions—or makes smart recommendations to network administrators on how to remedy the problem.

Marvis even has a conversational assistant that understands user intent by contextualizing their requests to accelerate what they are trying to achieve, such as troubleshooting a network issue or finding a device on the network. Either way, Marvis’s Mist AI engine collects data from each remediation in order to learn from it. 

Read more: The Role of AI and ML in Enterprise Networking

Massage’s Digital Network Assistant 

Masergy launched its AIOps digital network assistant that integrates with SD-WANs to analyze and monitor the network and its applications for performance and security. 

Like Marvis, this virtual network assistant acts as a virtual network engineer, alerting and advising network admins on how best to handle network vulnerabilities.

Loni’s Automated Network Assistant (ANA)

Loni also offers an automated network assistant on its device-, vendor-, and infrastructure-agnostic networking platform.

Like Marvis, Loni’s ANA also features a conversational interface and runs on machine learning and natural language processing, however, it can also respond to voice commands.

Moogsoft’s AIOps Platform

Moogsoft deploys AI to prioritize threats and provide network professionals with only the most pressing network concerns. 

In the process of scanning the network for threats, Moogsoft picks up on anomalies through deduplication and correlation and reports on the most likely cause of the deviation. 
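
A toy Python version of deduplication makes the idea clear: repeated alerts for the same device and event collapse into a single incident with an occurrence count, so engineers see two incidents rather than four raw alerts. The alert format is invented for illustration.

    from collections import Counter

    raw_alerts = [
        ("core-sw1", "BGP_NEIGHBOR_DOWN"),
        ("core-sw1", "BGP_NEIGHBOR_DOWN"),   # duplicate of the same fault
        ("edge-rtr3", "HIGH_CPU"),
        ("core-sw1", "BGP_NEIGHBOR_DOWN"),
    ]

    # Deduplicate: collapse repeats of the same (device, event) pair.
    for (device, event), count in Counter(raw_alerts).items():
        print(f"{device}: {event} (seen {count}x)")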

After identifying the root cause, Moogsoft suggests a recommended course of action that the network engineer can either accept or override in favor of a different approach. 

Zif AIOps Platform

Zif’s AIOps solution includes a variety of features to ensure a network is performing at its best. The platform includes:

  • Predictive analytics
  • Real-time network topology mapping
  • Automatic app discovery
  • End-to-end monitoring 

Zif predicts resource utilization, usage patterns, and incident volume with a high degree of accuracy, keeping NetOps teams ahead of the curve. Though Zif ingests data from a range of sources, it helps network administrators pay attention only to the high-priority threats to the network's performance. It also does the heavy lifting by running over 500 predefined workflows for routine, repetitive tasks.

Broadcom’s AIOps and Observability

Broadcom offers AIOps Observability that monitors the network through full tech stack observability, prioritizes network issues for NetOps teams, and ultimately helps NetOps align network performance with business goals.

Broadcom addresses the end-user perspective—whether that user is internal, like an employee, or external, like a vendor or customer—with its Appneta product. 

Appneta provides a comprehensive understanding of users’ network experience and usage of apps, ISP, SaaS, and cloud provider networks. This means that network administrators can monitor the performance of any application or program on the network for any user, any location, and at any time. They can proactively identify and address issues before the end user ever notices.

Like other vendors' solutions, Broadcom's monitors the network by comparing data from apps, network services, devices, and other components of the tech stack. It then applies AI and automation to provide visibility and actionable, data-driven insights to network administrators.

What differentiates Broadcom’s AIOps solution is that it can be both domain-centric and domain-agnostic. Domain-centric AIOps solutions are designed for first-party data, which is data that the company collects and owns. Domain-agnostic AIOps solutions, on the other hand, are able to draw from various data sets and data types for analysis and insights. It’s best to have an AIOps platform that can handle homogenous data from one part of the organization as well as data from the organization’s broader digital ecosystem.

Read more: Top AIOps Tools & Platforms of 2022

Benefits of VNAs

Virtual network assistants elevate an enterprise’s network management by:

  • Saving network professionals time by collecting information and taking care of manual tasks.
  • Allowing network engineers to focus on more strategic projects and tasks.
  • Increasing network efficiency and performance through accelerated network troubleshooting.
  • Improving end-user experience and thus fostering greater customer satisfaction.
  • Reducing and preventing downtime.
  • Enhancing network performance and security.
  • Reducing human error, which causes up to 70% of network failures.

Enhance Your Network With a VNA

VNAs are not so much a replacement for network engineers as an extension of them. VNAs help human network administrators do their jobs more efficiently: admins spend roughly 20% of their time troubleshooting wireless networks, work that is both time-consuming and error-prone.

Since VNAs help network engineers manage, secure, and optimize increasingly complex enterprise networks, vendors are likely to add some form of a virtual agent or improve their current VNA tool.

Read next: The Future of Network Management with AIOps 

What is AI for Networking?

The post What is AI for Networking? appeared first on Enterprise Networking Planet.

]]>
More and more companies are capitalizing on the synergy between artificial intelligence (AI) and networking. With the proliferation of user devices and the data they generate, companies are increasingly relying on AI to help manage a sprawling network infrastructure.

By 2024, 60% of enterprises will have an AI-infused infrastructure that will entail more widespread automation and predictive analytics for networking aspects like troubleshooting, incident prevention, and event correlation.

What is AI for Networking?

AI is becoming ever more pervasive as companies try to manage increasingly complex networks with the resources their IT departments have. What network administrators used to do manually is now largely automated, or moving that way.

However, the use of AI does not shield even the biggest companies from network outages. Facebook experienced a major outage in October 2021 that the company blamed on faulty router reconfiguration. AWS likewise experienced an outage in December 2021 that it chalked up to a network scalability error.

In spite of AI’s sophistication and all it can do for networks, it is not foolproof. This underlines the continued importance of human intervention in networking.

Read more: Cloud is Down: How to Protect Your Organization Against Outages

How AI is Deployed in Networking

AI, more specifically the application of machine learning (ML), helps network administrators secure, troubleshoot, optimize, and plan the evolution of a network.

Security

A proliferation of endpoints in the network in the age of work from home – and work from anywhere – widens a network’s attack surface. To remain secure at all times, a network should be able to detect and respond to unauthorized or compromised devices.

AI improves the onboarding process of authorized devices to the network by setting and consistently enforcing quality-of-service (QoS) and security policies for a device or group of devices. AI automatically recognizes devices based on their behavior and consistently enforces the correct policies.

An AI-powered network also detects suspicious behavior, activity that deviates from policy, and unauthorized device access to the network more quickly than a human could. If an authorized device indeed gets compromised, an AI-powered network provides context to the event.

Device categorization and behavior tracking help network administrators manage various policies for various devices and device groups, and they reduce the potential for human error when introducing a new, authorized device to the network. These capabilities also help admins detect and troubleshoot network issues in a fraction of the time.
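
A stripped-down sketch of behavior-based categorization, with invented traffic features and profiles, might classify a new device by its nearest known behavioral fingerprint:

    # Toy fingerprints: (avg packets/min, distinct destinations/hour).
    KNOWN_PROFILES = {
        "ip-camera":   (600.0, 2.0),
        "voip-phone":  (90.0, 3.0),
        "workstation": (250.0, 40.0),
    }

    def classify(observed):
        """Assign a device to the nearest profile (squared Euclidean distance)."""
        def dist(profile):
            return sum((a - b) ** 2 for a, b in zip(observed, profile))
        return min(KNOWN_PROFILES, key=lambda name: dist(KNOWN_PROFILES[name]))

    new_device = (580.0, 2.5)  # traffic observed during onboarding
    print(f"Categorized as '{classify(new_device)}'; applying that group's policies.")

Production systems use far richer features and trained models, but the principle is the same: behavior, not a self-reported label, determines which policies a device receives.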

Read more: Best Network Security Software & Tools of 2022

Troubleshooting

Prior to AI-driven networking, NetOps (network operations) needed to determine network problems by reviewing logs, events, and data across multiple systems. This manual work not only took time and extended outages but also presented opportunities for human error. The sheer amount of data involved in today’s networks makes it humanly impossible for any NetOps team, no matter how large, to sift through event logs to identify and fix network problems.

Now, AI enables networks to not only self-correct issues for maximum uptime but also to suggest actionable steps for NetOps to take.

When a problem occurs, an AI-driven network uses data mining techniques to sift through terabytes of data in a matter of minutes to perform event correlation and root cause analysis. Event correlation and root cause analysis help to quickly identify and resolve the issue.

AI compares real-time and historical data to discover correlating anomalies that begin the troubleshooting process. Examples of relevant data include firmware, equipment activity logs, and other indicators.
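
The heart of event correlation can be sketched in a few lines of Python: cluster events that occur close together in time and treat the earliest event in each cluster as the root-cause candidate. Real AIOps engines also weigh topology and learned dependencies; the events and 30-second window here are invented.

    from datetime import datetime, timedelta

    events = [
        {"time": datetime(2022, 8, 1, 9, 0, 5),   "msg": "linecard-2 power fault"},
        {"time": datetime(2022, 8, 1, 9, 0, 9),   "msg": "port ge-0/0/12 down"},
        {"time": datetime(2022, 8, 1, 9, 0, 11),  "msg": "OSPF adjacency lost"},
        {"time": datetime(2022, 8, 1, 14, 30, 0), "msg": "config backup completed"},
    ]

    def correlate(events, window=timedelta(seconds=30)):
        """Group events that occur within `window` of the previous event."""
        events = sorted(events, key=lambda e: e["time"])
        groups, current = [], [events[0]]
        for event in events[1:]:
            if event["time"] - current[-1]["time"] <= window:
                current.append(event)
            else:
                groups.append(current)
                current = [event]
        groups.append(current)
        return groups

    for group in correlate(events):
        print(f"Root-cause candidate: {group[0]['msg']} ({len(group)} events)")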

An AI-infused network can capture relevant data from just prior to an incident, aiding investigation and accelerating the troubleshooting process. The data from each incident helps machine-learning algorithms in the network to predict future network events and their causes.

In addition to detecting and learning from network faults, AI automatically fixes them by drawing from the network’s rich historical data bank. Alternatively, it relies on this data to make precise recommendations on how network engineers should approach the problem.

AI capabilities streamline and drastically improve the troubleshooting process. AI reduces the number of tickets IT must process, and in some cases it can resolve problems before end users, and even IT, notice an issue.

Network Optimization

Keeping a network functioning and secure at baseline is one thing, but optimizing it is another. The continuous process of optimizing a network is what keeps end users happy and retains them as customers in the long run.

Wireless connectivity standards have evolved in terms of speed, number of channels, and channel bandwidth capacity. Keeping pace with these standards is more than any traditional NetOps initiative could handle, but it is not too much for a network infused with AI.

Network optimization involves the trifecta of monitoring the network, routing traffic, and balancing workloads. That way, no one part of the network is overburdened. Instead, the network is able to efficiently deliver the best quality of service by distributing traffic more evenly across the network.

Today’s networks require self-optimizing AI networks that thrive on real-time, event-based network data. Through deep learning, for instance, a computer can analyze multiple datasets related to the network. Based on that data, the network’s recommendation engine checks the policy engine to make smart recommendations to enhance existing policies.

On the one hand, the suggestions meet baseline service quality standards in spite of changing circumstances, such as a traffic spike in a particular geographical area or on a user’s device. The recommendation engine may suggest switching on idle assets or rerouting traffic through longer paths to mitigate congestion.

At the same time, the suggestions adhere to the network’s baseline operational constraints, such as prioritizing phone calls and SMS text message performance over video streaming.

The network will then re-optimize the equipment on its own based on the recommendations. Self-optimizing networks maximize a network’s existing assets, directing it on how to best operate given its finite resources, while also ensuring adherence to service-level agreements (SLAs).
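
The interplay between the recommendation and policy engines can be reduced to a small Python sketch: recommend shifting bulk traffic off a congested link whenever the remaining headroom falls below the policy's floor for voice traffic. All capacities, loads, and policy values are invented.

    POLICY = {"voice_min_mbps": 50}  # voice/SMS must always have this much headroom

    links = {"primary": {"capacity_mbps": 1000, "load_mbps": 980},
             "backup":  {"capacity_mbps": 400,  "load_mbps": 80}}

    def recommend(links, policy):
        headroom = links["primary"]["capacity_mbps"] - links["primary"]["load_mbps"]
        if headroom < policy["voice_min_mbps"]:
            shortfall = policy["voice_min_mbps"] - headroom
            return f"shift {shortfall} Mbps of bulk traffic to 'backup'"
        return "no action needed"

    print(recommend(links, POLICY))  # shift 30 Mbps of bulk traffic to 'backup'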

Through the observability and orchestration of AI-powered networks, users get the best possible network experience.

Network Planning

Given the growth of 5G networking, AI will have the biggest impact in network planning to provide new services or expand existing services to underserved markets.

A 2018 Ericsson report found that 70% of service providers worldwide report AI as having the greatest impact on network reliability. Not far behind reliability, network optimization and network performance analysis are two further areas where 58% of respondents say AI is gaining traction.

Using AI for network performance analysis enables communication service providers to accurately predict what a network will need, and thus to better prepare.

For example, AI can be deployed to improve the provider network’s geolocation accuracy. Doing so provides critical information to help the provider evaluate the quality of service in a particular area. That information, in turn, informs plans for future network upgrades.

AI also comes into play when trying to identify underserved market areas. It helps distinguish served versus unserved markets from satellite images.

AI gives businesses, communication service providers in particular, a competitive edge by helping them identify and act on strategic opportunities.

Read more: From Co-existence to Convergence: The Union of 5G and Wi-Fi

Benefits of Leveraging AI for Networks

AI-infused networks provide organizations with a host of benefits, including:

  • Continuous monitoring.
  • Event correlation and root cause analysis to detect, fix, learn from, and prevent network issues.
  • Predictive analytics to proactively identify and address future issues.
  • Fewer instances of downtime.
  • Shorter downtime when it occurs.
  • Automated network provisioning, such as for devices and optimization.
  • Automated network-boosting recommendations.
  • Enhanced network performance.

Also read: Best Network Automation Tools for 2022

The Future of AI Use in Networking

Given the many benefits of AI-infused networks, they are sure to keep growing in adoption across today’s enterprises. AI is playing an increasingly important role in managing networks that are rapidly becoming more complex.

However, the fear that AI will replace networking professionals is a noted but ultimately unwarranted concern. Networks still need humans to verify and occasionally augment AI functionality by:

  • Addressing discrepancies between a network problem and a proposed solution that the system generates.
  • Assisting the machine when it cannot produce a solution with a high level of confidence.
  • Inspecting event correlation and using human logic to guide the algorithm in what it should and should not learn in terms of event dependencies.
  • Validating the machine’s analysis before implementing its recommendations.
  • Understanding how a machine arrived at an insight, decision, or conclusion.

Read more: What is Explainable AI (XAI)?

Aside from these interventions, because of AI's largely automated role in networking, IT teams can devote their resources to strategic, high-value tasks, such as digital experience and digital initiative rollouts.

Read next: The Future of Network Management with AIOps

Read next: Top Business AI Trends to Watch for 2022

Edge Computing Use Cases

The post Edge Computing Use Cases appeared first on Enterprise Networking Planet.

]]>
Edge computing is a distributed computing architecture in which hardware and software support data computation and storage at the physical edge of a network, as close to the end user as possible.

Edge computing enhances network performance by transmitting data at the shortest distance between the sensor or end user and the cloud or data center. In essence, it brings the cloud or data center as close to the user or device as possible, thereby reducing response time.

Because of the ubiquity of Internet of Things (IoT) devices, edge computing use cases are numerous and far-reaching.

Read more about how edge computing is changing up the data management landscape: Micro Data Centers are Evolving the Edge

Edge Computing Use Cases

Edge computing is relevant in several contexts, including the healthcare, manufacturing, and retail sectors, among others.

Edge computing in agriculture

Indoor agricultural facilities transmit and receive data from sensors to grow crops. The sensors need edge computing to make intelligent decisions about crop irrigation, nutrient density, and optimal harvesting times.

Edge computing in healthcare

Edge computing is finding applications in the healthcare sector in terms of tracking patients’ vital information in real time and keeping patient data up-to-date and secure.

Edge computing brings data processing, analytics, and storage closer to a hospital’s on-premises server or a device at the patient’s home. Physicians can be alerted to unusual patterns or changes in patient data and take immediate, potentially life-saving action if needed.

For instance, HCA Healthcare partnered with Red Hat to develop a real-time sepsis diagnostics solution using edge computing. This solution helped HCA Healthcare reduce the length of time to diagnose sepsis, a life-threatening response to infection, to one day or less.

Patients’ wearable devices are a basic example of an edge solution, as they generate and receive data wherever the user happens to be. A heart rate monitor, for instance, locally analyzes data on the patient’s heart rate, blood pressure, and sleep patterns, keeping doctors updated with real-time patient information.

Edge computing in entertainment

Edge computing comes into play for nearly any app, but especially for streaming services such as Hulu or Netflix.

Edge computing optimizes content delivery networks (CDNs) by identifying the best low-latency network path for a user's internet traffic and ensuring a broadly distributed global cache, or data repository, for servers. This is especially important in the evening in any given location, when most people are home from work and watching their favorite shows.
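
Conceptually, the routing decision is simple, as the hypothetical Python sketch below shows: among the edge nodes that already hold the content in cache, serve the user from the one with the lowest measured round-trip time. Node names and latencies are invented.

    # Measured round-trip times (ms) from the user to candidate edge caches.
    edge_rtts = {"edge-nyc": 12.4, "edge-chi": 31.8, "edge-dal": 47.2}
    cached_at = {"edge-chi", "edge-dal"}  # nodes that already hold the content

    candidates = {node: rtt for node, rtt in edge_rtts.items() if node in cached_at}
    best = min(candidates, key=candidates.get)
    print(f"Serving from {best} ({candidates[best]} ms)")  # edge-chi (31.8 ms)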

An optimized network also helps marketers curate more personalized and interactive customer experiences, such as sophisticated chatbots, recommendations, and even offline interactions.

Internet providers that support users’ work and recreational needs require performance analytics from edge computing to ensure reliable, fast internet.

Edge computing in manufacturing

A manufacturing company with plants located around the world benefits from edge computing, as its leadership can make quicker, more accurate business decisions regarding optimal operations.

The data that management uses to make those decisions doesn’t have to come from one centralized cloud. Instead, it can be collected and transmitted close to the server of any given facility location.

Edge computing also especially comes into play to ensure employee working conditions are safe. If a machine suddenly turns off automatically due to an object obstruction, such as a hand or shirt sleeve, this is an incident that management will want to know about. Sensors or cameras on that piece of machinery can collect and transmit live data on the edge.

Bringing data processing closer to manufacturing equipment through IoT devices and sensors enables management to better monitor production lines and employee safety as well as anticipate necessary maintenance.

Edge computing in retail

For retail businesses, especially those that conduct e-commerce, edge computing becomes especially important for collecting and transmitting data between retailers and customers.

For online orders, for instance, edge computing enables quick and accurate order processing and fulfillment between mobile or web orders and distribution centers that are closest to shoppers.

Edge computing is also behind the magic of selecting a specific store location to check an item’s availability. In addition, it helps retail businesses make more accurate sales forecasts to better prepare for seasonal fluctuations in business.

Edge computing in the energy sector

The oil and energy industry has traditionally relied on collecting and sending data to a distant data center. This prevented timely redress of issues related to oil pipeline pressure or electrical conductivity, due to the lag time between a critical incident and its rectification.

Edge computing has accelerated the identification and resolution of technical or security issues that arise in the energy sector because it delivers real-time information from IoT sensors in drilling facilities or power plants.

More generally, edge computing helps organizations with multiple physical locations improve their energy consumption management. IoT devices and sensors linked to an edge platform help users (a short sketch follows this list):

  • Track energy usage.
  • Conduct real-time analysis of consumption.
  • Adjust or reduce heating, cooling, and lighting, according to times of day in various locations.
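
A minimal sketch of the third item, with invented sites and thresholds, shows how an edge node could act on local readings without a round trip to a central cloud:

    SITES = {
        "warehouse-berlin": {"kwh_last_hour": 142.0, "occupied": False},
        "office-austin":    {"kwh_last_hour": 61.0,  "occupied": True},
    }

    def hvac_action(site):
        """Decide locally whether to dial back HVAC for an empty, power-hungry site."""
        if not site["occupied"] and site["kwh_last_hour"] > 100:
            return "reduce heating/cooling setpoint"
        return "no change"

    for name, readings in SITES.items():
        print(f"{name}: {hvac_action(readings)}")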

Edge computing in telecommunications

Paired with cellular networks, edge computing has much wider reach than Wi-Fi, allowing broader and more scalable connectivity. That's why edge computing is being deployed in 5G mobile communication networks to deliver fast app experiences and cache content for local users. This essentially allows user traffic to bypass one infrastructural backbone.

Though broader implementation of public 5G networks is nascent, edge computing is contributing to the current rise in private cellular networks (PCNs). Telecommunication companies and other enterprises are heavily investing in 5G networking to connect IoT devices to edge computing facilities. In fact, the 5G market is predicted to experience a CAGR of 72% and reach a value of almost $250 billion by the year 2028.

In recent news, Deutsche Telekom partnered with Google Cloud to bring cloud services closer to Deutsche Telekom’s edge, pilot 5G service in Austria, and kick off other joint projects.

Edge computing in transportation

Self-driving vehicles, and really any mode of transportation that operates with geolocation data, use edge computing.

An autonomous vehicle, for example, constantly gathers and transmits information about weather, traffic, and road conditions as well as several other data points. Such a vehicle requires a lot of computational power to send, receive, collect, and analyze information as the vehicle moves in order to make the right decisions.

Edge computing ensures optimal end-user experience for transportation mobile apps, such as Uber or Lyft. Drivers’ vehicles are outfitted with geolocation devices that transmit data live to the app, so it can select the quickest ride for end users. Plus, end users can track the location of their ride.

Vehicles, whether self-driving or otherwise, need constant connectivity. Edge computing enables fast communication with low latency between these vehicles and data centers.

Benefits of Edge Computing

Edge computing benefits businesses in several ways:

  • Expanded product and service offering and reach.
  • Better and more varied avenues to serve customers.
  • Vast data storage capability.
  • Compliance through data sovereignty and security.
  • Improved security, as data is distributed throughout the network instead of in one central location, or only the most sensitive data gets sent to the cloud.
  • Sharper, real-time monitoring.
  • Reduced cost of raw data transmission, especially in areas with high mobile data fees.
  • Enhanced network performance.
  • Reduced network loads by processing on the edge.
  • Greater bandwidth capacity across a bigger geographic area, leading to lower bandwidth use in any given edge location.
  • More reliability, fewer network disruptions.
  • Faster data processing.
  • Higher end-user satisfaction.

Read more about tips to keep the edge secure: Best Practices for Securing Edge Networks

Edge Computing is Here to Stay

Edge computing is on its way to great adoption. And while it’s becoming increasingly popular, this does not spell the end of cloud services. Cloud and edge computing are allies in delivering scalable yet efficient connectivity and secure data transmission and storage to businesses.

Read more about edge computing and other data center trends: Data Center Technology Trends for 2022

Gartner estimates that the global enterprise edge computing market will grow to $19 billion in 2024 with a CAGR of nearly 14%. Gartner also predicts that by 2025, 75% of enterprise-generated data will be processed outside the traditional centralized cloud and data center.

The boom in edge computing adoption and use is attributed to parallel growth in IoT edge devices. Edge computing keeps device and network performance at their peak and helps organizations save money on expensive cloud computing. However, edge and cloud computing will continue to pair well together. While the cloud enables large-scale computing, the edge offloads localized tasks to use fewer resources.

Read next: Top 6 Edge Computing Companies 2022

Top 7 Data Center Colocation Companies

The post Top 7 Data Center Colocation Companies appeared first on Enterprise Networking Planet.

]]>
Data center colocation facilities are outsourced data centers that a business uses to augment its own computing capacity. Data center colocation is often a core element of a company’s multi-faceted infrastructure strategy to leverage the optimal amount of resources for its computing needs. 

Given the exponentially increasing volumes of data that companies collect and store, the data center colocation market is predicted to grow at a compound annual growth rate of 13.3% between 2021 and 2028.

This guide will help you choose a data center colocation company. Below is a list of the top choices, followed by additional data center colocation resources.

Read more: The Evolution of Data Centers Lies in Digital Transformation

Top 7 Data Center Colocation Companies

The colocation market has seen increasing consolidation within the last couple of years, so expect the vendors below to change in terms of name, leadership, or both.

| Colocation Provider | Data Centers | Best for |
| --- | --- | --- |
| China Telecom | 363 | Mid-sized to large enterprises with a presence in Asia, especially China |
| CoreSite | 27 | Enterprises with a presence in the US |
| CyrusOne | 50+ | Companies looking for specialized colocation services |
| Cyxtera | 60+ | Companies of any size that seek an affordable yet reliable option |
| Digital Realty | 290+ | Larger organizations based in the US that also have international markets |
| Equinix | 220+ | Large companies needing specialized services |
| NTT Global Data Centers | 160+ | Businesses of any size looking for a flexible modular approach to data center colocation services |

China Telecom

Best for: mid- to large-sized enterprises with a presence in Asia, especially China. 

China Telecom is among China’s largest data center providers with data centers around the world, including the Americas and Europe. 

China Telecom provides flexible and customizable data center architecture to fit customer needs. It offers integrated data center services to customers around the world through its vast network of data center facilities (more than 360). China Telecom also provides customized hosting services and global disaster recovery services.

Its integrated data centers are connected to a broad range of private and public data circuits around the world. Twenty of its data centers are Tier IV, and 90+ are Tier II, with diversified network connections on every continent. All Tier IV data centers are outfitted with high-density power and cooling infrastructure.

Customers can rest assured that their data assets and hardware are secure, as China Telecom's security features include anti-virus scans, DDoS protection, firewall management, and intrusion detection. It also supports HIPAA, ISAE 3402, and PCI-DSS compliance, though competitors tend to cover more compliance measures.

China Telecom also takes measures to ensure that all of its data centers are running sustainably with green operational policies. 

Read more: Data Center Sustainability: 5 Steps to a Green Data Center

CoreSite

Best for: enterprises seeking nationwide services in the US.

CoreSite provides secure, reliable, and high-performance data solutions through its 27 US data centers. Each center keeps its customers’ markets connected with more than 32,000 connections in ten major metropolitan areas, including Boston, Chicago, Silicon Valley, and as of recently, Atlanta and Orlando.

Each site in CoreSite’s geographically distributed portfolio is connected through high-count dark fiber, which allows for scalable growth and access among multiple markets at once. Select sites in CoreSite’s portfolio have inter-site connectivity that allows customers to access as many providers as they need in an interconnection-dense facility or carrier hotel.

In areas where CoreSite has a singular data center, it offers common-carrier access to other regional interconnection hubs. In fact, CoreSite has more than 450 domestic and international carriers and access to more than 40 intercontinental cables. This makes it easy and cost-effective for CoreSite customers to link their deployment at a CoreSite data center to one at another data center in the region. 

CoreSite offers top internet exchanges like AMS-IX, DE-CIX, and LINX, to name a few, but it also features its own Any2Exchange for internet peering. Having a variety of internet network connectivity options keeps costs down and connections up for CoreSite customers.

CyrusOne

Best for: companies of any size looking for specialized colocation services in Europe and the Americas.

For companies who want data center colocation without committing to a suite of other services, CyrusOne is worth considering. CyrusOne specializes in colocation with more than 50 data centers across Europe, North America, and South America. It offers highly scalable data center solutions to grow with a business’s data needs, enabling clients to simplify and modernize their data infrastructure for long-term stability and sustainability. 

CyrusOne’s services are highly flexible and scalable. This vendor supports mixed deployment models, including connectivity to private and public clouds, hybrid-cloud, and multi-cloud environments. Customers can:

  • Choose their own managed service providers (MSPs).
  • Tailor their desired redundancy according to rack level and application needs.
  • Flexibly manage their contracts, choosing to ramp up or tamp down power and space over a specific time period.

CyrusOne provides design architectures that support flexible power requirements (2N, N, or both) and rack power densities that range from 250 watts per square foot up to 900 watts per square foot.

This vendor's facilities are designed to comply with rigorous standards, such as ISO 27001, HIPAA, SOC 1, SOC 2, and TRUSTe, to ensure protection of customers' critical data.

In addition to compliance measures, CyrusOne also takes sustainability seriously. In May 2022, it joined the Infrastructure Masons Climate Accord, which is a coalition that strives to reduce the carbon footprint of digital infrastructure.  

Cyxtera

Best for: companies of any size that seek an affordable yet reliable option.

Cyxtera offers colocation solutions that scale up or down to meet the needs of businesses of any size. Its 60+ data centers are carrier- and cloud-neutral, which gives growing organizations more flexibility.

Cyxtera runs data centers in more than 28 different markets across North America, Europe, and Asia. Its leasing options for colocation include:

  • Rack space
  • Cabinets
  • Cages
  • Private suites

Cyxtera’s customizable cage solutions feature multi-layered security and access to multiple network and service providers in Cyxtera’s ecosystem. In addition, the vendor’s secure dedicated locking cabinets provide 2-8 kilowatts of power and can be tailored to meet client needs.

Cyxtera features SmartCabs, which are single-tenant colocation cabinets that are equipped with built-in power, network connectivity, and Cyxtera’s configurable core network fabric—the Digital Exchange.

Read more: Understanding the Value of Enterprise Data Fabrics

With Cyxtera, clients have the option of handling their own server management or choosing from a tiered support subscription model with options for everyday maintenance. 

Cyxtera is the only vendor covered here that offers not only data center colocation and cloud services, but also AI and ML compute as a service. This service enables customers to deploy and provision AI/ML-powered workloads with greater agility and speed.

Digital Realty

Best for: larger organizations based in the US that also have international markets.

Digital Realty is a market leader in the colocation, interconnection, and hybrid cloud infrastructure markets with more than 290 data centers that cover 50 markets across six continents. Most of its data center operations are housed in the US and Europe.  

Digital Realty offers a variety of housing options:

  • Cabinets
  • Cages
  • Private suites

This vendor’s data colocation solutions are designed to secure clients’ mission-critical data and keep customers connected to its ecosystem through a singular, open platform. It boasts a ten-year record of 99.999% uptime—also known as “5 nines availability.”

Smaller organizations should note that Digital Realty does not offer rack lease spacing. It's therefore a better fit for larger organizations that seek flexible network connectivity, a variety of bandwidth options, or support for a hybrid cloud model.

Equinix

Best for: large enterprises seeking specialized hardware and software services.

Equinix is one of the major players in the data center industry that has acquired smaller businesses to strengthen its market share. 

Equinix provides clients with a full portfolio of interconnection and integrated infrastructure services that include:

  • Private suites
  • Customized private cages
  • Cabinets
  • Custom cabinet set-ups

Equinix has over 220 carrier-neutral data centers in 63 metropolitan areas on six continents. Its specialized Equinix Infrastructure Services include:

  • Hardware supply, such as cages, cabinets, cable management equipment, and more.
  • Equinix’s team of knowledgeable professionals for deployment expansion.
  • Fast, robust migration processes that ensure high availability and low risk.

Read more: Data Center Migration: 7 Best Practices

Equinix is quite comparable to Digital Realty, as they both target large enterprises and do not offer leased rack spaces. These characteristics make them both unsuitable for small to medium-sized businesses. Equinix's data center location coverage is somewhat smaller than Digital Realty's, but Equinix extends its services beyond its own data center walls by giving access to its broader ecosystem of partners and providers. Check the exact data center locations of both to determine which vendor better aligns with your business needs.

NTT Global Data Centers

Best for: businesses of any size looking for a flexible modular approach to data center colocation services.

As one of the largest data center vendors in the world and especially in Asia, NTT Global Data Centers serves large enterprises with its expansive network of more than 160 data centers in more than 20 countries. 

NTT offers racks, secure cabinets, custom-built cages and wholesale colocation solutions, private vaults and suites for extra security, and build-to-suit data centers.

As a major contender with Equinix and Digital Realty, this vendor keeps smaller organizations in mind as well with the ability to lease rack space. NTT claims to offer compliant data center services, though the website does not list its certifications. 

NTT has end-to-end capabilities—from data center design to implementation—and scalable services to meet the demands of hyperscale companies. Its data centers operate in Africa, Asia, Australia, Europe, and North America (US only), but not in South America or the Middle East.

Common Features of Data Center Colocation Providers

A company houses its own or leased servers at a provider's physical location, while the provider offers the following:

  • Network connections
  • Internet exchanges
  • Back-up power sources
  • Cooling facilities
  • Physical offices
  • Maintenance and support
  • Multiple data center locations
  • Wholesale colocation
  • Retail colocation
  • Cabinet colocation
  • Cage colocation
  • Private data center suites
  • Hosting
  • Hybrid, cloud-based colocation
  • Tier III and IV level data centers that signal robustness

Read more: Data Center Technology Trends for 2022

Types of Data Center Colocation Facilities

A company can choose from four different kinds of colocation: retail, wholesale, hosting, and hybrid.

Retail colocation 

Retail colocation means that a company shares the data center environment with other customers, leasing a rack, space within a rack, or an entire cabinet.

Leases for this option tend to be short term—around one year—so it’s a good temporary solution for companies who need a carry-over data center until they build their own, for example. 

Retail colocation facilities offer various carrier and connectivity options to cater to the needs of different companies. Though customers bring their own servers to the leased data center space, they relinquish control over the data center design and operation, leaving it up to the provider. 

Wholesale colocation

Wholesale colocation refers to a dedicated data center that is contained within a broader data center colocation facility. One customer leases an entire data center in the facility, usually over a period of three years. 

Wholesale colocation facilities generally offer strong redundancy to keep data centers running in spite of outages, natural disasters, or other disturbances. Customers also have more control in wholesale colocation, as they can bring their own servers and also provide input for the data center design, layout, and management. 

However, wholesale colocation centers are not as carrier-agnostic as the retail colocation option.

Hosting

With this option, the vendor owns and manages the server, but the customer might have some visibility into server management. This makes the hosting option the least attractive option in terms of customer control and flexibility, yet it meets the price point for smaller companies that do not have the upfront costs to secure their own equipment. 

Hybrid cloud-based colocation

These kinds of facilities blend the physical data center with cloud services. All vendors here offer this as an option to meet the diverse infrastructural needs of today’s organizations. 

The vendors here do not specify which kinds of colocation they offer, but most likely offer all of them, and the types can be combined with one another for maximum flexibility.

Benefits of Data Center Colocation

There are plenty of reasons why a company may choose colocation over building its own data center.

Cost: A company saves in two ways by foregoing the costs of building its own data center and having costs fixed in the contract with the provider.

Interconnection: Carrier-neutral colocation centers are like a hub for networks, cloud service providers, and IT service providers, allowing businesses whose servers are housed within the same data center to leverage these connections. 

Maintenance: Since the provider owns the data center, it takes care of cooling, power, and network connections.

Reliability: Outsourced data centers typically have state-of-the-art infrastructure to meet the computing needs of today's businesses, in addition to compliance certification, disaster recovery protocols, power consumption management, low-latency networking, and multi-layered security measures.

Scalability: Since business’s data needs and strategies evolve, colocation providers offer flexible and scalable data center solutions with a variety of cabinet and cage sizes as well as various power configurations.

Security: Data center colocation facilities have strong security measures in place for hardware and software to complement customers' own measures. Providers usually lock racks and employ 24/7 security guards.

Support: Data center colocation providers have knowledgeable staff for round-the-clock support.

Who Are Data Center Colocation Facilities For?

Data center colocation is a great way to free up resources by outsourcing data center operations. 

It is especially financially advantageous for small to medium-sized businesses that have not yet amassed the capital to build their own data center, because they can write off the operational expense of colocation as a tax deduction.

However, if the operational expense of data center colocation exceeds the annual depreciation expense of a capital expenditure, and if the organization has the cash flow to support the investment, on-premises is the better option. This is usually the case for enterprises with bigger budgets.

Yet larger businesses do not typically put all their computing needs in one basket. They usually have the financial and infrastructural flexibility to combine on-prem, cloud, colocation services, and more. 

Data center colocation is really for any company pursuing a diverse infrastructure strategy. 

Larger enterprises, for example, want more servers in more locations. In this case, China Telecom, Digital Realty, Equinix, and NTT are the best bet in providing the coverage that enterprises need. 

Smaller to medium-sized businesses should turn first to CoreSite, CyrusOne, Cyxtera, and even NTT for scalable data center colocation solutions. 

Read next: Best DCIM Software for Managing Data Infrastructure

What is Software Defined Networking?

The post What is Software Defined Networking? appeared first on Enterprise Networking Planet.

]]>
Software-defined networking marks the shift from hardware devices to the use of software to maintain and secure today’s increasingly complex networks effectively, efficiently, and flexibly. 

Companies are quickly moving their data to the cloud, and networks are becoming ever more complex with varied and distributed device ecosystems. This not only makes networks more vulnerable but also more difficult to manage. 

Software-defined networking (SDN) helps network administrators more closely monitor company networks. SDN also complements and supports emerging technologies, such as 5G networks and secure access service edge (SASE), to name a few, keeping company computing environments adaptable to new technology.

The SDN market is expected to continue growing rapidly. In fact, it is forecast to grow from its 2020 market size of $13.7 billion to an estimated total value of $32.7 billion by the year 2025.

What is SDN?

Software-defined networking is a network architecture approach that allows network engineers and administrators to centrally control or program the network and its traffic through software applications. SDN helps companies remain adaptable, allowing network engineers to efficiently orchestrate network services to devices as needed.

SDN is similar to but distinct from SD-WAN. SD-WAN is born out of SDN and indeed applies the same basic concepts to direct network traffic quickly and efficiently. However, SDN has a smaller geographic scope than SD-WAN, which can span a greater geography. Another key difference between the two is that users (network administrators) program an SDN, while the vendor provisions and optimizes the services that SD-WAN delivers.

It may sound paradoxical, but SDNs help companies achieve an integrated network ecosystem while, at the same time, divorcing the control plane from the data plane in the network. This allows software to run independently and in a device- and operating-system-agnostic manner, even at the edges of the network. The software is still accessible to network switches and routers that would otherwise stand behind closed proprietary firmware.

Types of SDNs

There are four different types of SDNs.

API: Controls the flow of data between the control and infrastructure layers through programming interfaces (see the sketch after this list).

Open: Uses open protocols to route communications between virtual and hardware devices.

Overlay: Maps a virtual layer on top of a hardware ecosystem to provide segmented access and bandwidth between devices and data centers.

Hybrid: Bridges traditional and software-defined networking by assigning the best protocol depending on the kind of data traffic.
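
To illustrate the API type, here is a hedged Python sketch of a northbound REST call that asks a controller to install a flow rule. The URL, payload schema, and port are purely illustrative; real controllers such as OpenDaylight or ONOS each define their own APIs.

    import json
    import urllib.request

    # Hypothetical flow rule: send traffic for one host out a specific port.
    flow_rule = {
        "match": {"dst_ip": "10.0.0.5/32"},
        "action": {"output_port": 3},
        "priority": 100,
    }

    request = urllib.request.Request(
        "http://controller.example.local:8181/api/flows",  # invented endpoint
        data=json.dumps(flow_rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(request)  # would push the rule to a live controller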

Who uses SDN?

The utility of SDN varies, depending on business size and type. 

Mainly smaller businesses

Because of the more localized nature of SDNs, primarily small and medium-sized businesses adopt them to simplify network control. Plus, the centralized nature of SDNs keeps operational costs down for SMBs. 

However, behemoths Facebook, Google, and Microsoft, as well as universities, also use SDN for its open-source elements, which help keep costs low. SDNs help cloud service providers deliver optimal service to the edge. They also help universities centrally manage automation, service, and security across both Wi-Fi and Ethernet networks on campus.

Fintech companies

Companies operating in the tech and financial services sectors tend to capitalize on the advantages of SDNs. 

SDNs are attractive to tech companies—such as cloud service providers—for a number of reasons. Among other things, SDNs help these companies:

  • Streamline network provisioning.
  • Optimize network performance.
  • Automate manual tasks to expedite customer onboarding.
  • Quickly add new services or expand existing ones.

Companies in the financial services industry use SDNs to keep confidential data on the network secure. SDNs provide a virtual layer of firewalls to fortify and protect devices on the network. Moreover, SDNs enable users to quickly respond to rapid changes in financial markets by moving funds, making quick trades, and executing other time-sensitive financial transactions.

Industrial/manufacturing environments

Manufacturing facilities often run 24/7, necessitating an SDN to maintain network resilience and efficiency. With a distributed SDN, companies with geographically dispersed manufacturing facilities can remotely and efficiently control different parts of the manufacturing process, for instance, lighting in one facility and robotics in another.

Additionally, SDNs can be combined with other technologies that mutually enhance one another. For instance, SDN combined with network functions virtualization (NFV) achieves the security, performance, and reliability requirements of some industrial settings.

Parts of the SDN and How They Work Together

SDN is made up of three layers: application, control, and infrastructure. 

Application level

The application level houses the network’s apps. The apps send requests to the network’s control layer. The SDN uses APIs to interact with and manage apps through an application-run controller. 

Control level

The control layer (or control plane) contains the controller and runs every service on all devices within the network from one central location. This layer is therefore often aptly referred to as the brain of the network. 

The control layer acts as the intermediary between apps running on devices and the underlying switch infrastructure. It directs requests to the infrastructure layer, establishes routes, and assigns time and frequency slots.

The controller comes in three different types, according to the architecture of the SDN: centralized, distributed, and hybrid. The right controller for any given computing environment depends on the system size as well as the desired level of security, resilience, and scalability. The distributed controller is a good compromise between security and system complexity.

Infrastructure level

The infrastructure layer – also called the data plane or forwarding plane – is the bedrock of the network. It contains physical devices, known as data plane devices, namely routers and switches. This layer regulates and segments traffic within and across networks.

The switches move traffic across the network, directing it where it's most needed at any given moment and determining how it gets delivered. They are like the muscles of the body, acting on messages sent from the brain (the control layer). The infrastructure layer needs to support various types of devices and operating systems so that apps work properly.
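
A toy Python model, with all names invented, captures the division of labor across these layers: the controller computes where traffic should go and programs every switch's flow table, while the switches merely look up and forward.

    class Switch:
        """Data plane: forwards by flow-table lookup; computes nothing itself."""
        def __init__(self, name):
            self.name, self.flow_table = name, {}

        def forward(self, dst):
            port = self.flow_table.get(dst, "controller")  # table miss -> punt
            print(f"{self.name}: packet for {dst} -> {port}")

    class Controller:
        """Control plane: central policy that programs every switch."""
        def __init__(self, switches):
            self.switches = switches

        def install_route(self, dst, port_by_switch):
            for switch in self.switches:
                switch.flow_table[dst] = port_by_switch[switch.name]

    s1, s2 = Switch("s1"), Switch("s2")
    Controller([s1, s2]).install_route("10.0.0.5", {"s1": "port-3", "s2": "port-1"})
    s1.forward("10.0.0.5")  # s1: packet for 10.0.0.5 -> port-3
    s2.forward("10.0.0.9")  # unknown flow punts to the controller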

Benefits and Disadvantages of SDN

Benefits of SDN

Agility

SDNs render networks more flexible and scalable to adapt to the changing needs of today’s organizations by, for example, rerouting traffic to reduce downtime and prevent outages.

Easy provisioning

Since network engineers, rather than SDN vendors, program SDNs, on-demand configuration, customization, service expansion, and service additions are all possible.

Automation

SDNs automate load balancing and other repetitive manual tasks, and they automatically deploy, update, and fix apps. This supports optimal performance and increases end-user satisfaction.

Simplicity

By consolidating services within one common infrastructure, SDNs streamline management of hardware, software, and devices and give administrators greater network control and visibility.

Cost

SDNs lower operational costs by reducing hardware (in hybrid adoptions) or eliminating it (in pure-play adoptions).

Security

The visibility and control SDNs provide boost network security by allowing network engineers to monitor the network and intervene in network activity to prevent a cyber attack. Distributed and hybrid SDN controllers, in particular, segment networks and their respective traffic and classify them by level of confidentiality. This way, a hack stays contained to the segment in which it occurred and cannot spread to other network segments.

Disadvantages of SDN

Vulnerability to a network-wide attack

For starters, while central control is a major advantage, it's also the SDN's Achilles' heel. Since the controller is the central point through which network traffic is managed, all it takes is a cyber attack on the controller to bring the entire system down. A distributed or hybrid controller, however, mitigates widespread data loss or breach by confining an attack to a smaller part of the network.

Limited capacity for complexity

Another shortfall related to centralized control is that it opens the door to network administrator error when operating several workloads simultaneously. However, new solutions address this shortcoming (see the Trends section below).

Potential for inefficiency

One final disadvantage of centralized control is that the volume of traffic on large networks can overburden the controller, leading to bottlenecks and overall inefficiency. 

Also read: 6 Enterprise Networking Security Trends

Trends and the Future of SDN

SDN is here to stay, as it has given rise to and supports other technologies that today’s enterprise networks need.

SDN and container management

Container management software such as Kubernetes assists network administrators with managing several workloads across the network at the same time. It enables them to build and configure apps, infrastructure, and services, each with their own specific requirements, in support of overarching business objectives.
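
For a concrete taste of that management surface, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes); it assumes a cluster is already reachable through an existing kubeconfig file.

    from kubernetes import client, config

    # Load credentials and the cluster address from ~/.kube/config.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # List every pod the cluster is running, across all namespaces.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)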

SDN and 5G

SDN, particularly when paired with network functions virtualization (NFV), provides the necessary support to launch, power, and optimize 5G infrastructure.

SDN and SASE

Secure access service edge (SASE) is a comprehensive collection of security features, including SD-WAN and zero trust network access, that enhances the security of SDN and strengthens a network's security posture.

Segment routing

Segment routing is one of several technologies that have emerged from SDN. It adds a layer of security to the network that protects against loop attacks and DoS attacks and, combined with a software-defined perimeter, distinguishes between trusted and malicious devices that try to connect to the network.

As a networking approach that programs networks through software rather than hardware, SDN has become a cornerstone of enterprises' resilient, quickly adaptable network strategies. SDN has also evolved to include a range of types, enabling businesses of all sizes to take advantage of its agility, its simplicity, and its support for newer technologies, such as 5G.

Read next: Top Enterprise Networking Companies 2022

The post What is Software Defined Networking? appeared first on Enterprise Networking Planet.

]]>
Best Cloud Service Providers & Platforms https://www.enterprisenetworkingplanet.com/management/cloud-service-providers-platforms/ Mon, 16 May 2022 20:46:26 +0000 https://www.enterprisenetworkingplanet.com/?p=22510 Cloud service providers have become increasingly popular, as evidenced by the global market’s explosive growth, with an estimated value of $386.97 billion USD in 2021 and an expected CAGR growth rate of more than 15% between 2022 and 2030. With many vendors and cloud platform services across dozens of categories, it’s become especially difficult for businesses […]

The post Best Cloud Service Providers & Platforms appeared first on Enterprise Networking Planet.

]]>
Cloud service providers have become increasingly popular, as evidenced by the global market's explosive growth, with an estimated value of $386.97 billion USD in 2021 and an expected compound annual growth rate (CAGR) of more than 15% between 2022 and 2030.

With many vendors and cloud platform services across dozens of categories, it’s become especially difficult for businesses to choose the right cloud service provider. This guide will not only explain what cloud service providers are, but will also cover top vendors, benefits of the cloud, and more.

Top Cloud Service Providers

We’ve compiled a list of the top cloud service vendors, listed in alphabetical order, with information regarding their key features, pros, cons, and pricing.

Alibaba

Alibaba’s cloud services cache, sync, backup, and restore data in the cloud.

Alibaba Cloud is the largest China-based cloud provider, offering a wide range of products and services in more than 200 countries around the world.

Key Features

  • Analytics
  • Application services
  • Container services
  • CDN
  • Data storage, backup, and recovery
  • Database services
  • Domains and website hosting
  • IaaS
  • Internet of Things (IoT)
  • Machine learning
  • Media services
  • Monitoring services
  • PaaS
  • Security (DDoS and SSL protection)

Pros

  • Broad array of features and services
  • Vast knowledge and documentation base
  • Reliability
  • Multilingual support

Cons

  • Interface learning curve for non-technical users
  • Some necessary coding knowledge
  • Lack of flexibility for running some services and applications

Pricing

Alibaba Cloud offers a high-value free trial that gives customers adequate time and flexibility to try out its features. Upon adopting Alibaba Cloud, the available services are priced on a pay-as-you-go model, with many tools even offered free of charge. As such, Alibaba Cloud’s modular flexibility makes it a great solution for businesses of various sizes.

Amazon Web Services

Use snapshots in AWS to create a backup of critical workloads, such as a large database or file systems that span multiple Elastic Block Store (EBS) volumes.

A consistent leader in Gartner's Magic Quadrant, Amazon Web Services (AWS) dominates the U.S. cloud market. Small businesses and enterprises alike use AWS as their cloud service provider, with more than 200 IaaS, PaaS, and SaaS cloud services available globally.

Security is a main priority for AWS, illustrated by its more than 40 compliance certifications. And to meet the needs of customers who aren't ready to do away with legacy systems or on-premises data centers, AWS has added VMware Cloud as a virtualization solution that bridges traditional and cloud data management models.

Key Features

  • Analytics
  • Blockchain
  • Container services
  • Data storage, backup, and recovery via S3
  • Database services via RDS, Aurora, and DynamoDB
  • E-commerce
  • Edge computing
  • IaaS
  • Internet of Things (IoT)
  • Penetration testing
  • ML/AI via SageMaker
  • Monitoring services
  • PaaS
  • Serverless computing via Lambda

Pros

  • Centralized billing and management
  • Easy implementation
  • Flexible capacity management
  • Global availability
  • Breadth of features
  • Knowledgeable customer support
  • Extensive APIs, tools, and resources
  • Integrations with broad network of partners
  • High scalability and flexibility
  • Provides centralized and flexible billing for numerous cloud services

Cons

  • Complicated pricing structure
  • Potential tool creep can lead to inflated costs
  • No trial period before committing to AWS
  • Lack of compatibility with on-premises cloud environments

Pricing

AWS takes a pay-as-you-go approach, so you only pay for the services you use without getting locked into a contract or licensing period. If you cancel a service, AWS does not charge a termination fee. Though AWS offers a pricing calculator, it requires users to enter detailed information and, at best, yields a rough estimate of what the services they need would cost.

Azure

Dashboards in the Azure portal enable users to get a clear overview of cloud resources.

Microsoft Azure provides hundreds of services, including cloud-based versions of its legacy products and services, such as Office 365 and Power BI. Customers with traditional on-premises technology management appreciate Azure’s migration flexibility and support with Azure Stack, for example.

Azure also offers 90 compliance certifications designed for governmental, regional, global, and industry-specific uses.

Key Features

  • Analytics
  • Blockchain
  • Container services
  • CDN
  • Data storage, backup, and recovery
  • Database services via Azure SQL, Azure Cosmos DB, MariaDB, MySQL, PostgreSQL
  • Microsoft Cognitive Services
  • Internet of Things (IoT)
  • Microsoft Azure Stack
  • Media services
  • Monitoring services
  • PaaS
  • Security (DDoS and SSL protection)

Pros

  • Frequent platform updates
  • Global availability
  • Easy to set up and manage
  • Scalability
  • Consistency across cloud environments
  • Strong hybrid cloud services
  • Breadth of cloud services, applications, and enterprise SaaS tools
  • Integration between PaaS and Azure’s public cloud
  • User-friendly interface

Cons

  • Initial learning curve
  • Complicated pricing and licensing plans
  • Lack of customizability
  • Occasionally inadequate customer support, according to some users

Pricing

As with AWS, Microsoft Azure's pricing and licensing options can be challenging to navigate. Microsoft offers different kinds of promotional discounts, making it hard to know which one applies, and in some cases it's hard to get an accurate idea of pricing upfront.

Google Cloud Platform

Create custom dashboards in the Google Cloud console or the Cloud Monitoring API to view alerts, log entries, and other metrics.

Google Cloud Platform (GCP) is a strong and popular contender among enterprises, available globally. Additionally, GCP is building out its multicloud solutions and data center locations to accommodate smaller businesses and gain market share.

Though it offers services comparable to AWS and Azure, users have been particularly impressed with GCP's machine learning functionality and robust analytical tools.

Key Features

  • Analytics
  • Anthos control plane
  • Google Kubernetes Engine
  • CDN
  • Data storage, backup, and recovery
  • Database services via BigQuery
  • Domains and website hosting
  • IaaS
  • Internet of Things (IoT)
  • Identity access control
  • Serverless computing
  • Monitoring services
  • PaaS
  • Security (DDoS and SSL protection)

Pros

  • Flexible contracts and good discounts
  • Responsive customer support
  • Extensive documentation
  • Scalability
  • Global infrastructure
  • Array of cloud solutions and products
  • Breadth of APIs and developer tools
  • Environmentally sustainable (low carbon) infrastructure options

Cons

  • Difficult migrations and integrations
  • Can be pricey
  • Confusing interface, according to user reviews

Pricing

GCP offers flexible contracts and a variety of discounts to attract prospective clients from competing cloud vendors. Free trials are available to those who want to give the platform a test drive.

HPE GreenLake

With HPE GreenLake’s dashboard, it’s easy to keep track of financials, capacity planning, compliance, storage capacity, and more.

As one of the pioneers of edge computing, the Hewlett Packard Enterprise (HPE) GreenLake edge-to-cloud platform taps into a niche market offering that will soon become more mainstream with the rise of 5G use. HPE GreenLake blends cloud applications and infrastructure tools to accommodate each business’s unique needs.

Key Features

  • Container services
  • Application services
  • Data storage, backup, and recovery
  • CDN
  • Database services (open source)
  • Edge infrastructure development
  • Aruba networking tool
  • IoT
  • Security via Silicon Root of Trust
  • ML/AI

Pros

  • Easy provisioning of private cloud environment
  • Out-of-the-box and customizable cloud environment configurations
  • Flexible framework that supports configurations of your choosing
  • Pay-as-you-go pricing model
  • Scalability

Cons

  • Potential vendor lock-in/dependence
  • Difficult API integrations
  • Documentation and training resources could be better

Pricing

HPE GreenLake pricing is split into three tiers that offer progressively more features as you upgrade. The website walks you through acquiring a quote by defining your workload, choosing configurations, and finalizing specs.

However, it can be difficult to estimate this information upfront. Plus, the last step is to provide an email address to get your quote, so there is no transparent way to get a feel for price without submitting your contact information.

IBM Cloud

IBM Cloud Direct Link Connect achieves multi-cloud connectivity in a single environment.

Most popular among mid-sized businesses and enterprises, IBM’s full-stack cloud platform covers public, private, and hybrid environments and offers more than 170 products and cloud services.

Key Features

  • Analytics via IBM Bluemix cloud
  • Automation via Cloud Pak
  • Blockchain
  • Container services
  • Database services for SQL and NoSQL
  • IBM Cloud Satellite
  • IaaS
  • Internet of Things (IoT)
  • ML/AI via IBM Watson tools
  • Quantum computing
  • Monitoring services
  • PaaS
  • Security via IBM Cloud Key Protect

Pros

  • Assisted implementation from dedicated IBM support team
  • Infrastructural flexibility
  • Broad network of partners and APIs
  • Offers more than 170 products that run in the cloud
  • Industry-specific applications and services
  • Breadth of security features

Cons

  • Non-intuitive user interface
  • Complex licensing structure
  • High cost

Pricing

IBM offers a product tier that gives users free access to more than 40 cloud services and discounted rates on more than 350 products and services. From there, IBM provides three pricing models: pay-as-you-go with committed use, reserved instances for time-based contracts, or a subscription for platform-wide discounts.

Oracle

Oracle Cloud lets users provision and manage instances.

Oracle Cloud offers integrated cloud services that help companies build, deploy, and manage workloads in the cloud or on-premises. Its most notable feature is its database services, as Oracle is well known for its data management software.

Key Features

  • Analytics
  • Application services
  • Container services
  • IaaS
  • Data storage, backup, and recovery
  • Database services
  • Integration
  • Internet of Things (IoT)
  • Migration tools
  • Media services
  • Monitoring services
  • PaaS
  • Security (encryption, firewalls)

Pros

  • Vertical-specific products for ERP, CRM, and more
  • Works well if your company already uses Oracle products
  • Requires little to no coding knowledge
  • User-friendly interface
  • Global reach
  • Scalable and flexible for various workloads
  • Free training and certification programs

Cons

  • Difficult setup
  • Relatively high cost
  • Rigid contracts

Pricing

Oracle Cloud outlines a transparent pricing structure, but the information may overwhelm users looking for the right product tier for their company. Users report that Oracle Cloud’s cost is relatively high compared to competitors, but it may be worth the investment if your company already uses and is familiar with Oracle products.

VMware

Easily manage clusters with VMware's vSphere Client.

Owned and backed by Dell, VMware offers cloud solutions that span two main functions: bridging on-premises data centers and the cloud, and serving as a cloud service provider.

VMware has leveraged this bridge functionality in its partnerships with other major cloud service providers, including Alibaba, AWS, Azure, Google, and IBM. As such, VMware doesn't so much compete with other vendors as work with them in mutually beneficial business relationships.

At the same time, VMware is not dependent on the other big players for its success. It’s a powerful cloud service provider in its own right. VMware is positioned to benefit all around by covering on-premises and virtualized data centers, public and private clouds, and everything in between.

Key Features

  • Application management and services
  • Cloud disaster recovery
  • Container services via Tanzu
  • Developer tools
  • Data storage, backup, and recovery
  • Edge computing
  • Migration services
  • Machine learning
  • VMware Cloud Foundation
  • Monitoring services
  • vRealize Suite Cloud Management
  • Security via CloudHealth

Pros

  • Consistency across cloud environments
  • Enables integration between public and on-premises infrastructure
  • Extensive integration with AWS cloud services and other common apps
  • Industry-specific solutions
  • User-friendly interface
  • Scalable and flexible
  • Broad partner network

Cons

  • Relatively high cost
  • Availability of third-party resources is sometimes limited
  • Can’t virtualize all types of workloads
  • In some cases, a direct migration to a public cloud may prove simpler and less expensive

Pricing

To afford customers flexibility, VMware outlines a detailed licensing structure that depends on the average maximum number of virtual machines your company manages.

What is a Cloud Service Provider?

A cloud service provider is a third-party vendor that offers a gamut of Internet-based cloud services, including but not limited to:

  • App/website hosting
  • Content delivery network (CDN)
  • Data storage
  • Infrastructure as a service (IaaS) maintenance and security
  • Machine learning (ML) and artificial intelligence (AI) tools
  • Platform as a service (PaaS)
  • Software as a service (SaaS)

Most companies today have moved their computing and data infrastructure to cloud models for flexibility, performance, and speed to meet the growing digital demand of their business and customers.

How to Choose a Cloud Service Provider

With hundreds of cloud service providers to choose from, it’s difficult to know which one is right for your business needs. Be sure to consider the following factors:

Cost of implementation

In addition to financial cost, the time and resources necessary to implement a new cloud infrastructure should also be considered. As such, it may be better to choose a vendor that offers the flexibility to scale up or down with their tools, products, and services in order to limit costs and avoid tool creep. 

Moreover, be sure to ask the vendors about the level of support they provide as you migrate your data and assets onto their infrastructure. All of these considerations are especially pertinent to those moving from on-premises data centers or private clouds to a public, hybrid, or multicloud environment because it’s an even bigger, and thus riskier, shift.

Compatibility

If you’re switching to a hybrid or multicloud environment from either an on-premises infrastructure or a private cloud, make sure your software, hardware, and all other IT assets will function seamlessly across your cloud environments.

To that end, check out the vendor’s application library or partner network for embedded apps and add-on integrations. If it includes many that you already use, that speaks to that cloud service’s ability to fit with your current software and tools.

Read more: Fighting API Sprawl in the Modern Cloud Maul

Cloud security features

Outsourcing your cloud to a provider means relinquishing some control over cloud security posture management (CSPM).

In your shared responsibility with the cloud service provider of choice, find out whether they offer their own cloud security or rely on a third-party cloud security service. If it’s the latter, researching third-party affiliates only prolongs the buying process and necessitates further trust and outsourcing of your company’s security.

Additionally, make sure the cloud service provider offers transparent and user-friendly dashboards and analytics as well as options for backup and disaster recovery to ensure security and resilience. Depending on your industry, check on the cloud service provider’s compliance certifications as well.

Benefits of Cloud Computing

By migrating your data storage and infrastructure to the cloud, your company reaps several benefits: easier collaboration, lower costs, increased security, resilience, efficiency, and more.

  • Collaboration: With the ubiquity of work-from-home and hybrid work models, cloud computing is a must-have in order to keep your business adaptable and your workforce connected.
  • Cost Savings: Cloud computing yields lower operations costs because you no longer need to own, manage, and maintain hardware, on-premises servers, databases, and other assets.
  • Security: Cloud providers invest in cybersecurity and include automated security functions in their cloud environments to shoulder some security responsibility.
  • Resilience: When the inevitable data breach occurs or when a server is down, cloud providers keep managed cloud environments running optimally by backing up data. This is a major advantage over on-premises data centers that are not as quick to bounce back when experiencing an outage.
  • Efficiency: Cloud service providers manage servers and data centers for you, which frees up in-house IT staff to focus on more pressing concerns.
  • Scalability: Cloud computing is scalable, offering limitless capacity for users, resources, and workloads. Cloud service providers allow for scaling up and down as needed, which is especially beneficial to businesses operating in fluctuating or cyclical industries.
  • Performance: Cloud service providers often have data centers around the world to deliver fast, reliable service. They also maintain and update cloud software, databases, and other services automatically.

Private Cloud vs. Public Cloud

Some vendors, like Azure, HPE GreenLake, IBM, and VMware, provide both private and public cloud services.

However, it’s not always easy to tell which type of cloud infrastructure is best for your business. Often, it comes down to the expertise of your staff, the size of your budget, and the level of security your industry demands.

In a private cloud, data is hosted on a company's own servers or intranet. Companies that own a private cloud are responsible for managing, protecting, and updating servers, hardware, software, and other IT assets with in-house IT staff.

This option affords companies more control and security over their clouds. However, a private cloud is less efficient and more costly because of the amount of resources needed to maintain it.

Read more: Top 8 Data Migration Practices & Strategies

As a result, a public cloud is often the more popular option, supplying companies with infrastructure that the provider maintains and updates. Public clouds ensure business resilience and continuity, enable faster deployment, and are more cost efficient.

However, they might not provide the level of security and control necessary for industries that must adhere to more stringent data privacy regulations.

Read more: What Are Sovereign Clouds?

Getting a Clearer Picture in a Crowded Cloud Market

Cloud service providers relieve companies of the burden of operating and maintaining infrastructure on their own by applying their own resources and expertise to manage cloud infrastructures, platforms, and applications. This benefits your company in terms of efficiency, cost, security, and more.

However, the cloud service provider market is as diverse as the cloud models and configurations that companies need. Start with the top providers listed here to get a sense of what you need out of your data infrastructure and which company checks off most of your boxes.

Read next: Best Enterprise Cloud Migration Tools & Services 2022

The post Best Cloud Service Providers & Platforms appeared first on Enterprise Networking Planet.

]]>
What is Digital Identity? https://www.enterprisenetworkingplanet.com/management/what-is-digital-identity/ Mon, 04 Apr 2022 19:09:42 +0000 https://www.enterprisenetworkingplanet.com/?p=22395 In the B2B environment today, 80% of seller-buyer interactions will happen digitally by 2025. B2B buyers’ increasing adoption of a digital-first approach to buying makes their internet activities that much more interesting to sellers. In order to properly market to current and prospective customers, sellers track, measure, and act upon buyers’ internet behaviors as much […]

The post What is Digital Identity? appeared first on Enterprise Networking Planet.

]]>
In the B2B environment, 80% of seller-buyer interactions are expected to happen digitally by 2025. B2B buyers' increasing adoption of a digital-first approach to buying makes their internet activities that much more interesting to sellers. In order to properly market to current and prospective customers, sellers track, measure, and act upon buyers' internet behaviors as much as possible.

This presents a challenge to both sides of the B2B buying journey. A business’s growing online presence—the content their employees create and consume, the purchases and searches they make, and other online activities—creates a digital identity that is susceptible to security breaches, such as a brute force attack.

What Constitutes Digital Identity? 

Digital identity is made up of the traces of a person’s, business’s, or entity’s digital activities. Digital activities include those performed online—searches, transactions, creating accounts, entering usernames and passwords, and any other information that identifies an entity’s past or current internet use patterns. 

Digital identity differs from physical identity that can be verified in person by checking a passport, driver’s license, ID, or badge. Instead, digital identity is linked to various digital identifiers, including but not limited to:

  • Email addresses
  • User names
  • Passwords
  • Searches
  • Domains

Why Does Digital Identity Matter for Enterprises?

Digital identity isn't only for individual consumers in the B2C market; digital identities also apply to enterprises in the B2B environment.

Every time buyers or employees at corporations conduct searches, post or engage with content, or purchase products or services, the corporation leaves digital traces that make up the enterprise’s digital identity. 

Digital identity is a set of behaviors and keystrokes that opens a business up to digital identity theft. Because malicious actors can monitor behaviors—such as purchasing patterns and keystrokes—managing and securing a digital identity poses a formidable challenge for companies on both ends of a B2B transaction.

The onus of securing the customer data that makes up a business's digital identity falls to both sellers and buyers. Sellers need to keep customer data secure; otherwise, they risk losing customers and gaining a bad reputation for mishandling customer data. Investing in a secure CRM is therefore worthwhile. It's also imperative to prove to buyers that their data is secure when they do business with you as the seller.

Corporate buyers, however, also have a responsibility to perform due diligence when making corporate purchases in order to protect their digital identity. This involves, for instance, researching potential vendors and evaluating their trustworthiness through their website and interactions with your company. Users should also be required to create complex passwords and prompted to update them regularly.  

Best Practices for Protecting Digital Identity

A company should take a combination of systemic and behavioral measures to secure their digital identity.

Use a secure browser

Work with a secure browser, such as Avast Secure Browser or DuckDuckGo, that keeps your search activity and data safe from third parties. 

Require strong passwords

Bad actors have sophisticated methods for cracking users’ passwords. It’s therefore critical to enforce strong password practices. 

For instance, require users to create passwords that are at least 8-10 characters long and that contain a mix of numbers, uppercase and lowercase letters, and symbols. For added security, systematically prompt users to update their passwords at regular intervals. 
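
As a sketch of what enforcing such a policy might look like, the Python function below checks a candidate password against the criteria just described; the 10-character threshold and the exact character classes are illustrative choices, not a standard.

    import re

    def is_strong(password: str) -> bool:
        # Policy from above: minimum length plus a mix of numbers,
        # uppercase and lowercase letters, and symbols.
        checks = [
            len(password) >= 10,
            re.search(r"[0-9]", password),
            re.search(r"[a-z]", password),
            re.search(r"[A-Z]", password),
            re.search(r"[^A-Za-z0-9]", password),  # any symbol
        ]
        return all(checks)

    print(is_strong("correct-Horse7battery"))  # True
    print(is_strong("password123"))            # False: no uppercase or symbol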

Another way to protect passwords from attackers is to require different passwords for different apps and sites. To help users keep track of their passwords, adopt a secure password manager, such as 1Password or Bitwarden.

Enforce multi-factor authentication

Given that brute force attacks can crack passwords, two-factor or multi-factor authentication presents an additional shield for digital identity. Multi-factor authentication comes in many forms.

Temporary passcode

To ensure that the user is indeed authorized to access the enterprise’s network, two-factor or multi-factor authentication requires users to enter a passcode sent by phone call or SMS to an affiliated phone number.
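
A related mechanism is the time-based one-time password (TOTP) used by authenticator apps. The sketch below implements the core of RFC 6238 with Python's standard library; the base32 secret is a made-up example.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        # Derive the moving counter from the current 30-second window.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation per RFC 4226.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # e.g., "492039" (changes every 30 seconds)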

Knowledge-based authentication

Alternatively, knowledge-based authentication requires users to answer pre-set security questions that only they know the answers to. Such questions target personal trivia, such as the name of one's grade school or first pet.

Secure shell authentication

Besides authenticating users through 2FA, MFA, or knowledge-based authentication, some IAM platforms verify user access through SSH (secure shell) key management. SSH encrypts session activity within an app or at a website, as well as passwords, creating a shield around the activities that make up a company's digital identity.
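
As an illustration of key-based SSH authentication, here is a minimal sketch using the third-party paramiko library (pip install paramiko); the hostname, account, and key path are placeholders.

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()  # trust only hosts already known
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    client.connect(
        "gateway.example.com",
        username="svc-account",
        key_filename="/home/svc-account/.ssh/id_ed25519",  # private key, no password sent
    )
    _, stdout, _ = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()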

Biometrics

Another form of multi-factor authentication checks a user’s physical identity by reading biometrics through facial recognition, retinal scanning, or fingerprint scanning. 

Adopt microsegmented access

In a B2B purchasing environment, it’s critical to make sure that only those who are involved in the buying decision have access to financial information, such as credit card numbers and corporate account numbers. Microsegmentation grants or restricts access to certain applications or sensitive data based on roles or identities within the organization. 

For example, the 6-10 people in charge of software purchase and implementation will have access to subscriptions, past purchases, and credit card information. This access will be based either on their names or their job titles.
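
The sketch below reduces that idea to a role-based access check in Python; the roles, resources, and permission sets are invented for illustration.

    # Map each role to the resources it may touch; everything else is denied.
    ROLE_PERMISSIONS = {
        "procurement": {"subscriptions", "purchase_history", "payment_methods"},
        "engineering": {"subscriptions"},
        "support": set(),
    }

    def can_access(role: str, resource: str) -> bool:
        return resource in ROLE_PERMISSIONS.get(role, set())

    print(can_access("procurement", "payment_methods"))  # True
    print(can_access("engineering", "payment_methods"))  # False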

How Well-Protected is Your Organization’s Digital Identity?

Assume that any actions that shape your company’s digital identity—that is, activities that employees perform online through company accounts—are visible and vulnerable to bad actors. Employ a combination of best practices to secure your digital identity.  


Read next: Best IAM Tools & Solutions 2022: Identity Access Management Software

The post What is Digital Identity? appeared first on Enterprise Networking Planet.

]]>
Hiring Crunch in Cybersecurity & What Your Company Can Do About It https://www.enterprisenetworkingplanet.com/management/cybersecurity-hiring-trends/ Wed, 23 Mar 2022 19:56:52 +0000 https://www.enterprisenetworkingplanet.com/?p=22354 Cybersecurity hiring is facing a skilled labor shortage as companies undergo digital transformation and need more cybersecurity professionals than the current labor market can provide. Over the past two years, companies’ accelerated digital transformation and the urgency to address cyber crimes and data breaches has outpaced the availability of skilled workers. The Great Resignation and […]

The post Hiring Crunch in Cybersecurity & What Your Company Can Do About It appeared first on Enterprise Networking Planet.

]]>
Cybersecurity hiring is facing a skilled labor shortage as companies undergo digital transformation and need more cybersecurity professionals than the current labor market can provide.

Over the past two years, companies' accelerated digital transformation and the urgency to address cyber crimes and data breaches have outpaced the availability of skilled workers. The Great Resignation and the US government's issuance of fewer work visas to foreigners since March 2020 have only exacerbated the labor gap.

Insufficient compensation, combined with stress and burnout, has pushed more cybersecurity professionals to leave or switch jobs. In fact, 38% of cybersecurity professionals surveyed partially attribute the cybersecurity talent shortage to relatively low salary offerings that make it difficult to attract, recruit, and retain qualified candidates.

Cybersecurity Hiring Crunch

The number of unfilled cybersecurity positions has grown tremendously from 1 million in 2013 to 3.5 million in 2021, with Texas and California leading the way with the most cybersecurity openings. According to a LinkedIn report, cybersecurity jobs account for 13% of all IT jobs, and the LinkedIn platform itself currently has more than 59,000 cybersecurity job postings in the US. 

The most sought-after cybersecurity professionals include:

  • Cybersecurity analyst
  • Cybersecurity consultant
  • Cybersecurity manager
  • Cybersecurity specialist
  • Network engineer
  • Penetration & vulnerability tester
  • Software developer
  • Systems administrator
  • Systems engineer

Effective Cybersecurity Hiring Tactics

Here are some concrete actions that your company can take or invest in to start attracting more qualified cybersecurity professionals.

Compensate fairly, even generously

Cybersecurity is worth the investment, so a company must compensate its cybersecurity professionals accordingly. The average annual US salary for cybersecurity professionals is $100,000, with Lakes, AK; San Francisco, CA; and Santa Clara, CA as the top-paying cities. To attract top talent, research industry compensation statistics for your region. This will give you an idea of what competitors are offering and whether you can meet or exceed their numbers.

Align with HR on recruiting

Just under one-third of professionals surveyed said that cybersecurity has a fair or poor relationship with human resources (HR). This would explain why nearly one-third of professionals surveyed think that HR is misguided and ill-informed in its search for qualified cybersecurity candidates. HR and cybersecurity teams don’t seem to be an intuitive pairing, but they must work together to establish hiring practices that meet cybersecurity needs as well as business goals. 

Re-think your current job postings

A quarter of survey respondents found their employer’s cybersecurity job postings to be unrealistic, demanding too many certifications, years of experience, and other specific technical skills. To broaden your search for talent, carefully craft job postings rather than using outdated templates that you may have used in the past. 

The current state of the job market requires re-assessment of must-haves versus nice-to-haves, with several criteria falling to the latter. Be open to a variety of experience levels and qualifications, and make certifications or specific technical skills bonuses rather than requirements. NIST’s NICE Framework is a helpful resource to consult when determining the appropriate skills, tasks, and knowledge needed to perform certain types of cybersecurity work. 

If your company’s needs nevertheless require a specialized set of competencies, ensure that the salary is enticing.  

Invest in ongoing employee training

Being supported in one’s role is arguably a main factor in determining employee retention. Of 489 cybersecurity professionals surveyed, 21% did not complete the typical 40 hours of training annually because their companies did not pay for it. Hiring a cybersecurity professional is not a one-and-done task. It’s the company’s responsibility to subsidize ongoing professional training and development to enable cybersecurity professionals to do their jobs properly.

Deploy unconventional recruiting tactics

To develop the next generation of cybersecurity professionals, reach out to institutions of higher education near your company's location to establish a mutually beneficial relationship. Microsoft, for example, launched a national campaign to help place a quarter million community college graduates into cybersecurity roles by 2025 in an effort to close the talent gap. Setting up a similar placement or internship program creates a sustainable intake of talent into your company.

Diversify the talent pool

Systemic inequities propagate repetitive hiring patterns that lead to a fairly homogeneous, limited talent pool. According to demographic statistics, cybersecurity analysts, for example, are:

  • Predominantly male (71%)
  • College educated (61% have a Bachelor’s degree)
  • 42 years old on average
  • Mostly white (73%)

Plugging in any of the other job titles listed in the earlier section generates similar demographics across the board. HR and cybersecurity teams can and should come together on creative recruiting strategies to diversify and expand the talent pool. IBM, for instance, is partnering with historically black colleges and universities (HBCUs) to train students and prepare them for in-demand tech jobs. 

Other largely untapped demographics looking for work include women returning to the workforce and older adults. Major companies are spearheading initiatives to empower these groups, which remain largely excluded from recruiters' attention.

Within the past year, Google, for instance, has announced at least two initiatives to train older, low-income adults as well as formerly incarcerated adults on digital skills. Cloudflare has been offering a returnship program for women re-entering the workforce since 2017.

Read next: Best Cybersecurity Certifications 2022

The post Hiring Crunch in Cybersecurity & What Your Company Can Do About It appeared first on Enterprise Networking Planet.

]]>
Software Bill of Materials (SBOM) Pros & Cons https://www.enterprisenetworkingplanet.com/security/software-bill-of-materials-sbom-pros-cons/ Wed, 23 Mar 2022 18:36:25 +0000 https://www.enterprisenetworkingplanet.com/?p=22352 As cyber attacks become more sophisticated and happen more frequently, SBOMs or software bills of materials have become a more common practice across the software supply chain to trace vulnerabilities with as much accuracy as possible. Software bills of materials help software buyers make more informed purchasing decisions by disclosing an application’s various components. What […]

The post Software Bill of Materials (SBOM) Pros & Cons appeared first on Enterprise Networking Planet.

]]>
As cyber attacks become more sophisticated and happen more frequently, SBOMs or software bills of materials have become a more common practice across the software supply chain to trace vulnerabilities with as much accuracy as possible. Software bills of materials help software buyers make more informed purchasing decisions by disclosing an application’s various components.

What is a Software Bill of Materials (SBOM)?

A software bill of materials lists the components of software, whether open source or proprietary, for the sake of transparency and security. Components include:

  • Dependencies
  • Firmware
  • Hierarchical relationships
  • Libraries
  • Licenses
  • Operating systems
  • Metadata

An SBOM functions in a similar way to a nutritional facts label, describing the ingredients that went into developing the software. Aside from being a list of components, however, an SBOM is also a catalog of software versions and updates. That is, it’s a living document that evolves as the software does.
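
To make this concrete, here is a minimal component entry modeled loosely on the CycloneDX JSON format, built as a Python dict; it is a trimmed-down illustration, not a complete or conforming SBOM document.

    import json

    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {
                "type": "library",
                "name": "requests",       # a third-party dependency
                "version": "2.27.1",      # pinned version for traceability
                "licenses": [{"license": {"id": "Apache-2.0"}}],
            },
        ],
    }

    print(json.dumps(sbom, indent=2))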

Why Are SBOMs Used?

Software development involves third parties, creating a software supply chain that has become increasingly susceptible to attacks. As a case in point, Python’s open-source code repository found itself vulnerable to malicious behavior in 2021. Any third-party software using Python was at risk and, further down the chain, companies using that software. If that third-party software had had an SBOM, its users further down the supply chain could have been informed of the vulnerability and investigated it further. 

SBOM Pros

SBOMs are beneficial to any organization that prioritizes security. According to Bren Briggs, Vice President of DevSecOps at Hypergiant, an SBOM is a “crucial component of cybersecurity control.” Briggs goes on to say that “asset inventory is the single most fundamental control available to organizations to reduce the risk of vulnerabilities.” Given that cybersecurity attacks are becoming more complex and more frequent, Briggs believes that having an SBOM in place is a good practice for organizations to be “disciplined in the basic controls” and achieve “far better levels of security with lower effort and cost.”

SBOMs also empower software buyers and ensure adaptability, compliance, and supply chain integrity.

Adaptability

Since SBOMs contain a catalog of previous software versions, they allow software developers to quickly revert to an earlier version in case an update disrupts or threatens the software's performance or security.

Buyer power

SBOMs give software buyers more visibility into what a piece of software contains and its potential vulnerabilities, keeping them in control of the research and buying process before committing to an application.

Compliance

SBOMs are not purely a matter of corporate discretion. President Biden announced in an executive order that application developers must provide a software bill of materials to improve cybersecurity.

Similarly, according to 2020 IoT cybersecurity laws in California and Oregon, manufacturers must build standard security features into their devices’ software, such as regular, automatic security updates.

SBOMs contain a log of changes to and versions of software, making it easy for the developer to maintain records and remain compliant with regulatory standards in the event of an audit.

Supply Chain Integrity

Overall, vigilance across the software supply chain has led to SBOM use for transparency purposes. On the vendor side, an SBOM helps software developers be more vigilant about third-party components. SBOMs enhance visibility into a software package's components, enable easier management of component dependencies, and support industry best practices and standards. They strengthen supply chain integrity, keeping developers, vendors, and clients informed and unified in their approach to security.

SBOM Cons

Difficulty

Drafting an SBOM is not so cut and dried. While a list of individual parts works for the manufacturing environment, this is not how software development happens. A developer may incorporate only certain files, functions, or lines of code from third-party software, making it difficult to draft an SBOM.

False alarms

Components of software are not inherently vulnerable; they're only vulnerable based on how they're used within the software. It's therefore misguided for customers to rely on an SBOM alone to trace vulnerabilities, because the component in question may only be vulnerable if used in a certain way. Use context should therefore be included in SBOMs so as not to raise false alarms for customers.

Time intensiveness

Given the difficulty of tracing components to draft an SBOM, doing so also takes time. Another factor that prolongs the process is training staff to use a tool that helps create an SBOM following the National Telecommunications and Information Administration's (NTIA) guidelines. The time it takes to write up an SBOM is no excuse for forgoing security and transparency; however, it is a formidable challenge for software developers to take into account.

Lack of data

Skeptics of SBOMs' efficacy cite the NTIA's lack of data to back up the effectiveness of SBOMs in preventing cybersecurity attacks. While SBOMs may be good in theory, they may ultimately do more harm than good by distracting software developers and customers from more serious impending risks.

SBOM is a Step in the Right Direction

SBOM services are increasingly becoming part of vulnerability management, third-party risk management, and software composition analysis offerings. While SBOMs are neither preventative nor a cure-all solution for cyberattacks, they are a step toward transparency and vigilance that helps software developers and customers identify and address risks.

Read next: Top Vulnerability Management Tools & Software 2022

The post Software Bill of Materials (SBOM) Pros & Cons appeared first on Enterprise Networking Planet.

]]>
Top 6 Patch Management Trends https://www.enterprisenetworkingplanet.com/security/patch-management-trends/ Fri, 11 Mar 2022 16:12:36 +0000 https://www.enterprisenetworkingplanet.com/?p=22288 To keep software up-to-date and secure, software developers roll out patches to their applications from a central server on a regular basis. Because of frequent patch releases and the proliferation of software versions they generate, patch management is a crucial part of the software development lifecycle. What Is Patch Management? Patch management defines the process […]

The post Top 6 Patch Management Trends appeared first on Enterprise Networking Planet.

]]>
To keep software up-to-date and secure, software developers roll out patches to their applications from a central server on a regular basis. Because of frequent patch releases and the proliferation of software versions they generate, patch management is a crucial part of the software development lifecycle.

What Is Patch Management?

Patch management is the process of proactively fortifying software against specific security vulnerabilities before hackers have a chance to exploit them. It entails tracking and supervising software patch releases. Stages of patch management include:

  • Identifying a bug
  • Devising a code solution for the bug
  • Testing the patch in a sandbox
  • Approving the patch
  • Documenting the patch code
  • Releasing the patch to end users
  • Monitoring the patch release

Patch management is important because it keeps your application secure from hackers. Developers must take care when releasing a new patch, as it may affect a device’s other applications and functions. Patches also guard against software performance issues and platform version misalignment.

Just as threats are continuously evolving, so too is patch management. The patch management market is expected to grow to a total value of just over $1 billion by 2026, with the banking and healthcare sectors anticipating the most growth. 

6 Trends in Patch Management

AI/Automation

With the help of patch management software, much of the management process is powered by AI-driven automation. The Algomox patch management service, for instance, scans for, evaluates, tests, and deploys patches across all servers and applications.

Automating patch management and associated status reporting ensures your software will never miss an update or critical security fix. Automation can save precious time for repair, given that the average amount of time it takes to fix a vulnerability is 205 days. When patch management is automated, IT administrators can focus on other business-critical tasks.
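
A hedged sketch of what such automation might look like appears below; the inventory and advisory endpoints, field names, and version comparison are hypothetical placeholders, not any vendor's actual API.

    import requests  # third-party HTTP client: pip install requests

    INVENTORY_URL = "https://inventory.example.com/api/hosts"    # hypothetical
    ADVISORY_URL = "https://advisories.example.com/api/latest"   # hypothetical

    hosts = requests.get(INVENTORY_URL, timeout=10).json()
    advisories = requests.get(ADVISORY_URL, timeout=10).json()
    fixed = {a["package"]: a["fixed_version"] for a in advisories}

    for host in hosts:
        for pkg, installed in host["packages"].items():
            # Naive string comparison; real tools parse versions properly.
            if pkg in fixed and installed < fixed[pkg]:
                print(f"{host['name']}: patch {pkg} {installed} -> {fixed[pkg]}")
                # A real pipeline would now test the patch in a sandbox,
                # schedule deployment, and report status automatically.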

Compensating controls

Because of the long lead time to address a network vulnerability, organizations often use compensating controls, which serve as triage in the meantime. However, organizations should not rely on compensating controls as a crutch, as they are not secure over the long term.

Centralization

Today’s patch management market is quite fragmented owing to vendors’ specialization in various components of patch management. Managing multiple products from different vendors gives rise to inefficiencies. 

To avoid confusion and inefficiency, organizations want a centralized server to manage patch releases, including those from third parties. Doing so reduces failure, boosts productivity, saves time, and spares your organization the steep cost of improper patch management, which averaged $4.24 million in 2021.

Qualys features a centralized patch management app that allows customers to remediate cloud vulnerabilities from one place. 

Also read: Patch Management in Cloud Technology

Visibility

As companies increasingly adopt BYOD models, the resulting device sprawl gives rise to "shadow IoT": unauthorized, unrecognized devices that are connected to your network and may never get updated.

BYOD makes patch management increasingly challenging, opening up your organization's network, cloud, data, and more to cyberattacks. Managing a wide range of devices impairs visibility into the new patches that software vendors release. Both JumpCloud and Qualys enable automatic patch deployment across various operating systems and applications in a dispersed, heterogeneous device ecosystem.

Also read: 12 Tips for Mitigating Security Risks in IoT, BYOD-driven Enterprises

DevOps integration

With the release of new software updates and subsequent monitoring for bugs, patch management fits neatly into the continuous integration/continuous delivery (CI/CD) feedback loop to remediate security issues that arise in deployments.

Also read: Integrating IT Security with DevSecOps: Best Practices

Policy-driven patch management

Just because patches are released does not mean that administrators apply them. In fact, 60% of breaches result from a failure to apply available patches. Organizations learned this lesson the hard way in late 2021, when many failed to apply Microsoft's patches. However, neglecting to deploy patches isn't necessarily a matter of defiance or laziness; administrators are sometimes hesitant to deploy new patches out of fear that they will put a drag on the system, disrupt performance, or trigger other applications to misfire.

Organizations today are embracing a policy-driven patch management approach to ensure that the network owner applies patches to the system within a critical, previously determined timeframe. This approach combines operational data about system configuration with security measures to be undertaken.
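
Such a policy can be expressed as data that tooling enforces. The Python snippet below sketches one way to encode deployment deadlines by severity; the severity tiers and timeframes are invented examples, not a prescribed standard.

    from datetime import timedelta

    # Maximum time allowed between a patch's release and its deployment,
    # keyed by vulnerability severity. Values here are illustrative.
    PATCH_POLICY = {
        "critical": timedelta(days=2),
        "high": timedelta(days=14),
        "medium": timedelta(days=30),
        "low": timedelta(days=90),
    }

    def deadline_for(severity: str) -> timedelta:
        # Unknown severities fall back to the strictest window.
        return PATCH_POLICY.get(severity, PATCH_POLICY["critical"])

    print(deadline_for("high"))  # 14 days, 0:00:00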

Patch management is part of a comprehensive vulnerability management strategy, and it doesn't have to be cumbersome. Today's patch management software widely embraces automation and AI to give administrators greater visibility into, and control over, their networks.

Read next: Best Patch Management & Software Tools 2022

The post Top 6 Patch Management Trends appeared first on Enterprise Networking Planet.

]]>