Adrian Bridgwater, Author at Enterprise Networking Planet

From Co-existence to Convergence, the Union of 5G & Wi-Fi
https://www.enterprisenetworkingplanet.com/management/5g-wifi-6/ (Fri, 15 Apr 2022)

Learn how 5G and Wi-Fi will work together and what potential trade-offs or points of tension may arise between them.

As we know, 5G is the new broadband cellular telecommunications standard that promises to super-charge global connectivity. At the same time, we must also consider the emergence of Wi-Fi 6 (802.11ax), the next evolution of the 802.11 Wi-Fi standard.

While 5G is variously described as being some 10 to 20 times faster than 4G, Wi-Fi 6 raises the theoretical maximum throughput from 3.5 Gbps (gigabits per second) on Wi-Fi 5 to 9.6 Gbps. In short, everything is getting faster.

However, some questions arise about whether or not we will actually get these speeds and, if so, whether we can make productive use of them.

While it’s possible we won’t get, or need, all the speed and performance promised from 5G and Wi-Fi 6 any time soon, the groundwork and architecture being laid down now will certainly help push us toward newer and better technology innovations in the years ahead.

That still leaves the question of how 5G and Wi-Fi 6 will work together and what potential trade-offs or points of tension may arise between them.

Also read: The Impact of Wi-Fi 6 on Digital Transformation

Use Cases

Wi-Fi (and, ultimately, Wi-Fi 6) and 5G will need each other more than ever to support consumer and business applications ranging from augmented and virtual reality (AR/VR) to Industry 4.0 factory automation.

One example is providing reliable indoor coverage for bandwidth-intensive applications such as 4K video. That's challenging for 5G, which relies on new high-band spectrum, such as millimeter wave, to deliver the speeds users expect; those frequencies carry enormous capacity but penetrate walls and buildings poorly.

Working alongside 5G, Wi-Fi 6E opens up the 6 GHz band, which offers more spectrum than the existing 2.4 GHz and 5 GHz Wi-Fi bands combined, and delivers connection speeds comparable to advanced 5G mobile. That allows it to support the low latency required for mobile gaming, VR and AR applications, and Industry 4.0 solutions.

The Movement to Converge 5G and Wi-Fi 6

Currently, tech innovators are pushing 5G and Wi-Fi 6 usages from a point of co-existence to that of intelligently integrated convergence. According to Tiago Rodrigues, CEO of the Wireless Broadband Alliance (WBA), only a team effort combining cellular and Wi-Fi can deliver the reliable, high-quality, ubiquitous wireless experiences consumers and enterprises want.

“Both 5G and Wi-Fi 6 technologies are critical for the evolution of connectivity and the digitalization of the planet as a whole,” said Rodrigues. “While the focus for cellular is on wide-range coverage, … the focus for Wi-Fi is more prevalently on indoor coverage and high bandwidth connectivity.”

He further clarifies that there are a few use cases relevant for mobile operators to take advantage of Wi-Fi, such as offloading data from the cellular network when and where there is insufficient capacity; where there is limited cell coverage, mainly indoors; and in some cases, even outdoors where cellular signals are poor. 

Also read: Using Wi-Fi 6 and 5G to Build Advanced Wireless Networks

Constant Connectivity Capabilities and Potential

The other pertinent point is that many devices today, such as tablets, laptops, and cameras, have no cellular radio at all and so depend on Wi-Fi as their sole wireless connectivity option.

“We realize that what the customer really cares about is being connected to the best-performing network at any given time, on whatever device they are using,” said Rodrigues. “Added to this factor, we need to remember that prudent organizations realize they need to pay the right price for the right connectivity technology at the right time.

“Sometimes, it will be Wi-Fi, sometimes it will be 5G, and sometimes it will be both. This becomes a harmonious reality with 5G and Wi-Fi 6/6E.”

The Wireless Broadband Alliance’s central position is clear—the continued development of 5G and Wi-Fi 6/6E networks unlocks further potential for Industry 4.0, residential connectivity, connected smart cities, and more. But, convergence is critical for all parties if users and organizations are to truly capitalize on the potential this technology has to offer.

It is becoming clearer that Wi-Fi 6/6E and 5G together represent a win-win scenario for end users, cellular specialists, and Wi-Fi players alike.

“5G and Wi-Fi 6 have made enhancements to each technology that actually bring them much closer together, in terms of the services each technology can support, and 5G network cores are expected to support multiple access technologies and apply similar policy and security regimes,” said Rodrigues. “To realize the future opportunities, WBA members are addressing some interesting areas that will help support future convergence of 5G and Wi-Fi networks.”

The Path to Convergence

The WBA’s release of the 5G and Wi-Fi 6 Convergence report was developed with input from mobile carriers, Wi-Fi providers, telecommunications equipment manufacturers, and the WBA’s 5G Working Group. The report provides a breakdown of the current standards and key business opportunities for operators.

Figuring out the best path to convergence is important because businesses and operators stand to gain many benefits from the seamless integration of Wi-Fi and cellular access in 5G networks in areas such as enterprise Wi-Fi, smart cities, and the home.

The WBA works with standards bodies such as 3GPP, the Wi-Fi Alliance, the GSMA, and the IEEE to understand relevant challenges and address them.

Looking ahead, we can expect developments to surface in areas such as WBA OpenRoaming. The creation of the OpenRoaming cloud federation and development of a global industry standard for Wi-Fi roaming has been supported by more than 60 WBA members.

With WBA OpenRoaming, devices and users will connect to Wi-Fi networks seamlessly, securely, and privately while receiving a cellular-style experience.

The bigger picture is that connectivity in general is undergoing a substantial transformation, if not a complete overhaul.

The way we, as an industry, as enterprise organizations, and as users, now dovetail these technologies for the greater good could determine how successful they are across the medium- to long-term deployment curve, reconnecting how we connect.

Read next: Best 5G Network Providers for Business 2022

The Reality and Risks of 5G Deployment
https://www.enterprisenetworkingplanet.com/management/the-reality-and-risks-of-5g-deployment/ (Fri, 08 Apr 2022)

5G promises transformation in communication and networking across industries, but its deployment is presenting challenges.

As a technology standard for broadband cellular networks, 5G is easy to think of as a hardware-based entity. In reality, much of its architecture and networking mechanics rely on software components; at its heart, 5G is the coming together of precision-engineered pieces of software code.

Building 5G is in itself a complex procedure, as is deploying it more widely to the world's growing population of Internet of Things (IoT) smart devices and to the planet's base of consumer-grade connected mobile devices.

The Need for Forward and Backward Compatibility in 5G Development

A major consideration at this point is the need to develop 5G in a way that is both forward and backward compatible. This is the opinion of Adam Weinberg, CTO & co-founder of FirstPoint Mobile Guard, an Israel-based cyber cellular protection specialist.

Weinberg says this forward and backward compatibility mandate must be combined with an effort by progressive DevOps teams that insist upon following industry protocols. This will streamline operations and ensure obvious vulnerabilities are addressed at the beginning.

“Where developers need to be diligent is ensuring that they understand how the devices will be communicating with the networks to minimize the chance of creating a larger attack surface,” said Weinberg. “The complexity of 5G means that enterprises need to have comprehensive tools to manage, control, and secure their 5G cellular-connected assets to ensure reliable, secure, and stable connectivity.”

What we can take from this “toolset necessity” message is a call to action for developers themselves. Code gurus of all disciplines approaching 5G will need to work hand-in-hand with cybersecurity teams to ensure all code enables easy management and achieves a robust security rating before it resides in enterprise IoT devices at the network level.

Also read: The Future of Fixed 5G Networks is Now

Considering 5G Vulnerabilities and Risks

Obviously there are many different types of attacks that could manifest themselves within a 5G infrastructure, all of which could have a disruptive effect upon mission-critical supply chains or, in the case of life-critical systems, risk lives.

According to the FirstPoint Mobile Guard team, device manipulation is a concern in 5G development. As developer-architects now write the code our new 5G networks will be built upon, there is potentially a new stream of opportunities to exploit network loopholes and access control functions. Since this is exactly what botnets look for first, it's a risk that needs to be considered for the future of 5G.

“Thinking about how 5G systems development will need to be cognizant of data channel rerouting attacks is another prudent (if not essential) act we need to consider here,” said Weinberg. “Attackers can uncover and tamper with sensitive information by altering the path of the data on its way to or from the attacked device in the 5G cellular network.”

In short, developers need to work closely with the network engineers who will be managing the IoT devices to ensure the network itself has additional security protection.

Also read: IoT Faces New Cybersecurity Threats

Location-Tracking Frustrations with 5G Networks

Using the mobile network, attackers can remotely track any mobile device anywhere on the globe. As a software-based network, 5G doesn't always know with whom it is communicating. As a result, Weinberg says developers need to ensure their eSIM and SIM-based devices have space on the card to add applets.

In addition, these powerful tracking capabilities leave devices vulnerable to information theft. Mobile cell phones with GPS, camera, microphone, and/or screenshot features allow for real-time intelligence gathering when active. This enables attackers to gain access to private or business data on the device through applications, stolen credentials, and more via malware and social engineering.

When building specialized devices for the 5G era, Weinberg and team say, the possibility of information theft must be front of mind: the software itself should work at the user interface level to allow easy on-off control of these features and give users greater control.

“These types of attacks play upon the inherent vulnerabilities of the 5G network. Having a comprehensive solution that can recognize anomalies such as this is critical,” said Weinberg.

Also read: 5G and New Enterprise Security Threats

The Future of 5G Technologies in Key Industries

While industry vendors' claims for 5G are optimistically upbeat, nobody is quite sure how real-world application use cases will evolve.

In some cases, telecommunication futurists and commentators have suggested that a 5G network, at least in these early stages, will only function about as well as a good 4G network. Many also talk about 5G as a key enabler for autonomous driving and other connectivity considerations in the automotive industry.

The justification for this suggestion is that, in practical terms, networks up until now have been very good at letting users send emails, post pictures to social media platforms, and even stream video and cloud games. But ubiquitous autonomous driving requires an order-of-magnitude upgrade at the network level, which is what 5G is supposed to represent.

Other key industries likely to benefit alongside automotive include retail, manufacturing, and logistics, all of which are verticals where machines, goods, production lines, and market swings move quickly. If an environment is ripe for the application of biometrics, wearable telemetry, and augmented reality, then it will logically be a good low-latency deployment target for 5G technologies.

Read next: The Role 5G Can Play in Global Sustainability Efforts

Copado: A DevOps Value Chain is Forged by Visibility
https://www.enterprisenetworkingplanet.com/management/devops-visibility/ (Mon, 21 Mar 2022)

Andrew Davis of Copado maps the importance of having DevOps teams create visibility in DevOps environments.

Despite the rise of low-code/no-code software platforms and the efforts laid down by top-tier software vendors to simplify their expanding product portfolios, the general trend that surfaces across the bulk of all enterprise software deployments is one of interconnected complexity and convolution.

The truth is, modern software development is more complex than ever. A single enterprise project can have dozens of stakeholders and hundreds of APIs.

However, with DevOps (development and operations), developer and operations teams can unify workloads and deliverables in a joint effort. With WebOps now joining DevOps as a more web team-focused version of the original DevOps workplace methodology, we can reasonably guess that IT teams will become more systematically process-centric in the years ahead.

But even when projects spiral, and it’s clear that teams are feeling DevOps pain and strain, the source of that negative energy can be far from obvious. Hence, visibility is the first consideration in a DevOps environment, according to Andrew Davis, senior director of research and innovation at Copado.

Without visibility, improvement for day-to-day business operations may be slow-moving or difficult to manage.

What is DevOps Visibility? 

To determine what DevOps visibility is or what it really means, Davis argues that a complete answer must include a combination of software development elements and aspects.

“These [aspects] can be as specific as a proper version control mechanism and pipeline management tools to something as large as a transparent organizational culture,” said Davis. “But without all these elements contributing to visibility, an enterprise simply won’t have the information it needs to improve quality or accelerate releases.”

Although DevOps has been widely adopted, and many IT decision-makers report their organizations use DevOps practices to improve software, not all of them report success. Thus, when an organization experiences DevOps pain, the first step is to check where it lacks visibility.

Also read: DevOps: Understanding Continuous Integration & Continuous Delivery (CI/CD)

Key Visibility Indicators

Copado offers three key indicators that can be improved by DevOps visibility:

  • Over-reliance on manual processes and interventions: Tracking changes manually creates problems later, and it is very easy to make a small change and fail to log it. Manual methods can also keep critical information siloed, so different teams might have incomplete or conflicting information—all of which frustrate visibility.
  • Unclear goals: Good DevOps is iterative and leverages circular feedback loops to work toward the ultimate end goal of a strong user experience. However, during development, teams can lose sight of the bigger picture, and unclear feedback mechanisms can make it challenging to prioritize the right things and set the best goals.
  • Trouble with complex merges: Continuous integration (CI) was supposed to be a solution to the time-consuming process of integrating changes into a code repository. However, even CI can create bottlenecks when joining multiple complex branches. Full pipeline visibility provides tools to simplify branches so even complex merges are manageable.

“As these three areas indicate, tackling visibility demands a coordinated effort, and it can be tempting to think that it is too much of an undertaking,” said Davis. “However, people waste an enormous amount of time and energy when they don’t have what they need to do their jobs, so the time spent improving visibility is worthwhile because it provides the insights necessary to get work done.”

Processes that Benefit from DevOps Visibility

Project planning

The key to ensuring project visibility is to make sure a project begins with proper planning that encompasses everything from situational awareness to version control, planning tools, traceability, auditability, and compliance. This will ensure the full field of vision can be brought into focus at any given point in time.

“Plans enable the team to know exactly what everyone is aiming to do with the DevOps initiatives in place,” said Davis. “Without insight into required use cases and planned work, you could face redundancies, missed benchmarks, solutions overlap, scope creep, and more.”

Value stream management

Since value streams center on project flow, businesses should place high importance on understanding and handling them. A team needs to understand the benefit of every project as well as its risk. Without value stream visibility, they can't see the flow of work or where those benefits and risks appear.

System changes

According to Copado, system changes could benefit from visibility to ensure version consistency and that relationships across metadata in system architecture are clear and understandable.

“Visibility also helps to avoid overwrites, metadata conflicts, and the frequent introduction [or recurrence] of bugs,” said Davis.

Moreover, project leaders can see the potential impact of system changes by having an understanding of dependencies and their connections at every level of the software supply chain across teams, systems, packages, and within a codebase.

Performance tracking

Copado’s Davis also says that performance is arguably the most important aspect of visibility, as it can help businesses identify bottleneck sources, work distribution, and opportunities for improvement.

“Tracking performance can be team-focused based upon the four key DevOps metrics of lead time, deployment frequency, change fail rate, and time to restore,” said Davis. “These give a good indication of how the overall team works together, combined with an assessment of throughput from the development team to help understand work capacity.

“Alternatively, performance visibility can be individual-focused. Whilst granular metrics applied to individuals may seem appealing, this can do more harm than good when it comes to changing behaviors.”

Ideally, performance visibility is based on the team. Then, if needed, the business can trace back individual-level problems.
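To make the team-level approach concrete, here is a minimal sketch of how the four DORA metrics might be computed from a team's deployment records. The record fields and sample values are illustrative assumptions, not Copado's data model or API.

```python
# Minimal sketch: computing the four DORA metrics from deployment records.
# Field names (committed_at, deployed_at, failed, restored_at) are
# illustrative assumptions, not any particular vendor's schema.
from datetime import datetime
from statistics import median

deployments = [
    {"committed_at": datetime(2022, 3, 1, 9),  "deployed_at": datetime(2022, 3, 1, 15),
     "failed": False, "restored_at": None},
    {"committed_at": datetime(2022, 3, 2, 10), "deployed_at": datetime(2022, 3, 3, 11),
     "failed": True,  "restored_at": datetime(2022, 3, 3, 13)},
]

def dora_metrics(deps, window_days=30):
    lead_times = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                  for d in deps]
    failures = [d for d in deps if d["failed"]]
    restore_times = [(d["restored_at"] - d["deployed_at"]).total_seconds() / 3600
                     for d in failures if d["restored_at"]]
    return {
        "lead_time_hours_median": median(lead_times),
        "deployments_per_day": len(deps) / window_days,
        "change_fail_rate": len(failures) / len(deps),
        "time_to_restore_hours_median": median(restore_times) if restore_times else None,
    }

print(dora_metrics(deployments))
```

Measured over a rolling window, these figures give the team-level view described above without singling out individuals.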

Also read: NetOps vs DevOps: Bringing Automation to the Network

How to Enable DevOps Visibility

To support visibility, a business needs tools to increase transparency and reproducibility throughout the software development lifecycle.

There are a few visibility best practices to increase DevOps ROI (return on investment). Key among these will be the need to adopt version control.

Version control is an automated method of tracking changes to code. Updates are kept in an immutable log that developers and stakeholders can reference and roll back to if problems arise. This becomes the single source of truth coders can turn to as they manage their updates.

Enterprises should also think about the need to connect user stories and metadata.

“When a business ties changes in version control to user stories, it gets a narrative that explains the who, what, where, when, and why of changes,” said Davis. “Specifically, it connects the exact change, when it was made, who did it, why they did it, if anything else was changed, and the goal of the change.

“This develops a story of historical changes to the same material.”
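As a rough illustration of tying version control history to user stories, the sketch below scans git commit messages for story identifiers and groups changes by story. The "US-123" naming convention and the use of plain git here are assumptions for the example, not a description of Copado's tooling.

```python
# Sketch: building a "who/what/when/why" narrative by linking commits to user
# stories. Assumes commit subjects reference story IDs like "US-123"; the
# convention and repository path are illustrative.
import re
import subprocess
from collections import defaultdict

STORY_ID = re.compile(r"\bUS-\d+\b")

def commits_by_story(repo_path="."):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%h|%an|%ad|%s", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout
    stories = defaultdict(list)
    for line in log.splitlines():
        sha, author, date, subject = line.split("|", 3)
        for story in STORY_ID.findall(subject):
            stories[story].append({"sha": sha, "author": author,
                                   "date": date, "subject": subject})
    return stories

if __name__ == "__main__":
    for story, commits in commits_by_story().items():
        print(story, f"({len(commits)} changes)")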

Davis further advises that firms should “organize the architecture” because “noisy” metadata can obscure architecture visibility. Often, one of the fastest ways to discover architecture issues is simply to organize the metadata into folders and packages.

Importance of Value Stream Management in DevOps Visibility

A business looking for DevOps visibility control needs to enable value stream management (VSM). According to the Copado team, value stream management starts with mapping.

In this approach, the DevOps team visualizes workflows to understand where bottlenecks lie and why they occur. This means they can make improvements, eliminate waste, and focus on the strongest value drivers.

Further, by implementing a VSM platform, DevOps teams can get an understanding of the connections and integrations between tools, applications, SaaS (software-as-a-service) solutions, and other components in one place.

Another way DevOps teams can improve visibility is by leveraging DORA (DevOps Research and Assessment) metrics: lead time, deployment frequency, change fail rate, and time to restore. Measuring these shows how an overall process compares to others, and they can be applied across a wide range of questions to quantify results.

Incorporating value stream mapping, a VSM platform, and DORA metrics will improve DevOps visibility, highlighting places in processes where value is stuck.

“Visibility gives teams and leaders alike the information they need to turn their attention further down the pipeline,” said Davis. “This will begin to increase quality, improve speed to deployment, increase and accelerate innovation, and finally achieve resilient development processes.”

Taking Steps Toward DevOps Implementation

There seems to be a fairly widespread assumption that DevOps is some kind of cure-all solution that can simply be directed at projects, teams, and all forms of software entities in order to provide some kind of improvement.

However, DevOps is a complex set of practices that relies on visibility of business processes to be managed properly. Businesses should be aware of this complexity before taking any steps toward implementing it for their projects or business processes.

Read next: Scaling DevOps: Best Practices

Coralogix CTO: Why Cloud DevOps Runs On Infrastructure as Code
https://www.enterprisenetworkingplanet.com/data-center/coralogix-iac/ (Fri, 18 Mar 2022)

Coralogix's Yoni Farin highlights the concerns and benefits of IaC within DevOps teams.

Over the last half-century, networks have moved through a number of different connectivity eras (client-server and more), while platforms, processes, and procedures have evolved in line with current trends and new innovations.

The latest trend when it comes to network architecture constructs is the notion of infrastructure as code (IaC). 

IaC is a descriptive model for defining and provisioning the structure of an IT network alongside its data storage capacities and associated functions.

It enables users to define server structures, load balancers, and other tiers of software management. While not commonly used in application development, IaC is applied to the lower substrate level of computing that users depend on.

Defined Connection Topologies

In terms of shape and function, infrastructure as code usually takes the form of text files written in dedicated configuration languages and formats, such as HashiCorp Terraform's HCL, AWS CloudFormation templates, and others. Compared to networks built with cables and wires, this type of network is built from descriptive source code files that define connection topologies.

With IaC trending toward becoming an established part of the cloud firmament, businesses and IT teams should be prepared for the impact it will have on the structures and systems they operate with every day.

Also read: Automating DevOps with AI & ML

Impact on DevOps and Production Environment Consistency

IaC is great for cloud-centric IT teams, which need mechanisms to automate the creation of their infrastructure, roll out new environments, and keep living documentation of everything they have created. It can deliver on many, if not all, of those needs because it is capable of spanning every environment tier, from development and staging through testing and production.

Multiple environments are useful for testing changes to code before deploying them into production. However, consistency across those environments, from development through to production, is a major concern, a point raised by Yoni Farin, CTO and co-founder of Coralogix, a company known for its stateful streaming analytics platform that produces real-time insights and long-term trend analysis with no reliance on storage or indexing.

“With IaC, rather than manually creating each environment and hoping for consistency, you make your change in your codebase and apply this change to each environment,” said Farin. “This means that your environments can stay completely consistent, provided that your engineers stick to the code.”

Changes Caused by Code Function Drift

Farin explains that on the road to consistent change, we must recognize that configuration drift has traditionally been unavoidable. Changes can creep into live production due to differing deployment zones, inherent complexity in systems, and simple human error.

“But with IaC, you know that if your change has worked on one environment, provided your environments are consistent, that same change should work on the next environment,” said Farin. “Consistent changes build confidence and lower error rates, which in turn will increase deployment frequency.”

Given the rise of low-code/no-code development and the emergence of the so-called citizen developer, citizen data scientist, and so on, there is every reason to expect that citizen network engineers will emerge too. With changes that can affect how system structures behave coming from more angles, we need a way to achieve visibility across all services and their underlying infrastructure.

Also read: Using Low-code to Deliver Network Automation

IaC as an Essential Visibility Conduit

Coralogix’s Farin suggests that IaC can be a visibility conduit for services and their infrastructure.

“At a certain point, someone is going to ask you a simple question about your cloud architecture and ask ‘So what are we using then?’” said Farin. “This should be a simple question, but unfortunately, as your architecture scales, it becomes difficult to answer.

“IaC enables this visibility because all changes should go via the codebase. You can look at the code and understand how your architecture hangs together. This acts as living documentation that stays up to date by design.”

This code can also be scanned; for example, the terraform graph command generates DOT-format output that tools such as Graphviz can render into diagrams. This opens the door to a series of diagrams, workflows, and other visualizations that can largely automate the process of documenting and tracking infrastructure decisions.
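As a hedged illustration of that idea, the following sketch shells out to the standard terraform graph command and renders the resulting DOT text with Graphviz. It assumes both binaries are installed and that the infrastructure code lives in the directory shown.

```python
# Sketch: exporting a Terraform dependency graph as an image for living
# documentation. Assumes the terraform and Graphviz "dot" binaries are on the
# PATH; the working directory and output filename are illustrative.
import subprocess

def export_graph(workdir="./infrastructure", out_png="architecture.png"):
    # "terraform graph" prints the resource dependency graph in DOT format.
    dot_source = subprocess.run(
        ["terraform", "graph"], cwd=workdir,
        capture_output=True, text=True, check=True,
    ).stdout
    # Render the DOT text to a PNG with Graphviz.
    subprocess.run(["dot", "-Tpng", "-o", out_png],
                   input=dot_source, text=True, check=True)
    return out_png

if __name__ == "__main__":
    print("Diagram written to", export_graph())
```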

The mission here is all about the ability to create and reuse code modules. Even a DevOps team with amazing skills, self-control, and dedication to the cause will benefit from being able to reuse code where it can.

In cloud development, IT teams often find themselves clicking through the user interface to perform the same or similar tasks over and over again. That is not a productive or cost-effective use of valuable team time; instead, they should be seeking to automate those clicks.

“IaC allows DevOps teams to create code modules that will generate cloud resources, like databases, serverless infrastructure, servers and much more,” said Farin. “These modules are great for automating tasks that would otherwise become mundane.”

Looking ahead to the task of lifecycle management, there is more to draw on from IaC.

Breaking Down Cloud Breakdown

“When people bring up cloud infrastructure, they almost never consider what they’ll need to do when they tear it all down,” said Farin. “Lifecycle management is a complex endeavor that is usually tacked on halfway through a project, if ever, because it has been deprioritized in favor of other feature work.

“Bringing down cloud infrastructure, especially when there are multiple products in a single account, is not an easy endeavor. It takes a lot of picking through and deleting specific resources, in the correct order, to avoid errors and slowdowns.”

But IaC enables IT teams to turn teardowns into a single command. Farin and team point to an example: a team using Terraform can run the terraform destroy command, which brings down all of the relevant infrastructure for a specific module.

“This means that, even when there are multiple projects in the same cloud account, you can surgically remove your components, in the correct order, in a totally automated way,” said Farin. “This ensures a seamless teardown, but it also guarantees that you won’t impact other projects whose infrastructure may exist alongside yours.”
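A minimal sketch of that kind of scripted, scoped teardown appears below. The module name and working directory are illustrative assumptions; -target and -auto-approve are standard Terraform flags, though how a given team scopes its destroys will vary.

```python
# Sketch: automating a scoped teardown with "terraform destroy". The module
# name and working directory are illustrative; -target limits scope and
# -auto-approve skips the interactive confirmation prompt.
import subprocess

def teardown(module=None, workdir="./infrastructure"):
    cmd = ["terraform", "destroy", "-auto-approve"]
    if module:
        # Restrict the destroy to one module so neighbouring projects in the
        # same cloud account are left untouched.
        cmd.append(f"-target=module.{module}")
    subprocess.run(cmd, cwd=workdir, check=True)

if __name__ == "__main__":
    teardown(module="staging_analytics")
```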

An Ability for Observability

Finally, IaC allows for a greater level of observability, allowing for improved control over system and infrastructure monitoring and management.

“When you’re running your infrastructure as code, you need to be able to declare your observability rules from within that code,” said Farin. “This will not only enable you to describe how the system works but also how the system is monitored.

“Encoding operational rules within your code is an incredibly powerful mechanism for scaling and managing your infrastructure.”

The future for infrastructure as code looks undeniably bright. There is plenty of positive opinion behind this approach to network base layers, and because it makes cloud misconfigurations much more straightforward to spot, IaC also offers compelling security control.

For the reasons showcased here and a host of other factors, IaC will likely continue to trend as an appealing infrastructure solution for large and growing businesses. 

Read next: Marrying Service Reliability Engineering (SRE) and DevOps

Patch Management in Cloud Technology
https://www.enterprisenetworkingplanet.com/security/patch-management-in-cloud-technology/ (Wed, 09 Feb 2022)

Patch management is an increasingly pressing process in maintaining modern cloud networks.

Between bugs in the code, security vulnerabilities, and general updates, modern software is often upgraded or maintained with patches to ensure it functions as it should.

The increasingly disaggregated world of cloud-native technology, with its interwoven connections formed by application programming interfaces (APIs) and the orchestration layers that harmonize in between, also depends heavily on patch management.

Why Do We Patch Software?

At its core, patch management is the application of additional code to existing software deployments to upgrade or update them, fix vulnerabilities, or remediate incompatibilities, performance bottlenecks, platform version misalignments, or some other substrate-level change.

Patching can occur at the application level, the operating system level, the networking level, its connection to API conduits, or some combination of all of the above.

Patches are applied for a multitude of reasons. Sometimes they address performance, functionality, regulatory compliance, system health, or security, or form part of a deeper strategic step in a more complex software lifecycle management schedule. Other patches are simply installed in response to user requests around functionality and usability.
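As a simple, tool-agnostic illustration of why this matters operationally, the sketch below flags devices whose installed OS build lags a known latest patch level. The inventory structure and version numbers are assumptions for the example, not any vendor's schema.

```python
# Tool-agnostic sketch: flagging devices whose installed OS build lags the
# latest available patch level. The inventory structure and version strings
# are illustrative assumptions.
LATEST_BUILDS = {"macos": "12.2.1", "windows": "10.0.19044"}

fleet = [
    {"device": "laptop-017", "os": "macos",   "build": "12.1"},
    {"device": "desk-042",   "os": "windows", "build": "10.0.19044"},
]

def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

def unpatched(devices):
    return [d for d in devices
            if version_tuple(d["build"]) < version_tuple(LATEST_BUILDS[d["os"]])]

for device in unpatched(fleet):
    print(f"{device['device']} needs patching: {device['build']} -> "
          f"{LATEST_BUILDS[device['os']]}")
```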

Also read: Establishing Server Security Best Practices

Use Case: JumpCloud Patches SME Space

JumpCloud, based in Louisville, Colorado, is among the firms now putting their patch prowess forward as a core competency and competitive differentiator. Keen to make sure that IT administrators can minimize vulnerabilities by having increased visibility and control over their remote device fleet, the company has announced the addition of JumpCloud Patch Management to its cloud directory platform.

Focused on the small-to-midsize enterprise (SME) segment of the market, this service gives IT administrators the power to create schedules, report on operating system patches and versions, and see patch versions across their remote fleet. It can manage both Mac and Windows updates and patches from the JumpCloud console.

“We all know that users don’t update their devices with bug fixes or security patches with any regularity or discipline,” said Greg Armanini, senior director of product management at JumpCloud. “This creates huge security gaps for every organization, especially those that are distributed, which is almost every organization today.”

The core advancement offers robust patch management update functionality as part of a directory platform, which further centralizes user identity and device management. SME IT teams are likely conscious of their admin’s time and money, so they may find it appealing to eliminate the stress and potential vulnerabilities of having to work with multiple vendors.

“The practice of patch management is a critical pillar of cyber defense, yet many organizations still struggle with patching in a remote or hybrid work environment,” said Armanini and team. “This is due to the lack of visibility into user devices, the frequency of new patches introduced by software vendors, and the need to leverage multiple solutions and complicated workflows to manage patching across multiple operating systems (OS).”

NTT Application Security researchers recently found the average time to fix critical vulnerabilities is 205 days. Further, some 60% of breached organizations reported breaches were due to a vulnerability for which a patch was already available, but not deployed throughout the organization’s systems and devices.

JumpCloud gives administrators a single cloud directory platform to secure users in heterogeneous device environments. The first release of JumpCloud Patch Management focuses on OS-level patching for Mac and Windows, with Linux, browser, and application patching coming soon. 

A Variety of IT Stacks, Workloads, and Data

Given the variety of technology stacks, workloads, and data estates that exist, it is perhaps no surprise to see different technology vendors aligning themselves to serve different areas of the patch management market.

With a specialization in cloud system security, compliance, and deep competencies in cloud misconfigurations detection and remediation, Qualys knows a thing or two about minimizing vulnerability risks. The California-based firm has now added advanced remediation to its Qualys Patch Management. 

The technology proposition promises that organizations will now be able to use one application to comprehensively remediate vulnerabilities regardless of whether they need configuration changes or deployment of scripts and proprietary software patches. The application hopes to improve efficiency by eliminating the need to use multiple products and agents.

Timely and comprehensive remediation of vulnerabilities is of course critical for maintaining good security hygiene and proactive risk management. Yet, organizations struggle to remediate quickly due to multiple factors including ambiguity between IT and security on process ownership, especially when the action requires sophistication beyond the deployment of a simple patch. 

Also read: Best Network Security Software & Tools of 2021

Detection Logic vs. Remediation Complexity 

The lack of clarity between vulnerability detection logic and potential remediation complexity, due to the need for multiple tools, increases the struggle IT and security teams face. For example, to remediate the Spectre/Meltdown vulnerability, a configuration change is required in addition to deploying the patch. Further, some vulnerabilities need a registry key change without a patch, while others need a proprietary patch to remediate.

“In this Log4Shell and Pwnkit era, organizations need to be extra vigilant and so be able to patch weaponized vulnerabilities without delay,” said Sumedh Thakar, president and CEO of Qualys. “This requires efficiency and rapid remediation that many organizations find daunting due mainly to complex processes and the need for several different tools.”

According to Thakar, Qualys Advanced Remediation increases efficiency by using one application to comprehensively remediate vulnerabilities. This, he says, eliminates the need to use multiple products and agents and improves response times, a critical success factor in strengthening the cyber defenses of enterprises of any size.

Qualys Patch Management integrates with Qualys Vulnerability Management, Detection, and Response (VMDR) to remediate vulnerabilities by deploying patches or applying configuration changes on any device regardless of its location. The new remediation feature allows teams to use one application to detect, prioritize, and fix vulnerabilities regardless of the remediation method required.

Patching as a Core Process

Patch management is now an increasingly critical process in maintaining modern cloud networks and managing the devices attached to them. Whichever method you choose, it will remain a core discipline in keeping cloud networks healthy.

Read next: Taking the Unified Threat Management Approach to Network Security

Common Data Language is Needed for XaC Development
https://www.enterprisenetworkingplanet.com/data-center/data-language-xac/ (Fri, 14 Jan 2022)

Everything as code (XaC) embodies everything from infrastructure to platforms to applications and elements of the stack. Here is what that means for our data future.

The current reversion to software code-centricity is almost a paradox. If we had told the programmer-developers engineering us out of the IBM PC era into the earlier iterations of Windows that code would drive everything, they might have offered some quizzical glances.

“Of course, code will drive everything; that’s why we’re building applications, establishing database procedures, and looking to the future when artificial intelligence (AI) finally graduates out of the movies,” said any given software developer in the 1980s and probably most of them in the 1990s, too.

Moving Forward with Everything as Code

But just as software application development syntax changes and evolves over the decades, our notion of what we mean by built-as-code has also moved on. This truism is coming to the forefront largely due to the fact that we now reside in the virtualization-first world of cloud, with all the abstracted layers that make up our new compute solutions.

When we talk about code constructs today, we’re not just talking about applications or their core components.

Today, when we say that something is delivered as code, we could be talking about infrastructure as code (IaC), testing as code (TaC), or some more general layer of networking as code (NaC) that we used to entrust to the hubs, switches, and routers of the pre-cloud era.

Typically written as XaC, everything as code embodies everything from infrastructure to platforms to applications and every deeper “service” element of the stack, such as compliance, security, and daily operations.

Data at the Core

As we work to apply these new everything-as-code methods across the IT stack, we will need to know our way around this new era of technology solutions.

So where should we start? With data, obviously.

Data is of course the core element with which we will build all tiers of cloud. Logically then, cloud data observability should form a core discipline in any testing-as-code capability if we are to navigate the everything-as-code universe.

Aiming to resonate with that technology proposition and work in precisely this space is Soda, a provider of open source data reliability tools and cloud data observability platform technology based in Brussels, Belgium.

Late last year Soda released its Cloud Metrics Store to provide testing-as-code capabilities for data teams to get ahead of data issues in a more sophisticated way. This technology captures historical information about the health of data to support the intelligent testing of data across every workload.

As the modern cloud-native network stack now evolves and the use of everything as code starts to become a de facto approach, we start to think of the so-called “data value chain” as a measure of the worth of our wider IT system.

“It’s advantageous for data teams to unify around a common language that allows them to specify what good data looks like across the data value chain from ingestion to consumption, irrespective of roles, skills, or subject matter expertise,” said Maarten Masschelein, CEO at Soda. “Most data teams today are organized by domain, so when creating data products, they often depend on each other to provide timely, accurate, and complete data.”

Also read: Using Low-code to Deliver Network Automation

A Common Language for Data

A common language for data might see coding and data solutions become available for use by anyone; there is little or no hierarchy between users in the democratized everything-as-code future.

Without a clear strategy to monitor data for quality issues, many organizations fail to catch the problems that can leave their systems exposed and can result in serious downstream issues. Masschelein and team say they are working to give data teams the tools to create a culture and community of good data practice through a combination of the Soda Cloud Data Observability Platform and its open source data reliability tools built by and for data engineers.

He says that his firm’s latest release compels data teams to be explicit about what good data looks like, enabling agreements to be made between domain teams that can be easily tracked and monitored, giving data product teams the freedom to work on the next big thing.

“With this latest release, Cloud Metrics Store gives data and analytics engineers the ability to test and validate the health of data based on previous values,” said Masschelein. “These historical metrics allow data tests to use a baseline understanding of what good data looks like, with any bad data efficiently quarantined for inspection before it impacts data products or downstream consumers.”

Alerts are sent via popular on-call tools or Slack, so data teams are the first to know when data issues arise and can swiftly resolve the problem.
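The sketch below illustrates the general "testing as code" idea in a tool-agnostic way: compare a new batch of data against historical baseline metrics and raise an alert when it drifts. It is not Soda's check language or API, and the webhook URL is hypothetical.

```python
# Generic sketch of testing data against a historical baseline. Not Soda's
# actual check syntax or API; thresholds, fields, and the webhook URL are
# illustrative assumptions.
import json

baseline = {"row_count": 100_000, "null_rate_email": 0.01}

def check_batch(rows, null_emails, tolerance=0.2):
    issues = []
    if abs(rows - baseline["row_count"]) > baseline["row_count"] * tolerance:
        issues.append(f"row_count {rows} deviates >20% from baseline")
    null_rate = null_emails / max(rows, 1)
    if null_rate > baseline["null_rate_email"] * (1 + tolerance):
        issues.append(f"null rate on email {null_rate:.2%} exceeds baseline")
    return issues

def alert(issues, webhook="https://hooks.example.com/data-alerts"):  # hypothetical URL
    payload = json.dumps({"text": "Data checks failed: " + "; ".join(issues)})
    # In a real pipeline this payload would be POSTed to the team's on-call
    # tool or Slack webhook; here we just print it.
    print(f"ALERT -> {webhook}: {payload}")

issues = check_batch(rows=60_000, null_emails=3_000)
if issues:
    alert(issues)
```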

Also read: Top 8 Data Migration Best Practices and Strategies

The Era of Data Best Practice

Moving forward, we’re not yet hearing IT vendors talk about data best practice as a defined principle or workflow objective; most are still just doling out the usual best practice messages, some of which will cover data.

But if data best practice does exist in the everything-as-code arena (and it arguably should), then we will need data reliability tools that work across the data product lifecycle. That will make it straightforward for data engineers to test data at ingestion, and it will allow data product managers to validate data before it is consumed in other tools.

This test and validation function is precisely what Soda is bringing to market.

“All checks can be written ‘as-code’ in an easy-to-learn configuration language,” said Masschelein. “Configuration files are version controlled and used to determine which tests to run each time new data arrives into a data platform. Soda supports every data workload, including data infrastructure, science, analysis and streaming workloads, both on-premises and in the cloud.”

Our world is now cloud-first, code-first, and data-first. With everything as code pushing data solutions forward, it will be important to keep an eye out on the future of data innovation.

Read next: Data Center Technology Trends for 2022

The Mainframe Brain Drain Modernization Game
https://www.enterprisenetworkingplanet.com/data-center/the-mainframe-brain-drain-modernization-game/ (Tue, 21 Dec 2021)

With mainframes still vital to many aspects of daily life, integrating them into modern infrastructure architectures poses new challenges.

Mainframes aren't going away. Despite the rise of cloud, the forthcoming promise of quantum computing, and the mass-scale ubiquity of datacenter operations capabilities, the installed base of mainframe applications out there, largely serving the financial sector, retailers, and enterprise conglomerates, appears to be something we will still be talking about for some time to come.

As IBM itself reminds us, every time you use an ATM or book an airline ticket, and most times you pay for something at the supermarket, you are interacting with a mainframe's computing power at some level.

There is also a brain drain in the skills needed to operate mainframes today. With mainframe operating systems and languages dismissed by many millennial and Generation-Z programmers as archaic relics of a bygone era, the industry has had to find new ways to tap the installed base of mainframe resources, whose high availability and capacity for high-volume input/output still have a place.

The Ultimate Cloud Mainframe Question Is…

If so-called mainframe modernization tools exist primarily to provide access to and control of pre-existing mainframe resources, do they then fail to also provide the advantages of the more granular and composable approach we take to modern technologies such as containerization, shift-left testing, infrastructure as code, and the whole smorgasbord of cloud-native tooling that we now think of as state-of-the-art?

If the discussion thus far sets out something of the state of the nation, then what kind of work is going on to engineer the incumbent global estate of mainframe resources into our modern stacks?

Of recent note is the work executed by enterprise system connectivity and data management specialist Software AG. The Germany-headquartered software house has joined the IBM Z and Cloud Modernization Center. Not a physical office as such (although somewhat cheesily described as a digital front door), this is an IBM service portal and tool-training website designed to help firms modernize their applications, data, and processes in an open hybrid cloud architecture.

Also read: Fighting API Sprawl in the Modern Cloud Maul

Open Hybrid Cloud Architecture

Those last four words were important. This is an ‘open hybrid cloud architecture’ that works to take down barriers to mainframe modernization for customers by using application programming interfaces (APIs) to connect mainframe applications to the cloud without altering any code. 

Both firms insist that APIs provide a non-invasive approach to modernization by creating real-time interactions between applications distributed across on-premises and multi-cloud environments. Software AG’s API-enabling mainframe integration solutions, webMethods and CONNX, are designed to give enterprises options to create reusable services from mainframe application code, screens or data.
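To picture the non-invasive pattern being described, here is a minimal sketch of a thin REST facade sitting in front of an unchanged mainframe transaction. The Flask framework, the route, and the call_mainframe_transaction() helper are illustrative assumptions, not Software AG's webMethods or CONNX products.

```python
# Minimal sketch of the non-invasive API facade pattern: a thin REST layer in
# front of an existing mainframe transaction, leaving the mainframe code
# untouched. Flask, the route, and call_mainframe_transaction() are
# illustrative assumptions, not any vendor's actual tooling.
from flask import Flask, jsonify

app = Flask(__name__)

def call_mainframe_transaction(account_id: str) -> dict:
    # Placeholder for the real integration layer (for example, a screen or
    # data adapter) that invokes the unchanged mainframe program.
    return {"account": account_id, "balance": "1234.56", "currency": "USD"}

@app.route("/api/accounts/<account_id>/balance")
def account_balance(account_id):
    return jsonify(call_mainframe_transaction(account_id))

if __name__ == "__main__":
    app.run(port=8080)
```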

IBM appears to be quite comfortable with stretching the definition of hybrid multi-cloud platform deployment to also include API-connected mainframe conduits. According to a recent IBM Institute for Business Value survey known as The hybrid cloud platform advantage, “The value derived from a full hybrid, multi-cloud platform technology and operating model at scale is 2.5 times the value derived from a single platform, single cloud vendor approach. Further, an IBM hybrid cloud transformation that integrates IBM Z can extend up to 5x the value of a public cloud only approach.”

The value here is said to come not only from system availability and transactional throughput prowess. IBM worked with strategy consulting company Hurwitz & Associates on a sponsored paper that found value derived from business acceleration, developer productivity, infrastructure cost efficiency, regulatory compliance and security, as well as deployment flexibility.

Also read: The Integration Chasm that is Killing Cloud

An Ability for Flexibility & Stability 

General Manager for Mainframe Solutions at Software AG Arno Theiss also appears to buy the ‘hybrid means mainframe too’ argument. Theiss insists that companies need the flexibility of the cloud for some applications and platforms, combined with the stability and ownership of an on-premises mainframe. 

“In a truly connected enterprise, data should be able to flow in any direction between applications, regardless of where they reside. This is the only way to deliver connected services to customers and make business operations more efficient. We will help IBM Z clients build a hybrid environment using APIs around their mainframes so it can connect to the cloud in a non-invasive way and address the risk of disrupting their core applications,” Theiss says.

Inside the IBM Z and Cloud Modernization Center the company promises to bring together tools, training workshops, global systems integrators and technology partners to create and execute a roadmap that is engineered to lower risk and maximize business value. 

Big Blue refers to the complete service as an “interactive digital journey”, which showcases comprehensive access to resources, capabilities and guidance for business professionals, IT executives, and developers alike. Through the Center, customers will gain access to a partner learning hub, including resources from Software AG and details about its API-enabling mainframe integration solutions.

Faced With a Choice, Take Both

Did we answer the big cloud mainframe question?

If we take Software AG’s Theiss’ words as gospel and accept that we need the cloud flexibility plus mainframe stability, then we can perhaps say yes, we still have the ability to embrace granularity and containerized composability, but with a serious side order of mainframe power as well. 

The real crux of this cloud-mainframe hybrid marriage will arguably come if we can make core mainframe-side—like ‘server-side’ with deeper guts—functions available for reuse (things like payment reconciliation or other back-end core data stream functions) without all the upper tier application white noise—functions like inventory, sales, logistics etc.—which can be upwardly abstracted to more modern, typically cloud-native, environments.

After all, The Creed for the Sociopathic Obsessive-Compulsive (Peter's Laws), much beloved of programmer/developer software engineers of all types, does state that when given a choice, take both.

The hybrid cloud mainframe combo platter is served, so caution: filling may be hot.

Read next: Managed Cloud Services for SaaS Companies

Fighting API Sprawl in the Modern Cloud Maul
https://www.enterprisenetworkingplanet.com/management/api-sprawl/ (Thu, 16 Dec 2021)

The new network malady is API sprawl. Learn how this happens and what can be done to stop it.

Technology often moves in circles. Back in the good old days (around the turn of the millennium), enterprise IT teams that were busy embracing cloud computing had a comparatively new headache on their hands. Virtual machine (VM) management proliferation, or VM sprawl as it was known, was the new bugbear.

VM sprawl is now increasingly handled by server-side autonomous management and forms of artificial intelligence (AI)-based automation directed at system-level operations. The ‘too many spinning plates’ headaches (figurative and literal) it caused do still exist, but they have given way to a more granular brain ache that stems from the same type of imbalance.

The new network malady is API sprawl, also potentially known as API abomination or API application anathema if we're looking for a snappier, more alliterative name tag. So what kind of API meltdown is happening, and what can we do about it?

A Latticework of Layers

If we accept the now well-worn suggestion that digital transformation is underway and enterprises are embracing cloud, mobile, data analytics, and device ubiquity, then we don't need to remind ourselves that computing is becoming an inter-networked lattice of layered services and tiers.

In this new IT fabric, the growing use of hybrid, multi-, and poly-cloud environments means everyone's application programming interfaces (APIs) are spread out all over the place. They're all built with different standards, gateways, frameworks, policies, and so on, depending on the environment they live in. It's a Wild West of API sprawl inside the new cloud maul.

Trying to manage all those APIs—keeping them secure, setting governance policies, maintaining performance plus availability and so on—is like herding cattle. Developers are forced to constantly switch from dashboard to dashboard as they hop between different sets of APIs to keep their house in order.

Also read: What You Need to Know About Cloud Automation: Tools, Benefits, and Use Cases

An API Lasso

The answer from integration- and management-centric API platform company MuleSoft is to give everyone a universal platform that can lasso any environment and round APIs up into a central corral, so developers and ‘business technologists’ (people outside IT who use APIs to build new things for themselves) can head to one place to manage and access them all.

The company this month detailed its latest universal API management capabilities designed to enable IT teams to securely create, manage and govern any API across any environment. The universal API management capabilities—including Anypoint Flex Gateway, API Manager, API Experience Hub, API Designer with event-driven capabilities and API Governance—are built directly on Anypoint Platform, MuleSoft’s own branded platform for integration, API management and automation. 
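As a generic illustration of that "central corral" idea, the sketch below pulls API metadata from several environment-specific catalogs into one inventory and flags governance gaps. The environments, sample entries, and required fields are hypothetical and are not MuleSoft's Anypoint Platform API.

```python
# Generic sketch of rounding up APIs from several environments into a single
# inventory and flagging governance gaps. The environments and metadata fields
# are hypothetical, not MuleSoft's Anypoint Platform API.

# In practice each catalog would be fetched from that environment's gateway or
# management API; static samples keep the sketch self-contained.
CATALOGS = {
    "aws-prod": [
        {"name": "orders-api", "owner": "commerce", "auth_policy": "oauth2", "version": "v3"},
    ],
    "azure-dev": [
        {"name": "legacy-billing", "owner": None, "auth_policy": None, "version": "v1"},
    ],
}

REQUIRED_FIELDS = ("owner", "auth_policy", "version")

def build_inventory(catalogs):
    inventory, violations = [], []
    for env, apis in catalogs.items():
        for api in apis:
            inventory.append(dict(api, environment=env))
            missing = [field for field in REQUIRED_FIELDS if not api.get(field)]
            if missing:
                violations.append((env, api["name"], missing))
    return inventory, violations

inventory, violations = build_inventory(CATALOGS)
print(f"{len(inventory)} APIs catalogued; {len(violations)} governance violations")
for env, name, missing in violations:
    print(f"  {env}/{name} missing: {', '.join(missing)}")
```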

MuleSoft reminds us that with the proliferation of digital touchpoints and the need to create seamless experiences for employees and customers, companies are creating more APIs than ever before. The firm points to its own research, which suggests that the average enterprise organization today uses over 800 applications and that 96% of organizations currently use public or private APIs, up from 80% last year.

“Companies now have to manage and compose thousands of APIs spanning different teams, environments, and technologies,” said Meir Amiel, chief product officer, MuleSoft. “[Our] universal API management capabilities bring companies closer to achieving the composable business vision, by allowing them to choose and integrate best-of-breed solutions and compose new services using any API.” 

Also read: Juniper Launches Software Agent to Protect Applications in Cloud, On Premises

Composable Complexity vs. Consistency & Compliance

If there are opposing forces in action here, they are the composable complexity of cloud versus the need to strive for consistency, compliance, and compatibility.

There is much talk among the behemoth cloud hyperscalers and their dutiful litany of partner practitioners, protagonists and (even among some) pretenders about the need to enable so-called 'unlocked innovation' and to use cloud as it was intended, with true breadth and flexibility. Without the ability to shift workloads across clouds effectively, that innovation channel remains closed off, or at least partially blocked, for many.

To navigate these hybrid and distributed ecosystems, IT teams can use MuleSoft's new universal API management capabilities on Anypoint Platform to design, build, deploy, operate, and discover all of their organization's APIs. MuleSoft also aims to help companies operationalize governance across all of their enterprise APIs so they can comply with industry regulations and internal design standards without adding friction to development.
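For a flavor of what that governance looks like in practice, here is a minimal, vendor-neutral sketch (it does not use MuleSoft's Anypoint tooling, which this article only describes at a high level) of the kind of check a universal governance layer runs continuously: walking a folder of OpenAPI specifications and flagging any operation that declares no security requirement. The folder layout and the rule itself are hypothetical.

```python
# Hypothetical, vendor-neutral governance check: flag any OpenAPI operation
# that carries no security requirement, across every spec in one place.
import glob

import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def ungoverned_operations(spec_path):
    """Yield (path, method) pairs that have no security requirement."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    spec_wide_security = spec.get("security")  # spec-level default, if any
    for path, item in (spec.get("paths") or {}).items():
        for method, operation in item.items():
            if method not in HTTP_METHODS:
                continue  # skip shared parameters and extensions
            if not operation.get("security", spec_wide_security):
                yield path, method

if __name__ == "__main__":
    # Hypothetical layout: every team checks its spec into one repository.
    for spec_file in glob.glob("apis/**/*.yaml", recursive=True):
        for path, method in ungoverned_operations(spec_file):
            print(f"{spec_file}: {method.upper()} {path} has no security requirement")
```

The point is less the rule than the single place it runs from: governance tooling applies many such checks across every environment, rather than leaving each team to police its own gateway.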

But MuleSoft is not the only player; security stalwart F5 also plays heavily in this space.

Rajesh Narayanan, senior director and distinguished technologist at F5, says that enterprises, irrespective of size, are a combination of product and IT services developed on behalf of the business units that make up the organization.

“Enterprises are naturally siloed with information shared on a need-to-know basis. As enterprises expand, so do the various business units, product teams and operational teams. In essence, the business ‘sprawls’. Because every team and business unit today relies significantly on APIs, we can see the inevitable result is API sprawl,” said Narayanan.

Looking at where we go next, Narayanan said a new approach is needed to address the challenges that will arise from API sprawl, because existing solutions focus on challenges within a cluster; that is, on managing APIs within a single microservices environment.

“Existing solutions have not yet expanded their scope to address the challenges of API sprawl across clusters; that is, between microservices environments that span locations, business units, and product and operational teams,” wrote Narayanan, in a technical discussion blog jointly authored by Lori MacVittie in her role as F5 principal technical evangelist.

Hybrid, distributed ecosystems have become the norm, which adds complexity to the IT landscape. According to Deloitte, 97% of IT managers are planning to take a best-of-breed approach by distributing workloads across two or more clouds to boost resilience and support regulatory requirements. These distributed ecosystems result in data silos, limited reuse, and inconsistent governance and security across services.

There’s also the spectre of limited visibility with many management consoles across cloud vendors. API management has clearly become an entire sub-genre and sub-discipline of cloud computing in and of itself. 

Read next: The Integration Chasm that is Killing Cloud

The Integration Chasm that is Killing Cloud https://www.enterprisenetworkingplanet.com/data-center/the-integration-chasm-that-is-killing-cloud/ Wed, 01 Dec 2021 18:15:30 +0000 https://www.enterprisenetworkingplanet.com/?p=21919 Obstacles exist in the cloud computing model of service-based IT delivery. Here is how better integration can help.

Cloud is eminently flexible. That's the core technology proposition, the underpinning architectural substrate, and the central truth driving the technology's development. But for all that flexibility, obstacles exist in the cloud computing model of service-based IT delivery. There's no such thing as an easy-to-integrate multi-poly-cloud set of instances spread around a variety of private, public, and hybrid data center pipes.

Multi-poly-cloud Fundamentals

Before we look at the connection and integration chasm that exists (and hopefully offer some routes around the gulf in front of us), let's remind ourselves how diverse the cloud landscape is. As we know, there is private cloud, there is public cloud, and there are hybrid deployments that span both resource types.

There is also multi-cloud, a deployment scenario where enterprise organizations use more than one cloud services provider (CSP) to host their workloads. The spread here is generally designed to optimize workloads according to different CSP price points, the availability of specialized tools, the presence of cloud service optimizations (extra storage, better transactional capacity, super-charged processing or analytics, and so on), and sometimes simple brand loyalty.

Finally, within the scope of this discussion at least, there is also poly cloud. This is where individual component parts of an application or data service are ‘separated out’ across different clouds based on optimizations similar to those noted above.
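To make that separation concrete, here is a purely illustrative sketch that assumes one plausible pairing, AWS for object storage and Google Cloud for event streaming; the bucket, project, and topic names are placeholders rather than a recommended design.

```python
# Illustrative poly-cloud sketch: one application, components split across CSPs.
# Bucket, project, and topic names are placeholders.
import json

import boto3                         # AWS SDK for Python
from google.cloud import pubsub_v1   # Google Cloud Pub/Sub client

s3 = boto3.client("s3")
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "order-events")

def store_and_announce(order_id: str, document: bytes) -> None:
    """Keep the raw document on AWS; notify the analytics pipeline on Google Cloud."""
    # Component one: durable object storage lives on AWS S3.
    s3.put_object(Bucket="example-orders-raw",
                  Key=f"orders/{order_id}.json",
                  Body=document)

    # Component two: event streaming for analytics lives on Google Cloud Pub/Sub.
    event = json.dumps({"order_id": order_id, "bucket": "example-orders-raw"})
    publisher.publish(topic_path, data=event.encode("utf-8")).result()
```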

Why does all of this matter? There is a weight of industry momentum driving multi (and also poly) cloud. Analyst house IDC thinks that multi-cloud will “achieve total ubiquity in enterprise IT by 2022” no less, at which point 90% of enterprises will depend on multiple private and public clouds.

As we come through what the world prays might be the tail end of the COVID-19 pandemic, cloud deployments will clearly expand further; nobody needs reminding how cloud's remote-connectivity DNA has kept the world functioning throughout 2020 and beyond.

Also read: 5G Drives Collaboration Between Carriers, Cloud Infrastructure Providers

Cloud Flexibility isn’t Working

Despite all of this, multi-cloud isn't delivering. IDG, in a separate report, revealed that 79% of organizations are struggling to realize the synergies of employing multiple platforms. Flexera's 2021 State of the Cloud report details what that looks like: workloads firmly siloed on different clouds in almost half of cases.

“Freedom of choice and the ability to shop around are great, but if the digital ecosystem of providers you use don’t work together then the teams who build, manage and secure your infrastructure will be saddled with disconnected tools, siloed technologies and manual processes,” says Mandi Walls, DevOps advocate at digital operations management company PagerDuty. “They will lack the information to see what’s happening in your infrastructure and an effective means to respond, delaying changes and taking longer to solve technology problems.”

Walls highlights the disconnects with some eloquence, but she also offers suggestions for overcoming the lack of integration that now hampers multi-poly-cloud interoperability and the orchestration we all seek.

One option, the PagerDuty team says, is to keep the technology stack simple. In practice that means working to standardize on, say, virtual machines, network fabric, load balancers, and so on. But this loses the elastic benefits of being cloud native.

Other options exist, too. Walls points out that teams can work only with common, overlapping architectures, but this path is not simple: even with Kubernetes, there are at least 67 certified distributions, according to the CNCF.

“Alternatively, you can build an engineering operation capable of taming different tools and services. This means becoming something akin to a systems integrator, with engineering teams building and maintaining multi-clouds around best of breed. It’s the kind of approach undertaken by U.S. retail giant Target, with 4,000 engineers working on its Google and Microsoft multi-cloud,” says Walls.

The core problem with an approach of this kind (or one redolent of it) is that it requires substantial investment in both dollars and engineering hours, all of which goes toward 'simply' building and maintaining a control plane rather than establishing differentiation.

Walls and team insist that a more accessible approach is a model where tools, technologies, services, and vendors are pre-integrated in a way that connects teams and information resources in real time. That model means infrastructure can be managed smoothly and reliably, and it allows teams to communicate and collaborate effectively. It might even help enterprises realize the benefits of multi-cloud and finally cross the chasm.

Also read: Public vs. Private Cloud: Cloud Deployment Models

Three Cornerstones of Integration

There are many routes, channels, types, platforms, tools, and processes that make up the technology sub-universe that is integration. Some IT vendors style themselves as integration specialists (TIBCO is a good example, although that company has since moved on to wider cloud and data platform status), and we can think about integration in the multi-poly-cloud arena from three perspectives.

Workflows

Having eaten its own dog food and worked through the processes needed to understand how workflows should be handled, PagerDuty is well placed to comment on this aspect.

According to Walls, workflows establish procedures that people, teams and systems follow in lifecycle management of applications and in response to IT emergencies. Integrated across the digital ecosystem, workflows provide the consistency DevOps teams need to work at scale; they help teams respond to events and work with IT to fix them quickly and efficiently according to a targeted and prescribed plan.

“Workflows can be codified for lists of approved tools, technologies and platforms; they can, for example, state how to run Python with containers. Workflows should start small and grow as your cloud changes and they evolve as your business and customer needs change,” says Walls.

Automation

Automation is the rails on which workflows run. Walls explains that automation triggers workflows and processes without the manual intervention that is typically inefficient and risks delaying the response to events.

Blasting, sweeping, silo-bursting

“Automation blasts through the process silos between teams and technologies; it’s a means to sweep up mundane activities such as provisioning a node or rolling out an update. It’s a way to ensure the right members of a team are alerted at the right time when there is an IT-related incident to solve,” clarifies Walls.
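As a minimal sketch of that kind of automation (the routing key, service URL, and severity below are placeholders, not a prescribed configuration), a health-check script can open a PagerDuty incident through the publicly documented Events API v2 the moment a service stops responding.

```python
# Minimal automation sketch: page the right team when a health check fails.
# The routing key and service URL are placeholders.
import requests

EVENTS_API = "https://events.pagerduty.com/v2/enqueue"
ROUTING_KEY = "<your-integration-routing-key>"

def check_and_alert(service_url: str) -> None:
    """Probe a service and trigger a PagerDuty incident if it is unhealthy."""
    try:
        healthy = requests.get(service_url, timeout=5).status_code == 200
    except requests.RequestException:
        healthy = False

    if not healthy:
        event = {
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",                   # trigger | acknowledge | resolve
            "dedup_key": f"healthcheck:{service_url}",   # groups repeated alerts
            "payload": {
                "summary": f"Health check failed for {service_url}",
                "source": service_url,
                "severity": "error",                     # critical | error | warning | info
            },
        }
        requests.post(EVENTS_API, json=event, timeout=10).raise_for_status()

check_and_alert("https://example.internal/healthz")
```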

Visibility

We can reasonably suggest that integration is the cornerstone of visibility: a means of looking inside your technology operations. Done well, integration gives teams the information they need to act. Combined with workflows and automation, visibility also means teams can work effectively with others as needed.

With visibility threaded throughout the ecosystem, teams gain the added flexibility of working with the tools they want, which supports productivity, rather than being forced onto prescribed tools.

Walls arguably has exactly the right level of insight to sum up this whole analysis. She underlines it by saying that multi-cloud may have prevailed, but its attributes can act as hurdles to its benefits.

“Without integration across your digital ecosystem—between tools, technologies, and providers—siloes remain, workflows are truncated, and automation operates on an island. Integration provides the tunnel through which workflows can be combined, systems orchestrated, and processes automated for IT to scale and manage your infrastructure at a platform level,” she says.

If we can get all of these fundamentals under our belts, then we can perhaps feel better about the extended use of multi-poly-cloud implementations in the future.

Read next: What You Need to Know About Cloud Automation: Tools, Benefits, and Use Cases

Falco Rocks AWS Cloud Security One Louder https://www.enterprisenetworkingplanet.com/security/falco-rocks-aws-cloud-security-one-louder/ Fri, 05 Nov 2021 19:09:34 +0000 https://www.enterprisenetworkingplanet.com/?p=21803 Sysdig’s Falco is an open-source software project positioned as a ‘de facto’ detection engine for containers. Here is why that matters.

Cloud is open. Or perhaps more accurately, many of the fastest-growing and most widely deployed technologies currently playing out and evolving across the global cloudscape are open source.

The inherent openness that spans much of the cloud ecosystem creates a wide variety of gateways for deployment. The last two decades have seen us move rapidly through initial notions surrounding public and private cloud to (in so many instances) settle upon a realization that a hybrid combination of both is often the most prudent configuration.

As the distributed nature of hybrid cloud continues to widen, enterprises are adopting multi-cloud by using more than one Cloud Services Provider (CSP) and, in some cases, poly-cloud deployments, where single application and data services workloads are ‘separated out’ across multiple instances on multiple CSPs.

Widened Toolset Mechanics

Aiming to provide a new thread of security control management across the undeniably uneven and fragmented surface on planet cloud is Sysdig. The company used its appearance at KubeCon + CloudNativeCon North America 2021 this fall to explain how the Falco open source software project is widening its toolset mechanics.

Falco is a cloud-native runtime security project. Sysdig positions it as a 'de facto' detection engine for containers and Kubernetes (it has over thirty million downloads, so perhaps not quite a de facto industry standard just yet). Created by Sysdig and contributed to the Cloud Native Computing Foundation (CNCF), Falco is now an 'incubation level' hosted project.
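For readers who have not met Falco's rule language, a deliberately stripped-down rule of the kind the engine evaluates against Linux system calls might look like the sketch below; the rules that ship with Falco are richer and lean on shared macros, so treat this as illustrative rather than production-ready.

```yaml
# Simplified sketch of a Falco rule evaluated against Linux system calls.
# Falco's bundled rules are more complete and rely on shared macros.
- rule: Interactive shell spawned in a container (sketch)
  desc: Detect an interactive shell starting inside any container.
  condition: >
    evt.type = execve and evt.dir = < and
    container.id != host and
    proc.name in (bash, sh, zsh) and
    proc.tty != 0
  output: >
    Interactive shell in container (user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```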

Now aligning with Falco is Amazon Web Services (AWS) CloudTrail, an AWS service that helps organizations manage governance, compliance, and operational risk auditing of their AWS accounts. A new CloudTrail plug-in provides real-time detection of unexpected behavior, configuration changes, intrusions, and data theft in AWS cloud services using Falco rules.

The Falco community developed this extension with Sysdig based on a new plug-in framework that allows any systems engineer or software developer to extend Falco to capture data from additional sources beyond Linux system calls and Kubernetes audit logs. 

Also read: Public vs. Private Cloud: Cloud Deployment Models

Consistent Distributed Threat Detection 

Loris Degioanni, founder and chief technology officer at Sysdig, points to the reality of organizations having to manage critical data across multiple clouds. He says they need consistent threat detection across their distributed environments.

Additional plug-ins will allow organizations to use a consistent threat detection language and close security gaps by using consistent policies for workloads and infrastructure. In addition, more than twenty new out-of-the-box policies supporting compliance frameworks were released.

Falco inspects cloud logs using a streaming approach, applying the rules to the logs in real time and immediately alerting on issues, without the need to make an additional copy of the data. This approach complements static cloud security posture management by continually checking for unexpected changes to configurations and permissions that can increase risk.
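Applied to CloudTrail, the same rule language might read something like the sketch below. The ct.* field names follow the plug-in's convention of exposing CloudTrail attributes under a ct. prefix, but the exact names and condition here are assumptions made for illustration rather than a copy of the shipped ruleset.

```yaml
# Illustrative only: the ct.* field names are assumptions based on the
# plug-in's CloudTrail-prefixed fields, not copied from the shipped rules.
- rule: CloudTrail logging disabled (sketch)
  desc: Someone switched off the audit trail in an AWS account.
  condition: ct.name = "StopLogging" and not ct.error exists
  output: >
    CloudTrail logging was stopped (user=%ct.user region=%ct.region
    source_ip=%ct.srcip)
  priority: CRITICAL
  tags: [cloud, aws, cloudtrail]
```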

Today, security teams are forced to export AWS CloudTrail logs into a data lake or a security information and event management (SIEM) platform for processing, and then search for threats and changes to configurations that can indicate risk. This approach adds delay in identifying risks, as well as cost and complexity.

Cloud and security teams struggle with an ever-growing list of tools to master and manage. Falco provides a single tool for threat detection across container and cloud environments, reducing complexity by reducing the number of tools in the stack.

With this technology, teams can apply the same 'rule language' to create consistent policies for workloads and infrastructure, removing security gaps. And because there is a shortage of talent in both cybersecurity and DevOps, reducing the learning curve with consistent threat detection tools is critical.

Also read: Managing Security Across MultiCloud Environments

Cloud’s Next Challenge

The story thread here arguably points to cloud computing's next major challenge: consistency in the face of interchangeability. We know that no two cloud instances are necessarily equal (clouds can be optimized for radically different operational performance parameters), and that's just inside the delivery framework of a single CSP.

If we span that differentiation factor out over a handful of CSPs (mainly AWS, Google, and Azure, though there are others) and think about the multi-poly-cloud combinations currently being built, it's easy to see where mismatches and incompatibilities will crop up.

This is what we’re hearing so much about efforts to keep the Kubernetes container orchestration technology relevant wherever it is applied. We don’t want cloud connected ubiquity to fall over just because one database is configured in a different way to another one across different parts of an IT stack, so we need to be able to create a template and fit it with one set of spanners wherever we are working.

In Falco’s case that spanner is a threat detection tool, but there are wrenches and levers for all the internal mechanics of an operational cloud.

“The Falco plug-in capability gives DevOps and security teams a single threat detection tool with a single rules language across container and cloud environments. This allows users to create consistent policies for workloads and infrastructure and close security gaps,” says Chris Aniszczyk, CTO of Cloud Native Computing Foundation. “The basis is now in place for rapid innovation by the community to extend Falco to additional cloud environments.”

The new plug-in capability and framework have been contributed by the Falco community and Sysdig to the project over the last few months. As of now, the AWS CloudTrail plug-in is available for use in preview mode and contributors can build new plug-ins on the framework.

Cloud is still open, cloud is still interoperable, and cloud is still eminently engineered for interchangeable integration and interconnectedness, but we still have work to do. Nobody should take a lump hammer to a cloud connection point that at first appears to need forcing; that is precisely the type of action that can lead to the vulnerabilities Falco is working to address.

A safer cloud is a more tuneful cloud. Even Amadeus himself would agree on that.

Read next: Best Enterprise Cloud Migration Tools & Services 2021
