OS Archives | Enterprise Networking Planet

How to Block a Program with Firewall in 7 Easy Steps

Like all other firewalls, Windows Defender Firewall acts as a barrier and gatekeeper, monitoring incoming and outgoing network traffic and allowing or blocking it based on security rules. Through its advanced security features, users can also configure the firewall to block a program.

How to block a program in Windows Firewall

Blocking a program with Windows Firewall is straightforward if you have the right steps. The process is very similar for Windows 10 and 11. Here is how to block a program in seven simple steps.

1. Open Windows Defender Firewall

The first step involves accessing the Windows Firewall Advanced Setting configuration, where you will make the necessary adjustments and configure the firewall to block a program of your choice.

There are many ways to open the Windows Defender Firewall. You can open it through the search bar at the bottom left of your screen, or open the Run window by pressing the Windows + R keys and typing in wf.msc, which opens the firewall's Advanced Security console directly.

To access Windows Defender Advanced Settings:

  1. Open Windows Defender Firewall by searching or using run commands. 
  2. On the left menu of the Windows Defender Firewall window, click on Advanced Settings (Figure A).
Figure A. Advanced Settings in Windows Defender Firewall.

2. Access Outbound Rules

As previously mentioned, Windows Firewall works through a set of security rules. While most users keep the default settings, Windows Defender Firewall is far more configurable than those defaults suggest.

Creating inbound and outbound traffic rules is one of the main ways to strengthen security and isolate parts of your network.

In this step, you will access the outbound rules to block a program with the firewall.

To do this, click on Outbound Rules in the left pane of the Windows Firewall window (Figure B).

Figure B. Outbound Rules in Windows Defender Firewall.

3. Create new Outbound Rule

Creating firewall rules may seem overwhelmingly technical for nonadvanced users. However, if you follow the steps below, it is a simple and effective way to block programs.

To create a new outbound rule, click on New Rule… under Actions in the right pane of the Windows Firewall window (Figure C).

Figure C. Creating new Outbound Rule in Windows Firewall.

4. Select Program from the Rule Types

This step is essential, as it defines what type of rule you want to create. Once you click New Rule…, you will see four options: Program, Port, Predefined, and Custom. Each option serves a different function:

  • Program rules block or allow apps and software.
  • Port rules control connections for TCP or UDP ports.
  • Predefined rules control connections for the core Windows experience.
  • Custom rules let you define your own combination of programs, ports, protocols, and addresses. 

Because you are trying to block a program, select Program from the four types of rules (Figure D) and click Next.

Figure D. Select Program from the four types of Rules options.

You will now have two options: to apply the rule to All programs or to block a specific program.

To block a specific program, select This program path: and click Browse (Figure E). A file browser will open so you can locate the .exe file of the program you wish to block.

Figure E. Blocking specific programs in Windows Firewall.

Once you have located and selected the program you want to block, click Next.

5. Block the connection

At this stage, Windows Firewall will move on to Action. You will then have three options: 

  • Allow the connection
  • Allow the connection if it is secure
  • Block the connection

Select Block the connection and click Next (Figure F).

Figure F. Blocking the connection to a program on Windows Firewall.

6. Set profiles

The final step involves setting the profile and choosing a name for your rule so you can quickly identify it in the future.

On the New Outbound Rule Wizard window, you must answer the question, “When does this rule apply?”

You can select one, two, or all three options listed:

  • Domain: The rule will apply when your computer is connected to its corporate domain.
  • Private: The rule will apply when the computer is connected to a private network, such as your home.
  • Public: The rule will apply when connected to a public network.

Select the type of profile or profiles you want the rule to apply to, and click Next (Figure G).

Figure G. Select the new firewall rule profile(s).

7. Name the rule

Finally, you must choose a name for your rule and add a description.

Do not use a generic name; it will only make the rule difficult to find when you need to deactivate or delete it.

Even though it’s optional, adding a description is important, especially if you are creating several rules, as it may help you remember the rule’s specifics and why you created it.

Once you’ve entered your name and description (Figure H), click Next.

Figure H. Name your new firewall rule and provide a description.

You should now see the new rule under the list of all Outbound Rules in the Windows Firewall Advanced Settings center panel (Figure I).

Figure I. Confirm the new rule is listed under the Outbound Rules menu.
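
If you prefer working from the command line, the same kind of outbound block rule can be created with PowerShell. The sketch below is a minimal example run from an elevated PowerShell prompt; the display name and program path are placeholders you would replace with your own values:

New-NetFirewallRule -DisplayName "Block Example App (Outbound)" -Direction Outbound -Program "C:\Program Files\ExampleApp\ExampleApp.exe" -Action Block -Profile Domain,Private,Public

You can then confirm the rule exists with Get-NetFirewallRule -DisplayName "Block Example App (Outbound)".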

Can you temporarily block a program with a firewall?

You can temporarily block a program using Windows Defender Firewall’s Allowed Apps and Features tool. 

To temporarily block a program:

  1. Open Windows Defender Firewall by searching or using run commands.
  2. In the left pane, click Allow an app or feature and then click on Change settings.
  3. You will now see a list of all your apps and programs. Search the list to find the program you want to block. If you do not find the program, use the Browse option to locate it. Then select the program and click Add to add it to the list. 
  4. You will notice that some of the programs in the list are checked in the left checkbox, while others are not. Those that are checked are currently allowed by Windows Firewall. Uncheck the program you want to temporarily block and click OK to save your changes (Figure J).

Note that you can decide whether to block the program on Private networks, Public networks, or both.

Figure J. Temporarily blocking programs, apps, and features with Windows Firewall.
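
If you created a block rule in PowerShell, as in the earlier sketch, you can get the same temporary effect by switching the rule off and on instead of deleting it. The display name below refers to the hypothetical rule from that example:

Disable-NetFirewallRule -DisplayName "Block Example App (Outbound)"
Enable-NetFirewallRule -DisplayName "Block Example App (Outbound)"

Disabling the rule lets the program connect again; re-enabling it reinstates the block.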

Bottom line: Blocking programs in Windows Firewall

Firewalls are essential for data privacy and security, and Microsoft's Windows Defender Firewall performs this role well.

Though customizing firewalls may seem to be something only the most advanced users should be doing, nothing is further from the truth.

If you need to block a program, an app, or a feature, all you need to do is follow the simple steps outlined in this guide.

Thinking about moving on from Windows Firewall? We reviewed the best firewall software to protect your network.

Linux Virtual Memory: Optimizing Virtual Memory on Linux

Virtual memory is one of the most essential elements of an operating system (OS), and Linux's virtual memory implementation works differently from that of other OSs.

Linux virtual memory uses techniques such as copy-on-write, demand paging, and page aging to improve performance, keeping only the pages a process actually needs in physical memory and moving inactive pages out to disk.

This article will explain how Linux virtual memory works and provide a brief tutorial on how you can set up virtual memory on your own Linux machine.

How virtual memory works in Linux

Virtual memory is often thought of as nothing more than a way to extend a system's physical RAM, but Linux's virtual memory subsystem performs considerably more sophisticated tasks.

Linux virtual memory allows each process to have its own private address space, even though there may not be enough physical memory to map all the addresses in all processes simultaneously.

It achieves this using a technique known as "paging." Paging moves fixed-size pages of memory between physical RAM and disk storage, depending on how frequently they are used.

Paging also lets the OS give each process only the physical memory it actually needs. When a new process is created, the OS builds a virtual address space for it. Because that address space is generally much larger than what the process uses at any given moment, only the pages the process actually touches are backed by physical memory; pages that later have to be evicted are kept on disk in a swap file or swap partition.
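
You can watch this paging activity on a running system with vmstat, a standard tool on most distributions; its si and so columns report the amount of memory swapped in from and out to disk per second:

vmstat 1 5

Here 1 is the sampling interval in seconds and 5 is the number of samples; both values are arbitrary examples.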

Linux virtual memory techniques

Using Linux virtual memory techniques, the system can run more (and larger) processes than physical RAM alone would allow, keep them running simultaneously, and make full use of the available physical memory.

Linux virtual memory techniques include copy-on-write, demand paging, and page aging.

  • Copy-on-write: When memory is duplicated (for example, when a process forks), Linux does not immediately allocate new physical pages. Parent and child share the same physical pages, which are marked read-only, and only when one of them writes to a shared page does the kernel copy it to a new physical page for that process. This can save a lot of physical memory, especially if multiple processes are using the same data.
  • Demand paging: Linux loads a page into physical memory only when a process actually accesses it. This speeds things up and avoids wasting RAM on pages that are never needed.
  • Page aging: Linux keeps track of how recently every page of memory has been used. Inactive or rarely accessed pages are the first candidates to be swapped out to disk, while the most frequently used pages are kept in physical memory.

The Buddy and the Slab Allocator

The Linux kernel has two main memory allocators: the Buddy Allocator and the Slab Allocator.

The Buddy Allocator

The Buddy Allocator manages the system's physical page frames, handing out valid pages when the kernel asks for them. It also maintains lists of free pages and keeps track of the different zones of memory addresses.

The Buddy Allocator is a general-purpose memory allocator that can serve requests across a wide range of sizes. It works by splitting memory into power-of-two-sized blocks: each block can be split into two smaller "buddies," and freed buddies are merged back together. Each time memory is requested, the allocator finds the smallest block that is large enough to satisfy the request and allocates it.

The Slab Allocator

The Slab Allocator is a layer on top of the Buddy Allocator and provides the ability to create caches of objects in memory. It is used to allocate memory for small, frequently used objects.

The Slab Allocator groups objects of the same size together into a slab. When an object is requested, the Slab Allocator checks its cache for a free, preconstructed object of that type.

If a free object is available, it is returned immediately. If not, the Slab Allocator grows the cache by requesting more pages from the Buddy Allocator, constructs a new object, and returns it.

Linux memory pages

Linux classifies different types of pages in the virtual memory as:

  • Free pages: Pages of memory not used by any process at the time and available to be allocated to a process that needs them.
  • Active pages: Pages of memory in use by one or more processes. These are not available to other processes until they are no longer being used.
  • Inactive pages: Pages of memory that have not been accessed recently. These are the first candidates to be swapped out to disk to free up physical memory.
  • Dirty pages: Pages in use by one or more processes that have been modified since they were last read from or written to disk. Before these pages can be swapped out, their contents must first be written back to disk.

Linux kernel tasks

Linux also has a number of kernel tasks for specific aspects of virtual memory management.

For example, the page fault handler, responsible for managing page faults, is activated when a process attempts to access a page of memory that is not in physical memory. The page fault handler brings the page from the disk into physical memory and resumes the execution of the process.

The memory pager is responsible for swapping pages of memory to disk. It is used by the OS when it needs to free up physical memory. The memory pager selects the pages of memory that are swapped by identifying which are not being used frequently.

Similarly, when a process requests access to a page that has been swapped out to disk, that page is read back into physical memory (again through the page fault path) before execution continues.
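
The kernel exposes counters for this activity in /proc/vmstat: pgfault and pgmajfault count page faults (total and major, respectively), while pswpin and pswpout count pages swapped in and out. A quick, read-only check:

grep -E 'pgfault|pgmajfault|pswpin|pswpout' /proc/vmstat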

Setting up virtual memory in Linux with tunable parameters

Tunable parameters are kernel settings that can be adjusted to improve the performance of your system. They can be adjusted in real-time using the sysctl command or in the /etc/sysctl.conf file.
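
For example, on most modern distributions you can read a parameter's current value and then make a change persist across reboots by dropping a file into /etc/sysctl.d/ (a minimal sketch; the file name 99-vm-tuning.conf is an arbitrary choice, and root privileges are required):

sysctl vm.swappiness
echo "vm.swappiness = 10" | sudo tee /etc/sysctl.d/99-vm-tuning.conf
sudo sysctl -p /etc/sysctl.d/99-vm-tuning.conf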

Remember that changing parameters can have negative impacts on your system and affect your performance.

Let’s look at the most important tunable parameters for virtual memory.

vm.swappiness

vm.swappiness is a parameter that controls how aggressively the kernel will swap out pages of memory to disk.

When set to a value of 0, the kernel avoids swapping pages out of physical memory for as long as possible; when set to 100, it swaps pages out aggressively as soon as they stop being actively used. The default value is 60.

For example, to set vm.swappiness to 10, you would use the following command:

sysctl -w vm.swappiness=10

vm.dirty_ratio

This parameter controls how much of the system's memory can fill up with dirty pages (the kernel's writeback cache) before the processes generating those writes are forced to write data out to disk themselves.

A lower value forces writeback to start sooner; a higher value lets more dirty data accumulate in memory before writes are forced. The system default value is 20 (percent).

To change this parameter to 80 in real-time, with immediate effect, use the following command:

sysctl -w vm.dirty_ratio=80

vm.dirty_background_ratio

vm.dirty_background_ratio defines how much of memory can fill with dirty pages before the kernel's background writeback threads start flushing them to disk.

A lower value triggers background writeback sooner; a higher value lets dirty pages accumulate longer before the background flush begins. The default value is 10.

The code to set the parameter at 25 is:

sysctl -w vm.dirty_background_ratio=25

This immediately switches the parameter to 25. With this setting, the kernel starts writing dirty pages to disk in the background only once dirty data reaches 25% of memory, later than the default of 10%, so background writes happen in larger, less frequent batches.

If the writeback cache is too full, the kernel may have to start writing dirty pages to disk in the foreground, causing noticeable slowdowns.
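
To see how much dirty data is currently waiting to be written back, and to confirm the thresholds in effect on your system, you can read the kernel's own counters through the standard /proc interfaces:

grep -i dirty /proc/meminfo
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio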

What are the best practices for managing memory in Linux?

There are several techniques and good approaches to take when managing virtual memory in Linux systems.

You can use a swap file or a partition, a portion of your hard drive, to store pages of memory that are not in use by any process. This improves performance by freeing up physical memory.
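
As an illustration, here is a minimal sketch of creating and enabling a 2GB swap file; the path /swapfile and the size are arbitrary examples, and root privileges are required:

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

To keep the swap file active across reboots, add the line /swapfile none swap sw 0 0 to /etc/fstab.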

You can also check the amount of physical memory and swap on your system with free -h, see which swap areas are active with swapon --show, and watch paging activity in real time with vmstat.

Other common good practices, such as keeping your computer in good shape, closing applications you are not using, and deleting unnecessary files, can also help improve your virtual memory management.

It's also critical to keep your Linux OS up to date, as new kernel releases regularly bring memory management improvements and bug fixes.

Bottom line: Using Linux virtual memory to improve performance

Linux virtual memory is an excellent feature, but understanding how it works is critical to improving its performance.

Armed with the information in this article—from how it works, how it defines and classifies pages, and what techniques it utilizes to the most used parameters to better configure a system—you should be able to start implementing this valuable feature.

Always remember that changing these parameters can negatively impact your system and requires testing, but if you are careful and take the right steps, you can significantly improve performance.

We analyzed, selected, and reviewed the best network virtualization software to further boost server performance.

Top 5 Web and Internet OSs for Enterprises in 2023

An operating system (OS) is software that manages a computer’s hardware and software resources, enabling users to interact with the computer, manage systems files, install software, and execute programs.

A web operating system (also known as a webtop or dummy OS) is an internet-based user interface that enables users to access and use various applications and services through the cloud without installing software on their local computers. 

Web OSs provide users many features and capabilities, including file storage and management, online collaboration tools, email, media playback, and productivity applications. With these OSs, users can perform various tasks, such as creating documents, editing images, and accessing files. They also typically support multi-user access, allowing multiple users to collaborate on the same documents and projects in real time.

Here are our picks for the top web and internet OS software:

  • SilveOS: Best for ease of use
  • OSv: Best for virtual machines
  • ChromeOS: Best for enterprises and dedicated Chromebooks
  • Xcerion: Best for developers and file storage
  • LucidLink: Best for remote teams

Top web & internet OS software comparison

Product | Best for | Browser support | Offline functions | Notable apps | Starting price
SilveOS | Ease of use | Yes | No | Calculator, Map, Word Processing | Free
OSv | Virtual machines | Yes | No | Apps written in Java, C, Ruby, and more | Free
ChromeOS | Enterprises, Chromebooks | Yes | Yes | Gmail, YouTube, Play Store | Free
Xcerion | Developers and cloud storage | Yes | Yes | Calendar, Contacts, Notes | Free (premium options available)
LucidLink | Remote teams | Yes | Yes | CAD/CAM models | $20 per TB per month

SilveOS

Best for: Ease of use

SilveOS is a web-based OS that runs directly in a web browser. It’s based on Silverlight technology, a web application framework developed by Microsoft.

All you need to do is log in to the website on your device browser, and you can access applications and functions such as a file manager, calculator, YouTube, web browser, and media player.

Image: SilveOS desktop environment.

SilveOS has a Windows-like user interface, as it’s meant to provide a familiar and easy-to-use interface for users familiar with the Windows OS. The SilveOS desktop environment features a taskbar, start menu, and desktop icons, all typical Windows desktop elements. 

SilveOS was initially released in May 2008 as Windows4all.com; in November of the same year, it was renamed SilveOS (SilverLight Operating System).

Pricing

SilveOS is free to use.

Features

  • Built-in apps such as calculator, YouTube, Doc, spreadsheet, Map, Solitaire, and more.
  • Features a desktop, start menu, taskbar, and sidebar widgets.
  • Option to install or uninstall applications.
  • Users can personalize the SilveOS environment by changing the wallpaper and theme color.
  • The latest version was built on Vue.js.

Pros

  • No registration required.
  • Easy to use.

Cons

  • Application support limited to SilverLight-based apps.

OSv 

Best for: Virtual machines

OSv is a cloud-optimized OS developed by Cloudius Systems (now ScyllaDB). It’s an open source modular unikernel built to run a single unmodified Linux application as a microVM on top of a hypervisor. Its modular architecture enables users to select and include only the components necessary for their specific applications, which can help reduce its footprint and improve performance.

Image: OSv dashboard.

One of the key features of OSv is how easily it packages and runs a single existing application, including JVM workloads, as a lightweight virtual machine image. Images are typically built and launched with Capstan, OSv's command-line packaging and deployment tool.

Pricing

OSv is a free and open source tool. Users can download OSv and run it locally or in the cloud via Amazon EC2 (elastic compute cloud) or Google Compute Engine (GCE).

Features

  • Includes the Jolokia JMX connector, an HTTP/JSON bridge for remote JMX access.
  • Supports various language runtimes, including unmodified JVM, Python 2 and 3, Node.JS, Ruby, and Erlang.
  • OS instances are deployable from a developer IDE or through a continuous integration system.
  • Can run applications written in languages compiling directly to native machine code, such as  C, C++, Golang, and Rust.

Pros

  • Offers an optional in-browser dashboard.
  • OSv can be used for horizontal scaling.
  • Fast boot time.

Cons

  • OSv is designed to run only one application at a time.

ChromeOS

Best for: Enterprises and dedicated Chromebooks

ChromeOS is a Linux-based, lightweight, web-based OS developed by Google on its open-source Chromium OS platform. It’s designed to work primarily with web apps and Chromebooks. It’s built around the Google Chrome web browser, uses the web browser as its primary interface, and is designed to provide users with a simple, secure, and fast computing experience. 

ChromeOS comes preinstalled on Chromebooks, and it's also available as ChromeOS Flex, a version that can be installed on Windows and Mac PCs.

Image: Google ChromeOS environment.

ChromeOS is optimized for the web, with most apps and documents in the cloud. It also includes integrated media players, file managers, and access to the Google Play Store for downloading Android apps.

Pricing

ChromeOS ships with Chromebooks at no additional cost, and ChromeOS Flex is free to download and install. The underlying Chromium OS codebase is open source, though ChromeOS itself includes proprietary Google components.

Features

  • Access to Google Play Store.
  • Supports offline functions like document editing and viewing email.
  • Comes preinstalled on Chromebooks and can run free on Windows and Mac with only a USB drive.

Pros

  • Lightweight and built on the open source Chromium OS codebase.
  • Speedy boot: it can start up in as little as five seconds.

Cons

  • Locked into Google Chrome browser.
  • Not ideal for privacy, as Google collects extensive user data.

Xcerion

Best for: Developers and file storage

Xcerion is a Swedish software company founded in 2001 by Daniel Arthursson. Xcerion's CloudTop is a virtual desktop web OS that gives users an online computer they can access from anywhere with an internet connection. To use CloudTop, users must create a free account with CloudMe, another Xcerion product that provides secure cloud storage for files and media.

Xcerion currently operates as a holding company for its cloud computing products. The company offers various services, such as cloud storage via CloudMe, database-as-a-service in the cloud via CloudBackend, and an edge application platform via its XML Internet OS (XIOS/3) for developers to build web apps. 

Xcerion is also known for inventing MyCloud and iCloud; iCloud was sold to Apple in 2011.

Pricing

Users can sign up and use some Xcerion products for free. However, CloudMe also offers premium versions for additional storage. The premium plans have sub-plans in two editions, Consumer and Business. Both offer monthly or yearly subscription models.

Here are the full details of CloudMe’s pricing plans:

Consumer plans

Plan | Max storage | Max file size | Cost per month | Cost per year
Free Plan | 3GB | 150MB | Free | Free
Start Plan | 25GB | Unlimited | €4 | €40
Small Plan | 100GB | Unlimited | €8 | €80
Standard Plan | 200GB | Unlimited | €14 | €140
Large Plan | 500GB | Unlimited | €30 | €300

Enterprise plans

Plan | Max storage | Max file size | Add'l users (100GB each) | Cost per month | Cost per year
Team Plan | 1TB | 2GB | 5 | €149 | €1,490
Business Plan | 2TB | 2GB | 15 | €279 | €2,790
Enterprise Plan | 5TB | 2GB | 50 | €759 | €7,590

Features

  • Serves as a shared workspace for enterprise teams.
  • Low-code and edge/client-side application server XIOS/3.
  • Cloud storage service.

Pros

  • Its XIOS/3 product helps speed up web application development.
  • Enhances team collaboration.

Cons

  • Users may find it difficult to determine the best product for their use case. 

LucidLink

Best for: Remote teams

Though not technically a web OS, LucidLink provides some of the same advantages by letting creative and remote teams access, share, and collaborate on projects in real time without downloading and syncing media locally. Companies can buy filespace on the cloud and assign secure access to their teams across the world.

Image: LucidLink interface.

LucidLink provides a unified, secure and fast cloud storage experience that works across all major cloud providers, including AWS, Azure, IBM, and GCP.

Pricing 

LucidLink has three plans. Each plan provides access for up to 5 users; additional users can be added for $10 per user per month.

  • Basic Filespace: Wasabi storage at $20 per TB per month.
  • Advanced Filespace: IBM storage at $80 per TB per month.
  • Custom Filespace: “Bring your own” S3 compatible storage at $40 per TB per month.

LucidLink also offers a 14-day free trial of its services.

Features

  • Media and entertainment teams can perform video editing and post-production work directly in the cloud.
  • Architecture, engineering, and construction teams can collaborate on CAD/CAM models in real time.
  • LucidLink works on several OSs, including Microsoft Windows, macOS, and Linux.

Pros

  • LucidLink has an online quote calculator tool to help customers estimate license costs.
  • Works with applications that support NAS storage.

Cons

  • Users reported infrequent connectivity issues and glitches.  

5 key features of web and internet OS software

Some essential features of web and internet OS software that prospective buyers should consider when deciding on a solution include browser support, offline functionality, application support, cloud storage integration, and collaboration tools.

Browser support

Web and internet OS software usually work across multiple web browsers, such as Google Chrome, Mozilla Firefox, Microsoft Edge, and Safari. This allows users to access the software from their preferred browser without compatibility issues. Check the list of supported browsers before selecting a tool to ensure it works on the user’s preferred browser.

Offline functionality

Although web and internet OS software is primarily designed to work online, some software may offer offline functionality. This means that users can access and edit their documents and files without an internet connection. This feature is helpful for users who frequently work in areas with limited or no internet connectivity.

Application support

Generally, web OS software offers a suite of applications, such as word processors, spreadsheets, calendars, and email clients. Some software may also provide additional applications, such as graphics editors, project management tools, and video conferencing software. Ensure your desired software supports the applications your organization and teams need to function properly. 

Cloud storage integration

If you need to save files, documents, or other information, shop for a tool that enables you to integrate with cloud storage services, such as Google Drive, OneDrive, and Dropbox. This allows users to store their files and documents online, making them easily accessible from any device with an internet connection. 

Collaboration tools

Does your team need to collaborate on projects? If yes, consider a tool that allows real-time collaboration. Some tools offer messaging capability, real-time editing, commenting, and version history tracking, and some even allow users to work on the same document or file simultaneously. 

How do I choose the best web OS software for my business?

Choosing the best web and internet OS software for your business depends on the needs of your particular industry and your company’s goals. 

For example, if you are in the creative industry, LucidLink may be best for you, as it enables media, engineering, and even marketing teams to collaborate and share files easily. On the other hand, for developers and those looking to store large files in the cloud, Xcerion is a good option. And if you just need a quick, easy option to get to work without any fuss, SilveOS might be the best choice for you.

Ultimately, the best web OS software for your business depends on your specific needs and goals. Before making your final decision, conduct research, read reviews, apply for a product demo or free trial, and ask questions to ensure the software you choose best fits your business.

Methodology

Most web and internet OS tools available over a decade ago have been discontinued. To provide the most up-to-date solutions for enterprise, we researched the latest products and services available, focusing on reliable ones with a proven track record of success. 

We looked for features such as security, easy-to-use user interfaces, supported applications, and customizability, as well as customer reviews and industry reviews. We also considered the companies’ support and the solution’s affordability. After careful research and evaluation, we narrowed down the top options for specific use cases, then outlined the best web and internet OS software solutions based on the factors mentioned.

Top IoT Operating Systems

As with any other computing system, an Internet of Things (IoT) deployment is incomplete without an operating system. These operating systems enable users to carry out basic computing functions within Internet-connected devices.

In this article, we take a look at these IoT operating systems, and we detail the leading OSes that are used to drive IoT systems.

Also see: 6 IoT Challenges and How to Fix Them

What are IoT Operating Systems?

IoT operating systems are operating systems that enable developers and business teams to engage with embedded devices and systems, program their capabilities, and track the data they generate as part of IoT applications.

These operating systems provide processing ability at a scale required for stable and consistent performance. Akin to standard operating systems, IoT operating systems help users to execute computer functions within connected devices.

Also see: 7 Enterprise Networking Challenges 

Why Use IoT Operating Systems?

IoT strategies are increasingly being prioritized by businesses across numerous industries as greater connectivity of devices and systems shows promise in optimizing customer experience and operations in unprecedented ways.

For a successful IoT initiative, developers require access and control over individual devices to ensure they execute the correct applications for each device system or asset. An IoT operating system gives them such power.

IoT operating systems will be useful to you if you seek to:

  • Manage software and data on individual IoT devices.
  • Tweak programming for each device for maximum effectiveness in an IoT architecture.
  • Conserve the utilization of resources and power across IoT hardware.
  • Link embedded devices to IoT applications, cloud services, or edge devices.

Also see: Best Network Management Solutions 

How to Choose an IoT Operating System

To choose the correct IoT operating system for your applications and environments, there are a few factors to consider:

  • Use Case: Since IoT operating systems differ in functionality and application, the desired use case of a user determines the kind of IoT operating system they implement.
  • Security: The operating system has to offer the right security add-ons for the applications and environments of its users since IoT security is one of the biggest determinants of the success of IoT initiatives. Vulnerabilities and security gaps exploited by threat actors often result in expensive consequences for IoT systems.
  • Scalability: A good IoT operating system provides scalability for any type of device in a user’s environment.
  • Connectivity: An operating system that supports a wide range of connectivity protocols should be considered. It should also support relevant and up-to-date protocols to ensure it is future-proof.
  • Footprint: Depending on the use case, it is important to choose an operating system that meets memory, power, and processing requirements. When dealing with constrained devices, the operating system should have minimal processing, power, and memory overhead.

Also see: Top Edge Computing Companies

Top IoT Operating Systems

Nucleus RTOS

Nucleus RTOS is a real-time operating system that equips system developers with the ability to tackle the complex requirements of advanced embedded designs. It is deployed in more than 3 billion devices and delivers a microkernel-based operating system built for reliability and scalability. Its kernel-rich capabilities and tooling features are ideal for use cases that require a scalable footprint, power management, security, and deterministic performance.

Highly demanding markets with stringent safety and security prerequisites like industrial systems, airborne systems, medical devices, and automotive all feature successful Nucleus deployments.

Key Differentiators

  • Multicore Support: Nucleus RTOS provides extensive multicore support with both 32-bit and 64-bit solutions for uAMP, sAMP, and SMP architectures.
  • Process Model: Nucleus uses its process model to deliver greater reliability by providing space domain partitioning to isolate software and subsystems.
  • Low-Power Design: Through the Nucleus Power Management Framework, Nucleus RTOS offers embedded developers the latest power-saving features. Developers can use this framework to create power-aware applications that satisfy the low-power requirements of embedded systems.
  • Support for Diverse Connectivity Solutions: Nucleus supports a wide range of connectivity solutions, including optimized USB 2.0 and 3.0, SDIO 2.0 and 3.0, 802.15.4, Bluetooth Low Energy (BLE) and Bluetooth, Wi-Fi, and PCIe.

TinyOS

TinyOS is an embedded, open-source, component-based operating system for low-power wireless devices used by a community spanning academia and industry. The operating system serves low-power wireless devices like those used in ubiquitous computing, wireless networks, smart meters, smart buildings, and personal area networks.

Since TinyOS is dependent on the events it receives from its environment, it is an event-driven operating system. Its memory optimization capabilities make TinyOS popular among developers.

Key Differentiators

  • Optimization for Memory Limits of Sensor Networks: The applications of TinyOS are written in a dialect of the C programming language called nesC, which is optimized for the memory limits of sensor networks.
  • Common Abstraction Interfaces and Components: TinyOS delivers interfaces and components for standard abstractions like routing, packet communication, actuation, sensing, and storage.
  • Simulation of Algorithms and Protocols: The TinyOS operating system contributes to the simulation of algorithms and communication protocols on a large scale. TinyOS is thus useful for the development of communication protocols for wireless sensor networks.

Amazon FreeRTOS

Amazon FreeRTOS is an open-source real-time operating system for resource-constrained devices. It simplifies the programming, deployment, security, connectivity, and management of small, low-power edge devices. The cloud-neutral operating system is characterized by a fast, responsive, and reliable kernel and is implemented in more than 40 architectures. This provides developers with a vast choice of hardware to go with sets of prepackaged software libraries.

Some of FreeRTOS’s use cases include the local collection and processing of data, management of multiple commercial equipment tasks and the remote updating of devices.

Key Differentiators

  • Connectivity: FreeRTOS devices can maintain local connectivity via Ethernet and Wi-Fi using local connectivity libraries like Wi-Fi management. FreeRTOS also supports cloud connectivity to enable users to comfortably collect data and act on microcontroller-based devices for use in IoT applications as well as with other AWS cloud services.
  • AWS IoT Features and Services Support: Amazon FreeRTOS supports AWS IoT features and services such as AWS IoT Core Device Shadow and AWS IoT Device Defender.
  • Over-the-Air Updates: Using FreeRTOS with AWS IoT Device Management delivers an over-the-air update solution. FreeRTOS makes it less memory-intensive to deploy over-the-air updates for microcontroller-based devices.

Windows 10 IoT

Windows 10 IoT enables developers to use the power of Windows 10 to build IoT solutions quickly and securely by providing developer tools, enterprise-grade security, and long-term support. Windows provides a trusted operating system upon which IoT solutions can be created and deployed. It helps its users connect their devices to the cloud using Azure IoT and take advantage of insights to deliver personalized experiences, deepen customer engagement, and improve business results.

Windows 10 IoT comes in two editions: Windows 10 IoT Core and Windows 10 IoT Enterprise.

Key Differentiators

  • Windows 10 IoT Enterprise: Windows 10 IoT Enterprise delivers the full power of Windows 10 Enterprise for usage in dedicated devices like retail points of sale, smart gateways, robotics, kiosks and more.
  • Security: Windows 10 IoT has numerous security features to keep up with the ever-growing need to manage and secure digital devices with the increased prevalence of IoT. Windows 10 IoT offers device security technologies such as Trusted Platform Module (TPM), Secure Boot, BitLocker, Device Guard, and Device Health Attestation.
  • Development Tools: Windows for IoT delivers effective and familiar development tools to create and manage IoT devices.
  • Open Cloud Protocol Support: Windows for IoT supports open cloud protocol and out-of-the-box experiences that deliver Azure intelligence to Windows for IoT.

Tizen

Tizen is a flexible operating system designed by a community of developers under open-source governance to specifically address the needs of application developers, device manufacturers, mobile operators, and other stakeholders of the mobile and connected device ecosystem. Developers can use Tizen to build powerful applications and execute them on a wide spectrum of devices.

It provides a set of exhaustive tools to create Tizen-native and web applications through Tizen Studio, which consists of an integrated development environment (IDE), toolchain, Emulator, sample code, and documentation.

Key Differentiators

  • Multiple Profiles: The Tizen operating system presents multiple profiles to cater to different industry requirements. These Tizen profiles include Tizen IVI (in-vehicle infotainment), Tizen TV, Tizen Mobile, and Tizen Wearable. All of these profiles are built atop the same shared infrastructure called Tizen Common, as of Tizen 3.0.
  • Operating System Customization: With Tizen, device partners and mobile operators can work together to customize the operating system and user experience to satisfy the specific needs of their customer segments.
  • Native Application Development: Tizen provides the power of native application development to application developers and independent software vendors, with flexible HTML5 support. It enables application developers to widen their scope to smart devices running Tizen.

RIOT OS

RIOT OS is a free, open-source operating system with a global community spanning industry, hobbyists, and academia. It supports most low-power IoT devices and a wide range of external devices across 8-, 16-, and 32-bit microcontroller architectures. It provides a microkernel, utilities, and network stacks, along with data structures, cryptographic libraries, and a shell, among other components. The operating system mostly targets systems that are too constrained to run Linux, and it seeks to implement all applicable open standards that support a secure, durable, and connected Internet of Things.

Key Differentiators

  • Connectivity: RIOT OS uses a modular approach to adapt to application needs and break silos. The operating system seeks to support all standard network technologies and internet standards.
  • Security: RIOT supports DTLS security, IEEE 802.15.4 encryption, secure firmware updates, numerous cryptographic packages, and crypto-secure elements to enable secure IoT applications.
  • Code Quality: The RIOT community uses established tools to constantly test code and maintain the highest standards of code quality.

Wind River VxWorks

Wind River VxWorks is a real-time operating system that offers the performance, reliability, security, and safety functionality required to meet the most demanding standards for running embedded computing systems in critical infrastructure. It is a priority-based, preemptive RTOS with low latency and minimal jitter. It's built on an architecture that's not only upgradable but also future-proof, enabling customers to respond to shifting market and technology needs.

VxWorks also supports application deployment through containers. This modern approach to RTOS raises developer productivity and helps them deploy embedded and safety-critical applications confidently.

Key Differentiators

  • Extensive Multi-Core and Multiprocessing Support: VxWorks helps its users utilize hardware to its fullest potential, as it supports 32- and 64-bit multi-core processors based on Arm, Intel, RISC-V, and Power architectures.
  • OCI Containers: Users can use IT-like tools and methods to package and deploy their applications at a rapid speed. They can push their applications to standard container registries and pull them from their deployed VxWorks-based devices.
  • Security Capabilities: VxWorks integrates comprehensive and rapidly evolving security capabilities to enable architects to develop levels of security that are suitable for the attack surface and threats facing their use cases and environments.

Comparison Chart: IoT Operating Systems

Operating System | Security | IDE
Nucleus RTOS | Secure boot, data, and communications, including TLS 1.3; secure storage; root of trust; protection for data in transit | Integrated Sourcery CodeBench IDE
TinyOS | Network security protocols | YETI 2, XPairtise, TinyDT
Amazon FreeRTOS | Libraries for secure cloud connection, certificate authentication, key management, and code signing; TLS 1.2; cryptography | Microsoft Visual Studio
Windows 10 IoT | ASLR, DEP, Control Flow Guard, Trusted Platform Module, Secure Boot, BitLocker, Windows updates | Visual Studio
Tizen | Tizen Secure Repository (Tizen RT for real-time use) | Tizen Studio, Visual Studio
RIOT OS | DTLS, IEEE 802.15.4 encryption, cryptographic packages, secure firmware updates (soft real-time capabilities) | RIOT shell
Wind River VxWorks | TPM 2.0/TSS support, firewall, cryptography, AD/LDAP support, kernel hardening, secure boot, secure ELF, secure storage, address sanitizer | Eclipse-based IDE

Best Enterprise Cloud Migration Tools & Services

An ever-increasing number of enterprises are moving their storage and data processing needs to the cloud, and some are past that stage, moving from one cloud provider to another. When an enterprise determines that they want to move to a cloud infrastructure, a variety of migration platforms and managed service providers offer solutions to make that migration easier. Read on to learn more about some of the best enterprise cloud migration tools and services and why so many enterprises are benefiting from cloud-based solutions.

Also Read: The Importance of Application Performance Management (APM) for Cloud-based Networks

Top Considerations for a Cloud Migration

Before your organization selects a cloud migration tool or partner, it’s important to determine your hoped-for outcomes and the solutions that best fit those goals. As you begin to plan your cloud migration, ask yourself the following questions:

Do you need a platform or managed services solution?

Depending on your organization’s in-house expertise and the complexity of your migration needs, you may want to move beyond just selecting a platform and instead choose a managed services company that will lead cloud migration management for you. Although this approach often costs more, it can save time and prevent user error in the long run.

Does a free or paid solution fit your needs?

Many private and public cloud providers offer free migration solutions, although most of their cloud add-ons incur additional costs. Other cloud solutions and services offer subscription or solution-based pricing, so be sure to look at what’s included in the free or baseline packages of your chosen solution(s).

Does everything need to be migrated?

Chances are, your organization is holding onto applications, workloads, or other types of data that you no longer need. Migrating those items to your new cloud solution may require unnecessary expenses and time, especially if these tools are no longer in use. Before selecting or diving too deep into the cloud migration process, consider completing a network audit to determine what needs to be migrated and what can be discarded.

Do you have an existing partnership that can be leveraged?

If your organization already works with software or services from a specific vendor, it’s worth talking to them about the solutions that they offer for cloud migration. Your team will already be comfortable with many of their core features, and your company might be able to get a discounted rate or other perks if you work with them.

What does customer support look like for your selected solution?

Even the most experienced teams can run into bugs or questions pre- and post-migration. Look at review sites and do your own research on company websites to decide if their customer support structure will be supportive enough to smoothly transition to the cloud.

Also Read: Effective Cloud Migration Strategies for Enterprise Networks

Top Cloud Migration Platforms and Providers

AWS Cloud Migration Services

AWS is one of the largest cloud platforms and migration services providers on this list, with over one million customers that include major enterprises like Coca-Cola, Samsung, and GE. They particularly stand out by offering a diverse range of application workload migration services for Windows, SAP, VMware, databases, and mainframes.

To help their customers get started with Cloud Migration Services and other AWS solutions, AWS offers custom-designed training and certification courses to meet your needs. With their approximately 250 unique tools and solutions, there’s a good chance that your enterprise is already familiar with AWS’s infrastructure and approach.

Features:

  • AWS Prescriptive Guidance with a phased approach for migrating thousands of workloads
  • Third-party migration tooling ecosystem with AWS machine learning
  • AWS Migration Acceleration Program
  • AWS Migration Competency Partners
  • AWS Managed Services

Top Pro: AWS is highly experienced with migrating thousands of workloads at a quick pace and is one of the largest solutions on the market, making it a strong option for larger enterprises.

Top Con: Some users have had trouble maintaining or integrating their existing on-prem solutions if they do not rework them into AWS-branded solutions.

Microsoft Azure Migrate

Microsoft Azure Migrate is another top tool on the cloud migration market, primarily focused on moving workloads like Windows, SQL and Linux Server, databases, data, web apps, and virtual desktops into the Azure cloud.

Azure Migrate offers several compelling features, but perhaps its strongest offering is its security reputation. With millions of dollars annually invested in cybersecurity research and development, more than 3,500 security experts on-staff, and more security compliance certifications than any other cloud provider, Azure Migrate is a strong solution for effectively migrating and maintaining the security of your data.

Features:

  • Centralized migration repository with end-to-end tracking and insights
  • Azure cost optimization features and tools
  • Agentless data center discovery, Azure readiness analysis, cost estimation, app modernization, and app dependency visualization
  • Support for key migration workloads like Windows, SQL and Linux Server, databases, data, web apps, and virtual desktops
  • Migrations available to Azure Virtual Machines, Azure VMware Solution, Azure App Service, and Azure SQL Database

Top Pro: It’s considered user-friendly, with an intuitive dashboard, discovery features, and several available resources and user guides from Microsoft.

Top Con: Certain analytics are limited, particularly on CPU, memory usage, and true cost estimates.

Cisco AppDynamics

AppDynamics was an independent software company for application and business performance monitoring until it was acquired by networking giant Cisco in 2017. Their combined forces have developed a cloud migration tool that emphasizes not only top-tier security features and insights but also end-to-end visibility for its customers. With features like AI root cause analysis, anomaly detection, and business performance metrics, AppDynamics is a great solution for enterprises that want to understand what's happening in their digital transformation efforts at a granular level.

Features:

  • Pre- and post-migration business performance metrics
  • Dashboards for real-time service level agreement compliance insights
  • AppDynamics flow maps for user experience troubleshooting
  • AI root cause analysis and anomaly detection for monitoring
  • Cloud resource allocation management through Cisco Intersight Workload Optimizer and Cisco CloudCenter

Top Pro: AppDynamics Business iQ makes it easy to proactively assess the technical and business success metrics of a cloud migration.

Top Con: Deployment of this migration, especially when migrating legacy hardware and software, is difficult for many users.

Dynatrace

Dynatrace is not only considered a top cloud migration platform, but also one of the foremost cloud monitoring leaders in the industry. Clients praise the user-friendliness of Dynatrace’s security-motivated features, as well as the user interface that makes it easy to create and understand data visualizations.

Dynatrace has recently begun to rebrand itself as a global company that specializes in “software intelligence,” or a more immersive understanding of how digital transformation should happen in the cloud. They focus their software intelligence expertise on advancing the Autonomous Cloud, which is their goal of implementing AI automation to make NoOps cloud operations possible.

Features:

  • Automated root-cause analysis
  • Automatic linking of application services before and after migration
  • Interactive dependency map creation
  • Data visualizations for detailed performance baseline information
  • Cloud-native monitoring and observability into containers

Top Pro: Root-cause analysis and monitoring features are quick and effective.

Top Con: Dynatrace is a more expensive tool than many others on the market, and some users have highlighted that their pricing approach could be more transparent.

Google Migrate for Compute Engine

Google Migrate for Compute Engine is a MaaS (migration-as-a-service) offering that minimizes the client-side software agents and in-house expertise necessary to migrate to the cloud. Their usage-driven analytics and Cloud API work toward a common goal: simplifying cloud migrations and ensuring that each migration is right-sized for the customer's needs.

Customers that migrate to Google Cloud can use Migrate for Compute Engine at no added cost. However, Google Cloud does charge for add-ons like Compute Engine instances, Cloud Storage, Cloud Monitoring, Cloud Logging, and networking bandwidth, which some users need for a fully effective migration.

Features:

  • “As a service” interface in Cloud Console
  • Cloud API for in-house migration builds and migration automation
  • Usage-driven analytics and built-in utilization reports
  • Advanced replication migration technology runs in the background
  • VM groups enabled in Google Cloud Console

Top Pro: The test-clone capability makes pre-migration validation simpler and allows it to happen directly in an isolated cloud environment, where it can’t disrupt production workload testing.

Top Con: The solution offers limited network, container, and VM configuration capabilities.

Flexera One

Cloud migration is only one feature of the Flexera One platform, which also focuses on IT visibility, asset management, cloud modernization, and cost optimization. The platform offers compelling visualization features, but what makes it unique is its focus on cost optimization. With features that focus on reducing IT waste and underutilization, this platform helps its customers to optimize their cloud spend and the tools that they use.

Flexera One also offers a helpful customer support network and an online community forum, making troubleshooting with Flexera a simpler task than with many other cloud migration platforms.

Features:

  • Cloud migration planning
  • Cloud cost assessment
  • Workload placement
  • Cloud cost optimization
  • Optimal sizing and workload placement based on application stack contents

Top Pro: Dashboards and data visualizations on the platform are diverse and user-friendly, with features such as Stack Digest and Geolocation.

Top Con: Some users report that products from larger vendors go unrecognized after the migration, with no fixed service level agreements (SLAs) in place to address the problem.

Corent SurPaaS MaaS Migrate

Corent SurPaaS’s MaaS Migrate solution is a particularly strategic choice for hybrid migration and other customization needs. Any combination of public, private, and on-premises clouds can be added to your cloud migration strategy. Most significantly, users can choose specific application workloads that should be migrated to a preferred public cloud, while critical application workloads are designated to their private cloud. 

MaaS Migrate is able to decipher and separate workloads according to their appropriate locations because of features like a detailed data center assessment, automated advanced multicloud migration and data synchronization, automated application re-architecting and re-platforming, automated application modernization to serverless and PaaS services, and automated migration to containers and Kubernetes.

Features:

  • Lift and shift migration to prevent data loss
  • Integration with Azure Migrate
  • Convert raw application workloads into containers or migrate existing containers with container migration
  • Adaptive workload library of workload migration plans
  • Zero-Point Synchronization for file and database syncing

Top Pro: Smart migration with Smart Analysis uses built-in scripting to help users customize their automated Cloud migration process and makes it possible to plan for multicloud and hybrid migrations.

Top Con: Corent does not appear to offer any user forums for troubleshooting.

Carbonite Migrate

OpenText’s Carbonite Migrate is a nondisruptive migration platform that enables users to move workloads to and from a mixture of physical, virtual, and cloud-based platforms. Carbonite Migrate takes nondisruption seriously, with frequent data replication, unlimited environmental testing, and cloud orchestration workflows ensuring that users experience near-zero downtime during migration.

Carbonite is one of the most flexible tools on the market, working with a wide variety of operating systems and hypervisor native integrations. Most significantly, Carbonite works with four different cloud platforms: Microsoft Azure, Amazon Web Services (AWS) and AWS Outpost, VMware vCloud Director, and Google Cloud. This platform-agnosticism makes Carbonite Migrate an excellent choice for enterprises that fear vendor lock-in.

Features:

  • Scalable continuous replication that uses minimal bandwidth
  • Free from hypervisor, cloud vendor, or hardware lock-in
  • AES 256-bit encryption
  • Fully automated cloud orchestration workflows or SDK DIY methods available
  • Automated data copies and configuration settings on the target server for limited downtime

Top Pro: The tool is considered easy to use and offers several configuration options.

Top Con: This is not considered the most effective migration resource for multicloud deployment.

NetApp Cloud Volumes Service for AWS

NetApp’s Cloud Volumes Service for AWS is one of many cloud migration solutions that NetApp offers, though this particular solution provides managed services for AWS cloud migration. This solution shines in the areas of storage and massive data processing needs, which makes it a good option for industries like oil and gas, media and entertainment, and finance that need to migrate large databases and HPC applications.

Features:

  • Multiprotocol support for NFS, SMB, and dual protocols
  • Guaranteed SLAs
  • Cloud Sync tool for quick, secured data synchronization
  • Shared persistent storage with high throughput and low latency
  • Three performance tiers available for different workload processing needs

Top Pro: NetApp offers strong advanced data management features, such as Cloud Sync, rapid clones, data encryption, and snapshot copies.

Top Con: Some users have commented on NetApp’s limited documentation and the extended time it takes to work with customer support.

IBM Turbonomic 8

Turbonomic 8 is the latest cloud migration platform from Turbonomic that now offers single-instance infrastructure scalability, more intuitive UI/UX, and a better reporting framework for application resource management. The platform has also leaned into offering more cloud deployment options, allowing customers to choose amongst on-premises, AWS Cloud, Azure Cloud, SaaS, and Kubernetes.

Turbonomic was acquired by IBM in June 2021, fulfilling a greater goal to develop a more comprehensive AIOps strategy for hybrid clouds. This will be an important platform to watch in the coming years as it leans into its AIOps implementation. 

Features:

  • Achieved Amazon Web Services (AWS) Migration Competency status in 2018
  • Migration available to AWS and Azure clouds
  • “What-if” modeling feature
  • Performance data warehouse and network performance monitoring at scale
  • Uses application-aware historical utilization data to optimize VM/instance types for all migrated resources

Top Pro: The guided step-by-step wizard helps users choose between lift and shift and optimized migration strategies.

Top Con: Although maintenance costs are considered fairly reasonable, some users have commented on the high cost of licensing this solution.

Also Read: Transforming Networks: From Virtualization to Cloudification

Why Migrate to the Cloud?

Many enterprises have their own motivations for transitioning to the cloud, but these are some of the most common reasons and most frequently realized benefits that come from migrating to a cloud setup:

Scalability of Storage and Other Infrastructural Needs

Public and private clouds are designed for scalability, making it possible to grow your application, user, and data count without having to purchase additional hardware or software for storage.

Cloud-Based Disaster Recovery Solutions

Disaster recovery can be an expensive and complex process, but cloud infrastructure makes disaster recovery preparation an intuitive part of everyday work. With features like scalability and hardware-independent data backups, you can easily protect large amounts of sensitive data in the cloud.

Third-Party Management

When so much of your organization’s data and systems are hosted on a third-party company’s cloud platform, significantly fewer managerial responsibilities fall on you. The provider will handle the majority of security and update needs, and your team will spend less time managing and updating things like data center hardware.

Fewer Hardware Expenses and Liabilities

Beyond the pains of managing data center hardware, migrating to the cloud also decreases expenses related to hardware purchases, certification renewal, updates, and other hardware management needs that can become costly.

Real-Time Collaboration for On-Premise and Remote Users

Cloud infrastructure is heavily focused on quick processing and real-time updates. Because quick changes are made visible to all users, real-time collaboration and searchability are possible for all enterprise users, regardless of whether they work on-premises or remotely.

Less Machine-Dependent Data Security

In the traditional on-premises data center model, most data is stored on physical machines or hardware. This approach can get clunky and expensive to manage, but more significantly, it can become nearly impossible to keep up with all of the security needs of each piece of hardware, leaving that data vulnerable. With cloud infrastructure, features like cloud backup, automation, and managed services make it easier to meet security requirements for your most sensitive data.

Learn More About Cloud Security: Top Cloud Security Companies & Solutions

The post Best Enterprise Cloud Migration Tools & Services appeared first on Enterprise Networking Planet.

]]>
Effective Cloud Migration Strategies for Enterprise Networks https://www.enterprisenetworkingplanet.com/os/cloud-migration-strategies-enterprise-networks/ Fri, 02 Jul 2021 16:44:58 +0000 https://www.enterprisenetworkingplanet.com/?p=21227 For companies planning to move their operations to the cloud, here is what to consider to set up a clear migration plan.

The post Effective Cloud Migration Strategies for Enterprise Networks appeared first on Enterprise Networking Planet.

]]>
The number of workloads running in the cloud has exploded in the last few years, and the coronavirus pandemic is set to drive this figure even higher. In 2017, cloud workloads represented 86% of all workloads worldwide, according to Statista, and this figure is set to grow to over 90% by the end of the year. 

Migration Planning

For companies still planning to move their operations to the cloud, what’s needed is a clear migration plan. This involves establishing the reasons for moving applications to the cloud; determining which applications and their dependencies will benefit from being moved (or replaced with cloud-native applications); deciding which cloud to move to; and working out the cloud resources that will likely be needed and what they will cost.

Network Resources

Another area that needs consideration is the network resources that will be needed to support users, whether working in corporate offices or remotely, as they access the applications and data that are moved to the cloud.

This is important because a migration to the cloud will likely lead to a significant increase in WAN traffic as data is moved to and from the cloud, although LAN traffic will not necessarily fall significantly. That means it may be necessary to increase the effective bandwidth of WAN connections to the cloud, either by adding physical links or by using various WAN optimization techniques and (probably) hardware. Adding a level of redundancy may also be a prudent course of action.
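To make the bandwidth question concrete, the back-of-the-envelope calculation below estimates how long a one-time bulk transfer would take over links of different sizes. The 50 TB data volume, the link speeds, and the 70% efficiency factor are assumptions chosen only to illustrate the arithmetic, not recommendations.

```python
def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate the days needed to move data_tb terabytes over a WAN link.

    efficiency is a rough, assumed discount for protocol overhead and
    competing traffic -- tune it for your own environment.
    """
    data_bits = data_tb * 8 * 10**12            # decimal terabytes -> bits
    usable_bps = link_mbps * 10**6 * efficiency
    return data_bits / usable_bps / 86_400      # seconds -> days

# Hypothetical example: 50 TB of data over three candidate link sizes.
for mbps in (100, 500, 1000):
    print(f"{mbps:>5} Mbps: {transfer_days(50, mbps):5.1f} days")
```

Numbers on this order are typically what tip the decision toward extra links, WAN optimization, or an offline transfer service.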

Network Security

Organizations also may have to consider changing the way they manage network security when employees access cloud applications remotely or from within the corporate network. That may well involve using a secure access service edge (SASE) solution, with network security controls provided as a service from access points outside the corporate network.

Also read: Taking the Unified Threat Management Approach to Network Security

Migration Strategies

When it comes to individual applications, or, more realistically, groups of interdependent applications and their data, what’s needed are different migration strategies depending on their particular attributes and requirements.

In general, organizations need to pick from one of six different migration strategies, known as the six Rs of cloud migration: Retiring, Retaining, Rehosting, Replatforming, Repurchasing, and Refactoring/Re-architecting.

Retiring

The simplest way to handle the migration of an application is simply to get rid of it. During the assessments needed to establish whether applications are suitable for migration to the cloud, it is likely that some applications that are no longer needed will surface. These applications can simply be retired, providing a handy monetary saving that can be set against the one-off migration costs for other applications.

Retaining 

Another simple way to handle migration is not to migrate at all, but rather to leave the application where it is currently running in the data center. There are a number of reasons why this might be appropriate: 

  • The cost of migrating an application to the cloud may be too high
  • It may be worth waiting some years until the hardware it is running on has depreciated
  • It may need to remain in the data center for other reasons such as performance, security, or regulatory requirements. 

Rehosting 

Sometimes known as “lift and shift,” this forklift solution involves moving physical and virtual servers onto an IaaS platform that directly mimics the setup in the data center, including servers, storage, and networking infrastructure. Rehosting is popular with conservative or risk-averse organizations, or ones that want to make an initial move to the cloud before starting to rearchitect their operations significantly. 

Replatforming  

This strategy is often used where large organizations have legacy systems of many different types that are too complex simply to lift and shift. Instead, various adjustments and accommodations have to be made so that the systems can be run on virtual machines in the cloud. Although this can be costly, it provides an opportunity to move such systems to the cloud without too much difficulty, while being able to take advantage of cloud benefits, such as lower costs and better security.

Repurchasing 

Instead of adapting existing applications to fit the cloud, another strategy is simply to abandon them and use something new that has been designed to operate in the cloud. This will frequently involve switching non-mission-critical functions, such as CRM or HR, to purpose-built SaaS platforms after moving the related data from existing on-premises applications.

Refactoring/Rearchitecting 

This last cloud migration strategy is the most complicated, but the one that is likely to yield the biggest benefits. Essentially, it involves making significant changes or, more likely, rebuilding applications from the ground up to work as cloud-native applications or collections of microservices, often running in containers.

This kind of rebuild allows organizations to gain the full benefits of cloud scalability, redundancy, accessibility, and lower costs. However, it is also the most expensive and time-consuming strategy to implement, so many organizations choose to refactor/rearchitect only after they have made an initial “lift and shift” migration to the cloud. 
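Choosing among the six Rs is ultimately a judgment call made application by application, but the first-pass triage can be captured as a simple rule set. The sketch below is one possible heuristic, not a definitive decision tree; the attribute names are invented for illustration, and a real assessment would weigh many more inputs.

```python
def suggest_strategy(app: dict) -> str:
    """First-pass triage of an application against the six Rs.

    `app` is a dict of illustrative, assumed attributes such as
    still_needed, must_stay_on_prem, saas_alternative,
    cloud_native_payoff, and needs_adjustments.
    """
    if not app.get("still_needed", True):
        return "Retire"
    if app.get("must_stay_on_prem"):          # regulatory, performance, cost
        return "Retain"
    if app.get("saas_alternative"):
        return "Repurchase"
    if app.get("cloud_native_payoff") == "high":
        return "Refactor/Rearchitect"
    if app.get("needs_adjustments"):
        return "Replatform"
    return "Rehost"                            # default: lift and shift

print(suggest_strategy({"saas_alternative": True}))   # Repurchase
print(suggest_strategy({"must_stay_on_prem": True}))  # Retain
print(suggest_strategy({}))                           # Rehost
```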

Read next: How Data Centers Must Evolve in the Cloud First Era

The post Effective Cloud Migration Strategies for Enterprise Networks appeared first on Enterprise Networking Planet.

]]>
Virtualization Software Comparison: IBM PowerVM vs Oracle VM VirtualBox https://www.enterprisenetworkingplanet.com/os/ibm-powervm-vs-oracle-vm-virtualbox/ Fri, 02 Jul 2021 16:34:59 +0000 https://www.enterprisenetworkingplanet.com/?p=21223 Virtualization software is an invaluable tool for optimizing computer hardware and improving productivity. Here is how IBM PowerVM and Oracle VM VirtualBox compare.

The post Virtualization Software Comparison: IBM PowerVM vs Oracle VM VirtualBox appeared first on Enterprise Networking Planet.

]]>
Virtualization enables you to use the full capacity of computer hardware by distributing its capabilities among many users or environments. In essence, it refers to the procedure of running a virtual instance of a computer system in a layer abstracted from the underlying hardware resources. This virtual instance is referred to as a virtual machine (VM).

To the applications running on top of a VM, it appears as if they are running on a dedicated system, where the operating system (OS), libraries and programs are unique to the guest system and not connected to the underlying host OS.   

Virtualization software enables you to:

  • Reduce capital and operating costs
  • Minimize or eliminate downtime
  • Increase information technology (IT) productivity, efficiency, responsiveness, and agility
  • Simplify data center management
  • Benefit from disaster recovery and greater business continuity
  • Quickly provision applications and resources

Here is all you need to know about IBM PowerVM and Oracle VM VirtualBox.

IBM PowerVM Overview

IBM PowerVM Enterprise Edition is server virtualization software without limits. The software allows enterprises to consolidate multiple workloads onto fewer systems, increase server utilization, and reduce costs.

IBM PowerVM Enterprise Edition provides a server virtualization environment for Advanced Interactive Executive (AIX), IBM i, and Linux applications built upon the advanced Reliability, Availability and Serviceability (RAS) features and performance of IBM’s Power Systems platform.

The software offers enterprise-level security, the ability to scale out or scale up, automatic deployment of VMs and storage, and increased efficiency.

Also read: Virtualization Software Comparison: Red Hat Virtualization vs. Proxmox VE

IBM PowerVM Features

Here are the main features of IBM PowerVM:

  • IBM PowerVM Enterprise Edition comes with a built-in capability for active memory sharing between logical partitions (LPARs) in a shared memory pool.
  • By using Live Partition Mobility, you can migrate an active or inactive LPAR from one system to another.
  • Management tools such as Hardware Management Console (HMC), Integrated Virtualization Manager (IVM) and Power Virtualization Center (PowerVC) help to manage and aggregate resources.
  • Micro-partitioning technology allows you to allocate processors to LPARs in increments of 0.01.
  • Power Virtualization Performance (PowerVP) provides detailed, real-time information about virtualized workloads. With the help of PowerVP, you can analyze performance bottlenecks, understand how virtualized workloads use resources and make informed decisions about VM placement and resource allocation.
  • An LPAR that is programmed for Remote Restart can be restarted on a different physical server in case of a server outage.
  • Thin Provisioning and Thick Provisioning help with storage management.
  • You can use PowerVM NovaLink to quickly provision several VMs on PowerVM servers at a reduced cost.

Oracle VM VirtualBox Overview

Oracle VM VirtualBox is an enterprise-ready virtualization solution. The software is a powerful cross-platform solution for x86 and x64-based systems. This means that you can run the software on Windows, Mac OS X, Linux and Solaris x86 and x64 computer systems and a plethora of guest OSs.

The software is available as open-source or pre-built binaries for the mentioned OSs. The software’s latest release, Oracle VM VirtualBox 6.1.22, was released in April 2021. Oracle VM VirtualBox is being actively developed with continuous releases and an ever-increasing list of features.

 While the software is open for all to contribute to, Oracle ensures it meets professional quality criteria.

Oracle VM VirtualBox Features

Here are the primary features of Oracle VM VirtualBox:

  • You can run the software on multiple x86 and x64-based OSs and guest OSs.
  • The software enables you to run more than one OS simultaneously. For example, you can run Windows software on Linux or Mac OS X without having to reboot to use it.
  • The virtualization solution allows for easier software installations. You can pack a complex software setup (an appliance), such as a complete mail server solution, into a VM.
  • With the help of snapshots, you can save a particular state of a VM. You can revert to that state if something goes wrong. You can create any number of snapshots and delete snapshots to reclaim disk space.
  • You can easily import and export VMs using the Open Virtualization Format (OVF).
  • You can start a VM with the click of a button in the graphical user interface or from the command line, and control the machine remotely (see the sketch after this list).
  • The virtualization software is available free of cost.
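
Several of the features above are scriptable through the VBoxManage command-line tool that ships with VirtualBox. The snippet below is a minimal sketch of driving a few of them from Python; the VM name, snapshot name, and output file are placeholders, and it assumes VBoxManage is on the PATH.

```python
import subprocess

VM = "demo-vm"  # placeholder VM name

def vbox(*args: str) -> None:
    """Run a VBoxManage subcommand and raise if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Start the VM without opening a GUI window.
vbox("startvm", VM, "--type", "headless")

# Save the current state as a named snapshot (works on a running VM).
vbox("snapshot", VM, "take", "before-upgrade")

# Later, with the VM powered off, you can roll back or export it:
# vbox("snapshot", VM, "restore", "before-upgrade")
# vbox("export", VM, "--output", "demo-vm.ova")
```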

Also read: Virtualization Software Comparison: Nutanix AHV vs. Citrix Hypervisor

IBM PowerVM vs Oracle VM VirtualBox

We compared IBM PowerVM and Oracle VM VirtualBox head to head across the following criteria:

  • Likelihood to Recommend
  • Server Virtualization
  • Management Console
  • Live Virtual Machine (VM) Migration
  • Hypervisor-Level Security
  • Meets Requirements
  • Ease of Use and Setup
  • Quality of Support
  • Ease of Doing Business With
  • Return on Investment (ROI)
  • Overall Features

IBM PowerVM Enterprise Edition does not support the Windows OS. This forces IT administrators to contend with multiple virtualization products and licenses. It must also be noted that IBM limits PowerVM Enterprise Edition to Power Systems hardware.

While this is an advantage in terms of integration, it means that IT administrators with servers from different vendors must buy and manage different virtualization software for their virtual servers.

On the other hand, Oracle VM VirtualBox is open-source, enterprise-level virtualization software that runs on Windows, Mac OS X, Linux, and Solaris hosts and supports many guest OSs.

While IBM PowerVM Enterprise Edition offers better server virtualization, live VM migration and hypervisor-level security, it comes with its limitations. Study both virtualization software solutions in detail and opt for the one that best meets your criteria.

Read next: Best Server Virtualization Software of 2021

The post Virtualization Software Comparison: IBM PowerVM vs Oracle VM VirtualBox appeared first on Enterprise Networking Planet.

]]>
Networking 101: What is Data Governance? https://www.enterprisenetworkingplanet.com/os/networking-101-what-is-data-governance/ Wed, 07 Apr 2021 01:18:20 +0000 https://www.enterprisenetworkingplanet.com/?p=9979 Data governance is a two-tiered approach to managing data security and management. It’s the design and application of policies that ensure the quality of your data while also adhering to data handling and distribution legislation. Put another way, you handle data with the software tools, guidelines, and networks in the enterprise. Data governance refers to […]

The post Networking 101: What is Data Governance? appeared first on Enterprise Networking Planet.

]]>
Data governance is a two-tiered approach to data security and management. It’s the design and application of policies that ensure the quality of your data while also adhering to data handling and distribution legislation.

Put another way, you handle data with the software tools, guidelines, and networks in the enterprise; data governance refers to the overarching framework that incorporates these (and more) measures according to the law. It involves formalizing the terms and formats that describe your data to ensure fidelity over time, and building workflows based on strict rules of use and access.

Instituting data governance can solve problems around data discovery such as:

  • Repurposing and making use of unstructured data
  • Data cleansing, including removing unused tables in database files
  • Applying integration technologies to fold data into other systems

Learn more: Data Governance Best Practices

Data Governance Imperatives

We all use data from different sources in different ways, saved in different formats for different software applications. Inconsistencies arise, and without an overview of your data management they might never be resolved. In addition to embarrassment, poor data management can cost you money, complicating data integration programs and compromising business development reporting and opportunities. Without data governance, such issues might even go undetected for years.

There’s also a legal imperative. Not having good quality data can put you out of step with compliance regulations, make it harder to meet service-level agreements (SLAs), and may even lead to prosecution.

The most sweeping data governance law thus far is the European Union’s 2016 General Data Protection Regulation, which gives EU citizens unprecedented access and control over their data.

One of the GDPR’s central pillars is the right to be forgotten, which gives everyone the right to erasure of their personal data under a raft of conditions and circumstances. That imposes a monetary cost on the enterprise in the form of a program to give customers access to their data and to remove it on request, with steep fines for non-compliance.
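What such a program looks like varies from one enterprise to the next, but at a minimum it needs a repeatable way to locate and erase a data subject’s records and to log that it did so. The sketch below is purely illustrative: the SQLite backend, table names, and logging scheme are assumptions, not a compliance recipe.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical tables that hold personal data, keyed by customer_id.
PERSONAL_DATA_TABLES = ["customers", "orders", "support_tickets"]

def handle_erasure_request(conn: sqlite3.Connection, customer_id: int) -> None:
    """Delete a data subject's rows and record the action for auditing."""
    with conn:  # one transaction: either everything is erased or nothing is
        for table in PERSONAL_DATA_TABLES:
            conn.execute(f"DELETE FROM {table} WHERE customer_id = ?",
                         (customer_id,))
        conn.execute(
            "INSERT INTO erasure_log (customer_id, erased_at) VALUES (?, ?)",
            (customer_id, datetime.now(timezone.utc).isoformat()),
        )
```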

Also read: How to Comply with GDPR

Benefits of Data Governance

Data governance is intended to break down barriers. Different stores of data can all combine to make business and workflows within the enterprise and between companies smoother, more efficient, and more secure.

As a company grows, disparate systems across different departments handle and process data, and at a certain level of staff numbers or revenue this can become unwieldy. Transactions are processed and business is conducted in something of a vacuum, with no centralized management environment.

The point of data governance is to bring all those systems and all that information into line, so everyone across the enterprise can engage with any other department or system, and often with outside stakeholders as well. Management gets a clearer, at-a-glance picture of the health of the entire digital asset base and can be assured they comply with the regulations that affect their sector or geographic region.

Other benefits of data governance will follow:

  • It will cost less to manage and use data.
  • The quality of your data will make it a more valuable asset in itself, for example if you have data-sharing agreements with other businesses or business units.
  • It will be easier to investigate for analysis, be it revenue-generating or otherwise.

Data governance will also give you and your enterprise better decision-making power about the direction of your organization: following the data tells an unimpeded story of what’s going on with supply and income.

Also read: Tools to Better Manage GDPR

Getting Started with Data Governance

Different business units in the enterprise will have different views on how their information is stored, used, and accessed, so implementing data governance is like launching a rocket: most of the hard work comes at liftoff, but it gets easier as you pick up speed.

The data governance plan ultimately has to come from the top, but it mustn’t consist simply of edicts on how things will be done. Instead, it should be based on engaging with and listening to department heads about their needs and goals. They’re the ones that use the information, after all, so they’ll know the best methods to wrangle it. The job of the data governance committee or officer is to reconcile those needs with the policies and legislation that data governance sets out.

Selling data governance to company leadership, including corporate boards that might not be clear on its business value, can be a challenge. Data governance isn’t simply a reactive process driven by laws and rules; it should be proactive, taking advantage of new and expanded revenue streams.

Examples of how your enterprise missed the boat on important opportunities can also be helpful; such information highlights how unstructured, insecure, siloed, and poor quality data might negatively impact your business.

It’s a new world where networks are so pervasive that data travels everywhere and fulfills endless purposes. With so much of it being processed even without human input, we need a clear way forward to manage and disseminate data. Data governance is the answer.

Read next: Simplifying Data Management with Hybrid Networks

The post Networking 101: What is Data Governance? appeared first on Enterprise Networking Planet.

]]>
Ubuntu Fan Aims to Simplify Container Networking https://www.enterprisenetworkingplanet.com/os/ubuntu-fan-aims-to-simply-container-networking/ Mon, 29 Oct 2018 13:00:00 +0000 https://www.enterprisenetworkingplanet.com/uncategorized/ubuntu-fan-aims-to-simply-container-networking/ Most people think of Ubuntu as primarily a Linux server and cloud technology effort. Ubuntu also has some networking capabilities that it develops on its own, including the Fan container networking project. There are multiple open source software-defined networking (SDN) efforts in the market today that are more well known than Fan, including the Tungsten […]

The post Ubuntu Fan Aims to Simply Container Networking appeared first on Enterprise Networking Planet.

]]>
Most people think of Ubuntu as primarily a Linux server and cloud technology effort. Ubuntu also has some networking capabilities that it develops on its own, including the Fan container networking project.

There are multiple open source software-defined networking (SDN) efforts in the market today that are better known than Fan, including Tungsten Fabric and OVN. Fan, however, takes a different approach than other SDN models.

“Fan is a zero-configuration SDN,” Mark Shuttleworth, CEO of Canonical Inc and founder of Ubuntu, said. “What you trade is the ability to live migrate an IP address for simplicity.”

Fan takes an overlay network address space and maps it mathematically to an underlay address space. Shuttleworth explained that if a container is trying to reach a particular overlay address, the corresponding underlay address can be calculated, as opposed to being determined via a lookup.
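Fan’s commonly described default expands a /16 underlay into a /8 overlay, so that the two low octets of a host’s address select its /24 slice of container addresses. The sketch below illustrates that style of arithmetic mapping; the 10.0.0.0/16 and 250.0.0.0/8 ranges are example values, and this is a simplified illustration of the idea rather than Canonical’s implementation.

```python
import ipaddress

UNDERLAY_NET = ipaddress.ip_network("10.0.0.0/16")   # example host network
OVERLAY_NET = ipaddress.ip_network("250.0.0.0/8")    # example fan overlay

def host_overlay_subnet(host_ip: str) -> ipaddress.IPv4Network:
    """Each host's /24 of container addresses is derived from its own IP."""
    _, _, c, d = ipaddress.ip_address(host_ip).packed
    first = OVERLAY_NET.network_address.packed[0]     # 250
    return ipaddress.ip_network(f"{first}.{c}.{d}.0/24")

def underlay_host_for(container_ip: str) -> ipaddress.IPv4Address:
    """Recover the host that owns a container address: pure arithmetic,
    no lookup table or central database required."""
    _, c, d, _ = ipaddress.ip_address(container_ip).packed
    a, b = UNDERLAY_NET.network_address.packed[:2]
    return ipaddress.ip_address(bytes([a, b, c, d]))

print(host_overlay_subnet("10.0.3.4"))      # 250.3.4.0/24
print(underlay_host_for("250.3.4.17"))      # 10.0.3.4
```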

While live migration was at one time a popular tool in virtual machine (VM)-based environments, Shuttleworth said that in Kubernetes container environments, live migration isn’t typically necessary.

“You never want to live migrate a container IP address because you just shoot one and then grow another [container],” he said. “The Fan gives you an instant SDN with no central configuration, no database and no lookup mechanism.”

Fan has been a useful feature for Ubuntu in a number of areas, including LXD clustering. LXD is Ubuntu’s open source hypervisor for containers. Overall, Shuttleworth emphasized that Fan is a simpler way to get started with SDN.

“It’s just a super tasteful, little idea,” he said.

Sean Michael Kerner is a senior editor at EnterpriseNetworkingPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.

The post Ubuntu Fan Aims to Simplify Container Networking appeared first on Enterprise Networking Planet.

]]>
Cisco Evolves IOS XR Network Operating System with Linux https://www.enterprisenetworkingplanet.com/os/cisco-evolves-ios-xr-network-operating-system-with-linux/ Wed, 25 Nov 2015 22:32:00 +0000 https://www.enterprisenetworkingplanet.com/uncategorized/cisco-evolves-ios-xr-network-operating-system-with-linux/ At the core of Cisco’s big routers has long been the IOS-XR network operating system. IOS-XR is now evolving, thanks to a rebasing on Linux and the inputs of Cisco’s hyperscale web partners. Kevin Wollenweber, director of product management for Cisco’s service provider segment, explained that the new IOS-XR 6.0 release provides improved visibility into […]

The post Cisco Evolves IOS XR Network Operating System with Linux appeared first on Enterprise Networking Planet.

]]>
At the core of Cisco’s big routers has long been the IOS-XR network operating system. IOS-XR is now evolving, thanks to a rebasing on Linux and the inputs of Cisco’s hyperscale web partners.

Kevin Wollenweber, director of product management for Cisco’s service provider segment, explained that the new IOS-XR 6.0 release provides improved visibility into a network using a feature called telemetry. In the past, he noted, many network devices relied on older approaches, such as SNMP polling, that probe a network in order to get information.

“What we’ve done with telemetry is we have built a publisher/subscriber model where devices push out information at regular intervals,” Wollenweber said.
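Cisco’s implementation streams structured operational data off the device, but the toy publisher/subscriber below is only a language-neutral illustration of the push model Wollenweber describes: collectors register once and samples arrive on a cadence, instead of each collector polling the device. Every class, interface name, and value here is invented for illustration.

```python
import random
import time
from typing import Callable

Subscriber = Callable[[dict], None]

class TelemetryPublisher:
    """Minimal push-model publisher: subscribers register once, then
    receive metric samples at a fixed cadence instead of polling."""

    def __init__(self, interval_s: float = 5.0) -> None:
        self.interval_s = interval_s
        self.subscribers: list[Subscriber] = []

    def subscribe(self, callback: Subscriber) -> None:
        self.subscribers.append(callback)

    def run(self, cycles: int = 3) -> None:
        for _ in range(cycles):
            sample = {                         # stand-in for real counters
                "interface": "GigabitEthernet0/0/0/0",
                "in_octets": random.randint(0, 10**9),
                "timestamp": time.time(),
            }
            for callback in self.subscribers:
                callback(sample)
            time.sleep(self.interval_s)

collector = TelemetryPublisher(interval_s=1.0)
collector.subscribe(lambda s: print(s["interface"], s["in_octets"]))
collector.run()
```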

Additionally, IOS-XR provides more programmability to enable a higher degree of network automation. Orchestration technologies such as Puppet and Chef are now also supported for automation.

“We built an infrastructure that allows people to run their own applications in Linux containers on the router itself,” Wollenweber said.

Cisco is using Linux Containers (LXC) as the container technology. Wollenweber explained that IOS-XR is now based on a Linux infrastructure, which enables more toolchains and standard interfaces.
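
For a feel for the underlying container tooling, the sketch below drives the standard LXC command-line utilities from Python on an ordinary Linux host. This is generic LXC usage rather than an IOS-XR workflow; the container name, distribution, and release are placeholder choices, and the commands typically require root privileges.

```python
import subprocess

NAME = "demo-container"   # placeholder container name

def sh(*cmd: str) -> None:
    """Run an LXC command-line tool and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# Create a container from the generic "download" template
# (distribution, release, and architecture are illustrative choices).
sh("lxc-create", "-n", NAME, "-t", "download", "--",
   "-d", "ubuntu", "-r", "jammy", "-a", "amd64")

# Start it in the background and run a command inside it.
sh("lxc-start", "-n", NAME)
sh("lxc-attach", "-n", NAME, "--", "uname", "-a")
```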

Wollenweber explained that the move to Linux for IOS-XR has been ongoing. He noted that when IOS-XR first shipped in 2004, it was based on the QNX microkernel. Cisco has now taken all the benefits it built into the QNX-based IOS-XR and moved them into a 64-bit Linux infrastructure.

“The 64-bit Linux infrastructure is the de facto standard that is being used across the industry today,” Wollenweber said. “So it gives us more development tools, more tool chains and also more access into the third party development ecosystem.”

Cisco first began using a Linux-based IOS-XR version in early 2014 on one of its routers and is now expanding it to a new portfolio of NCS routers, including the NCS 1000, 5000, and 5500.

Multiple large-scale web vendors like Facebook have recently begun building their own networking infrastructure by way of the Open Compute Project’s whitebox networking efforts. Wollenweber said that the OCP efforts are largely about enabling agility and improved automation as well as integration with common tooling.

“A lot of the problems that we’re trying to solve through the IOS-XR are the same,” Wollenweber said.

Sean Michael Kerner is a senior editor at Enterprise Networking Planet and InternetNews.com. Follow him on Twitter @TechJournalist.

The post Cisco Evolves IOS XR Network Operating System with Linux appeared first on Enterprise Networking Planet.

]]>