Data Center Technology Trends

The demand for data centers continues to grow, pushed in part by the pandemic workplace, prodded along by internet-enabled TVs and other smart devices, and nudged even further by the changing shape and size of digital information. Slouches don’t last long in the tech industry, and all the players are racing to adapt to a changing landscape. As 2021 comes to a close, we look toward the new year for signs of how the data economy will satisfy its hungry consumers.

A Brief Look at the Present

2021 was a rebound year from 2020, when infrastructure spending fell roughly 10% amid an industry-wide cash flow shortage sparked by the pandemic. The demand was there. The money was not.

In a 2021 report on digital infrastructure, real estate analyst CBRE found that new data facilities broke ground at a 42% gain over the prior year, and data center providers made several major land purchases this year, readying themselves for the next phase of development. Northern Virginia and Dallas/Ft. Worth saw spectacular inventory growth, expanding more than 225% since 2015. Surprisingly, that is nearly double the growth rate of Silicon Valley. However, data centers aren't just trying to satisfy a few ZIP codes in the California Bay Area. The United States as a whole is a hotbed of growth for the edge computing market, pulling data centers as close to the point of consumption as possible.

Emerging Data Center Trends

While some parts of the world have slid back into lockdowns and remote work environments, others have resumed normal operations in person and on campus. The effect of the former is still catapulting change in how data reaches its consumers. Here are some of the ways Big Tech is trying to keep up.

1. Automation

From robots that automatically retrieve cold-storage data tapes to roving security robots maintaining a safe perimeter around a facility, automation is coming to data centers; in many ways, it's already here. Some of the remote-work inertia generated by the pandemic has created an opening to introduce robotics into data centers, and few companies are as well equipped or motivated to experiment in that field as Big Tech.

Indeed, IBM experimented with Roomba vacuum cleaners, equipping the robots with sensors that measure data center temperature and humidity. AOL claimed several years ago to have created a small data center that ran completely without staffing. Google put industrial robots to work destroying decommissioned hard drives faster than human staff could manage.

As the scale of data centers increases, the total volume of hardware and software that needs to be managed will far outpace the capabilities of human staff. In the near term, robotics can pick up the slack and become another tool in a human’s kit. In the long term, a fully automated server center could operate at higher temperatures, reducing cooling and operational costs.

Also read: NetOps vs DevOps: Bringing Automation to the Network

2. Hyperscalability

Hyperscale data centers have emerged to satisfy the big data needs of large enterprises and of cloud and hosting providers such as AWS, Microsoft Azure, and Google Cloud. Unlike retail data centers, hyperscale centers typically house a single client, which helps explain why Amazon, Google, and Microsoft occupy 50% of the total hyperscale market. Because the facility serves one client, data can be allocated so that higher-demand data with more CPU-intensive workloads is environmentally zoned with appropriate temperature controls.

Workloads can also be balanced across servers more efficiently, spreading out heat production and distributing energy more reliably. Don't let the name fool you: these centers are not distinguished by their size. Instead, a hyperscale data center is defined by a single client making maximal use of the facility's floor space and resources.

Amazon made waves when it vertically integrated all the logistical components of its supply chain, from warehousing to trucking. Now it’s doing the same again, except with ones and zeros—building its own hyperscale data centers for its line of AWS products. And it’s far from alone, with a projected industry-wide growth in hyperscale construction through 2022, particularly in China.

3. Sustainability

“Going green” is more than an industry buzz slogan. Microsoft has set an aggressive sustainability initiative with the ultimate aim of becoming carbon negative by 2030. This will require a multi-faceted approach, examining everything from energy production to water usage. And while Microsoft is attempting to move to 100% reliance on renewable energy, that strategy simply cannot work without deep wells of on-site energy storage to hedge against fluctuations in wind and solar. Some data centers are already implementing such measures, installing massive-capacity experimental batteries from companies such as Tesla. Some data center owners are more transparent than others about their environmental impact, but many have adopted a forward-leaning stance on reducing harm and “going green.”

4. Edge Computing

Manufacturers, energy companies, electric vehicles, and consumers, among many others, are driving the increased demand for edge computing. In contrast to hyperscale data centers, providers will need to become smaller and more decentralized to serve the edge market.

Some companies, such as Dell, are taking this to the extreme, with the advent of Micro-Modular Data Centers (MMDCs). These are small, portable, fully self-contained data centers designed to serve the needs of one customer at a time. They often include their own cooling, power, and backup hardware, making them a turnkey solution to a company’s data center needs. Like any solution, MMDCs occupy a sweet spot of cost effectiveness for their use case, but technologies like these are bringing the edge ever closer.

Also read: Micro Data Centers are Evolving the Edge

5. Increased Security

For years, Washington D.C. has been warning government employees and other governments about the dangers of hardware-embedded threats to privacy and security. As cyber threats grow more pervasive, and attackers continue to target data centers—veritable treasure troves of information—these centers are feeling the heat and responding to it. 

Google introduced chip-level security into its data centers several years ago, then open sourced the project in 2019 to deliver the same types of security measures to the industry at large. Dubbed Project OpenTitan, it aims to create maximal transparency and security at the chip level, along with additional features such as self-testing for memory tampering on every boot of the chip. Google has partnered with data giants such as Western Digital and Seagate to develop the standard and deploy the technology into 2022 and well beyond.

Also read: What is Cloud Security Posture Management (CSPM)?

6. Healthy Competition

Tech is a sink-or-swim world, and few companies know how to swim as well as Intel. The server CPU market has historically belonged to Intel, but recent deals between AMD and Facebook's parent company, Meta, give Intel's primary competitor a bigger bite of the pie. The steady advancement of chip technology isn't the only factor in this deal; much of it has to do with the relative availability of AMD hardware during a global chip shortage. But the AMD and Intel feud is a tale as old as time, and it will surely continue well into the future. Meanwhile, NVIDIA is making its own plays, venturing into the market with its first data center CPU, an ARM-based chip dubbed “Grace.” Of course, we'll be well past 2022 before we get a sense of how the competition plays out.

Read next: Effectively Implementing AI as a Service

Object Storage vs. Block Storage: Which is Right for Your Enterprise?

The total volume of our stored data continues to explode as the type of data we store diversifies. Object storage and block storage are two methodologies for handling data, but while both systems have their strengths, neither is a panacea. Which is right for your enterprise? Let’s take a closer look to find out.

What is Block Storage?

The older of the two formats – though by no means outdated – is Block Storage. This system breaks data down into individual blocks, assigns each block an address, and distributes the blocks efficiently across a storage medium. In a traditional file system, data is stored in a hierarchy of file folders, but under a block paradigm, no hierarchy is necessary. When a file is called for retrieval, the server locates each block by its unique identifier, collects the blocks, and reassembles the file. These blocks can even be distributed across multiple networked systems, unburdening any single system from the full workload. The upshot of this approach is that it gives multiple users quick paths to each block, enabling data to be recalled quickly and efficiently. However, while each block has its own unique identifier, the file system is limited in what metadata can be associated with the data as a whole. This leaves metadata handling to ancillary software and databases.

There are cloud service providers that offer block storage solutions, but block excels in low-latency environments such as a local network. It writes, retrieves, and modifies files quickly by design. Changes can be made at the block level, meaning the system does not have to retrieve every block of a file to modify it. This makes block storage best suited for databases that require frequent access and for high-performance applications, particularly across fiber optic connections. However, cloud implementations can be cost prohibitive, as users often pay for even unused storage capacity in a block-based system.
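To make that contrast concrete, here is a minimal sketch of a block-style, in-place edit using only Python's standard library. An ordinary local file stands in for a block device, and the block size and file name are illustrative rather than tied to any particular storage product.

```python
# A minimal sketch of a block-level, in-place edit. A local file stands in
# for a block device; the block size and file name are illustrative only.
BLOCK_SIZE = 4096  # assumed block size in bytes

def rewrite_block(path: str, block_number: int, new_data: bytes) -> None:
    """Overwrite one block of the file without touching any other block."""
    if len(new_data) != BLOCK_SIZE:
        raise ValueError("new_data must be exactly one block long")
    with open(path, "r+b") as f:           # open for in-place binary update
        f.seek(block_number * BLOCK_SIZE)  # jump straight to the block's offset
        f.write(new_data)                  # rewrite only this block

# Example: overwrite block 7 of a (hypothetical) database file.
# rewrite_block("orders.db", 7, b"\x00" * BLOCK_SIZE)
```

Because only the targeted block is rewritten, the cost of the edit is independent of the file's total size, which is exactly the property that makes block storage attractive for frequently modified data.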

Lastly, block storage has a long history and is well supported, and enterprises may find it useful (or even necessary) for running legacy software that is unfamiliar with newer protocols.

Also read: The Future of Network Management with AIOps

What is Object Storage?

Modern problems require modern solutions. Archival databases are getting bigger, file sizes are ballooning, and high-speed internet coupled with deep wells of storage allows services like YouTube to host a colossal amount of data. While block storage can serve these needs, these are problems best addressed by a completely different paradigm. Enter Object Storage: a data storage methodology that works on the conceptual inverse of block storage.

Rather than breaking data into constituent fragments, object storage treats data as a whole, but adds two additional layers of information: a unique identifier and metadata. With these three pieces in hand, the final “object” is assembled. The resultant object is then stored on a flat address space — meaning no hierarchy is required — where it is retrieved by its unique identifier.

This method offers scalability advantages, since new objects can be dropped into storage ad infinitum without complicating the overall file system. From a cloud perspective, it can be more cost-friendly as well, since customers typically pay only for the storage they use. The addition of metadata tags also provides advanced searchability and analytics, aspects that lend themselves well to retrieving information from large quantities of data.
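As a rough illustration of the object model, the sketch below stores and inspects an object with custom metadata on Amazon S3 using the boto3 library. The bucket name, key, and metadata fields are placeholders; running it would require valid AWS credentials and an existing bucket.

```python
# Sketch of writing and inspecting an object with custom metadata on
# Amazon S3 via boto3. Bucket, key, and metadata values are placeholders.
import boto3

s3 = boto3.client("s3")

# Store the object as a single unit, attaching metadata tags alongside it.
with open("keynote-2021.mp4", "rb") as body:
    s3.put_object(
        Bucket="example-archive-bucket",
        Key="videos/keynote-2021.mp4",
        Body=body,
        Metadata={"department": "marketing", "retention": "7y"},
    )

# Read back only the metadata, without downloading the (potentially huge) body.
head = s3.head_object(Bucket="example-archive-bucket", Key="videos/keynote-2021.mp4")
print(head["Metadata"])  # {'department': 'marketing', 'retention': '7y'}
```

Note that the metadata can be fetched without transferring the object body, which is what makes metadata-driven search and analytics practical on very large objects.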

On the downside, editing object-based files can be time and resource intensive. Even a minor edit to a 20GB file requires essentially recreating the entire file and rewriting the object in full. By comparison, block storage enables edits to small components of a file, reducing the time required to complete even small tasks.

However, archived databases and large files are often static in nature. They may need to be recalled and viewed, but their modification needs are minimal. Whereas block storage is best suited for files that require frequent modification and applications often in use, object storage is the solution for large-scale, static data.

Providers of Block and Object Storage

Tech giants and smaller vendors alike often offer both solutions for enterprise clients. Here are some of the primary players in the field of cloud-based object and block storage:

  • IBM – Object Storage: Pricing is tiered based on how frequently data is accessed and how much capacity is required.
  • IBM – Block Storage: Pricing based on volume, with added fees for snapshots. Monthly price starts at $0.05 per 0.25 IOPS/GB.
  • Amazon S3: The Simple Storage Service, or “S3,” is a tiered, feature-rich object-based service, with pricing starting as low as $0.00099/GB, and as high as $0.023/GB, depending on frequency of access.
  • Amazon EBS: The Elastic Block Store (EBS) is a high-performance block storage solution that includes a free tier of 30GB.
  • Microsoft Blob Storage: Object storage dubbed “blob,” with archival pricing starting at $0.00099/GB and premium pricing at $0.15/GB across all levels of data storage.
  • Azure Disk Storage: Block storage on an Azure Virtual Machine, with the lowest, 4GB disk size starting at $0.60/GB/month.
  • Google Cloud Persistent Disk: Block storage with simplified pricing tiers, starting at $0.040/GB/month.
  • Google Cloud Storage: Competitively priced object storage with a standard storage offering at $0.02/GB/month.

Choosing the Right Storage

At a certain scale, many companies will employ both solutions in a hybrid effort to maintain their data for regulatory compliance or archival reasons, while keeping hot data nearby in blocks. It’s important when choosing a storage solution to cut through the marketing talk and assess which method is the right fit for your needs.

Read next: Best NAS Software for Enterprise Storage 2021

Are Companies Protecting Employee Data?

While data breaches and cybercrimes make regular headlines, we are quick to forget that the preponderance of our private data is legally bought, sold, or freely traded without our notice or objection. Consumers typically associate this type of data harvesting with big tech giants, social media companies, and cell phone manufacturers, but at this point, any third party who handles your data should be suspected of selling it: from email providers who scan your messages for marketing data to credit card companies who sell your transaction history. This is no longer just a consumer problem. Employers also need to be vigilant about protecting their employee data, because that trove of information may be more vulnerable than previously believed.

The Third-Party Problem

In the summer of 2021, financial software giant Intuit — maker of TurboTax and QuickBooks — announced it would automatically share client payroll data with consumer credit reporting agency Equifax, Inc. The move affects 1.4 million small businesses and millions more employees, and encompasses sensitive information like payment history, social security numbers, dates of birth, home addresses, phone numbers, and of course first and last names. Intuit offers this service to expedite “…verification of employment and income info when applying for things like loans, credit, or public aid,” but aggregated sets of payroll data make prime targets for thieves, and Equifax has seen its fair share of data breaches. For those concerned about this type of exposure, Equifax has offered an opt-out feature for both employees and employers.

This kind of mass collection from the workplace is nothing new. In fact, companies, universities, and even government agencies have offered this data directly to Equifax for years, in exchange for the outsourcing of employment verification of former workers. This may come as unwelcome news to many employees who might be surprised to learn that their weekly pay stub information is available for purchase by any interested party.

But employees play their part too, submitting their resumes to recruiters without much thought about the consequences. Third-party recruiters bring talent and jobs together, but virtually all of them share resume data with affiliated sites, and who can say how broad the circle gets beyond that. A person's resume contains any number of details that might be compromising if used to answer security questions from, say, one's own bank (e.g., “Where did you go to college?” “What was your first job?” “What town did you grow up in?”). There isn't much an employer can do to curtail what a prospective employee chooses to share on these services, but employers can opt for the “apply on the company website” feature, bringing resumes directly to the employer rather than serving them through the third-party provider.

Privacy-minded employers should carefully consider the privacy policies of the third-party services they enlist. Benefits and health insurance providers must walk a tightrope to avoid running afoul of HIPAA, but their use of employee data should also receive polite scrutiny. There's no harm in asking for a copy of a company's privacy policy, and many are available on a service's website.

Also read: Five Tips for Managing Compliance on Enterprise Networks

Why Does Data Protection Matter?

In 2015, hackers compromised a database of US security clearance background check forms. These extensive forms document, down to the most granular level, the lives of millions of clearance holders, along with the lives of their friends and families. It was the type of hack that never should have happened, and yet, just like breaches of the Equifax database, it did. The sad reality is that no matter how securely it is stored, data can be stolen, and the onus is on each of us to be the bulwark of our personal information, sharing only what is absolutely necessary, and only with people who need it.

Even something as seemingly innocuous as connecting a name with a cellphone number can put security systems such as two-factor authentication at risk. And once-sacred social security numbers, used by banks, credit card companies, and the IRS to validate a person's identity, have become so ubiquitous they might as well be considered public information. This didn't happen by accident. It was the consequence of decades of data sharing between marketers, employers, and data brokers, with little evident concern that the information might find its way into the hands of identity thieves.

CPRA: Progress in California

Last year, California passed the nation's first employee privacy protection laws, giving Californians notice of, and the ability to opt out of, sales of their private data by their employers. These rights extend to employees, job applicants, and independent contractors, giving those entering the workplace an expectation of privacy for the first time in decades. California-based businesses should familiarize themselves with the California Privacy Rights Act (CPRA), because failure to comply can bring penalties of up to $7,500 per incident. Many of these protections do not go into effect until January 1, 2023, to give employers time to prepare for compliance.

It’s a good start, but the rest of the country lacks such an omnibus employee privacy protection bill. Nonetheless, employers should show initiative and respect their own employees’ data, with or without legislative influence. 

Read next: Best Data Governance Tools for Enterprise 2021

Enterprise Service Management and ITSM: What is the Difference?

Veterans of corporate culture have grown accustomed to the taste of alphabet soup – some may even like it – but the rest of us are just trying to keep up with the lingo. For many people, ESM, or Enterprise Service Management, may be a new three-letter acronym, but it seeks to broaden the role of its older and more familiar cousin, IT Service Management, or ITSM. These two closely related concepts form governing philosophies that help companies manage inefficiencies, cut costs, and improve customer experience. But between the two, what's the difference?

ITSM: Putting the End User First

At its core, ITSM is a framework of best practices to keep an IT team laser-focused on facilitating company goals. Whether it’s creating a knowledge base so users can quickly troubleshoot their own problems, implementing a ticketing system to process and prioritize issues as they arise, or creating a comprehensive plan to ensure smooth transfer of data between old and new hardware, ITSM best practices guide the way forward.

Harken back to the air travel experience of yesteryear. Sure, the seats were bigger and the security was less intrusive, but we take for granted the profound improvements in customer experience that have transpired over the last decade. Check-in is now seamless. You can do it from your air carrier’s phone app days in advance of your trip. No longer do you need to print out a ticket; it’s on your phone. If there’s a delay in your flight, you’ll be notified instantaneously on your phone. Have a question about what you’re allowed to take on the plane? There’s an FAQ for that on your phone. Concerned that your checked bags didn’t make it onto your flight? You can get instant notification of your bag status on – you guessed it – your phone.

New technology may have enabled all of these improvements to your experience, but they didn’t just magically occur. They took a team of software developers driven by a singular vision to put every scrap of important data in the customer’s hand, giving them complete ownership and control over their trip. This serves multiple purposes: it improves customer satisfaction, it cuts down lines at the check-in counters, it reduces operational costs, and it can even reduce the strain on security. Everybody wins.

Your organization’s IT is likely doing the same thing, though you may not be aware of it. Think about it the next time you order a laptop through your company’s online portal. Invisibly to you, an automatic approval check is sent to your manager. Once that check is passed, the ticket is forwarded to Procurement, who purchases the laptop, receives it, and shuffles it off to IT for proper configuration, who then delivers it to your desktop. All of these unseen processes fall under the rubric of your company’s ITSM procedures and were architected to ensure you receive your new hardware quickly and efficiently — without the need for 20 phone calls and emails to a dozen people along the way. Yes, it sounds simple, but much of what we think of as simple today was instituted years ago by someone insisting there was a better way to do things.
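As a purely illustrative sketch of that kind of flow, the snippet below models a hardware request as a ticket moving through fixed stages. The stage names and fields are hypothetical and not drawn from any real ITSM tool, which would expose far richer workflow engines, approvals, and integrations.

```python
# Illustrative model of the hardware-request flow: a ticket that advances
# through fixed stages. All names are hypothetical, not from any real tool.
from dataclasses import dataclass, field

STAGES = ["submitted", "manager_approval", "procurement", "it_configuration", "delivered"]

@dataclass
class HardwareTicket:
    requester: str
    item: str
    stage: str = STAGES[0]
    history: list = field(default_factory=list)

    def advance(self) -> None:
        """Move the ticket to the next stage and record where it has been."""
        position = STAGES.index(self.stage)
        if position + 1 < len(STAGES):
            self.history.append(self.stage)
            self.stage = STAGES[position + 1]

ticket = HardwareTicket(requester="jdoe", item="laptop")
ticket.advance()      # manager approval granted
ticket.advance()      # handed off to procurement
print(ticket.stage)   # -> "procurement"
```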

Many of those better ways have been formalized into various standards of ITSM. The most popular is the Information Technology Infrastructure Library (ITIL), which prescribes IT best practices in a series of five books, covering such topics as management of equipment, regulatory compliance, expectation management, reducing IT costs, and more. ITIL has been adopted by massive companies like Disney and successful tech startups like Spotify.

Also read: Top Risk Management Tools for Enterprise 2021

ESM: Think Bigger

Enterprise Service Management is a framework that seeks to apply ITSM practices across the entire organization. Human Resources may operate its own exclusive database and have very little overlap with, say, Site Security, Facilities, or Procurement. But if the automation principles of ITSM were put to work between these departments, then when a new employee is onboarded, a request for their new computer and phone hardware would automatically be sent from HR to Procurement, a request would go to Site Security for a new badge, and Facilities would simultaneously receive the employee's vehicle information so it can issue a parking pass.

Suddenly, one data entry cascades into a series of automated processes between departments. Prior to implementing ESM-driven practices, HR would write and distribute weekly emails to all of these departments summarizing the upcoming new hires and their needs. That may only take minutes, or it may take a full hour every Monday, but collectively the effort adds up to hundreds of man-hours a year. Automating routine processes can cut these costs and free staff up to work on more important tasks.
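A minimal sketch of that cascade, with hypothetical department handlers and event fields, might look like the following; a production ESM platform would use a message bus or workflow engine rather than an in-process list of callbacks.

```python
# Sketch of one HR "new hire" event fanning out to several departments.
# Handler behavior and event fields are hypothetical.
from typing import Callable, Dict, List

handlers: List[Callable[[Dict], None]] = []

def subscribe(handler: Callable[[Dict], None]) -> None:
    handlers.append(handler)

def publish_new_hire(event: Dict) -> None:
    for handler in handlers:
        handler(event)

subscribe(lambda e: print(f"Procurement: order laptop and phone for {e['name']}"))
subscribe(lambda e: print(f"Site Security: issue a badge for {e['name']}"))
subscribe(lambda e: print(f"Facilities: issue a parking pass for plate {e['plate']}"))

# One data entry from HR triggers every downstream department automatically.
publish_new_hire({"name": "A. Nguyen", "plate": "7ABC123"})
```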

Under ITSM best practices, problems are solved on an individual, ticket-based system. But if several tickets demonstrate a common cause, those tickets are grouped and escalated, and the root issue is treated rather than just the symptoms. The same practice can be observed under an ESM framework: a company's Customer Service department identifies common tickets from incoming calls, groups them, then notifies the proper department of a potential hardware defect in the latest line of CPUs.
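A tiny sketch of that grouping step is below. The ticket fields and cause labels are hypothetical; the point is simply that tickets sharing a suspected cause surface one root issue to escalate.

```python
# Tiny sketch of grouping open tickets by a suspected root cause so the
# underlying issue can be escalated once. Ticket fields are hypothetical.
from collections import defaultdict

tickets = [
    {"id": 101, "summary": "CPU overheating", "suspected_cause": "cpu-batch-42"},
    {"id": 102, "summary": "Random reboots",  "suspected_cause": "cpu-batch-42"},
    {"id": 103, "summary": "VPN login fails", "suspected_cause": "auth-outage"},
]

by_cause = defaultdict(list)
for ticket in tickets:
    by_cause[ticket["suspected_cause"]].append(ticket["id"])

for cause, ids in by_cause.items():
    if len(ids) > 1:  # several tickets share a cause: escalate the root issue
        print(f"Escalate {cause}: affects tickets {ids}")
```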

As organizations grow, their departments become more siloed, and deliberate effort is required to bring them back together. There's even an entire industry of ESM software aimed at uniting company departments and giving them the same set of tools.

It’s All in the Execution

Whether you're looking to integrate ITSM best practices within your IT team, or bring ESM to the broader organization, remember that one size does not fit all. Common implementations such as ITIL can require substantial investments in training, consulting fees, certification, and software licensing. Those costs won't net any benefits for a ten-person marketing firm, but they will pay dividends for a company with staffing in the hundreds or thousands. As your business grows, your tools, strategies, and best practices should grow with it.

Read next: Employing SIEM in the Network Security Fight

Protecting Subdomains Against Security Threats

A subdomain takeover occurs when a cybercriminal seizes control of a subdomain belonging to a targeted domain. Here is how to prevent such attacks.

Websites have grown far beyond their humble origins, becoming platforms for countless applications that serve an organization's every need. From email to eCommerce, from support to security, many services are now hosted under the umbrella of a company's website. Subdomains, which are short prefixes to a site's domain name, are convenient identifiers for routing traffic to the appropriate service, such as mail.example.com or support.example.com. While useful, these subdomains increase the attack surface an organization exposes to cybercriminals, and once compromised, they can be vectors for a malicious actor to inflict even further harm.

The problem has become so widespread that even major organizations such as Microsoft have found themselves vulnerable to these types of threats. As your organization expands its services and subdomains, are you taking the proper steps to ensure they are fully secure?

What is a Subdomain Takeover?

The most common scenario for a subdomain takeover occurs when an organization abandons a third-party service, only to have its subdomain reclaimed by an attacker through that service. For example, a company selling woodworking supplies registers the subdomain sales.example.com with its domain registrar, but the actual eCommerce transactions are routed through a third-party service like SquareSpace. Now, SquareSpace is hosting a sales portal for the woodworking supplier under the subdomain sales.example.com.

The company later withdraws from selling online and focuses instead on selling its products in the physical store. Although it no longer uses SquareSpace, it has neglected to remove the DNS records pointing the subdomain at the service. At this juncture, a savvy attacker has an opportunity to claim the abandoned resource on the outside service, reactivating the dead linkage, seizing the subdomain, and potentially hosting content under the veneer of the woodworking supplier's name.

Also read: Managing Security Across MultiCloud Environments

Assessing the Damage

The damage a hacker can inflict on a compromised website is wide-ranging, from posting pictures of dancing cats to creating phishing forms capable of gathering employee login credentials, then leveraging that information to further infiltrate company systems. Even worse, imagine a fully-cloned eCommerce site designed to capture your customers’ credit card data, along with other personal information. Such a site could become a vehicle for identity or data theft, or propagate malicious software, all at the cost of your company’s good name.

In 2020, Microsoft found itself under media scrutiny for the revelation that many of its subdomains may have been taken over, a situation made possible by “Enterprise sprawl and a lack of internal domain controls,” according to one cybersecurity expert. The consequences put at risk the safety of Microsoft customer data, such as usernames and passwords. Companies failing to secure their online assets may also face serious reputational harm and loss of business.

Preventing a Subdomain Attack

White, black, and gray hat hackers employ sophisticated tools and methodologies to locate system vulnerabilities, and even use simple lists of the most commonly used subdomains in search of abandoned ones. While there are dozens of groups and individuals who actively pursue vulnerable subdomains in the interests of securing them, companies should not rely on the good will of others when protecting their own sites. In fact, cybersecurity researchers recently informed hundreds of companies of their vulnerable subdomains, and found that only 31% of those organizations took steps to improve security in the following six months.

The best prevention is to go right to the heart of the problem. An organization's IT team should conduct regular, systematic reviews of DNS entries to identify errors in DNS name resolution, which are indicative of a vulnerability that needs to be closed. When an organization stops using an off-site service, the corresponding DNS records should be expunged as a matter of procedure. When partnering with new services, vet vendors carefully for their security policies and their proactivity against subdomain takeovers. Finally, as new services are onboarded, the final step of that process should be creating the corresponding DNS record, so that orphan records never exist, even briefly.
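As a rough example of what such a review might automate, the sketch below uses the dnspython library (installed with pip install dnspython) to flag subdomains whose CNAME targets no longer resolve. The subdomain list is a placeholder; a real audit would pull names from your DNS zone exports, and a dangling CNAME is only a signal to investigate, not proof of a takeover.

```python
# Rough sketch of a recurring audit for dangling subdomains using the
# dnspython library. The subdomain list is a placeholder.
import dns.resolver

SUBDOMAINS = ["sales.example.com", "mail.example.com", "support.example.com"]

for name in SUBDOMAINS:
    try:
        # Follow the subdomain's CNAME to see where it points.
        target = dns.resolver.resolve(name, "CNAME")[0].target.to_text()
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        continue  # no CNAME here; nothing to follow for this name
    try:
        dns.resolver.resolve(target, "A")  # does the target still resolve?
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"Possible dangling record: {name} -> {target}")
```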

Frequent vetting of DNS entries can become burdensome for large organizations with robust web application needs, but the task is necessary to protect company reputation, customer data, and employee data. Manual reviews of DNS entries can be complemented with or replaced by automated web security audits known as Attack Surface Management services. These companies monitor your web assets for potentially risky exposures and advise your organization on how to reduce the attack surface open to cybercriminals.

Further automation is available to users of Microsoft’s Azure suite with a PowerShell tool that automatically identifies dangling DNS entries that point to unused Azure resources. Azure Defender is another tool that actively detects such entries and provides security alerts in real-time. If that’s not enough, many companies post bounties for white hats to find vulnerabilities before the bad guys do. One white hat subdomain takeover cost Starbucks $2,000, but may have saved them hundreds of thousands in potential damages otherwise.

Thwarting Future Attacks

Quick and decisive action is needed to thwart a subdomain takeover. If a problem is detected, contact the service provider of the hijacked subdomain and inform them of the fraudulent activity being conducted in your company's name. Cleanse DNS records of offending entries. Your IT team may need to conduct a forensic audit of all possible harm inflicted by the attack to determine what data may have been stolen, then inform affected customers of the breach. Often, companies that have fallen prey to data theft will provide free identity protection services, at the company's expense, of course. The vastly preferable option is, naturally, prevention.

Read next: Establishing Server Security Best Practices
