Picus Labs | 15 MIN READ

CREATED ON June 11, 2025

Supporting Vulnerability Scanners in the Modern Age

What Are Vulnerability Scanners?

A vulnerability scanner is an automated security tool that identifies known weaknesses, misconfigurations, and outdated software across IT systems such as servers, networks, applications, and cloud environments. It compares system data against regularly updated vulnerability databases to help organizations detect and prioritize security risks before attackers can exploit them.

How Vulnerability Scanners Work: From Discovery to Reporting

Network and Asset Discovery

The scanning process typically begins with network and asset discovery. During this phase, the scanner maps the environment by identifying live hosts, IP address ranges, operating systems, open ports, and running services. This is achieved through techniques such as TCP/UDP port scanning, service fingerprinting, and protocol-level interrogation.

By constructing a detailed inventory of devices and services, the scanner establishes the context necessary for accurate vulnerability detection. Asset discovery also enables segmentation-aware scanning strategies and helps detect shadow IT or unauthorized systems.
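Conceptually, the discovery step above can be sketched as a series of TCP connect probes. The following is a minimal illustration only (the host and port list are placeholders supplied by the caller); production scanners use raw packets, UDP probes, service fingerprinting, and OS detection rather than simple connect scans:

```python
import socket

def discover_open_ports(host, ports, timeout=0.5):
    """Probe a host for open TCP ports using simple connect scans.

    A minimal sketch of the discovery phase; real scanners add UDP
    probes, OS fingerprinting, and service version detection.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Even this toy version demonstrates the key output of discovery: an inventory of reachable services that later phases can interrogate.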

Data Collection: Authenticated vs. Unauthenticated

After identifying assets, the scanner collects security-relevant data. This can occur in two distinct modes:

Unauthenticated (non-credentialed) scanning mimics an external attacker’s perspective. It interacts only with exposed services over the network, collecting banner information, service responses, and basic metadata. 

While this method offers broad coverage with minimal configuration, it is limited in depth and tends to produce more false positives.

Authenticated (credentialed) scanning uses administrative access (e.g., SSH for Linux, WMI or WinRM for Windows) to log into systems and extract detailed internal information. This may include the list of installed software, system configurations, running processes, registry values, and patch status. 

Authenticated scans provide higher accuracy, better context, and more reliable vulnerability detection.
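At its simplest, the unauthenticated mode described above amounts to banner grabbing: connect to an exposed service and record whatever it announces. A hedged sketch of that idea (real scanners send protocol-specific probes and parse responses far more carefully):

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Collect whatever a service announces on connect (unauthenticated).

    Many services (SSH, SMTP, FTP) send a version banner immediately;
    scanners parse these banners for product and version hints.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None
```

The limited depth of this approach is exactly why banner-based findings produce more false positives than credentialed checks: a banner reveals a version string at best, not patch backports or configuration state.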

Vulnerability Detection and Signature Matching

At the core of a scanner is a vulnerability matching engine. This component compares the collected system data against one or more vulnerability databases. These databases may include:

  • Public repositories such as the National Vulnerability Database (NVD), which catalog CVEs (Common Vulnerabilities and Exposures).

  • Vendor-specific advisories that describe product-specific vulnerabilities and misconfigurations.

  • Proprietary threat intelligence feeds maintained by scanner vendors.

Matching is typically signature-based, where specific conditions (e.g., software version X on OS Y with configuration Z) are evaluated against known vulnerability fingerprints. Some tools incorporate heuristic and inference-based techniques to detect issues even when version numbers are unavailable, using behavioral patterns or system artifacts.
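The signature-matching idea can be sketched as a version comparison against a fingerprint store. Everything below is hypothetical for illustration: the product names, the signature schema, and the "CVE-0000-…" IDs are placeholders, not real advisories, and real engines handle backported patches, version epochs, and configuration conditions:

```python
# Hypothetical signature store (IDs are placeholders, not real advisories):
# (product, first fixed version) -> vulnerability ID.
SIGNATURES = {
    ("examplelib", (3, 0, 7)): "CVE-0000-0001",
    ("exampled", (2, 4, 50)): "CVE-0000-0002",
}

def parse_version(version):
    """Turn a dotted version string like "2.4.49" into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def match_vulnerabilities(inventory):
    """Flag inventory entries whose version is below the first fixed release."""
    findings = []
    for product, version in inventory.items():
        for (sig_product, fixed_in), vuln_id in SIGNATURES.items():
            if product == sig_product and parse_version(version) < fixed_in:
                findings.append((product, version, vuln_id))
    return findings
```

This also illustrates a core weakness the article returns to later: the match proves only that a vulnerable-looking version is installed, not that the flaw is reachable or exploitable in the environment.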

Risk Scoring and Categorization

Once vulnerabilities are identified, scanners assign severity scores to support prioritization efforts.

The majority of tools still lean on established, global frameworks like the Common Vulnerability Scoring System (CVSS), which scores vulnerabilities based on exploitability, impact, and access complexity.


Some extend this with probabilistic models like the Exploit Prediction Scoring System (EPSS), which estimates the likelihood of exploitation in the wild based on telemetry and threat intelligence data.

While both models have proven foundational for large-scale vulnerability management, their generic, global-only scoring logic often lacks environmental context, such as asset exposure, compensating controls, or business criticality. 

These limitations are increasingly relevant as organizations move toward context-aware and threat-informed prioritization approaches, which we explore further in subsequent work.

Reporting and Remediation Guidance

After completing the scan, the tool generates structured reports detailing affected assets, the vulnerabilities found, associated risk levels, and recommended remediation actions. Reports are often customizable and may be exported in formats compatible with vulnerability management platforms, ticketing systems, or compliance frameworks (e.g., PCI DSS, NIST 800-53).

Some scanners also include patch verification features, allowing teams to rescan systems and validate that fixes have been successfully applied.
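The rescan-and-verify step reduces to a set comparison between two scans of the same asset. A small sketch (the CVE IDs are placeholders, and the list-of-IDs input format is an assumption, not a specific scanner's export schema):

```python
def verify_remediation(before, after):
    """Compare findings from two scans of the same asset.

    Returns (fixed, still_open, newly_found) as sets of finding IDs.
    A minimal sketch of the rescan/verify step many scanners automate.
    """
    before, after = set(before), set(after)
    return before - after, before & after, after - before
```

The "newly_found" set matters in practice: a rescan can surface regressions or new disclosures, not just confirm fixes.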

How Do Vulnerability Scanners Prioritize Vulnerabilities?

To help prioritize remediation, scanners mainly use legacy scoring frameworks like CVSS and EPSS. These models provide a risk rating based on factors such as attack complexity, impact, and exploit likelihood.

NIST recently introduced a new LEV metric to indicate the likelihood that a vulnerability has already been exploited in the wild. For more details, including its advantages and limitations, visit our technical blog.

Common Vulnerability Scoring System (CVSS)

CVSS assigns each vulnerability a severity score from 0.0 to 10.0, based on how it can be exploited and its potential impact.

It considers factors like:

  • How the attack can be launched (e.g., over the network or locally)

  • Whether special privileges or user interaction are required

  • What the impact is on confidentiality, integrity, and availability

  • Whether the attack affects system boundaries (scope)

Severity is then categorized:

  • None: 0.0

  • Low: 0.1-3.9

  • Medium: 4.0-6.9

  • High: 7.0-8.9

  • Critical: 9.0-10.0
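These bands map directly to a small lookup. The thresholds below follow the standard CVSS v3.x qualitative rating scale (which rates a 0.0 score as "None"):

```python
def cvss_severity(score):
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"       # CVSS v3.x rates 0.0 as "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```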

Importantly, CVSS reflects a global, standardized view of the vulnerability’s inherent technical risk, often based on a worst-case scenario.

It does not account for whether the vulnerability is being actively exploited in the wild, or how relevant it is to a specific organization’s environment or threat landscape.

How Do Scanners Use the CVSS Framework?

Vulnerability scanners incorporate CVSS by mapping detected vulnerabilities, usually identified via CVE identifiers, to the corresponding CVSS scores maintained in public databases such as NVD (National Vulnerability Database). When a vulnerability is discovered on an asset, the scanner fetches its base CVSS score and vector string, which describes the technical characteristics of the vulnerability (e.g., AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H).
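A vector string like the one above is just a slash-delimited list of metric/value pairs, so ingesting it is straightforward. A minimal sketch (no validation of metric names or values, which a real implementation would need):

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3.x vector string into its metric/value pairs.

    e.g. "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    ->   {"AV": "N", "AC": "L", "PR": "N", ...}
    """
    metrics = {}
    for part in vector.split("/"):
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics
```

Scanners use these parsed metrics to filter findings, e.g. surfacing only network-reachable (AV:N) vulnerabilities that need no privileges (PR:N).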

Scanners typically do the following:

  • Retrieve CVSS Base Scores: These reflect the inherent severity of the vulnerability, independent of any specific environment.

  • Display Severity Ratings: Based on CVSS categories (Low, Medium, High, Critical), scanners tag and prioritize vulnerabilities in reports and dashboards.

  • Apply Risk Weighting Rules: Some tools allow custom weighting or adjustment of severity based on asset importance, though this is separate from CVSS itself.

  • Link to Exploit and Patch Data: CVSS scores are often enriched with additional metadata, such as exploit availability, patch references, or EPSS predictions, to assist in triaging.

It's important to note that most scanners do not calculate CVSS scores themselves. Instead, they ingest them from trusted databases, ensuring standardization across tools, but at the cost of local context.

Exploit Prediction Scoring System (EPSS)

EPSS estimates the likelihood that a vulnerability will be exploited in the next 30 days, using real-world data. It produces a score between 0 and 1, where higher means more likely to be exploited.

It factors in:

  • How long the vulnerability has been public

  • Whether the system is exposed to the internet

  • Whether public exploits exist

  • Historical exploit trends and metadata from CVSS, NVD, and threat intelligence sources

EPSS helps teams focus on what’s likely to be attacked, filling the gap left by CVSS’s theoretical model.

How Do Scanners Use the EPSS Framework?

Technically, scanners incorporate EPSS in the following ways:

  • Query or ingest EPSS scores from the EPSS API or partner threat feeds, mapping each CVE to its latest predicted score.

  • Display risk probability alongside CVSS scores, often labeling vulnerabilities with high EPSS values (e.g., >0.5) as "likely to be exploited soon."

  • Filter and sort vulnerabilities based on EPSS thresholds, allowing security teams to prioritize patching efforts toward those with both high severity and high exploit probability.

  • Augment dashboards and reports with EPSS-derived trends, helping teams identify which issues may require faster remediation despite lower CVSS ratings.

Because EPSS is updated regularly based on real-world telemetry, including exploit kit activity, network exposure, and public PoC availability, it enables scanners to deliver a more adaptive and time-sensitive risk picture, helping bridge the gap left by static scoring systems like CVSS.
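The filter-and-sort behavior described above can be sketched as a tiered ranking over combined CVSS and EPSS data. The finding schema and thresholds here are illustrative assumptions (the 0.5 EPSS cutoff mirrors the example earlier in this section), and the CVE IDs in the usage example are placeholders:

```python
def prioritize(findings, epss_threshold=0.5, cvss_threshold=7.0):
    """Rank findings so those both severe (CVSS) and likely to be
    exploited (EPSS) come first.

    findings: list of dicts with "cve", "cvss", "epss" keys
    (an illustrative schema, not a specific scanner's export format).
    """
    def urgency(finding):
        likely = finding["epss"] >= epss_threshold
        severe = finding["cvss"] >= cvss_threshold
        # Tier 0: severe and likely; 1: likely only; 2: severe only; 3: rest.
        tier = 0 if (likely and severe) else 1 if likely else 2 if severe else 3
        return (tier, -finding["epss"], -finding["cvss"])
    return sorted(findings, key=urgency)
```

Note the design choice: a likely-to-be-exploited medium ranks above an unlikely critical, which is exactly the shift in emphasis EPSS enables.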

Types of Vulnerability Scanners and Leading Vendors (Updated for 2025)

Network Vulnerability Scanners

Scan networks for open ports, exposed services, weak protocols (e.g., SMBv1, Telnet), and remotely exploitable vulnerabilities.

Vendors: Tenable Nessus, Qualys VMDR, Rapid7 InsightVM, OpenVAS (Greenbone)

Host-Based Vulnerability Scanners

Assess individual systems using agents or credentials to detect outdated software, patch gaps, local misconfigurations, and weak permissions.

Vendors: Qualys Cloud Agent, Rapid7 Insight Agent, Tripwire Enterprise, Tenable Nessus Agents

Web Application Scanners

Test web applications and APIs for common vulnerabilities such as SQL injection, XSS, CSRF, SSRF, and broken authentication.

Vendors: Burp Suite, Invicti (formerly Netsparker), Acunetix, OWASP ZAP

Cloud Vulnerability Scanners

Evaluate cloud infrastructure for misconfigurations, exposed resources, identity risks, and compliance issues across AWS, Azure, and GCP.

Vendors: Wiz, Orca Security, Prisma Cloud, Qualys CloudView

Container Scanners

Scan container images for vulnerable OS packages, language libraries, and configuration flaws in registries or during CI/CD.

Vendors: Trivy, Snyk, Clair, Anchore

Source Code Scanners (SAST)

Analyze source code to detect security flaws early in the development lifecycle.

Vendors: SonarQube, Fortify, Checkmarx, GitHub CodeQL

Vulnerability Management Challenges in Modern Environments

Volume Overload and Alert Fatigue

Vulnerability management in today’s enterprise environments faces a growing paradox: more data, less clarity. 

In 2024 alone, over 40,000 new vulnerabilities were disclosed, a trend that shows no signs of slowing. 

Scanning tools, which rely heavily on global scoring systems like CVSS and EPSS, classify more than 60% of these vulnerabilities as high or critical, often without factoring in whether they’re actually exploitable in the context of a specific environment.

This creates an avalanche of alerts. A routine scan can yield thousands of findings, many labeled urgent based on theoretical risk rather than real-world threat relevance. The result? Alert fatigue becomes the norm, not the exception. Security teams spend valuable time chasing issues that may pose little or no practical risk, while truly dangerous exposures risk being buried in the noise.

Vulnerability scanners were built for asset coverage, not for threat-aware prioritization. They excel at detection but often fall short in answering the critical question: 

"What should we fix first, and why?" 

As environments scale and attackers grow more precise, this gap becomes increasingly costly, both in time and exposure.

Theoretical Risk vs. Real-World Exploitability

Vulnerability scanners often rely on CVSS scores to rate severity, but these reflect theoretical risk rather than actual context. 

A CVSS 9.8 on an isolated internal system may be flagged as critical, while a CVSS 6.5 on a public-facing server could be ranked lower despite being more exposed.

This mismatch leads to misaligned remediation priorities. Scanners highlight what is severe in theory, not what is most exploitable in practice, leaving teams to sift through alerts that may not reflect true risk.

Lack of Control Awareness

Vulnerability scanners often flag issues based solely on software version or CVE presence, without considering whether existing security controls are effectively mitigating the risk. 

A vulnerability might be marked as critical even if a firewall, WAF, or endpoint protection is already blocking known exploit attempts.

In many cases, vendor-provided mitigations or minor configuration changes can significantly reduce risk, but vulnerability scanners are inherently limited in their ability to account for these context-specific defenses. 

This is not by oversight, but by design. They evaluate vulnerabilities based on known signatures and version data, not the real-time effectiveness of layered protections already in place.

As a result, teams may be pushed to fix issues that are already contained, while more relevant exposures go unnoticed. Without visibility into actual control effectiveness, scan results can overstate urgency and lead to inefficient use of remediation resources.

Scanner Fatigue

Security teams often spend significant time investigating vulnerabilities that pose little to no real-world risk. This is an inherent challenge in how scanners operate: they surface issues based on known CVEs, without the ability to determine whether a vulnerability is truly exploitable in a given context. 

In fact, multiple studies show that fewer than 1% of CVEs are ever exploited in the wild, yet most are still surfaced as critical findings. The result is mounting fatigue, as teams are overwhelmed with alerts while the truly risky exposures may be overlooked.

Real-World Scenario: Why Static Risk Ratings Can Be Misleading

Consider two vulnerability findings identified by a scanner:

  • Server A runs internal analytics software with a CVSS 9.8 vulnerability. However, it sits behind a well-configured Intrusion Prevention System (IPS), requires authentication, and is actively monitored with logging and alerting in place.

  • Server B is an internet-facing web application with a CVSS 6.5 vulnerability that allows unauthenticated SQL injection, potentially exposing sensitive customer data.

A traditional scanner would flag Server A as the higher priority due to its critical severity rating. But in reality, Server B is far more exposed and exploitable, with no access restrictions and a vulnerability directly reachable from the public internet.

This scenario illustrates how static scoring overlooks defensive posture and real-world context. 

Server A’s risk is largely mitigated through layered controls, while Server B’s lower-scored flaw presents an immediate path for exploitation. Effective vulnerability management requires more than just reading scores; it demands an understanding of the environment, attacker behavior, and control effectiveness.
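One way to make the Server A vs. Server B argument concrete is a toy heuristic that adjusts a base CVSS score for exposure and compensating controls. The weights below are arbitrary assumptions chosen purely for illustration, not a published scoring model:

```python
def contextual_priority(cvss, internet_facing, compensating_controls):
    """Adjust a raw CVSS base score with simple environmental context.

    An illustrative heuristic (the 1.5x and 20% weights are arbitrary
    assumptions) showing why a lower base score can outrank a higher one.
    """
    score = cvss
    if internet_facing:
        score *= 1.5  # directly reachable by any attacker
    # Each compensating control (IPS, auth, monitoring...) discounts risk,
    # capped at three controls.
    score *= 1 - 0.2 * min(compensating_controls, 3)
    return round(min(score, 10.0), 1)

# Server A: CVSS 9.8, internal, behind IPS + auth + monitoring.
server_a = contextual_priority(9.8, internet_facing=False, compensating_controls=3)
# Server B: CVSS 6.5, internet-facing, no compensating controls.
server_b = contextual_priority(6.5, internet_facing=True, compensating_controls=0)
```

Under these assumed weights, Server B's contextual priority exceeds Server A's, inverting the order a raw CVSS sort would produce.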

Where Vulnerability Scanners Fall Short

Despite being essential for identifying known issues, traditional vulnerability scanners have inherent limitations that reduce their effectiveness for real-world risk assessment:

No Exploit Validation

Scanners match version signatures against global CVE databases, relying on generic threat data rather than your specific setup.

They never run proof-of-concept attacks in your environment. You can't confirm which vulnerabilities are actually exploitable locally. This doesn't negate their value, but you need to support scanner findings with adversarial exposure validation solutions.

No Awareness of Security Controls

Traditional vulnerability scanners don’t verify whether your firewalls, EDR/XDR agents, WAFs or IPS actually block attacks, nor do they confirm if your IDS or SIEM detect and alert on threats that bypass preventive measures. 

The result? Security teams get overwhelmed by hundreds, even thousands, of high-severity alerts, when in reality only about 2-3% of those vulnerabilities can penetrate your multi-layer defenses.

No Environmental and Asset Context

Vulnerability scanners treat every finding the same, flagging a low-exposure lab server with the same urgency as a critical, internet-facing database. They do not weigh asset exposure, application criticality, or existing compensating controls. 

As a result, you'll waste resources chasing low-risk issues while high-impact gaps slip through the cracks, leaving your most valuable assets dangerously unprotected.

No Threat Simulation

Traditional vulnerability scanners can’t mimic real-world attacker tactics or carry out end-to-end campaigns against your environment. They don’t chain together multiple steps (lateral movement, privilege escalation, data exfiltration) to reveal how threats actually traverse your defenses. 

Without this simulation, you stay blind to gaps in detection, prevention, and response, leaving your team unprepared for the way sophisticated adversaries operate.

To move from simple detection to confident decision-making, organizations need more than static scan results. They need an additional layer: Exposure Validation, a process that actively tests whether threats can succeed within the context of your real-world environment.

Supporting Vulnerability Scanners with Exposure Validation

Exposure Validation enhances traditional vulnerability scanning by validating whether identified vulnerabilities can actually be exploited in your environment. Through safe, controlled attack simulations, it differentiates between “theoretical” risks and real, “exploitable” exposures.

It answers two critical questions that scanners can’t:

  • Can this vulnerability be exploited right now?

  • Will your existing controls detect or block the attack?

By continuously testing your defenses, Exposure Validation provides a live, evidence-based view of control effectiveness, moving beyond static risk scores to reveal what truly matters.

Core Technologies of Exposure Validation

Exposure Validation leverages three complementary pillars.

Breach and Attack Simulation (BAS) 

BAS simulates and emulates real-world attack techniques (TTPs) to evaluate the effectiveness of security controls in detecting and preventing threats. 

By replicating adversary behavior across environments, it validates whether vulnerabilities identified by scanners are truly exploitable, turning static findings into actionable insights that help prioritize genuine business risks.

In addition, many BAS vendors offer continuously updated threat libraries enriched with the latest cyber intelligence, including malware, threat actor campaigns, and CVE exploits. 

Leveraging these, BAS simulates end-to-end exploitation campaigns based on recent incidents, working alongside vulnerability scanners to intelligently prioritize vulnerabilities most likely to be exploited.

Automated Penetration Testing

Unlike static internal scanning, automated penetration testing adopts an assume-breach mindset, mimicking sophisticated attackers who have already bypassed perimeter defenses and aim to cause significant impact, such as deploying ransomware across the domain or destroying critical data. It simulates an attacker with a clear objective.

This approach tests an adversary’s ability to exploit internal vulnerabilities not in isolation but by chaining them together, such as privilege escalation, lateral movement, and credential dumping, to reach critical assets with elevated privileges. By doing so, it validates the real-world impact of vulnerabilities and helps prioritize remediation based on the attack paths they enable.

Importantly, automated penetration testing supports and enhances the output of vulnerability scanners by validating which identified vulnerabilities are truly exploitable within the context of attacker behavior and organizational topology. 

This integration enables security teams to move beyond static CVSS scores and prioritize risks more intelligently, focusing remediation efforts on vulnerabilities that pose the greatest threat.

Attack Surface Management (ASM)

ASM extends the reach of vulnerability scanners by continuously discovering and mapping exposed assets, both known and unknown, across internal and external environments. Integrated into the Exposure Validation workflow, ASM ensures that vulnerabilities are evaluated in the context of asset exposure, business value, and attacker accessibility. 

This visibility allows security teams to assign priority not just based on severity, but on real-world exploitability and exposure level.

Picus Exposure Validation for Smarter Prioritization

Picus Exposure Validation bridges the gap between vulnerability assessment and real-world risk by combining scanner findings with safe, automated attack simulations. 

When a CVE is detected, Picus tests whether it can actually bypass your defenses, validating exploitability in your specific environment, not just in theory. This validation-first approach eliminates false positives, confirms real risk, and ensures that remediation efforts target exposures that genuinely matter. 

The logic is simple.

  • If an attack is successful, the exposure is prioritized. 

  • If blocked, it's deprioritized, bringing clarity to scanner results without replacing them.

Therefore, Picus doesn't replace or devalue vulnerability scanning; it clarifies scanners' output for smarter prioritization.

Built on three pillars of technologies that are outlined by Gartner for Adversarial Exposure Validation, Picus Security Validation Platform delivers a continuous, threat-informed view of your exposure.

The Picus Platform not only identifies which vulnerabilities are exploitable but also quantifies their risk using the Picus Exposure Score, a contextual metric that reflects control effectiveness, asset criticality, and real-world exploitability.

With Picus, security teams reduce noise, focus on what's actionable, and prove the effectiveness of their defenses with data, not assumptions.

Conclusion

Vulnerability scanners are vital. They offer visibility and help uncover flaws early. But they’re only the first step.

Without validation, scanners may mislead, overwhelm, or miss critical gaps. Adding exposure validation helps teams:

  • Focus on what’s exploitable

  • Reduce alert fatigue

  • Optimize remediation workflows

In today’s threat landscape, it’s not just about finding vulnerabilities. It’s about proving which ones matter. The future of vulnerability management lies in pairing broad detection with real-world validation.

Scan smart. Validate what matters. Fix what counts.
