
2020.04.25

The US NSA and AU ASD have jointly published a report on attackers deploying web shells on vulnerable web servers...

Hello, this is Mitsuhiko Maruyama (丸山満彦).

The US National Security Agency (NSA) and the Australian Signals Directorate (ASD) have issued a joint report warning about attackers who exploit vulnerable web servers to deploy web shells.

US National Security Agency

・2020.04.22 Detect & Prevent Cyber Attackers from Exploiting Web Servers via Web Shell Malware

AU Australian Signals Directorate / Australian Cyber Security Centre

・2020.04.23 Detect and prevent web shell malware

[PDF] [Downloaded] Detect and Prevent Web Shell Malware


-----

Detect and Prevent Web Shell Malware

Summary

Mitigating Actions (DETECTION)

  • “Known-Good” Comparison
  • Web Traffic Anomaly Detection
  • Signature-Based Detection
  • Unexpected Network Flows
  • Endpoint Detection and Response (EDR) Capabilities
  • Other Anomalous Network Traffic Indicators

Mitigating Actions (PREVENTION)

  • Web Application Update Prioritization
  • Web Application Permissions
  • File Integrity Monitoring
  • Intrusion Prevention
  • Network Segregation
  • Harden Web Servers

Mitigating Actions (RESPONSE and RECOVERY)

Appendix

  • A: Scripts to Compare a Production Website to a Known-Good Image
  • B: Splunk® Queries for Detecting Anomalous URIs in Web Traffic
  • C: Internet Information Services™ (IIS) Log Analysis Tool
  • D: Network Signatures of Traffic for Common Web Shells
  • E: Identifying Unexpected Network Flows
  • F: Identifying Abnormal Process Invocations in Sysmon Data
  • G: Identifying Abnormal Process Invocations with Auditd
  • H: Commonly Exploited Web Application Vulnerabilities
  • I: HIPS Rules for Blocking Changes to Web Accessible Directories

Detect and Prevent Web Shell Malware

 

Summary

Cyber actors have increased the use of web shell malware for computer network exploitation [1][2][3][4]. Web shell malware is software deployed by a hacker, usually on a victim’s web server. It can be used to execute arbitrary system commands, which are commonly sent over HTTP or HTTPS. Web shell attacks pose a serious risk to DoD components.

Attackers often create web shells by adding or modifying a file in an existing web application. Web shells provide attackers with persistent access to a compromised network using communication channels disguised to blend in with legitimate traffic. Web shell malware is a long-standing, pervasive threat that continues to evade many security tools.

Cyber actors deploy web shells by exploiting web application vulnerabilities or uploading to otherwise compromised systems. Web shells can serve as persistent backdoors or as relay nodes to route attacker commands to other systems.

Attackers frequently chain together web shells on multiple compromised systems to route traffic across networks, such as from internet-facing systems to internal networks [5].

It is a common misperception that only internet-facing systems are targeted for web shells. Attackers frequently deploy web shells on non-internet facing web servers, such as internal content management systems or network device management interfaces. Internal web applications are often more susceptible to compromise due to lagging patch management or permissive security requirements.

Though the term “web shells” is predominantly associated with malware, it can also refer to web-based system management tools used legitimately by administrators. While not the focus of this guidance, these benign web shells may pose a danger to organizations as weaknesses in these tools can result in system compromise. Administrators should use system management software leveraging enterprise authentication methods, secure communication channels, and security hardening.

 

Mitigating Actions (DETECTION)

Web shells are difficult to detect as they are easily modified by attackers and often employ encryption, encoding, and obfuscation. A defense-in-depth approach using multiple detection capabilities is most likely to discover web shell malware. Detection methods for web shells may falsely flag benign files. When a potential web shell is detected, administrators should validate the file’s origin and authenticity. Detection techniques include:

 

“Known-Good” Comparison
Web shells primarily target existing web applications and rely on creating or modifying files. The best method of detecting these web shells is to compare a verified benign version of the web application (i.e., a “known-good”) against the production version. Discrepancies should be manually reviewed for authenticity. Additional information and scripts to enable known-good comparison are available in Appendix A and are maintained on https://github.com/nsacyber/Mitigating-Web-Shells.
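The comparison described above can be sketched in a few lines of Python. This is a minimal illustration of the known-good technique, not the maintained scripts from the GitHub repository: it hashes every file under two directory trees (the paths passed in are whatever the administrator chooses) and reports files that were added, modified, or removed relative to the verified image.

```python
import hashlib
import os

def hash_tree(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def compare_to_known_good(known_good_root, production_root):
    """Return files added, modified, or missing relative to the known-good image.
    Any discrepancy should be manually reviewed for authenticity."""
    good = hash_tree(known_good_root)
    prod = hash_tree(production_root)
    added = sorted(set(prod) - set(good))
    missing = sorted(set(good) - set(prod))
    modified = sorted(p for p in set(good) & set(prod) if good[p] != prod[p])
    return added, modified, missing
```

A file flagged as "added" in a web accessible directory is exactly the case a dropped web shell would produce, which is why content hashing (rather than timestamps) drives the comparison.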

When adjudicating discrepancies with a known-good image, administrators are cautioned against trusting timestamps on suspicious systems. Some attackers use a technique known as “timestomping” [6] to alter created and modified times in order to add legitimacy to web shell files. Administrators should not assume that a modification is authentic simply because it appears to have occurred during a maintenance period. However, as an initial triage method, administrators may choose to prioritize verification of files with unusual timestamps.

 

Web Traffic Anomaly Detection
While attackers often design web shells to blend in with normal web traffic, some characteristics are difficult to imitate without advanced knowledge. These characteristics include user agent strings and client Internet Protocol (IP) address space. Prior to having a presence on a network, attackers are unlikely to know which user agents or IP addresses are typical for a web server, so web shell requests will appear anomalous. In addition, web shells routing attacker traffic will default to the web server’s user agent and IP address, which should be unusual in network traffic. Uniform Resource Identifiers (URIs) exclusively accessed by anomalous user agents are potentially web shells. Finally, some attackers neglect to disguise web shell request “referer [sic] headers”1 as normal traffic. Consequently, requests with missing or unusual referer headers could indicate web shell presence. Centralized log-querying capabilities, such as Security Information and Event Management (SIEM) systems, provide a means to implement this analytic. If such a capability is not available, administrators may use scripting to parse web server logs to identify possible web shell URIs. Example Splunk®2 queries (Appendix B), scripts for analyzing log data (Appendix C), and additional information about detecting web traffic anomalies are maintained at
https://github.com/nsacyber/Mitigating-Web-Shells.
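Absent a SIEM, the user-agent analytic above can be approximated with a short script. This is a hedged sketch of the heuristic, not the published Splunk queries: given (URI, user agent) pairs already parsed from web server logs, it flags URIs accessed exclusively by user agents that are rare across the whole log, since a web shell operator's client is unlikely to match the server's typical visitor population.

```python
from collections import Counter, defaultdict

def flag_anomalous_uris(requests, min_agent_count=5):
    """requests: iterable of (uri, user_agent) tuples parsed from web logs.
    Flag URIs accessed only by user agents seen fewer than min_agent_count
    times overall -- a heuristic for web shell URIs, tune per environment."""
    requests = list(requests)  # allow generators; we iterate twice
    agent_totals = Counter(agent for _uri, agent in requests)
    agents_by_uri = defaultdict(set)
    for uri, agent in requests:
        agents_by_uri[uri].add(agent)
    return sorted(
        uri for uri, agents in agents_by_uri.items()
        if all(agent_totals[a] < min_agent_count for a in agents)
    )
```

The same structure extends to the other indicators in this section, e.g. grouping by client IP address or by missing referer header instead of by user agent.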

 

Signature-Based Detection

From the host perspective, signature-based detection is unreliable because web shells may be obfuscated and are easy to modify. However, some cyber actors use popular web shells (e.g., China Chopper, WSO, C99, B374K, R57) with minimal modification. In these cases, fingerprint or expression-based detection may be possible. A collection of Snort®3 rules to detect common web shell files, scanning instructions, and additional information about signature-based detection are maintained at https://github.com/nsacyber/Mitigating-Web-Shells.

From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell. Appendix D provides a collection of signatures to detect network communication from common, unmodified or slightly modified web shells sometimes deployed by attackers. This list is also maintained at https://github.com/nsacyber/Mitigating-Web-Shells.
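A file-scanning pass for common, unmodified web shells can be sketched as below. The patterns are illustrative examples of constructs that appear in some unmodified PHP web shells, not the maintained Snort/YARA rule set, and (as the text notes) trivial obfuscation defeats them.

```python
import re

# Illustrative indicator patterns only -- real deployments should use the
# maintained signature sets; attackers easily modify these strings.
SHELL_PATTERNS = [
    re.compile(rb"eval\s*\(\s*base64_decode\s*\("),
    re.compile(rb"eval\s*\(\s*\$_(?:POST|GET|REQUEST)\b"),
    re.compile(rb"passthru\s*\(\s*\$_(?:POST|GET|REQUEST)\b"),
]

def scan_file(path):
    """Return the pattern strings matched in the file at path (empty if clean)."""
    with open(path, "rb") as f:
        data = f.read()
    return [p.pattern.decode() for p in SHELL_PATTERNS if p.search(data)]
```

Any hit is a candidate for the known-good comparison and manual review described earlier, not an automatic verdict.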

 

Unexpected Network Flows

In some cases, attackers use web shells on systems other than web servers (e.g., workstations). These web shells operate on rogue web server applications and can evade file-based detection by running exclusively in memory (i.e., fileless execution). While functionally similar to a traditional Remote Access Tool (RAT), these types of web shells allow attackers to easily chain malicious traffic through a uniform platform. These types of web shells can be detected on well-managed networks because they listen and respond on previously unused ports. Additionally, if an attacker is using a perimeter web server to tunnel traffic into a network, connections would be made from a perimeter device to an internal node. If administrators know which nodes on their network are acting as web servers, then network analysis can reveal these types of unexpected flows. A variety of tools including vulnerability scanners (e.g., Nessus®4), intrusion detection systems (e.g., Snort®), and network security monitors (e.g., Zeek™5 [formerly “Bro”]) can reveal the presence of unauthorized web servers in a network. Maintaining a thorough and accurate depiction of expected network activity can enhance defenses against many types of attack. The Snort® rule in Appendix E and maintained at https://github.com/nsacyber/Mitigating-Web-Shells can be tailored for a specific network to identify unexpected network flows.
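The "known web servers" check described above reduces to comparing flow records against an inventory. This is a minimal sketch, assuming flow tuples already extracted from something like Zeek conn logs (the field layout here is an assumption, not Zeek's actual schema): it flags any host serving on a web port that is not in the authorized web-server list.

```python
def unexpected_web_flows(flows, authorized_web_servers,
                         web_ports=frozenset({80, 443, 8080, 8443})):
    """flows: iterable of (src_ip, dst_ip, dst_port) records.
    Flag (host, port) pairs where a host outside the authorized web-server
    inventory is serving on a web port -- a possible rogue web server or
    fileless web shell listening on a previously unused port."""
    return sorted({
        (dst, port) for _src, dst, port in flows
        if port in web_ports and dst not in authorized_web_servers
    })
```

The quality of this analytic depends entirely on keeping the inventory of expected web servers thorough and current, which is the point the section closes on.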

 

Endpoint Detection and Response (EDR) Capabilities

Some EDR and enhanced host logging solutions may be able to detect web shells based on system call or process lineage abnormalities. These security products monitor each process on the endpoint including invoked system calls. Web shells usually cause the web server process to exhibit unusual behavior. For instance, it is uncommon for most benign web servers to launch the ipconfig utility, but this is a common reconnaissance technique enabled by web shells. EDRs have different automated capabilities and querying interfaces, so organizations are encouraged to review documentation or discuss web shell detection with the vendor. Appendix F illustrates how Sysmon’s enhanced process logging data can be used to identify process abnormalities in a Microsoft® Windows®6 environment. Similarly, Appendix G illustrates how auditd can be used to identify process abnormalities in a Linux®7 environment. Guidance for identifying process abnormalities in these environments is also maintained at https://github.com/nsacyber/Mitigating-Web-Shells.
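The process-lineage idea can be sketched independently of any particular EDR. Assuming (parent image, child image) pairs already extracted from Sysmon process-creation events or auditd execve records (the image names and allowlist below are illustrative assumptions, not the report's appendix logic), the check flags children of web server processes that are not expected helpers:

```python
# Illustrative process image names; tailor both sets to the environment.
WEB_SERVER_IMAGES = {"w3wp.exe", "httpd", "nginx", "php-fpm", "tomcat"}
ALLOWED_CHILDREN = frozenset({"w3wp.exe", "php-fpm", "conhost.exe"})

def flag_suspicious_children(process_events, allowed_children=ALLOWED_CHILDREN):
    """process_events: iterable of (parent_image, child_image) pairs.
    Flag children spawned by web server processes that are not expected
    helpers -- e.g., w3wp.exe launching ipconfig.exe suggests web shell
    reconnaissance."""
    return sorted({
        (parent, child) for parent, child in process_events
        if parent in WEB_SERVER_IMAGES and child not in allowed_children
    })
```

The hard part in practice is building an accurate allowlist of legitimate child processes per web application; everything else is a set lookup.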

 

Other Anomalous Network Traffic Indicators

Web shell traffic may exhibit other detectable abnormal characteristics depending on the attacker. In particular, unusually large responses (possible data exfiltration), recurring off-peak access times (possible non-local work schedule), and geographically disparate requests (possible foreign operator) could indicate URIs of potential web shells. However, these characteristics are highly subjective and likely to flag many benign URIs. Administrators may choose to implement these detection analytics if the baseline characteristic is uniform for their environment.
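The "unusually large responses" indicator above can be made concrete as an outlier test on response sizes. This is a crude sketch of one possible heuristic (the z-score threshold is an assumption, not from the report), and exactly as the text warns, it will flag benign large downloads unless the baseline is uniform for the environment:

```python
from statistics import mean, pstdev

def flag_large_responses(records, z_threshold=3.0):
    """records: iterable of (uri, response_bytes) from web logs. Flag URIs
    whose mean response size sits more than z_threshold standard deviations
    above the site-wide mean -- a rough data-exfiltration heuristic."""
    records = list(records)
    sizes = [size for _uri, size in records]
    mu, sigma = mean(sizes), pstdev(sizes)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing stands out
    per_uri = {}
    for uri, size in records:
        per_uri.setdefault(uri, []).append(size)
    return sorted(
        uri for uri, s in per_uri.items()
        if (mean(s) - mu) / sigma > z_threshold
    )
```

Off-peak access times and geographically disparate requests could be scored the same way, with timestamps or GeoIP lookups substituted for response sizes.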

 

Mitigating Actions (PREVENTION)
Preventing web shells should be a priority for both internet-facing and internal web servers. Good cyber hygiene and a defense-in-depth approach based on the mitigations below provide significant hardening against web shells. Prevention techniques include:

 

Web Application Update Prioritization

Attackers sometimes target vulnerabilities in internet-facing and internal web applications within 24 hours of a patch release. Update these applications as soon as patches are available. Whenever possible, enable automatic updating and configure frequent update cadence (at least daily). Deploy manual updates on a frequent basis when automatic updating is not possible. Appendix H lists some commonly exploited vulnerabilities.

 

Web Application Permissions

Web services should follow the least privilege security paradigm. In particular, web applications should not have permission to write directly to a web accessible directory or modify web accessible code. Attackers are unable to upload a web shell to a vulnerable application if the web server blocks access to the web accessible directory. To preserve functionality, some web applications require configuration changes to save uploads to a non-web accessible area. Prior to implementing this mitigation, consult documentation or discuss changes with the web application vendor.

 

File Integrity Monitoring

If administrators are unable to harden web application permissions as described above, file integrity monitoring can achieve a similar effect. File integrity software can block file changes to web accessible directories or alert when changes occur. Additionally, monitoring software has the benefit of allowing certain file changes but blocking others. For example, if an internal web application handles only Portable Document Format (PDF) files, integrity monitoring can block uploads without a “.pdf” extension. Appendix I provides a set of Host Intrusion Prevention System (HIPS) rules for use with McAfee®8 Host Based Security System (HBSS) to enforce file integrity on web accessible directories. These rules, implementation instructions, and additional information about file integrity monitoring are maintained at https://github.com/nsacyber/Mitigating-Web-Shells.
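The extension-based policy from the PDF example can be expressed as a small decision function. This is a policy sketch only (the directory path and allowed extensions are illustrative): real enforcement belongs in HIPS or file-integrity tooling such as the HBSS rules in Appendix I, not in application code.

```python
import os

def change_allowed(path, web_root="/var/www/uploads",
                   allowed_extensions=frozenset({".pdf"})):
    """Decide whether a file write should be permitted under a web accessible
    directory. Mirrors the PDF-only example: block anything under web_root
    whose extension is not explicitly allowed."""
    path = os.path.abspath(path)
    root = os.path.abspath(web_root)
    if os.path.commonpath([path, root]) != root:
        return True  # outside the protected directory: not this policy's concern
    _, ext = os.path.splitext(path)
    return ext.lower() in allowed_extensions
```

An upload named `shell.php` dropped into the protected directory is rejected, while `report.pdf` passes, which is precisely the asymmetry that blocks the most common web shell delivery path without breaking the application's legitimate function.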

 

Intrusion Prevention

Intrusion Prevention Systems (IPS) and Web Application Firewalls (WAF) each add a layer of defense for web applications by blocking some known attacks. Organizations should implement these appliances to block known malicious uploads. If possible, administrators are encouraged to implement the OWASP™9 Core Rule Set, which includes patterns for blocking certain malicious uploads. As with any signature-based blocking, attackers will find ways to evade detection, so this approach is only one part of a defense-in-depth strategy. Note that IPS and WAF appliances may block the initial compromise but are unlikely to detect web shell traffic.
To maximize protection, security appliances should be tailored to individual web applications rather than using a single solution across all web servers. For instance, a security appliance configured for an organization’s content management system can include application specific rules to harden targeted weaknesses that should not apply to other web applications. Additionally, security appliances should receive updates to enable real time mitigations for emerging threats.

 

Network Segregation

Network segregation is a complex architectural challenge that can have significant benefits when done correctly. Network segregation hinders web shell propagation by preventing connections between unrelated network segments. The simplest form of network segregation is isolating a demilitarized zone (DMZ) subnet to quarantine internet-facing servers.

Advanced forms of network segregation use software-defined networking (SDN) to enable a Zero Trust10 architecture, which requires explicit authorization for communication between nodes. While web shells could still affect a targeted server, network segmentation prevents attackers from chaining web shells to reach deeper into an organization’s network. For additional information about network segregation, see Segregate Networks and Functions [7] on nsa.gov.

 

Harden Web Servers

Secure configuration of web servers and web applications can prevent web shells and other compromises. Administrators should block access to unused ports or services. Employed services should be restricted to expected clients if possible. Additionally, routine vulnerability scans can help to identify unknown weaknesses in an environment. Some host-based security systems provide advanced features, such as machine learning and file reputation, which provide some protection against web shells. Organizations should take advantage of these advanced security features when possible.

 

Mitigating Actions (RESPONSE and RECOVERY)

Some web shells do not persist at all, running entirely from memory; others exist only as binaries or scripts in a web directory; still others are deeply rooted with sophisticated persistence mechanisms. Regardless, they may be part of a much larger intrusion campaign. A critical focus once a web shell is discovered should be on how far the attacker penetrated within the network. Packet capture (PCAP) and network flow data can help to determine if the web shell was being used to pivot within the network, and to where. If such a pivot is cleaned up without discovering the full extent of the intrusion and evicting the attacker, that access may be regained through other channels either immediately or at a later time.

---------

Appendix

  • A: Scripts to Compare a Production Website to a Known-Good Image
  • B: Splunk® Queries for Detecting Anomalous URIs in Web Traffic
  • C: Internet Information Services™ (IIS) Log Analysis Tool
  • D: Network Signatures of Traffic for Common Web Shells
  • E: Identifying Unexpected Network Flows
  • F: Identifying Abnormal Process Invocations in Sysmon Data
  • G: Identifying Abnormal Process Invocations with Auditd
  • H: Commonly Exploited Web Application Vulnerabilities
  • I: HIPS Rules for Blocking Changes to Web Accessible Directories
-----

1 “Referer” is an HTTP header specified in Internet Engineering Task Force RFC 7231
2 Splunk is a registered trademark of Splunk, Inc.
3 Snort is a registered trademark of Cisco Technologies, Inc.
4 Nessus is a registered trademark of Tenable Network Security, Inc.
5 Zeek is a trademark of the Zeek Project
6 Microsoft and Windows are registered trademarks of the Microsoft Corporation
7 Linux is a registered trademark of the Linux Foundation
8 McAfee is a registered trademark of McAfee, LLC
9 OWASP is a trademark of the OWASP Foundation
10 Zero Trust is a model where both internal and external resources are treated as potentially malicious and thus each system verifies all access



 

 
