NETRESEC Network Security Blog - Tag: ASCII-art


Optimizing IOC Retention Time

Are you importing indicators of compromise (IOCs) in the form of domain names and IP addresses into your SIEM, NDR or IDS? If so, have you considered for how long you should keep looking for those IOCs?

An IoT botnet study from 2022 found that 90% of C2 servers had a lifetime of less than 5 days and 93% had a lifetime shorter than 14 days. Additionally, a recent writeup from Censys concludes that the median lifespan of Cobalt Strike C2 servers is 5 days. Both these studies indicate that IP and domain name indicators are short-lived, yet many organizations cling to old IOCs for much longer than that. Monitoring for too many old indicators not only costs money, it can even inhibit detection of real intrusions.

Pyramid of Pain

David J. Bianco's pyramid of pain illustrates how much pain is caused to an adversary if their malicious actions are hindered by defenders blocking various indicators.

IOC Pyramid of Pain

Indicators at the bottom of the pyramid are trivial for an adversary to replace, which is why IOCs like hashes and IP addresses tend to be very short lived. The indicator types close to the pyramid’s base are also the ones that typically are shared in threat intel feeds.

It would be nice to detect adversaries with indicators higher up in David’s pyramid of pain, but it is often difficult to craft reliable detection mechanisms for such indicators. The simple indicators at the bottom, like domains, IPs and hashes, are on the other hand very exact and practical for us defenders to work with. So the pyramid of pain applies to defenders as well: indicators at the bottom are trivial to use, and the complexity increases as we move further up. Viewed this way, the Pyramid of Pain can also be seen as a “Pyramid of Detection Complexity”.

Chasing Ghosts

Many IOCs are “dead” even before you get them. This IOC delay is particularly noticeable in writeups of malware and botnets published by security researchers, where the IPs and domain names shared in the IOC lists often haven’t been used for several weeks by the time the report gets published. It’s understandable that it takes time to reverse engineer a new piece of malware, compose a blog post and get the writeup approved for publishing. Nevertheless, those old IOCs often get picked up, redistributed, and used as if they were fresh indicators. Recorded Future observed a 33-day average lead time between when their scans found a C2 server and when it was reported in other sources. Thirty-three days! Only a very small fraction of those C2 servers can be expected to still be active by the time the IOCs are shared by various threat intel providers. All the other C2 servers are dead indicators, or “Ghost IOCs” as I like to call them. Attempting to detect such Ghost IOCs in live network traffic is a waste of resources.

IOC Costs

Most network intrusion detection systems (IDS) only support a certain number of active rules or signatures, which effectively limits the number of IOCs you can look for. Other products charge the user based on the number of monitored IOCs. As an example, Tostes et al.’s paper covering the shelf life of an IOC says that Azure Sentinel Threat Intelligence charges $2.46 per ingested GB. This type of direct cost can then be compared to the cost of not alerting on a known malicious IOC that has timed out. Simplified comparisons like this typically result in poor and misleading guidance to keep using IOCs for much longer than necessary, often hundreds of days or even perpetually.

One important factor that is missing in such simplified reasoning is the cost of false positive alerts, which increases significantly when monitoring for ghost IOCs. An IP address that is used as a botnet C2 server one week might be running a totally legitimate service the next week. The risk of false positives is smaller for domain names than for IP addresses, but domain IOCs trigger false positives too, typically when a hacked website that was temporarily misused for malware distribution has been cleaned up. Such illicit use is often detected and rectified within a couple of days, in particular when the affected website belongs to a medium or large sized company or organization. Alerting on DNS-based IOCs long after the last confirmed sighting therefore also increases the risk of false positives.

The boy who cried wolf

The cost of false positives is difficult to estimate, as it involves the time analysts spend in vain trying to verify if a particular alert was a false positive or not. Too many false positives can also cause alert fatigue or alarm burnout, much in the same way as in Aesop’s ancient fable about the boy who cried wolf.

This implies that monitoring for too many old IOCs actually increases the risk of actual sightings of malicious indicators being missed or overlooked due to alert fatigue. This line of thinking might seem contradictory and is often overlooked in research related to the shelf life of indicators. I would personally prefer to use an IOC scoring model that aims to reduce the number of false positives rather than using one that attempts to minimize the direct cost of IOC monitoring.

Pruning old IOCs

RFC 9424 states that “IOCs should be removed from detection at the end of their life to reduce the likelihood of false positives”. But how do we know if an IOC has reached end of life?

I’ve previously mentioned that researchers have found that most C2 servers are used for 5 days or less. The optimal number of days to monitor for an IOC depends on several factors, but in general we’re talking about a couple of weeks since last sighting for an IP address and maybe a little longer for domain names. The obvious exception here is DGA and RDGA domains, which typically have a lifetime of just 24 hours.
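If your IOC feed includes a last seen date, age-based pruning can be as simple as a shell one-liner. Here’s a minimal sketch, assuming a hypothetical iocs.csv file with “indicator,last_seen” rows (e.g. “198.51.100.7,2025-10-20”) and GNU date:

# Keep only indicators seen within the last 14 days (ISO dates compare correctly as strings)
cutoff=$(date -d '14 days ago' +%Y-%m-%d)
awk -F, -v cutoff="$cutoff" '$2 >= cutoff' iocs.csv > fresh-iocs.csv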

The research paper Taxonomy driven indicator scoring in MISP threat intelligence platforms introduces an interesting concept, where a score is calculated for each IOC based on factors like confidence, age and decay rate. The IOC can be considered dead or expired when the score reaches zero or if it goes below a specified threshold. The following graph from the MISP paper shows an example of how the score for an IP address IOC could decay over time.

Figure 8 from MISP paper

Image: Figure 8 from Taxonomy driven indicator scoring in MISP threat intelligence platforms
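For reference, the decay model in that paper is roughly of the following form (see the paper for the exact definition and parameter semantics):

score(t) = base_score × (1 − (t / τ)^(1/δ))

where t is the time since the indicator was last seen, base_score is derived from taxonomy tags and source confidence, τ is the expected lifetime of the indicator and δ controls how quickly the score decays.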

Recommendations

  • When publishing IOCs, make sure to also include a date for when each indicator was last confirmed to be active, aka “last seen”.
  • Ask your threat intel vendor if they can provide a last seen date for each of their indicators, or if they have some other way to determine the freshness of their indicators.
  • Ensure that the IOCs you monitor for are pruned based on age or freshness to reduce the risk (and cost) of false positives.
  • Prioritize long-lived IOCs over short-lived ones, for example by striving to use indicators higher up in the pyramid of pain, but only when this can be done without increasing the false positive rate.

Posted by Erik Hjelmvik on Thursday, 06 November 2025 12:05:00 (UTC/GMT)

Tags: #IOC #IDS #ASCII-art

Short URL: https://netresec.com/?b=25Be9dd


How to set PCAP as default save file format in Wireshark

Did you know that there is a setting in Wireshark for changing the default save file format from pcapng to pcap?

In Wireshark, click Edit, Preferences. Then select Advanced and look for the capture.pcap_ng setting. Change the value to FALSE if you want Wireshark to save packets in the pcap file format. You have to double-click the “TRUE” text to change it into “FALSE”.

capture.pcap_ng in Wireshark Preferences

This setting can also be accessed from the Capture tab in Preferences.

Disable pcapng in Wireshark Preferences
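If you prefer to script this change rather than clicking through the GUI, the same preference can most likely also be set from the command line or directly in Wireshark’s preferences file. A minimal sketch (file paths vary between platforms):

# Override the preference for a single Wireshark session
wireshark -o capture.pcap_ng:FALSE

# Or make the change persistent by adding this line to Wireshark's
# preferences file (e.g. ~/.config/wireshark/preferences on Linux):
capture.pcap_ng: FALSE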

I recently learned about this setting from Sake Blok when he commented on my feature request to have Wireshark use pcap as default savefile format instead of pcapng. I have a feeling that this feature request will not be accepted though, since it has received several downvotes. That’s why I’m trying to spread the word about this setting instead, so that everyone who prefers the pcap file format over pcapng can change the default behavior in their own Wireshark installation.

This setting doesn’t affect command line tools, like dumpcap, tshark, mergecap etc. So if you want to capture packets with dumpcap to a pcap file then you need to use the -P switch like this:

dumpcap -P -i eth0 -w dump.pcap

Other command line tools in the Wireshark suite, like tshark and mergecap, require that you instead specify -F pcap like this:

mergecap -F pcap -w out.pcap in1.pcap in2.pcap
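Tshark accepts the same -F switch, so a capture can be written directly to a pcap file like this (assuming you capture on eth0):

tshark -F pcap -i eth0 -w dump.pcap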

What’s Wrong with PCAP-NG?

Why all this fuss about using PCAP instead of PCAP-NG? Well, it turns out that most Wireshark users are happily unaware of just how much metadata there is in the pcapng files they share online. This metadata typically contains information about the CPU of their computer, the exact version and build of their operating system as well as the name of the network interface on which the capture was performed. For Windows users the network interface details even contain a GUID that usually is a globally unique identifier.

I was once even able to identify a person, who had anonymously shared a pcapng file online, by inspecting metadata in the shared capture file github.pcapng. Here's the metadata in that capture file:

Metadata in a PcapNG file shown in NetworkMiner Professional's capture file properties

This screenshot shows the output from the “Show Metadata” functionality in NetworkMiner Professional. There's also a great way to show pcapng metadata in Wireshark: Open the pcapng file, click View, Reload as File Format/Capture (Ctrl+Shift+F).
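As far as I know, the capinfos tool that ships with Wireshark can also print parts of this metadata, such as capture hardware, operating system and capturing application, directly from the command line:

capinfos github.pcapng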

Mergecap

The previously mentioned command line tool mergecap, which joins multiple capture files into one, outputs pcapng files by default. In fact, if it is tasked with merging two pcap files (which contain no metadata), it creates a pcapng file containing the packets from the two input files, enriched with metadata about the computer running mergecap. This metadata is typically information about the operating system as well as the version of mergecap that was used.

Mergecap ASCII flowchart
Metadata in PcapNG file created with mergecap

Providing an output file with the “.pcap” suffix to mergecap will not help, mergecap still generates a pcapng file. You have to use the -F pcap switch to have it generate a pcap file without metadata about your operating system.
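If you already have a pcapng file created by mergecap, the pcapng-only metadata can be stripped afterwards by converting the file to pcap with editcap (the file names below are just examples):

editcap -F pcap merged.pcapng merged.pcap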

What do Wireshark Users Want?

I recently conducted two unscientific polls, where I asked which savefile format Wireshark should use by default.

Poll results from X and Mastodon: 51 voted for pcap and 35 voted for pcapng

In total the polls got 86 votes, where 51 voted for pcap and 35 preferred pcapng. I don't want to draw any real conclusions from these results though, primarily due to the low number of participants but also because there might be a bias among the people who were reached by these polls.

Looking Ahead

I reach out to people I know every now and then when I notice that they are sharing pcapng files containing potentially sensitive metadata. They then have to decide if they are okay with this or if they want to go through the process of replacing the pcapng files with pcap files. In many cases they choose the latter, which can be quite tricky if that involves removing files from GitHub.

I eventually got tired of doing this, especially when I realized that even very skilled Wireshark users often don’t know that pcapng files store metadata about their computers. Reminding people to select the “pcap” format every time they save a capture file doesn’t seem to be the solution. I therefore hope that this blog post can help Wireshark users avoid accidentally sharing unnecessary metadata in the future.

For more information about the pcapng format, please visit pcapng.com.

Update 2025-02-26

I have now created two new feature requests for Wireshark. The first request is to add settings in Wireshark that allow users to control what metadata gets saved to pcapng files. The other one is to have Wireshark select output file format based on filename suffix instead of always creating a pcapng file.

Posted by Erik Hjelmvik on Tuesday, 25 February 2025 10:33:00 (UTC/GMT)

Tags: #wireshark #PCAP #pcap-ng #dumpcap #metadata #ASCII-art

Short URL: https://netresec.com/?b=2523d40


Blocking Malicious sites with a TLS Firewall

Over 90 percent of all web traffic is encrypted nowadays, which is great of course. However, as HTTP and DNS traffic gets encrypted, defenders have a more difficult time blocking malicious network traffic. One solution to this problem is to use a TLS firewall, which effectively blocks encrypted connections to known bad websites.

DNS Firewalls and Sinkholes

DNS firewalls and DNS sinkholes, like Pi-hole and RPZ firewalls, are simple yet effective solutions that can prevent users from connecting to malicious websites. They work by acting as recursive name servers that deny clients from resolving known-bad domain names. However, more and more DNS traffic is becoming encrypted with DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH), where clients send DNS queries inside an end-to-end encrypted connection directly to a DNS provider. This prevents many DNS-based security solutions, like DNS firewalls, from inspecting the queried hostnames.

One way around this problem is to block the actual connections to known-bad domains instead of preventing clients from resolving them. For outgoing TLS connections, such as HTTPS, this can be done with a TLS Firewall.

TLS Firewalls

A TLS firewall inspects client TLS handshakes and extracts the requested server name from the Server Name Indication (SNI) extension. This hostname is generally sent unencrypted in HTTPS traffic (even if you use TLS 1.3), which allows the hostname to be inspected without having to break the TLS encryption. The TLS firewall then checks if the hostname is a known bad or malicious website, in which case the connection is either closed or the user gets redirected to a warning page.
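You can verify for yourself that the SNI is sent in cleartext, for example with a tshark one-liner along these lines (a sketch, assuming a capture interface named eth0 and a tshark build that has the tls.handshake.extensions_server_name field):

tshark -i eth0 -Y tls.handshake.extensions_server_name -T fields -e ip.dst -e tls.handshake.extensions_server_name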

Blocklists

There are several blocklists with malicious domain names, including commercial services as well as freely available lists from ThreatFox, CERT Polska and others. These blocklists are often created for DNS firewalls and sinkholes, but they can also be leveraged by TLS firewalls to identify and block traffic to malicious websites.

PolarProxy

PolarProxy can be used as a TLS firewall simply by loading a ruleset that blocks connections to malicious domains.

PolarProxy block/inspect/bypass ASCII

PolarProxy has the capability to decrypt and inspect what’s inside the TLS encryption, but this feature is not needed when acting as a TLS firewall. The hostname the client wants to connect to is generally provided unencrypted in the SNI, so PolarProxy doesn’t have to use the “inspect” action in this role. When running in “firewall mode” PolarProxy performs the “block” action for connections to known malicious domains and the “bypass” action for all other TLS traffic. Because of this there is no need to configure clients to trust PolarProxy’s root certificate in TLS firewall deployments, unless you add a custom rule that decrypts and inspects certain traffic. In fact, if PolarProxy is deployed as a transparent forward proxy in this TLS firewall mode, then zero client configuration is required. This means that managed as well as unmanaged devices, including BYOD, embedded devices, appliances etc., will be protected!

Transparent TLS Firewall (Linux)

Network ASCII drawing

If your network has a Linux based firewall that uses iptables, then you’ll be able to run PolarProxy as a transparent TLS firewall directly on your Linux firewall with this command:

./PolarProxy -p 10443,80,443 --ruleset https://raw.githubusercontent.com/Netresec/PolarProxy/main/rulesets/ruleset-block-malicious.json

You then need to configure the iptables firewall to redirect HTTPS traffic from your network to PolarProxy (see "Routing Option #1" in the PolarProxy documentation for more details).

  • sudo iptables -I INPUT -i eth1 -p tcp --dport 10443 -m state --state NEW -j ACCEPT
  • sudo iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to 10443

Congratulations, your firewall now blocks outgoing HTTPS connections from local clients to known malicious websites!

PolarProxy can also be run in a container using Docker or Podman.

HTTPS Proxy TLS Firewall (Windows)

It’s even possible to run PolarProxy directly on a Windows PC and configure the local proxy settings to send outgoing traffic through PolarProxy. Use the following command to start PolarProxy as an HTTP CONNECT proxy server on port 8080 with a TLS firewall ruleset:

PolarProxy.exe --httpconnect 127.0.0.1:8080 --ruleset https://raw.githubusercontent.com/Netresec/PolarProxy/main/rulesets/ruleset-block-malicious.json

Then configure the Windows PC to use a proxy server on 127.0.0.1 on port 8080.

Windows proxy server exceptions

Add the following exceptions to the Windows proxy settings to ensure that PolarProxy can download the ruleset and blocklists:

raw.githubusercontent.com;*.abuse.ch;hole.cert.pl;zonefiles.io;github.com

Click “Save”.

One side effect of running PolarProxy as an HTTP CONNECT proxy (with --httpconnect) is that this mode only allows TLS-encrypted traffic to pass through the proxy. This means that plaintext HTTP traffic that Windows forwards to PolarProxy on port 8080 will be blocked. You’ll see error messages like “Request method "GET" is not supported by HTTP CONNECT proxy” in PolarProxy’s output if it is started with the “-v” argument.

A workaround for this side effect is to run inetcpl.cpl (Windows’ old school Internet Properties), select the “Connections” tab and click the “LAN settings” button.

Windows inetcpl.cpl connections

Then click the “Advanced” button in the Proxy server section of the LAN Settings window to configure which protocols should run through the proxy.

Windows LAN settings

Uncheck “Use the same proxy server for all protocols”, remove the proxy settings for everything except “Secure” (which is HTTPS traffic) and click “OK”.

Windows proxy settings: only https

The Windows PC should now only forward HTTPS traffic to PolarProxy’s TLS firewall.

Pro Tip

Enter the following value as “Proxy IP address” directly in the modern “Edit proxy server” settings in Windows 10/11 to only proxy HTTPS traffic without using the legacy inetcpl.cpl settings:

http://https=127.0.0.1

Finally, I’d like to point out that the Windows proxy settings only affect outgoing traffic from applications that respect the proxy settings configured on the operating system. Pretty much every legitimate application will respect these settings and connect through PolarProxy, but there is no guarantee that malware will. This is why a transparent proxy deployment is recommended, such as the one described for the Linux deployment using iptables.

For more information about using PolarProxy as a TLS Firewall and the ruleset JSON format, please visit our TLS Firewall page.

Posted by Erik Hjelmvik on Monday, 27 January 2025 10:45:00 (UTC/GMT)

Tags: #PolarProxy #ThreatFox #ASCII-art

Short URL: https://netresec.com/?b=2515cf0


PolarProxy 1.0 Released

I am thrilled to announce the release of PolarProxy version 1.0 today! Several bugs that affected performance, stability and memory usage have now been resolved in our TLS inspection proxy. PolarProxy has also been updated with better logic for importing external root CA certificates and the HAProxy implementation has been improved. But the most significant addition in the 1.0 release is what we call the “TLS Firewall” mode.

TLS Firewall

PolarProxy now supports rule based logic for determining if a session should be allowed to pass through, get blocked or if the TLS encrypted data should be inspected (i.e. decrypted and re-encrypted) by the proxy. This rule based logic can be used to turn PolarProxy into a TLS firewall. As an example, the ruleset-block-malicious.json ruleset included in the new PolarProxy release blocks traffic to malicious domains in abuse.ch’s ThreatFox IOC database as well as traffic to web tracker domains listed in the EasyPrivacy filter from EasyList. This ruleset also includes an allow list in order to avoid accidentally blocking access to legitimate websites.

PolarProxy TLS Firewall - block malicious, inspect suspicious, bypass legitimate

PolarProxy’s ruleset logic isn’t limited to just domain names. It is also possible to match traffic based on JA3 or JA4 hashes as well as application layer protocol information provided in the ALPN extension of a client’s TLS handshake.

For more information on the ruleset format and how to use PolarProxy as a TLS firewall, see here:
https://www.netresec.com/?page=TlsFirewall

Linux, macOS and Windows builds of the new PolarProxy release can be downloaded from here:
https://www.netresec.com/?page=PolarProxy

Posted by Erik Hjelmvik on Thursday, 02 May 2024 07:00:00 (UTC/GMT)

Tags: #PolarProxy #TLS #inspect #bypass #ThreatFox #ASCII-art

Short URL: https://netresec.com/?b=2451e98


TLS Redirection and Dynamic Decryption Bypass in PolarProxy

PolarProxy is constantly being updated with new features, enhanced performance and bug fixes, but these updates are not always communicated other than as a short mention in the ChangeLog. I would therefore like to highlight a few recent additions to PolarProxy in this blog post.

Custom TLS Redirection

One new feature in PolarProxy is the --redirect argument, which can be used to redirect TLS traffic destined for a specific domain name to a different domain. This feature can be used to redirect TLS-encrypted malware traffic going to a known C2 domain to a local HTTPS sandbox instead, for example INetSim.

PolarProxy --redirect malware-c2.com:inetsim.local --leafcert noclone

This --redirect argument will cause PolarProxy to terminate outgoing TLS traffic to malware-c2.com and redirect the decrypted traffic into a new TLS session going to inetsim.local instead. The “--leafcert noclone” argument forces PolarProxy to generate a fake X.509 certificate for “malware-c2.com” rather than sending a clone of the certificate received from the INetSim server to the malware implant.

Note: You also need to specify a proxy mode, such as -p for transparent proxy or --socks for SOCKS proxy, to make the command above work.
PolarProxy TLS redirect

The --redirect argument can also be used to perform domain fronting, which is a clever method for hiding the true destination of HTTPS-based communication, in order to circumvent censorship or otherwise conceal who you’re communicating with. The following command can be used to set up a local SOCKS proxy that redirects traffic destined for YouTube to google.com instead:

PolarProxy --socks 1080 --redirect youtube.com,www.youtube.com,youtu.be:google.com

A browser configured to use PolarProxy as a SOCKS proxy will send HTTPS requests for youtube.com to PolarProxy, which then decrypts the TLS layer and re-encrypts the HTTP communication in a new TLS session directed at google.com instead. Someone who monitors the outgoing traffic from PolarProxy will assume that this is normal Google traffic, since the SNI as well as certificate will be for google.com. On the server side however, after having decrypted the TLS layer, Google will kindly forward the client’s original HTTP request for youtube.com to an endpoint that serves the content for YouTube.

Dynamic TLS Decryption Bypass

PolarProxy is designed to block TLS connections that it can’t decrypt, except for when the server’s domain name is explicitly marked for decryption bypass with the “--bypass” command line argument. However, PolarProxy now also supports dynamic TLS decryption bypass using a form of fail-open mode. When this fail-open mode is enabled, PolarProxy attempts to intercept and decrypt proxied TLS traffic, but allows connections to bypass decryption if the same client-server pair has previously rejected PolarProxy’s certificate. This method is convenient when monitoring network traffic from applications that enforce certificate pinning or for some other reason can’t be configured to trust PolarProxy’s root CA – provided, of course, that it’s acceptable to let traffic that can’t be decrypted pass through untouched rather than blocking it.

The following command line option configures PolarProxy to allow new TLS connections to bypass decryption for one hour (3600 seconds) after previously having failed to decrypt traffic between the same client and server.

--bypassonfail 1:3600

A simple way to verify this fail-open feature is to run a quick test with curl. It doesn’t matter if the client you’re testing on is Windows, Linux or macOS, since both PolarProxy and curl are available for all three platforms.

PolarProxy --bypassonfail 1:3600 --socks 1080
curl --socks4 localhost -I https://example.com
curl: (60) SSL certificate problem: unable to get local issuer certificate

curl --socks4 localhost -I https://example.com
HTTP/2 200
content-encoding: gzip
accept-ranges: bytes
age: 593298
cache-control: max-age=604800
content-type: text/html; charset=UTF-8
date: Mon, 27 Feb 2023 14:29:46 GMT
etag: "3147526947"
expires: Mon, 06 Mar 2023 14:29:46 GMT
last-modified: Thu, 17 Oct 2019 07:18:26 GMT
server: ECS (nyb/1DCD)
x-cache: HIT
content-length: 648

Web browsers that don’t trust PolarProxy’s root CA will display a certificate warning the first time they visit a website that PolarProxy tries to decrypt traffic for.

Firefox certificate warning

But once the dynamic bypass has kicked in the user will no longer see a certificate warning when visiting the same website again, since traffic between that client and server is now end-to-end encrypted.

Handling of non-TLS traffic and Better Logging

Another new feature in PolarProxy is the “--nontls” argument, which can be used to specify how to handle connections that don’t use TLS. The default action is to block non-TLS connections, but they can also be allowed to pass through (if the target host is known) or be forwarded to a specific host and port. There is even a “--nontls encrypt” argument, which can be used to encrypt traffic that isn’t already TLS-encrypted before forwarding it to a specific host. This feature can be used as an alternative to stunnel to wrap traffic from applications that lack TLS support inside a TLS tunnel.

PolarProxy now also produces less output to stdout, unless -v is used, and error messages have been improved to be more specific and easier to understand.

Posted by Erik Hjelmvik on Tuesday, 28 February 2023 13:42:00 (UTC/GMT)

Tags: #PolarProxy #TLS #redirect #bypass #fail-open #SNI #ASCII-art

Short URL: https://netresec.com/?b=23275c9


What is PCAP over IP?

PCAP over IP

PCAP-over-IP is a method for reading a PCAP stream, which contains captured network traffic, through a TCP socket instead of reading the packets from a PCAP file.

A simple way to create a PCAP-over-IP server is to read a PCAP file into a netcat listener, like this:

nc -l 57012 < sniffed.pcap

The packets in “sniffed.pcap” can then be read remotely using PCAP-over-IP, for example with tshark like this (replace 192.168.1.2 with the IP of the netcat listener):

nc 192.168.1.2 57012 | tshark -r -

But there’s an even simpler way to read PCAP-over-IP with Wireshark and tshark, which doesn’t require netcat.

wireshark -k -i TCP@192.168.1.2:57012
tshark -i TCP@192.168.1.2:57012

The Wireshark name for this input method is “TCP socket” pipe interface, which is available in Linux, Windows and macOS builds of Wireshark as well as tshark.

PCAP-over-IP in Wireshark's Pipe Interfaces

It is also possible to add a PCAP-over-IP interface from Wireshark's GUI. Open Capture/Options, Manage Interfaces, Pipes Tab and then enter a Local Pipe Path such as TCP@127.0.0.1:57012 and click OK. This setting will disappear when you close Wireshark though, since pipe settings don't get saved.

Live Remote Sniffing

Sniffed traffic can be read remotely over PCAP-over-IP in real-time simply by forwarding a PCAP stream with captured packets to netcat like this:

tcpdump -U -w - not tcp port 57012 | nc -l 57012
dumpcap -P -f "not tcp port 57012" -w - | nc -l 57012
PCAP-over-IP with tcpdump, netcat and tshark

Tcpdump is not available for Windows, but dumpcap is, since it is included with Wireshark.

Note how TCP port 57012 is purposely filtered out using BPF when capturing in order to avoid a snowball effect, where the PCAP-over-IP traffic otherwise gets sniffed and re-transmitted through the PCAP-over-IP stream, which again gets sniffed etc.

A more sophisticated setup would be to let the service listening on TCP port 57012 spawn the sniffer process, like this:

nc.traditional -l -p 57012 -c "tcpdump -U -w - not port 57012"

Or even better, let the listening service reuse port 57012 to allow multiple incoming PCAP-over-IP connections.

socat TCP-LISTEN:57012,reuseaddr,fork EXEC:"tcpdump -U -w - not port 57012"

Reading PCAP-over-IP with NetworkMiner

We added PCAP-over-IP support to NetworkMiner in 2011 as part of NetworkMiner 1.1, which was actually one year before the TCP socket sniffing feature was included in Wireshark.

Live remote sniffing with NetworkMiner 2.7.3 using PCAP-over-IP

Image: Live remote sniffing with NetworkMiner 2.7.3 using PCAP-over-IP

NetworkMiner can also be configured to listen for incoming PCAP-over-IP connections, in which case the sniffer must connect to the machine running NetworkMiner like this:
tcpdump -U -w - not tcp port 57012 | nc 192.168.1.3 57012

This PCAP-over-IP feature is actually the recommended method for doing real-time analysis of live network traffic when running NetworkMiner in Linux or macOS, because NetworkMiner’s regular sniffing methods are not available on those platforms.

Reading Decrypted TLS Traffic from PolarProxy

PolarProxy

One of the most powerful use-cases for PCAP-over-IP is to read decrypted TLS traffic from PolarProxy. When PolarProxy is launched with the argument “--pcapoverip 57012” it starts a listener on TCP port 57012, which listens for incoming connections and pushes a real-time PCAP stream of decrypted TLS traffic to each client that connects. PolarProxy can also make active outgoing PCAP-over-IP connections to a specific IP address and port if the “--pcapoveripconnect <host>:<port>” argument is provided.

In the video PolarProxy in Windows Sandbox I demonstrate how decrypted TLS traffic can be viewed in NetworkMiner in real-time with help of PCAP-over-IP. PolarProxy’s PCAP-over-IP feature can also be used to read decrypted TLS traffic from PolarProxy with Wireshark as well as to send decrypted TLS traffic from PolarProxy to Arkime (aka Moloch).

Replaying PCAP-over-IP to an Interface

There are lots of great network monitoring products and intrusion detection systems that don’t come with a built-in PCAP-over-IP implementation, such as Suricata, Zeek, Security Onion and Packetbeat, just to mention a few. These products would greatly benefit from having access to the decrypted TLS traffic that PolarProxy can provide. Luckily we can use netcat and tcpreplay to replay packets from a PCAP-over-IP stream to a network interface like this:

nc localhost 57012 | tcpreplay -i eth0 -t -

But for permanent installations we recommend creating a dedicated dummy interface, to which the traffic gets replayed and sniffed, and then deploy a systemd service that performs the replay operation. See our blog post Sniffing Decrypted TLS Traffic with Security Onion for an example on how to deploy such a systemd service. In that blog post we show how decrypted TLS traffic from PolarProxy can be replayed to a local interface on a Security Onion machine, which is being monitored by Suricata and Zeek.
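As a rough sketch, such a systemd service could wrap the netcat and tcpreplay pipeline from above like this (hypothetical unit and interface names; see the Security Onion blog post for a complete walkthrough):

# /etc/systemd/system/pcapoverip-replay.service (hypothetical example)
[Unit]
Description=Replay PCAP-over-IP stream from PolarProxy to a dummy interface
After=network.target

[Service]
# Assumes a dummy interface named decrypted0 has already been created
ExecStart=/bin/sh -c 'nc localhost 57012 | tcpreplay -i decrypted0 -t -'
Restart=always

[Install]
WantedBy=multi-user.target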

Nils Hanke has also compiled detailed documentation on how decrypted TLS packets from PolarProxy can be replayed to Packetbeat and Suricata with help of tcpreplay.

In these setups netcat and tcpreplay act as a generic glue between a PCAP-over-IP service and tools that can sniff packets on a network interface, but there are a few drawbacks with this approach. One drawback is that tcpreplay requires root privileges in order to replay packets to an interface. Another drawback is that extra complexity is added to the solution and two additional single points of failure are introduced (i.e. netcat and tcpreplay). Finally, replaying packets to a network interface increases the risk of packet drops. We therefore hope to see built-in PCAP-over-IP implementations in more network monitoring solutions in the future!

FAQ for PCAP-over-IP

Q: Why is it called “PCAP-over-IP” and not “PCAP-over-TCP”?

Good question, we actually don’t know since we didn’t come up with the name. But in theory it would probably be feasible to read a PCAP stream over UDP or SCTP as well.

Q: What is the standard port for PCAP-over-IP?

There is no official port registered with IANA for PCAP-over-IP, but we’ve been using TCP 57012 as the default port for PCAP-over-IP since 2011. The Wireshark implementation, on the other hand, uses TCP port 19000 as the default value.

Q: Which software comes with built-in PCAP-over-IP servers or clients?

The ones we know of are: Arkime, NetworkMiner, PolarProxy, tshark and Wireshark. There is also a PCAP-over-IP plugin for Zeek (see update below).

Q: Is there some way to encrypt the PCAP-over-IP transmissions?

Yes, we recommend encrypting PCAP-over-IP sessions with TLS when they are transmitted across a non-trusted network. NetworkMiner’s PCAP-over-IP implementation comes with a “Use SSL” checkbox, which can be used to receive “PCAP-over-TLS”. You can replace netcat with socat or ncat in order to establish a TLS encrypted connection to NetworkMiner.
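As an example, the sniffer side could use socat to wrap the PCAP stream in TLS instead of sending it with plain netcat, along these lines (a sketch, with certificate verification disabled for brevity):

tcpdump -U -w - not tcp port 57012 | socat - OPENSSL:192.168.1.3:57012,verify=0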

Q: Is there a tool that can aggregate multiple PCAP-over-IP streams into one?

No, none that we’re aware of. However, multiple PCAP-over-IP streams can be merged into one by specifying multiple PCAP-over-IP interfaces in dumpcap and then forwarding that output to a netcat listener, like this:

dumpcap -i TCP@10.1.2.3:57012 -i TCP@10.4.5.6:57012 -w - | editcap -F pcap - - | nc -l 57012

Update 2023-04-13

Erich Nahum has published zeek-pcapovertcp-plugin, which brings native PCAP-over-IP support to Zeek.

Erich's plugin can be installed as a zeek package through zkg.

zkg install zeek-pcapovertcp-plugin

After installing the plugin, a command like this reads a PCAP stream from a remote source:

zeek -i pcapovertcp::192.168.1.2:57012

Posted by Erik Hjelmvik on Monday, 15 August 2022 08:05:00 (UTC/GMT)

Tags: #PCAP-over-IP #PCAP #tcpdump #Wireshark #tshark #NetworkMiner #PolarProxy #Suricata #Zeek #Arkime #tcpreplay #netcat #ASCII-art

Short URL: https://netresec.com/?b=228fddf


How the SolarWinds Hack (almost) went Undetected

My lightning talk from the SEC-T 0x0D conference has now been published on YouTube. This 13-minute talk covers tactics and techniques that the SolarWinds hackers used in order to avoid being detected.

Video: Hiding in Plain Sight, How the SolarWinds Hack went Undetected

Some of these tactics included using DNS-based command-and-control (C2) that mimicked Amazon AWS DNS traffic, blending in with SolarWinds’ legitimate source code and handpicking only a small number of targets.

One thing I forgot to mention in my SEC-T talk, though, was the speed at which the attackers were working to analyze incoming data from the trojanized installs and select organizations to target for stage two operations.

SolarWinds Hack Timeline

For example, just during June 2020 more than 1300 new organizations started beaconing in to the attackers over the DNS-based C2. The beaconed data only included the organizations’ Active Directory domain name and a list of installed security applications. Based on this information the attackers had to decide whether or not they wanted to target the organization. We have previously estimated that less than 1% of the organizations were targeted, while the malicious backdoor was disabled for the other 99% who had installed the trojanized SolarWinds Orion update.

SolarWinds C2 IP addresses

The attackers typically decided whether or not to target an organization within one week from when they started beaconing. This means that the attackers probably had several hundred organizations in queue for a targeting decision on any given week between April and August 2020. That's a significant workload!

Posted by Erik Hjelmvik on Monday, 18 October 2021 10:30:00 (UTC/GMT)

Tags: #SolarWinds #SEC-T #video #backdoor #SUNBURST #Solorigate #STAGE2 #Stage 2 #DNS #C2 #ASCII-art

Short URL: https://netresec.com/?b=21A27a0


Walkthrough of DFIR Madness PCAP

I recently came across a fantastic digital forensics dataset at dfirmadness.com, which was created by James Smith. There is a case called The Stolen Szechuan Sauce on this website that includes forensic artifacts like disk images, memory dumps and a PCAP file (well, pcap-ng actually). In this video I demonstrate how I analyzed the capture file case001.pcap from this case.

Follow Along in the Analysis

Please feel free to follow along in the analysis performed in the video. You should be able to use the free trial version of CapLoader and the free open source version of NetworkMiner to perform most of the tasks I did in the video.

Here are some of the BPF and Column Criteria filters that I used in the video, so that you can copy/paste them into CapLoader.

  • net 10.0.0.0/8
  • Umbrella_Domain =
  • not ip6 and not net 224.0.0.0/4
  • host 194.61.24.102 or host 203.78.103.109 or port 3389

ASCII Network Flow Chart

References and Links

Timeline

All events in this timeline take place on September 19, 2020. Timestamps are in UTC.

  • 02:19:26 194.61.24.102 performs RDP brute force password attack against DC01.
  • 02:21:47 RDP password brute force successful.
  • 02:22:08 194.61.24.102 connects to DC01's RDP service as Administrator. Duration: 9 sec.
  • 02:22:36 194.61.24.102 connects to DC01's RDP service as Administrator again. Duration: 30 min.
  • 02:24:06 DC01 downloads coreupdater.exe from 194.61.24.102 using IE11.
  • 02:25:18 DC01 establishes Meterpreter reverse_tcp connection to 203.78.103.109. Duration: 4 min.
  • 02:29:49 DC01 re-establishes Meterpreter reverse_tcp connection to 203.78.103.109. Duration: 23 min.
  • 02:35:55 DC01 connects to DESKTOP's RDP service as Administrator (username in Kerberos traffic). Duration 16 min.
  • 02:39:58 DESKTOP downloads coreupdater.exe from 194.61.24.102 using MS Edge.
  • 02:40:49 DESKTOP establishes Meterpreter reverse_tcp connection to 203.78.103.109. Duration: 2h 58 min.
  • 02:56:03 194.61.24.102 connects to DC01's RDP service as Administrator one last time. Duration: 1 min 38 sec.
  • 02:56:38 DC01 re-establishes Meterpreter reverse_tcp connection to 203.78.103.109. Duration: 2h 42 min.

IOCs

  • IP : 194.61.24.102 (Attacker)
  • IP : 203.78.103.109 (C2 server)
  • MD5 : eed41b4500e473f97c50c7385ef5e374 (coreupdater.exe)
  • JA3 Hash : 84fef6113e562e7cc7e3f8b1f62c469b (RDP scan/brute force)
  • JA3 Hash : 6dc99de941a8f76cad308d9089e793d7 (RDP scan/brute force)
  • JA3 Hash : e26ff759048e07b164d8faf6c2a19f53 (RDP scan/brute force)
  • JA3 Hash : 3bdfb64d53404bacd8a47056c6a756be (RDP scan/brute force)
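If you want to hunt for these JA3 hashes yourself, a tshark build that computes JA3 hashes (the tls.handshake.ja3 field) should let you do something like this:

tshark -r case001.pcap -Y 'tls.handshake.ja3 == "84fef6113e562e7cc7e3f8b1f62c469b"' -T fields -e ip.src -e ip.dst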

Wanna learn more network forensic analysis techniques? Then check out our upcoming network forensics classes in September and October.

Posted by Erik Hjelmvik on Friday, 09 July 2021 13:20:00 (UTC/GMT)

Tags: #PCAP #NetworkMiner #CapLoader #video #videotutorial

Short URL: https://netresec.com/?b=217dfc7
