An early report on Suricata

I’ve gone ahead and installed Suricata (an IDS/IPS that does packet inspection) onto the Debian Raspberry Pi I use as my Daily Driver.

Why?

Because I kind of stalled out on the config work for Snort on the Alpine-based DMZ box, I wanted something running fast (partly as I’ve gone too long without one to be comfortable), and I’ve wanted to play with Suricata for a while anyway. Besides, it works on the box that is your daily driver too; it doesn’t need to be installed as an ‘on the side’ packet inspector…

To install I just did the usual for Debian: “apt-get install suricata”. It complained; I had to do an “apt-get update” first, then it installed fine. Just accepting the defaults, it collects a lot of information. I’ve done NO tuning (yet). It has a forest of tunable parameters, so that is going to “take a while”.

I did run it with:

suricata -c /etc/suricata/suricata-debian.yaml -s /etc/suricata/rules/dns-events.rules -i eth0

And it is running fine at the moment. Now in this one line you can see a few interesting things. First off, the default configuration file that Suricata expects is /etc/suricata/suricata.yaml, yet here it is suricata-debian.yaml. Why the gratuitous name change? Who knows… But you either need to call out that path name, or copy the default file to suricata.yaml.

Next, note that there are some pre-made rule sets available in /etc/suricata/rules. I chose one that, from the name, looks like it would inspect DNS things (and right now I have the hots to cut back gratuitous DNS traffic to places I didn’t request… so I’m hoping this helps). Finally, I specified which interface to inspect. There’s a LOT of other things you can ‘call out’ or specify or configure or… It will likely be days before I have a set I like, plus getting it auto-started as a service or daemon, and and and… thus this early posting of a 1/2 done config. What I do from here on out will diverge ever more from what “other folks” may want to config, and it is important to point out that “out of the box” it does something sort of useful, so no reason to wait to play with it.

When launched, it immediately started to complain about a BUNCH of missing rules files. I’d thought I was saying “just use the one” but who knows. This is yet another “Dig Here” for me this evening.

19/11/2016 -- 14:24:21 -  - This is Suricata version 2.0.7 RELEASE
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/botcc.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/ciarmy.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/compromised.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/drop.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/dshield.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/emerging-activex.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/emerging-attack_response.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/emerging-chat.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/emerging-current_events.rules: No such file or directory.
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/emerging-dns.rules: No such ... 
[...]

There are places with loads of pre-made rule sets, so I likely just need to find them and do a big download, not make them all from scratch.

What is there out of the box?

root@R_Pi_DebJ_DD:/# ls -l /etc/suricata/rules
total 56
-rw-r--r-- 1 root root 13512 Mar  4  2015 decoder-events.rules
-rw-r--r-- 1 root root  1498 Mar  4  2015 dns-events.rules
-rw-r--r-- 1 root root  2872 Mar  4  2015 files.rules
-rw-r--r-- 1 root root  8339 Mar  4  2015 http-events.rules
-rw-r--r-- 1 root root  2380 Mar  4  2015 smtp-events.rules
-rw-r--r-- 1 root root 11879 Mar  4  2015 stream-events.rules
-rw-r--r-- 1 root root  4084 Mar  4  2015 tls-events.rules

So either I need to find all those other rule sets somewhere and add them, or figure out how to say “don’t look for those”, or both. These rules will likely keep me busy inspecting packets for a good while.
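Guessing from that error list, the packaged yaml asks for the full Emerging Threats rule collection by name. One likely fix, sketched here only (the exact stanza layout depends on what’s actually in the shipped suricata-debian.yaml), is to trim the rule-files: list down to the files the Debian package actually installs:

```yaml
# Sketch of the relevant stanza in /etc/suricata/suricata-debian.yaml,
# assuming the usual upstream layout: list only the rule files present.
default-rule-path: /etc/suricata/rules
rule-files:
  - decoder-events.rules
  - dns-events.rules
  - files.rules
  - http-events.rules
  - smtp-events.rules
  - stream-events.rules
  - tls-events.rules
```

The other direction, of course, is to fetch the Emerging Threats rule set so all the missing names resolve.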

Now further down, there are more error messages. These are about duplicate rules, so I’m suspicious that there is some flag to say “Do Not Take The Default Rules” that I’ve left out, and that my specific call-out of the DNS rules duplicates the defaults. (If I’m reading the usage summary right, lowercase -s loads a rule file in addition to those named in the yaml, while capital -S loads it exclusively; that would explain the duplication.) Note to self: RTFM… (Read The, ah, Friendly Manual) … on Suricata and figure out what command to really issue…

19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_DUPLICATE_SIG(176)] - Duplicate signature "alert dns any any -> any any (msg:"SURICATA DNS Unsollicited response"; flow:to_client; app-layer-event:dns.unsollicited_response; sid:2240001; rev:1;)"
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - error parsing signature "alert dns any any -> any any (msg:"SURICATA DNS Unsollicited response"; flow:to_client; app-layer-event:dns.unsollicited_response; sid:2240001; rev:1;)" from file /etc/suricata/rules/dns-events.rules at line 2
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_DUPLICATE_SIG(176)] - Duplicate signature "alert dns any any -> any any (msg:"SURICATA DNS malformed request data"; flow:to_client; app-layer-event:dns.malformed_data; sid:2240002; rev:1;)"
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - error parsing signature "alert dns any any -> any any (msg:"SURICATA DNS malformed request data"; flow:to_client; app-layer-event:dns.malformed_data; sid:2240002; rev:1;)" from file /etc/suricata/rules/dns-events.rules at line 4
[...]
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/dns-events.rules
19/11/2016 -- 14:24:22 -  - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.
19/11/2016 -- 14:24:22 -  - all 7 packet processing threads, 3 management threads initialized, engine started.

Note at the bottom that dns-events.rules had no rules loaded, which further supports the notion that I’ve duplicated a default; then it starts with 7 threads for packets and 3 for management. Probably way more than needed on a small Pi board…
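On the thread count: suricata.yaml documents a threading: section with a knob for this. A hedged guess at dialing a small board back (parameter per the upstream docs; untested here):

```yaml
# Sketch for suricata.yaml: roughly one detect thread per two cores
# instead of one per core. A guess at a saner setting for a small Pi.
threading:
  detect-thread-ratio: 0.5
```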

So the rest of the evening will be spent figuring out what config options I want in the config file and what command options I want and how to invoke the command (or many copies of it?) ongoing… Tuning, configuration, and administration.

The output all goes into the /var/log area, so that needs to be somewhere with some space.

root@R_Pi_DebJ_DD:/var/log/suricata# ls -l
total 1000
-rw-r--r-- 1 root root 252127 Nov 19 14:43 eve.json
-rw-r--r-- 1 root root  78354 Nov 19 14:24 fast.log
-rw-r--r-- 1 root root   2922 Nov 19 14:27 http.log
-rw-r--r-- 1 root root 578678 Nov 19 14:43 stats.log
-rw-r--r-- 1 root root  99786 Nov 19 14:24 unified2.alert.1479594262

Already headed for a MB and I’ve only just launched it a couple of minutes ago on a system with an idle browser open and not much else going on.

What’s the stuff in there look like?

Here’s the smallest one:

root@R_Pi_DebJ_DD:/var/log/suricata# cat http.log
11/19/2016-14:24:58.389413 sr.symcd.com [**] / [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:55394 -> 23.5.251.27:80
11/19/2016-14:26:02.789491 weather.unisys.com [**] /gfs/gfs.php?inv=0&plot=1000&region=eu&t=9p [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53909 -> 50.206.172.197:80
11/19/2016-14:26:02.900399 weather.unisys.com [**] /css/WMAX.v1.0.2.css [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53910 -> 50.206.172.197:80
11/19/2016-14:26:02.881379 weather.unisys.com [**] /css/amstabs.css [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53911 -> 50.206.172.197:80
11/19/2016-14:26:02.930470 weather.unisys.com [**] /javascript/animatedcollapse.js [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53913 -> 50.206.172.197:80
11/19/2016-14:26:03.001182 weather.unisys.com [**] /images/unisys_logo_syt_2016.png [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53914 -> 50.206.172.197:80
11/19/2016-14:26:03.110205 weather.unisys.com [**] /images/setupbutton-blue.png [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53917 -> 50.206.172.197:80
11/19/2016-14:26:03.123673 weather.unisys.com [**] /images/WRN_Ambassador_logo_small.png [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53916 -> 50.206.172.197:80
11/19/2016-14:26:03.192461 weather.unisys.com [**] /javascript/jquery.min.js [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53912 -> 50.206.172.197:80
11/19/2016-14:26:03.000360 weather.unisys.com [**] /images/gobutton-blue.png [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53915 -> 50.206.172.197:80
11/19/2016-14:26:04.133119 weather.unisys.com [**] /gfs/9panel/gfs_1000_9panel_eur.gif [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53918 -> 50.206.172.197:80
11/19/2016-14:26:04.741269 weather.unisys.com [**] /favicon.ico [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:53919 -> 50.206.172.197:80
11/19/2016-14:27:51.602170 ocsp.digicert.com [**] / [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:47224 -> 72.21.91.29:80
11/19/2016-14:27:51.882760 ocsp.digicert.com [**] / [**] Mozilla/5.0 (X11; Linux armv7l; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1 [**] 10.1.1.13:47224 -> 72.21.91.29:80
root@R_Pi_DebJ_DD:/var/log/suricata# 

Does any of this matter? Likely not too much. There are some things I’d wonder about and want to ask myself whether I’d block, whatever they are. Most of it is from weather.unisys.com and is there because I have opened a page with a bunch of images on it at http://weather.unisys.com/gfs/gfs.php?inv=0&plot=1000&region=eu&t=9p

Similarly, the digicert traffic is likely just checking that an https: cert is valid.

But that top line has “sr.symcd.com” and I have no idea who or what they are, so it needs some investigation to decide “I asked for this in the page and want it” vs “this is crap, put a block in the DNS server”.
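When the verdict does come back “crap”, the block can be as simple as a sinkhole entry in the local hosts file or DNS server. A sketch with a deliberately made-up placeholder name (NOT a recommendation to block this particular host):

```text
# /etc/hosts style sinkhole (or the equivalent override in your DNS
# server's config). The name below is a hypothetical placeholder.
0.0.0.0   unwanted-tracker.example.com
```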

http://sb.symcd.com.ipaddress.com/

shows an address at Akamai

We found that the organization hosting sb.Symcd.com is Akamai Technologies in Cambridge, Massachusetts, United States.

A more detailed IP address report for sb.Symcd.com is below. At the time you pulled this report, the IP of sb.Symcd.com is 23.61.187.27 and is located in the time zone of America/New_York. The context of sb.Symcd.com is “Symcd” and could reflect the theme of the content available on the resource. More IP details of sb.Symcd.com are shown below along with a map location.

IP Address of Symcd is 23.61.187.27
Hostname:	sb.symcd.com
IP Address:	23.61.187.27
Host of this IP:	a23-61-187-27.deploy.static.akamaitechnologies.com
Organization:	Akamai Technologies
ISP/Hosting:	Akamai Technologies
Updated:	11/19/2016 11:56 AM
City:	Cambridge
Country:	United States
State:	Massachusetts
Postal Code:	02142
Timezone:	America/New_York
Local Time:	11/19/2016 05:54 PM

Now since they are a web cache service that caches all sorts of companies’ web pages for faster delivery web-wide, they are a normal and legitimate thing. So I’m not interested in blocking or diverting them. But at least now I know my system IS talking to them…

Believe it or not, you get to do something like that process for Every Single Thing that shows up where it isn’t clear what it is doing or why it showed up. It can be a full time job… (Now you know why I’ve been slow about setting one of these things up… it is the start of the workload, not the end…)

There’s a nice helper package that sorts a lot of this out, called ‘barnyard’, that I’ve not installed yet. It’s next on my install list. I think it sorts through the “unified” alert file, which is binary data:

root@R_Pi_DebJ_DD:/var/log/suricata# file unified2.alert.1479594262 
unified2.alert.1479594262: data

The file fast.log is full of these:

root@R_Pi_DebJ_DD:/var/log/suricata# more fast.log
11/19/2016-14:24:44.645879  [**] [1:2200075:1] SURICATA UDPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {UDP} 10.1.1.213:64294 -> 192.168.1.1:53
11/19/2016-14:24:44.674654  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 10.1.1.213:56508 -> 50.18.192.250:443
11/19/2016-14:24:44.701003  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 10.1.1.213:56508 -> 50.18.192.250:443

To me, it looks like something is making garbage packets (invalid checksum). Who is it at 50.18.192.250?

root@R_Pi_DebJ_DD:/var/log/suricata#  nslookup 50.18.192.250
Server:		127.0.0.1
Address:	127.0.0.1#53

Non-authoritative answer:
250.192.18.50.in-addr.arpa	name = ec2-50-18-192-250.us-west-1.compute.amazonaws.com.

Hmmmm… Amazon “aws”. AWS is Amazon Web Services, IIRC, their cloud server farm. Likely I’ve got an advert somewhere (probably from Amazon itself) or a cookie beacon that’s making invalid packets (or some valid page is running on AWS? Wonder if WordPress runs on AWS?…). That’s particularly interesting since the only pages I have open now are a management window into my interior router (that I’m on…) and 6 panels of my WordPress site (where I don’t have ads on them…). 3 of them relate to comments and making this posting. Two others are “boiler plate” like the TV link. One is the Hillary thread. Perhaps something in one of the comments on it?

root@R_Pi_DebJ_DD:/var/log/suricata# grep -v checksum fast.log 
root@R_Pi_DebJ_DD:/var/log/suricata#

So looking for lines without “checksum” in them gives a null result; every line in fast.log so far is one of these checksum alerts. OK…
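A quick triage sketch for fast.log: pull the destination out of each alert line and count repeats, so the noisy talkers float to the top. Shown here against an inlined sample of the lines above; point the same pipeline at /var/log/suricata/fast.log for the real thing.

```shell
# Build a small sample of fast.log lines to demonstrate on.
cat > /tmp/fast_sample.log <<'EOF'
11/19/2016-14:24:44.674654  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 10.1.1.213:56508 -> 50.18.192.250:443
11/19/2016-14:24:44.701003  [**] [1:2200074:1] SURICATA TCPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {TCP} 10.1.1.213:56508 -> 50.18.192.250:443
11/19/2016-14:24:44.645879  [**] [1:2200075:1] SURICATA UDPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {UDP} 10.1.1.213:64294 -> 192.168.1.1:53
EOF
# Last field is the destination "IP:port"; strip the port, then count.
top_talkers=$(awk '{print $NF}' /tmp/fast_sample.log | cut -d: -f1 | sort | uniq -c | sort -rn)
echo "$top_talkers"
```

Feed each resulting IP to nslookup (as below) and you have the start of a “who am I talking to” worklist.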

Coming back a bit later, another interesting one shows up:

11/19/2016-15:27:38.850012  [**] [1:2200029:1] SURICATA ICMPv6 unknown type [**] [Classification: (null)] [Priority: 3] {IPv6-ICMP} fe80:0000:0000:0000:0489:f034:d3aa:2639:143 -> ff02:0000:0000:0000:0000:0000:0000:0016:0
11/19/2016-15:27:38.850012  [**] [1:2200094:1] SURICATA zero length padN option [**] [Classification: (null)] [Priority: 3] {IPv6-ICMP} fe80:0000:0000:0000:0489:f034:d3aa:2639:143 -> ff02:0000:0000:0000:0000:0000:0000:0016:0

All the more interesting as I’ve got IPv6 shut off on my network… (Perhaps it is the spouse on her Mac… or maybe the Pi is trying to run IPv6 while the network doesn’t… At any rate, some IPv6 thing needs to be found and told to shut up…)
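If it turns out the Pi itself is the IPv6 talker, the standard Linux sysctl knob is the first thing I’d try. A sketch for /etc/sysctl.conf (whether this silences these particular multicasts is untested):

```text
# Candidate /etc/sysctl.conf fragment: tell the kernel to stop doing
# IPv6 on all interfaces. Apply with "sysctl -p" or a reboot.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```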

Moving on…

The eve.json file has what looks like a log of packets in it. One is the same as the AWS IP from the fast.log above. Another looks like it is going to the RPi DNS server, from an odd (ephemeral) source port:

{"timestamp":"2016-11-19T14:24:44.729821","event_type":"alert","src_ip":"10.1.1.13","src_port":56508,"dest_ip":"50.18.192.250","dest_port":443,"proto":"TCP","alert":{"action":"allowed","gid":1,"signature_id":2200074,"rev":1,"signature":"SURICATA TCPv4 invalid checksum","category":"","severity":3}}
{"timestamp":"2016-11-19T14:24:44.627565","event_type":"alert","src_ip":"10.1.1.13","src_port":52305,"dest_ip":"192.168.1.1","dest_port":53,"proto":"UDP","alert":{"action":"allowed","gid":1,"signature_id":2200075,"rev":1,"signature":"SURICATA UDPv4 invalid checksum","category":"","severity":3}}
{"timestamp":"2016-11-19T14:24:44.627565","event_type":"dns","src_ip":"10.1.1.13","src_port":52305,"dest_ip":"192.168.1.1","dest_port":53,"proto":"UDP","dns":{"type":"query","id":23074,"rrname":"duckduckgo.com","rrtype":"A"}}
{"timestamp":"2016-11-19T14:24:44.627565","event_type":"dns","src_ip":"10.1.1.13","src_port":52305,"dest_ip":"192.168.1.1","dest_port":53,"proto":"UDP","dns":{"type":"answer","id":23074,"rrname":"duckduckgo.com","rrtype":"A","ttl":84,"rdata":"54.215.176.19"}}

Checking on that UDP packet (fire and forget – no reply needed, no retransmit) shows it matches a known timestamp bug report:

https://redmine.openinfosecfoundation.org/issues/1715

Updated by Marcel de Groot 9 months ago

I’m also seeing this. Strange thing though:
On two machines (Debian Stretch), both with the same Suricata 3.0 compiled from source with NFQUEUE enabled, and the same kernel, also compiled from source (4.5.0-rc5), one writes the timestamp correctly and the other not:

On the one machine where this symptom plays Suricata is listening via iptables mangle on the FORWARD chain
On the other machine it listens via NFQUEUE in the mangle table on the INPUT and OUTPUT chain. Here the timestamps are correct.

00.000000 [**] [1:2200075:1] SURICATA …etc
vs
02/25/2016-14:31:10.631435 [**] [1:2522390:2498] …etc

I’ll try to check whether changing the NFQUEUE entry makes a difference.

So perhaps the “usual” issue of Time being a pill on the Pi as it has no hardware clock, or maybe I’ve got another option to set right for it…

Only one file left, the stats.log file. Boy, that sucker is growing fast.

-rw-r--r-- 1 root root 2604875 Nov 19 15:49 stats.log

2.6 Meg already… Wonder what’s in it (and how to prune it back…)

-------------------------------------------------------------------
Date: 11/19/2016 -- 14:24:30 (uptime: 0d, 00h 00m 09s)
-------------------------------------------------------------------
Counter                   | TM Name                   | Value
-------------------------------------------------------------------
capture.kernel_packets    | RxPcapeth01               | 0
capture.kernel_drops      | RxPcapeth01               | 0
capture.kernel_ifdrops    | RxPcapeth01               | 0
dns.memuse                | RxPcapeth01               | 0
dns.memcap_state          | RxPcapeth01               | 0
dns.memcap_global         | RxPcapeth01               | 0
decoder.pkts              | RxPcapeth01               | 0
decoder.bytes             | RxPcapeth01               | 0
decoder.invalid           | RxPcapeth01               | 0
decoder.ipv4              | RxPcapeth01               | 0
decoder.ipv6              | RxPcapeth01               | 0
decoder.ethernet          | RxPcapeth01               | 0
decoder.raw               | RxPcapeth01               | 0
decoder.sll               | RxPcapeth01               | 0
decoder.tcp               | RxPcapeth01               | 0
decoder.udp               | RxPcapeth01               | 0
decoder.sctp              | RxPcapeth01               | 0
decoder.icmpv4            | RxPcapeth01               | 0
decoder.icmpv6            | RxPcapeth01               | 0
decoder.ppp               | RxPcapeth01               | 0
decoder.pppoe             | RxPcapeth01               | 0
decoder.gre               | RxPcapeth01               | 0
decoder.vlan              | RxPcapeth01               | 0
decoder.vlan_qinq         | RxPcapeth01               | 0
decoder.teredo            | RxPcapeth01               | 0
decoder.ipv4_in_ipv6      | RxPcapeth01               | 0
decoder.ipv6_in_ipv6      | RxPcapeth01               | 0
decoder.avg_pkt_size      | RxPcapeth01               | 0
decoder.max_pkt_size      | RxPcapeth01               | 0
defrag.ipv4.fragments     | RxPcapeth01               | 0
defrag.ipv4.reassembled   | RxPcapeth01               | 0
defrag.ipv4.timeouts      | RxPcapeth01               | 0
defrag.ipv6.fragments     | RxPcapeth01               | 0
defrag.ipv6.reassembled   | RxPcapeth01               | 0
defrag.ipv6.timeouts      | RxPcapeth01               | 0
defrag.max_frag_hits      | RxPcapeth01               | 0
[...]
-------------------------------------------------------------------
Date: 11/19/2016 -- 15:51:50 (uptime: 0d, 01h 27m 29s)
-------------------------------------------------------------------
Counter                   | TM Name                   | Value
-------------------------------------------------------------------
capture.kernel_packets    | RxPcapeth01               | 30614
capture.kernel_drops      | RxPcapeth01               | 0
capture.kernel_ifdrops    | RxPcapeth01               | 0
dns.memuse                | RxPcapeth01               | 5779
dns.memcap_state          | RxPcapeth01               | 0
dns.memcap_global         | RxPcapeth01               | 0
decoder.pkts              | RxPcapeth01               | 30614
decoder.bytes             | RxPcapeth01               | 13760162
decoder.invalid           | RxPcapeth01               | 0
decoder.ipv4              | RxPcapeth01               | 30497
decoder.ipv6              | RxPcapeth01               | 13
decoder.ethernet          | RxPcapeth01               | 30614
decoder.raw               | RxPcapeth01               | 0
decoder.sll               | RxPcapeth01               | 0
decoder.tcp               | RxPcapeth01               | 27484
decoder.udp               | RxPcapeth01               | 2980
decoder.sctp              | RxPcapeth01               | 0
decoder.icmpv4            | RxPcapeth01               | 28
decoder.icmpv6            | RxPcapeth01               | 12
decoder.ppp               | RxPcapeth01               | 0
decoder.pppoe             | RxPcapeth01               | 0
decoder.gre               | RxPcapeth01               | 0
decoder.vlan              | RxPcapeth01               | 0
decoder.vlan_qinq         | RxPcapeth01               | 0
decoder.teredo            | RxPcapeth01               | 1
decoder.ipv4_in_ipv6      | RxPcapeth01               | 0
decoder.ipv6_in_ipv6      | RxPcapeth01               | 0
decoder.avg_pkt_size      | RxPcapeth01               | 449
decoder.max_pkt_size      | RxPcapeth01               | 1514
defrag.ipv4.fragments     | RxPcapeth01               | 0
defrag.ipv4.reassembled   | RxPcapeth01               | 0
defrag.ipv4.timeouts      | RxPcapeth01               | 0
defrag.ipv6.fragments     | RxPcapeth01               | 0
defrag.ipv6.reassembled   | RxPcapeth01               | 0
defrag.ipv6.timeouts      | RxPcapeth01               | 0
defrag.max_frag_hits      | RxPcapeth01               | 0
tcp.sessions              | Detect                    | 531
tcp.ssn_memcap_drop       | Detect                    | 0
tcp.pseudo                | Detect                    | 106
tcp.invalid_checksum      | Detect                    | 445
tcp.no_flow               | Detect                    | 0
tcp.reused_ssn            | Detect                    | 0
tcp.memuse                | Detect                    | 10028632
tcp.syn                   | Detect                    | 531
tcp.synack                | Detect                    | 481
tcp.rst                   | Detect                    | 384
dns.memuse                | Detect                    | 0
dns.memcap_state          | Detect                    | 0
dns.memcap_global         | Detect                    | 0
tcp.segment_memcap_drop   | Detect                    | 0
tcp.stream_depth_reached  | Detect                    | 0
tcp.reassembly_memuse     | Detect                    | 73457184
tcp.reassembly_gap        | Detect                    | 0
http.memuse               | Detect                    | 725196
http.memcap               | Detect                    | 0
detect.alert              | Detect                    | 470
flow_mgr.closed_pruned    | FlowManagerThread         | 545
flow_mgr.new_pruned       | FlowManagerThread         | 206
flow_mgr.est_pruned       | FlowManagerThread         | 1161
flow.memuse               | FlowManagerThread         | 6205704
flow.spare                | FlowManagerThread         | 10000
flow.emerg_mode_entered   | FlowManagerThread         | 0
flow.emerg_mode_over      | FlowManagerThread         | 0

Golly, looks like one of these status reports every 15 seconds or so… I think I need to find out how to shut that off unless I’m curious…
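The dump interval lives in the yaml. A sketch based on the outputs: section documented for suricata.yaml (the 2.0.x layout; key names may vary by version):

```yaml
# Sketch for /etc/suricata/suricata-debian.yaml: either disable the
# stats dump outright or stretch its interval way out.
outputs:
  - stats:
      enabled: no        # or leave "yes" and just raise the interval
      filename: stats.log
      interval: 300      # seconds between dumps
```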

The End

Well, that’s what I’ve got so far. Lots of stuff to sort through.

Hopefully this gives you an idea what kind of information you can get out of a system like this. All kinds of traffic analysis on what’s talking, to whom, and even some ideas about why.

Lots of work goes into just pruning out most of that information so only the “important bits” flow to the display point. I think the program named ‘barnyard’ does that. This “how to” guide installs all sorts of things (including barnyard and mysql and more…) so clearly more is expected. IIRC, barnyard takes the suricata output and puts it into the mysql database for easier report and alert preparation.

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Snorby_and_Barnyard2_set_up_guide

For me, what all this says is that I need to put the Suricata Logs and active areas onto a patch of ‘real USB disk’ and not on my SD card. First, it will fill the SD chip way too fast. Second, the wear rate on those bits will be mighty high. Third, it will take a performance hit from many small writes where an SD card really does a giant ‘write and refresh’ for each of them.
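The mechanics of that move are simple: relocate the directory and leave a symlink so Suricata’s default log dir still resolves. A sketch demonstrated on scratch paths; substitute /var/log/suricata and your actual USB mount point, and stop Suricata first.

```shell
# Scratch paths standing in for the real ones, so this can be tried safely.
scratch=$(mktemp -d)
log_dir="$scratch/var-log-suricata"     # stands in for /var/log/suricata
disk_dir="$scratch/usb/suricata-logs"   # stands in for the USB disk area
mkdir -p "$log_dir" "$scratch/usb"
touch "$log_dir/fast.log"               # pretend existing log data
# The actual move: relocate, then symlink the old path to the new home.
mv "$log_dir" "$disk_dir"
ln -s "$disk_dir" "$log_dir"
ls "$log_dir/"                          # logs still reachable via old path
```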

So, with that, I’m going to shut down Suricata, move the log location to a patch of Real Disk, and study up some on the options at launch and in the config file. (Probably a day or three in the future as Sunday is not my most productive day). Heck, I might even read some of the manual pages and the install and configure guide, now that I’ve installed it and played with it a bit. ;-)

With that, I now return you to the world of political chaos and disorderly minds that is the bulk of the world…

About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

7 Responses to An early report on Suricata

  1. Larry Ledwick says:

Poking around a bit to get an idea of what Suricata is capable of (designed to do), I found this documentation page. Wow, looks like lots of dials and buttons to play with. It starts off with a command line option summary and then gets into the details.

    http://suricata.readthedocs.io/en/latest/command-line-options.html

Seems like it will keep you busy for a while just finding out all the knobs you can turn. It sounds like a very capable system to work with if you understand things at the level you obviously do, but it would take a good investment of time to really understand it from scratch. If you can find the right set of rules to start out with and then tune from there, it looks very useful.

  2. Larry Ledwick says:

    This has some basics on the stats log file if that helps.
    http://suricata.readthedocs.io/en/latest/configuration/suricata-yaml.html#stats

  3. E.M.Smith says:

    At present you’ve seen all I know about suricata settings, so anything helps ;-)

  4. E.M.Smith says:

    Well that was fun…

    I already made my first “tightening up” as a result of the use of Suricata. Those broken checksum lines for packets headed to “odd places” with companies I’d not heard of? Worked it out.

I booted an absolutely vanilla Debian and installed Suricata on it, then ran it. This assured me it was nothing to do with the files in my home directory (not mounted here) and was not from any application (none running). When the packets still showed up, I suspected the ntp configuration.

They are, in fact, valid UDP packets. They are from the ntp (Network Time Protocol) daemon and are for setting your time and keeping it in sync. I think I’ve mentioned that I want to have my own ntp server inside my network and expected my DMZ server to do that. By having your own, you prevent a lot of packets from “wandering off somewhere” to find out what time it is. If these requests originate from all your internal machines, each one exposes its existence, its address, and to some extent its kind to the upstream time providers (and anyone sniffing the wire in between…)

    Well, turns out Debian sets a nice little round robin “pool” of all sorts of folks by default. Thus the constantly changing cast of characters that I’d never chosen to talk with…

From /etc/ntp.conf:

    # You do need to talk to an NTP server or two (or three).
    #server ntp.your-provider.example
    
    # pool.ntp.org maps to about 1000 low-stratum NTP servers.  Your server will
    # pick a different set every time it starts up.  Please consider joining the
    # pool: 
    server 0.debian.pool.ntp.org iburst
    server 1.debian.pool.ntp.org iburst
    server 2.debian.pool.ntp.org iburst
    server 3.debian.pool.ntp.org iburst
    

So it picks from a pool of about 1000 of your “closest stranger friends” to ask what time it is…

    Now that might be all great and wonderful, for my time server, or maybe not. (There are some ways to exploit ntp time requests… giving bogus responses, for example, can cause your scheduled scripts to run at the wrong times. Then there is the information leakage…)

    I replaced it with:

    # You do need to talk to an NTP server or two (or three).
    #server ntp.your-provider.example
    
    server 192.168.0.1
    server 192.168.0.254
    
    # pool.ntp.org maps to about 1000 low-stratum NTP servers.  Your server will
    # pick a different set every time it starts up.  Please consider joining the
    # pool: 
    #server 0.debian.pool.ntp.org iburst
    #server 1.debian.pool.ntp.org iburst
    #server 2.debian.pool.ntp.org iburst
    #server 3.debian.pool.ntp.org iburst
    

    where those two IP numbers are systems in my DMZ that ought to know what time it is.

    By doing this, all my internal systems can sync to one clock, that is then looking upstream to only specifically chosen time sources and only pestering them once for the whole internal network of machines.

    It also gets those broken packets out of my logger stream… I do still have a couple of such “bad checksum” packet streams, but they are clearly to my internal servers:

    root@RaPi_Temp:/var/log/suricata# tail -f fast.log 
    11/20/2016-09:34:07.139001  [**] [1:2200075:1] SURICATA UDPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {UDP} 10.1.1.10:123 -> 192.168.0.1:123
    11/20/2016-09:34:07.139074  [**] [1:2200075:1] SURICATA UDPv4 invalid checksum [**] [Classification: (null)] [Priority: 3] {UDP} 10.1.1.10:123 -> 192.168.0.254:123
    

    So that confirms it is just ntp trying to find a nice clock to ask the time, and I can ignore those ‘invalid checksum’ packets as an issue.

    Yes, only a tiny bit tighter, but that’s how you lock things down. One little increment at a time…

    Oh, and I checked the clock after a reboot and it is correct, so at least one of those two upstream does know the time and is willing to share…

  5. pg sharrow says:

    at least you now have all of those little annoying leaks draining through one small pipe, progress…pg

  6. E.M.Smith says:

    @P.G.:

It isn’t just that they drain through one point; they are intermediated too. NO packet looking for time leaves the inside for the greater world. They now all stop at my DMZ time server. Only it checks time outside, and I can tell it just what sites to ask (not pot luck from a pool).

    While it isn’t a lot of traffic, it is several packets per second per machine. Someone on a slow link or a ‘by the byte’ payment plan benefits from this intermediation the most. Just that much less on the wire. I know I’m on a fast wire now, but my habits were set when on a 56 kb modem :-) Besides, on my someday list is to duplicate this DMZ services model into a ‘lunchpail’ with a battery driven Pi and WiFi / internet via telco hotspot, so by-the-byte… Stopping the bill for ads I don’t want and gratuitous ntp (and more) will matter then. Then my Tablet will connect via the lunchpail when out and about, avoiding the swamp that is public WiFi… or via the public WiFi using the lunchpail as connection, but with my own private firewall and DNS in the lunchpail between me and the ick.
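    For what it’s worth, the “NO packet looking for time leaves from the inside” part can be enforced at the firewall too, not just by client config. A sketch in iptables, assuming the DMZ time server sits at 192.168.0.1 and the firewall box forwards for the internal net:

    ```
    # let the DMZ time server talk NTP to the outside world
    iptables -A FORWARD -s 192.168.0.1 -p udp --dport 123 -j ACCEPT
    # any other NTP packet trying to leave gets dropped
    iptables -A FORWARD -p udp --dport 123 -j DROP
    ```

    Belt and suspenders: even a box with a stock pool.ntp.org config can then only reach the intermediary.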

    Oh, I was also pleasantly surprised to see that the Alpine build comes as a time server. It makes sense when you figure it is targeted at router and appliance builds. (Though now I’m motivated to find out who it points to in its ntp.conf file… but being router / security folks, so far they have chosen things wisely, and I would expect the same of their ntp upstream choices.) I didn’t have to do any setup on it to get time from it.

    That’s part of how I know a choice was a good one: the surprises are positive and in the desired direction. For Debian, ever more of the surprises are negative or ‘wha?’… Like finding I must configure ntp.conf because the default is a random pool, or suddenly being thrust into systemd against my will. There isn’t much risk from a bad ntp upstream, but there is some. It can be used to break time-dependent things like cron, and builds with timestamp-driven choices.

    At Apple, way back when, we added a radio time source to one of our Vaxen so it got WWV time directly, then compared with ntp sources and adjusted skews for time delay in transit. IIRC, we had accuracy down in the fractional-second range, to some number of milliseconds. Apple still runs a time source used by most Macs. Just one more little default security feature of the Mac build…

  7. E.M.Smith says:

    Well, this is an embarrassment… though a small one.

    Seems that the Alpine is NOT configured to be a time server by default. Long, long ago I’d added a patch to this particular Debian chip so that it saved the time at shutdown and restored it at boot. The idea being to not start in 1970 on every boot, but ‘only’ a day or so out of sync (so the time protocol could catch up). I’d forgotten that… so a rapid reboot was only a few seconds out of time sync and I didn’t notice.
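    That save / restore trick is essentially what Debian later packaged as fake-hwclock. A minimal sketch of the idea in shell (the /tmp path is my illustration here, not the actual patch I used; a real one would write somewhere persistent like /etc):

    ```shell
    #!/bin/sh
    # at shutdown: stash the current UTC time in a file
    date -u '+%Y-%m-%d %H:%M:%S' > /tmp/saved-clock

    # at boot: read it back so the box starts only a little behind,
    # instead of at 1970, and let ntp close the remaining gap.
    # (setting the clock needs root, so shown commented out)
    # date -u -s "$(cat /tmp/saved-clock)"
    cat /tmp/saved-clock
    ```

    Hook the first half into the shutdown scripts and the second into early boot, before ntpd starts, and the clock is never badly wrong.
    
    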

    Now, a few days later, it was clear…

    So to make the Alpine actually BE the time server I wanted, I needed to add two characters to one line. Alpine stores the config file in a different place than Debian does:

    dnspi:/etc/conf.d# cat /etc/conf.d/ntpd 
    # By default ntpd runs as a client. Add -l to run as a server on port 123.
    NTPD_OPTS="-N -l -p pool.ntp.org"
    

    The default looks just like that, but without the ‘-l’ in it… Added the -l, did a:

    dnspi:/etc/conf.d# /etc/init.d/ntpd restart
     * Caching service dependencies ...                                        [ ok ]
     * Starting busybox ntpd ...                                               [ ok ]
    dnspi:/etc/conf.d# date
    Wed Nov 23 08:38:05 UTC 2016
    

    And all was good (as of last night as you can tell by the date stamp).

    Over on the Debian card, I tested the result (after giving it a few minutes to catch up):

    root@R_Pi_DebJ_DD:/Pink/ext/home/chiefio# ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *dnspi           68.110.9.223     3 u    3   64   17    0.954   -8.076   1.665
     paladin.latt.ne 68.110.9.223     2 u   67   64    7   54.688   13.900  11.091
    root@R_Pi_DebJ_DD:/Pink/ext/home/chiefio# !nt
    ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    +dnspi           68.110.9.223     3 u   64   64   17    0.954   -8.076   1.665
    *paladin.latt.ne 204.123.2.72     2 u   60   64   17   36.897    4.526   5.829
    

    You can see that, as of that moment, I had it set up to look at the ntp pool as well as my inside box. I did that while debugging the Pi time issue, since it really is important to have the right time. (The ‘*’ marks the peer ntpd has actually selected, and a ‘+’ a good candidate it also considers.) Now that ntpq is showing I’m getting time service from my inside server, I can deprecate the other one.

    BTW, “busybox” on the Alpine doesn’t know about ntpq, so you can’t use it there to query status. Oh well. (No, I don’t know if it can be added somehow, or if there is another tool; it just isn’t there by default.) Regardless, to test that your time service is working, really really working, it is better (on systems that support it) to run ntpq -p and check the status of your upstream servers.
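    One workaround for the missing ntpq on Alpine is to query the server from some other box that has the classic ntp tools. The ntpdate -q form asks a server for the time without actually setting the local clock (dnspi is my server’s name; substitute your own):

    ```
    # query the time server without touching the local clock
    ntpdate -q dnspi
    ```

    If it answers with an offset and stratum, the server side is alive and serving, whatever busybox does or doesn’t ship.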

Comments are closed.