[Snort-sigs] sid: 1113 - FPs

Jason security at ...704...
Fri Feb 28 21:17:13 EST 2003


In general it is going to fire for most directory traversal attempts 
regardless of the vulnerable application. I personally prefer this 
behavior and would rather tune out the known good items.

"flow:to_server, established" should prevent falsing on returned content 
but each ../ reference in the returned content ultimately results in a 
GET ../whatever request so it still happens. If your HOME_NET and 
EXTERNAL_NET are set properly you will not false on outbound requests to 
servers either.

Using your example, it is trivial to create a pass rule to eliminate it 
from inspection.

pass tcp any any -> $HTTP_SERVERS $HTTP_PORTS (msg:"allow relative links 
to the pictures directory"; content:"GET "; content:"../pics/"; 
distance:1; content:".jpg"; distance:1; flow:to_server,established;)
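The ordered content chain in that pass rule can be sketched in Python 
(a rough approximation of Snort's content/distance semantics, not Snort 
code; the function name is illustrative):

```python
def matches_pass_rule(payload: bytes) -> bool:
    """Approximate the rule's ordered matches: "GET ", then "../pics/",
    then ".jpg", where distance:1 means each match must begin at least
    one byte past the end of the previous match."""
    pos = 0
    for pattern in (b"GET ", b"../pics/", b".jpg"):
        idx = payload.find(pattern, pos)
        if idx == -1:
            return False
        pos = idx + len(pattern) + 1  # distance:1 from end of prior match
    return True

# A relative image request like the one in the false-positive payload
# passes, while a traversal toward /etc/passwd does not:
benign = b"GET /app/../pics/mlogo.jpg HTTP/1.0\r\n\r\n"
attack = b"GET /cgi-bin/foobar.pl?/boring/../../../etc/passwd HTTP/1.0\r\n\r\n"
```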

( I would have to double-check, but I think all requests for 
linked/included content on a page are tagged with a Referer header, so 
you could get even more specific )

pass tcp any any -> $HTTP_SERVERS $HTTP_PORTS (msg:"allow relative links 
to the pictures directory"; content:"GET "; content:"../pics/"; 
distance:1; content:".jpg"; distance:1; content:"|0D 0A|"; distance:1; 
content:"Referer: www.mydomain.com"; distance:1; within:256; 
flow:to_server,established;)
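The extra Referer qualification can be sketched the same way: only 
whitelist the request when its Referer names our own site 
(www.mydomain.com is the placeholder domain from the rule; this is an 
illustration, not Snort's matching logic):

```python
def referer_is_local(payload: bytes, domain: bytes = b"www.mydomain.com") -> bool:
    """Return True only if the request carries a Referer header that
    mentions our own domain, i.e. the relative link came from one of
    our own pages."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"referer:") and domain in line:
            return True
    return False
```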

-J

Paul Schmehl wrote:
> This is another example of a rule that I hope can be improved upon:
> 
> web-misc.rules
> 
> alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-MISC
> http directory traversal"; flow:to_server,established; content: "../";
> reference:arachnids,297; classtype:attempted-recon; sid:1113;  rev:4;)
> 
> content: "../" will trigger on any webpage that uses relative paths to
> reference files in Unix directories.  We have tons of pages that do
> this, so we get many alerts.  However, the Arachnids description is very
> specific.  
> 
> "Contents: "../"  The packet offset is zero, meaning that we start
> looking for this content string in the start of the packet data. This is
> a case sensitive search."
> 
> The request must come with a GET in front of it in order to exploit the
> vulnerability.  In fact, the packet trace that Arachnids uses to explain
> the exploit shows that clearly:
> 
> 07/08-12:29:21.103460 attacker:1737 -> target:80
> TCP TTL:64 TOS:0x10 ID:48175  DF
> *****PA* Seq: 0x8CDE2D5B   Ack: 0xD24163C   Win: 0x7FB8
> TCP Options => NOP NOP TS: 152351356 190236 
> 47 45 54 20 2F 63 67 69 2D 62 69 6E 2F 66 6F 6F  GET /cgi-bin/foo
> 62 61 72 2E 70 6C 3F 2F 62 6F 72 69 6E 67 2F 2E  bar.pl?/boring/.
> 2E 2F 2E 2E 2F 2E 2E 2F 65 74 63 2F 70 61 73 73  ./../../etc/pass
> 77 64 20 48 54 54 50 2F 31 2E 30 0A              wd HTTP/1.0.
> 
> It seems that this rule could be improved by using 
> 
> uricontent: "../"; depth: 512;
> 
> or
> 
> content: "GET /cgi-bin/"; nocase; offset: "0"; content: "../"; within:
> 50;
> 
> or something similar, to eliminate FPs from the body of an HTML
> document.
> 
> Here's a payload that shows a FP:
> 290 : 30 22 3E 3C 74 72 3E 3C 74 64 20 61 6C 69 67 6E   0"><tr><td align
> 2a0 : 3D 22 63 65 6E 74 65 72 22 20 63 6F 6C 73 70 61   ="center" colspa
> 2b0 : 6E 3D 22 32 22 20 62 67 63 6F 6C 6F 72 3D 22 23   n="2" bgcolor="#
> 2c0 : 30 30 30 30 30 30 22 3E 0A 3C 66 6F 6E 74 20 73   000000">.<font s
> 2d0 : 69 7A 65 3D 22 36 22 20 66 61 63 65 3D 22 41 72   ize="6" face="Ar
> 2e0 : 69 61 6C 22 3E 3C 62 3E 3C 69 6D 67 20 73 72 63   ial"><b><img src
> 2f0 : 3D 22 2E 2E 2F 2E 2E 2F 70 69 63 73 2F 6D 6C 6F   ="../../pics/mlo
> 300 : 67 6F 2E 6A 70 67 22 20 61 6C 69 67 6E 3D 22 6C   go.jpg" align="l
> 310 : 65 66 74 22 20 68 73 70 61 63 65 3D 22 30 22 20   eft" hspace="0" 
> 320 : 77 69 64 74 68 3D 22 38 35 22 20 68 65 69 67 68   width="85" heigh
> 330 : 74 3D 22 38 35 22 3E 3C 2F 62 3E 3C 2F 66 6F 6E   t="85"></b></fon
> 
> Am I starting to drive anyone nuts with this stuff?  I hope not.  I'm
> trying to contribute to improving the rules so they're more effective
> for everyone.  I just happen to have a *lot* of traffic to extract data
> from. :-)
> 





More information about the Snort-sigs mailing list