[Snort-devel] Recognizing Unicode in TCP stream

diphen at ...375...
Wed Apr 11 13:52:54 EDT 2001

Or, more accurately, in an HTTP request. I'm currently playing with ways
of doing it.

(These comments apply to the 1.7 release tarball.)

Looking through the Snort source for the http decode plugin, it looks
like there's just a comparison looking for escaped (with %) values. If
a value is one of [c0,c1,e0,f0,f8,fc], then Snort claims to have found
a Unicode attack. Now, this seems a little prone to false positives,
if you ask me. (Which you didn't. :)

Also, I'm wondering why those values in particular were chosen. Looking
over Network ICE's page at
http://www.networkice.com/advice/intrusions/2000639/default.htm it looks
like the relevant values would be [2e,2f,5c,c0,c1,ae,af,9c,f0,80,81]. At
least if you're trying to detect a backtracking sort of attack against
IIS. If you're just trying to disguise the URL in general, I guess you
get into a whole other area of having to decode Unicode entirely, since
I suppose someone could encode not just /'s or .'s, but the whole URL.

Anyway, what I'm getting at is: Why did Snort do it this way, and do
people have opinions on how to do it better?
