
Our (Signatureless) Approach to Web Application Security

Posted: February 6th, 2015 | Filed under: Web and Application Security

In a recent post, we focused on the problems with the signature-based security model. Signatures have been a staple of web application security and cyber security for some time, but are problematic in the sense that they don’t provide adequate protection in today’s landscape of ever-evolving threats.

Now, we want to explain how we approach web application security with our Web app firewall, ServerDefender.

(We encourage you to go back and read that article in full. It’s a good read, we promise!)

The Behavioral & Algorithm-based approach

Although we don’t use signatures, we still have a means for analyzing and determining whether or not a user is malicious.

Our method analyzes behavior by tracking the actions that occur over the course of a session. Activity is monitored by an algorithm that establishes what bad behavior looks like (we’ll touch on this more later), and should a user cause too many errors, the software will begin to take action. Users who repeatedly cause errors will raise an alert, and the software will begin to impede their site usage until they’re blocked, first temporarily, then permanently.

Behavioral scoring allows errors to be broader or more generic (not signature matches but actual error states like 404s and 500s) because you’re not blocking on every single one. By continuously tracking them and building up a sort of “threat profile,” you can discern patterns that indicate misbehavior, even before anything that would match a threat signature is seen (e.g., too many ‘innocent’ looking errors from the same source, or with too great a frequency).
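To make the idea concrete, here is a minimal sketch of per-session behavioral scoring. This is an illustration only, not ServerDefender’s actual implementation: the error weights, thresholds, and action names are all hypothetical.

```python
# Hypothetical per-session behavioral scorer: each error state (404, 500,
# etc.) adds weight to a running threat score, and escalating actions kick
# in as thresholds are crossed. All values here are illustrative.

ERROR_WEIGHTS = {404: 1, 403: 2, 500: 3}  # more suspicious errors score higher

THRESHOLDS = [            # checked from most to least severe
    (15, "block_permanent"),
    (10, "block_temporary"),
    (5, "throttle"),
]

class SessionScore:
    def __init__(self):
        self.score = 0

    def record_error(self, status_code):
        """Add this error state's weight to the session's threat score."""
        self.score += ERROR_WEIGHTS.get(status_code, 1)
        return self.action()

    def action(self):
        """Return the most severe action whose threshold has been reached."""
        for threshold, action in THRESHOLDS:
            if self.score >= threshold:
                return action
        return "allow"

session = SessionScore()
for code in [404, 404, 500]:       # a burst of 'innocent' looking errors
    decision = session.record_error(code)
print(decision)  # → throttle
```

Note that no individual 404 or 500 here would match any signature; it is the accumulated pattern within one session that triggers the response.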

Whitelist vs. Blacklist = Greylist?

On top of behavioral scoring, we also employ a combination of blacklists and whitelists – a sort of greylist approach.

The signature model is inherently a blacklist approach to security. That means that everything is allowed by default unless it is on a ‘naughty list’ or list of malicious inputs or actions. This is dangerous because the default action is to allow, and only when something is known to be bad is it blocked.

The whitelist approach isn’t perfect either. This is the inverse of blacklisting, where everything is blocked by default unless it’s on a list of approved inputs or actions. This might be easiest to think of as analogous to the list at an exclusive club or restaurant. With the whitelist approach, only people with their name on the list are allowed in, while all others are turned away. The blacklist approach turns away anyone who is on a disallowed list, and lets everyone else in without discretion.
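The difference in default action can be sketched in a few lines. This is illustrative pseudologic, not ServerDefender’s code, and the lists are made up:

```python
# Illustrative contrast between the two models: the blacklist allows by
# default, the whitelist denies by default. Both lists are hypothetical.

BLACKLIST = {"/admin/debug", "/.git/config"}      # known-bad resources
WHITELIST = {"/", "/login", "/products"}          # known-good resources

def blacklist_decision(path):
    # Default action is ALLOW: only explicitly listed paths are blocked.
    return "block" if path in BLACKLIST else "allow"

def whitelist_decision(path):
    # Default action is BLOCK: only explicitly listed paths are allowed.
    return "allow" if path in WHITELIST else "block"

# A never-before-seen path slips past the blacklist but not the whitelist.
print(blacklist_decision("/new-exploit-vector"))  # → allow
print(whitelist_decision("/new-exploit-vector"))  # → block
```

The single line that differs, the default branch, is the entire security argument between the two models.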

Here’s an example of one of the whitelists within ServerDefender’s controls. This particular example is not very permissive and shows how broad the controls can be.

Here’s an example of a blacklist in ServerDefender. This shows the specific resources that cannot be requested on a site, with all other resources being allowed.

The whitelist approach is inherently more secure, but more prone to false positives, since the default action is to block. However, our powerful and easy-to-use method for creating exceptions makes adding to whitelists entirely manageable.

Algorithmic Detection

We also look at a number of factors within a given user input and determine whether an exploit is contained in it. This is the algorithmic type of rule. It’s not based on a specific signature or set of signatures. Instead, it looks for the conditions that have to be met for a particular type of exploit to be effective, and blocks when those conditions are met. This makes it much more generic than signature-based rules.
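As a simplified illustration of condition-based detection (not ServerDefender’s actual logic), consider a check for cross-site scripting. Rather than matching known payload strings, it tests whether the structural conditions an XSS injection needs are jointly present; the specific conditions below are hypothetical:

```python
# Hypothetical condition-based check: rather than matching known XSS payload
# signatures, test whether the conditions a script injection requires are
# all met in the input. The conditions themselves are illustrative.
import re

def looks_like_xss(value):
    """Return True only when the conditions for an injection are jointly met."""
    conditions = [
        "<" in value and ">" in value,                          # markup can be formed
        re.search(r"(?i)script|on\w+\s*=", value) is not None,  # an executable hook exists
    ]
    return all(conditions)

print(looks_like_xss("<img src=x onerror=alert(1)>"))  # → True
print(looks_like_xss("5 < 6 and 7 > 2"))               # → False: no executable hook
```

Because the check describes what any payload of this class must contain, rather than any particular payload, a novel variant that no signature list has seen yet can still trip it.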

This does increase the false positive risk, so again, you do need good exception-management. This is something we built into ServerDefender in order to quickly loosen security controls for something very specific, while not compromising the overall security of the app. Plus, it is much more manageable to apply these occasional exceptions than to build up fully accurate whitelists field-by-field for an entire app, and keep them up to date as code changes.

Find the log for your false-positive by either entering the event ID, or filtering down to a specific set of parameters. Right-click and select ‘Add Input Exception’.

The add exception dialogue lets you specify a name, a comment, what criteria to match, and which restrictions to apply.
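Conceptually, the dialogue’s fields map to a small record: a named rule that matches one specific input and relaxes specific restrictions for it alone. The sketch below is a hypothetical representation; the field names and structure are not ServerDefender’s actual format:

```python
# Hypothetical shape of an input exception: scoped to one URL and one
# parameter, so loosening it does not weaken the rest of the app.
from dataclasses import dataclass, field

@dataclass
class InputException:
    name: str
    comment: str
    match_url: str              # criteria: which page the exception applies to
    match_parameter: str        # criteria: which input field it applies to
    lifted_restrictions: list = field(default_factory=list)

    def applies_to(self, url, parameter):
        return url == self.match_url and parameter == self.match_parameter

exc = InputException(
    name="Allow markup in bio",
    comment="False positive: users may paste HTML into their profile bio",
    match_url="/profile/edit",
    match_parameter="bio",
    lifted_restrictions=["html_tags"],
)
print(exc.applies_to("/profile/edit", "bio"))    # → True
print(exc.applies_to("/profile/edit", "email"))  # → False: other fields stay strict
```

The narrow scope is the point: one field on one page gets the exception, while every other input keeps the default-deny posture.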



Signatures may make for a great business model, but they don’t make for a great security model. Signatures don’t account for unknown vulnerabilities, and are too easily bypassed in today’s world of advanced hackers. Our approach is and has always been to create tools that provide real security through algorithmic analysis and distrusting inputs.

If you have any questions about our approach to security, please feel free to reach out to our team. We’d love to chat!


The Problem with Signature-based Web App Security

Posted: January 29th, 2015 | Filed under: IIS & HTTP

In the real world we have the benefit of being present and able to see and analyze scenarios in real-time. In the cyber world, we rely on code and algorithms to handle millions of complex tasks every day without much real-time human intervention. Unfortunately, one of the tasks that we leave in the hands of technology is web security.

This is a problem because, unlike humans, code and algorithms cannot decide what is good and bad. Not from a philosophical moral perspective (humans still struggle with that), but from a security standpoint. In order for a web security tool to know if a user is doing something bad, it needs to be programmed to know specifically what to look for. The way many tools detect malicious actions or activity is by using attack/threat signatures. This may make for a great business model for those who sell such products, since they can sell signature updates, etc., but it makes for a dangerous security model.

What are signatures, anyway?

So what are signature-based rules, exactly? One way to envision them is as a glossary of different threats that a security tool can reference to know whether or not it should take action against an input. A signature is typically based on an exploit that has already occurred and been documented. The signature details the way the exploit works, using a series of parameters to indicate the specific actions that occur during the exploit.

This model depends on matching inputs to a specific signature in order to block them. This can be likened to taking a fingerprint of someone who wants access to something and comparing it against a database of fingerprints of known criminals. This will work great for stopping frequent criminals, but will never stop the first-time offenders.
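Stripped to its core, the fingerprint-matching model looks something like this. The sketch and its patterns are illustrative, not any vendor’s actual rule set:

```python
# Minimal sketch of the signature model: an input is blocked only when it
# matches a fingerprint already in the database. Patterns are illustrative.
import re

SIGNATURES = [
    r"(?i)<script\b",        # shape of a documented XSS payload
    r"(?i)union\s+select",   # shape of a documented SQL injection payload
]

def signature_match(user_input):
    """Default-allow: block only on a known fingerprint."""
    return any(re.search(sig, user_input) for sig in SIGNATURES)

print(signature_match("1 UNION SELECT password FROM users"))  # → True: known shape
print(signature_match("a never-before-seen exploit"))         # → False: no signature yet
```

The second call is the whole problem in one line: an attack with no entry in the database sails through, no matter how dangerous it is.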

Example rule:
'Cross-site Scripting (XSS) Attack',id:'958016',tag:'OWASP_CRS/WEB_ATTACK/XSS',tag:'WASCTC/WASC-8',tag:'WASCTC/WASC-22',tag:'OWASP_TOP_10/A2',tag:'OWASP_AppSensor/IE1',tag:'PCI/6.5.1',logdata:'Matched Data: %{TX.0} found within %{MATCHED_VAR_NAME}: %{MATCHED_VAR}',severity:'2',setvar:'tx.msg=%{rule.msg}',setvar:tx.xss_score=+%{tx.critical_anomaly_score},setvar:tx.anomaly_score=+%{tx.critical_anomaly_score},setvar:tx.%{}-OWASP_CRS/WEB_ATTACK/XSS-%{matched_var_name}=%{tx.0}"

The problem with security tools that rely on fingerprint matching is that fingerprints are unique, and the fingerprints for attacks that haven’t happened yet don’t exist. That leaves a gaping hole in these tools’ ability to provide security.

Good luck stopping zero-days!

Since signatures only account for what has already been seen, they don’t do anything to account for what has yet to be seen. When a new, never-before-seen zero-day comes along, the tool won’t find any matches in the signature database. Only after the exploit has been observed in the wild will vendors update their signature lists and send out an update to customers, who then typically need to apply the update.

Yes, protecting against the known vulnerabilities that many script kiddies will try is indeed valuable. However, the script kiddies don’t pose the same threat as a well-trained hacker or even a hacker who possesses knowledge of a zero day vulnerability. Without any protection against new or unseen attacks, signature-based tools leave a wide attack surface that needs to be accounted for another way.

Signatures Everywhere!

Many organizations use automated security scanners to find vulnerabilities in their apps, and in turn can update their WAFs with the information learned from the scanner. So if the scan finds vulnerability ABC, then the security rules for that vulnerability can be automatically generated for the WAF to import (this isn’t the case with every tool, but is a feature that many tools highlight). The problem is that the scanner is using signature-based heuristics to find the vulnerabilities in the first place. Herein lies the vicious circle of signature-based security and the illusion of security.


Vulnerability scan run with signature-based tool > Rules created from scan > Rules imported into security tool > New scan rules released, prompting rescan > (and the cycle repeats)


With this type of system in place, you’re never really achieving protection against anything other than known vulnerabilities. It’s like basing all flight security on the do-not-fly list, without knowing that you can and should stop the person carrying a dangerous-looking item for, at the least, a further inspection.

Another issue with this approach is the constant need to update the rules. Not only does your security depend on a rule existing, but it also depends on you updating the rules (unless rules update automatically) immediately upon release. A breach could come down to: no rules existing to stop an exploit, or you not updating the rules in a timely manner.

A different approach

There are alternative ways to approach web application security, and not all vendors are using a signature-based model. Port80 Software takes an algorithmic and behavioral-based approach that combines whitelists and blacklists and completely ditches the signature model.

The signature approach is so common that when people come to evaluate our Web application firewall, ServerDefender, they are often confused by its lack of signatures. People often ask, “Well, if it doesn’t use signatures, then how is it providing protection?” and “How often do you update signatures? And are they free?”


Curious to see what our approach is? Learn about our innovative – signature-free – approach to application security.


Exploring the LogViewer in ServerDefender VP

Posted: November 15th, 2014 | Filed under: IIS & HTTP, Web and Application Security

Security You Can See

For the last few years, we have been developing ServerDefender VP, an advanced Web application firewall for IIS. One of the features that has been evolving along with ServerDefender VP is the LogViewer. This is the hub of the WAF, where users can interact with and monitor malicious traffic hitting their site. Since there is so much to do within the LogViewer, it sometimes becomes easy for a feature or two to be missed, so we’ve decided to explain some of the cool tricks it’s capable of.

What is the LogViewer?

The LogViewer is a tool that visualizes events (blocked threats and errors) that occur in your application and allows you to take a variety of different actions on them with only a few clicks. When selecting an event, users can see an array of data that pertains to it, such as the referrer, user-agent, IP address, session ID, GET and POST data, and other critical information.

ServerDefender VP Web app firewall LogViewer


What Actions Can I Take on an Event?

There are several different actions that a user can take on an event in the LogViewer. The primary actions are for security settings (blocking IP addresses and creating exceptions), forensic tools (viewing all events by IP, comparing a session against IIS logs), and exporting reports.

ServerDefender VP LogViewer Actions


Adding Exceptions

One of the key actions available to users from the LogViewer is the ability to add an exception to an event, such as a false positive. Adding an exception on an event lets users specify new settings should the same event occur again. This means that users can tell a blocked action to be allowed and configure new rules for the future.

ServerDefender VP Input Exception



The LogViewer’s forensic tools enable users to gain further knowledge about an event and the session and IP behind it.

“View This Session in IIS Logs” displays the session logs with the errors recorded by ServerDefender VP highlighted. This feature is useful for determining what occurred in a session prior to an error occurring, and for establishing the validity of an error should there be any question around it.

“View this IP Only” displays only the events in the LogViewer attributed to that IP address. This makes it easier to visualize the actions of a single IP address and understand its patterns, which can help users determine what action, if any, they should take against the IP.

Questions for Us? Ready to try?

The LogViewer is a powerful tool for viewing malicious traffic in your app and a way to react quickly to events. If there’s anything else you’d like to learn about the LogViewer – or ServerDefender VP in general – send us an email or Tweet us @port80software. If you’d like to enjoy a 30-day free trial, go ahead and download now.


Patch Now: Schannel Vulnerability Poses Huge Threat

Posted: November 13th, 2014 | Filed under: IIS & HTTP

A critical vulnerability in Microsoft Schannel headlined the security bulletin released by Microsoft for November. The vulnerability is the latest in TLS vulnerabilities for 2014, and means that every major TLS stack has been impacted by a severe vulnerability this year alone, as reported by Ars Technica. The Schannel vulnerability is drawing comparisons to Heartbleed, as it similarly allows for remote code execution and data theft.

Needless to say, it is imperative that affected systems are patched immediately.

Microsoft Security Bulletin MS14-066 – Critical – Find & Install Patch

Secure Channel, also known as Schannel, is the standard security package used for SSL/TLS in Windows. The Schannel vulnerability impacts all supported versions of Windows, dating back to Windows Server 2003.

After the Shellshock (Bash) and POODLE vulnerabilities earlier this year, we should not be surprised to see a vulnerability that went unpatched for an extended period of time. This vulnerability just underscores the fact that even very mature software may have serious bugs from time to time.

The complete November security bulletin can be viewed here.

Looking for more details about the Schannel vulnerability (MS14-066)? Read More

No Comments »

How Your HTTP Errors Are Helping Hackers

Posted: October 23rd, 2014 | Filed under: Web and Application Security

What Errors Are

Error messages are a fairly standard part of the web that can provide useful information to developers to resolve issues, or indicate to users that there is something wrong with a page. While smart developers and site admins will customize error messages to hide sensitive info, sometimes something as simple as a careless change to a configuration file can expose verbose HTTP errors, including 500-level errors that can contain normally-hidden details of your application. While these are okay for your developers to see, in order to resolve the error, they are not okay for external users to see.

Scenarios in Which You Might See an Error

Of course, errors are not desirable. Errors are more like the ugly blemishes that haunt every app at some point, and some are more serious than others. Detailed errors can provide contextual information pertaining to things like the server’s directory structure, the SQL queries being run, or the modules and libraries loaded by the application framework. By generating an error response, a hacker now has context for what creates a particular error state, and also gains a little bit of extra knowledge about the site.

Why is This Useful?

Even seemingly unimportant or small bits of information can be very useful. With enough time and patience, a hacker can use the initial leakage of information to probe further. Based on the knowledge gained from the initial error, they can dig deeper to see what other errors they can elicit. Much like a detective following a lead from a piece of evidence, a hacker can follow the knowledge gained from a piece of information to its conclusion. In all likelihood, they’ll come across another valuable piece of information via an error (if errors are completely unsuppressed), which will lead them down another path to explore and investigate. All this probing really puts a site at risk, as it increases the chances that a vulnerable piece of software (plugin, library, framework, etc.) is discovered. If a hacker can pinpoint that you’re using version A of X library with Y known vulnerability, then there is a very clear path to exploitation and causing serious damage.

Recon for Attack – Just like Real War

“Know your enemy,” wrote Sun Tzu in the Art of War, and most would agree that it’s unwise to launch an attack on a target without doing some reconnaissance to find points of weakness and points of strength. This principle applies to web technologies as well. Hackers can use error messages to probe and determine areas of weakness within their target. Giving hackers the ability to create errors without penalty is incredibly valuable, as it gives them free rein to scout your site and gear up for attack. If areas of weakness or vulnerability are found during the scouting process, then the site can be added to a list of vulnerable sites, which can then be attacked by others in the community.

If finding errors is the first line of attack, then hiding those errors should be the first line of defense. By preventing hackers from gathering accurate information about your site or app, you keep them from gaining an upper hand. Suppressing error messages is part of anti-reconnaissance and a solid defense-in-depth strategy.

An Internal Conflict

On the surface, detailed error messages can be useful for developers to debug issues. In fact, in many cases developers like these messages as it makes their jobs easier. Detailed messages often point right to the source of a problem, even indicating which line of code or which method is problematic and in which file. Just as this little snippet of information is invaluable to a developer doing some debugging, this information is useful to a hacker who is trying to cause trouble. Quickly, we can begin to see a conflict arising between a developer and a security professional:

  1. Dev wants detailed error messages in the app because this makes his/her life easier
  2. IT wants non-detailed error messages because this makes his/her life easier – and it protects the company
  3. Without detailed error messages, dev’s job becomes more difficult, and more time/company money is spent debugging code

While the sysadmin-developer divide is nothing new, this is a sensitive area because security is coming into play. That means that security should take precedence, but it doesn’t mean that developers’ jobs need be made more difficult.

Some interesting articles involving information leakage

  • Five Data Leak Nightmares
  • OWASP on Information Leakage
  • What is information leakage?

Solution with ServerDefender VP

Luckily, this type of recon can be prevented, and developers’ jobs don’t need to be made more difficult. Here at Port80, we spent a lot of time thinking of the best way to keep verbose error details out of the hands of hackers. The solution we came up with is very simple: don’t show verbose error messages. Over the last few years we’ve developed a complete web application firewall called ServerDefender VP that offers the ability to handle errors. We developed methods to handle errors in two ways:

  1. Spot and prevent verbose 500 HTTP errors from being outwardly displayed
  2. Mask all errors with a generic error message, so all errors will look the same to would-be hackers

We also included the ability to whitelist IP addresses. This means that if a developer needs to debug something, the sysadmin can add their IP to a list of excluded IPs. This tells ServerDefender VP to let those IP addresses bypass the error handling controls, therefore allowing users to browse the site without error messages being suppressed.

How does it work?

These capabilities are default features in ServerDefender VP and are powerful ways to prevent reconnaissance. Here’s what happens when ServerDefender VP encounters a 5xx HTTP error:

  • User browses to a page
  • An HTTP error is generated
  • ServerDefender VP catches the response before it is sent to the client
  • Instead of showing the real HTTP error status code, SDVP sends a generic error response. This can be not only a page that discloses no sensitive data, but even a response code that is normalized so that nothing can be inferred from it (e.g., 404 instead of 500).
  • The end user now knows that something went wrong, but not specifically what, and can even be shown a helpful, site-customized experience to get them back on track
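The flow above can be sketched as a simple response filter. This is a hypothetical illustration of the technique, not SDVP’s internals; the function names, trusted-IP list, and generic response are all made up:

```python
# Hypothetical response filter: verbose 5xx errors are replaced with a
# normalized generic response, except for whitelisted developer IPs, which
# still see the real error for debugging. All names here are illustrative.

TRUSTED_DEV_IPS = {"10.0.0.15"}   # developers allowed to see verbose errors

GENERIC_RESPONSE = (404, "Something went wrong. Please return to the homepage.")

def filter_response(status_code, body, client_ip):
    """Mask any server error before it leaves for an untrusted client."""
    if status_code >= 500 and client_ip not in TRUSTED_DEV_IPS:
        # Normalize the status code too, so nothing can be inferred from it.
        return GENERIC_RESPONSE
    return (status_code, body)

verbose = (500, "NullReferenceException in Checkout.Process() at line 42")
print(filter_response(*verbose, client_ip="203.0.113.9"))  # masked for outsiders
print(filter_response(*verbose, client_ip="10.0.0.15"))    # verbose for the dev
```

Because the masking happens at the response stage, the application code itself never has to change: developers can keep their detailed errors, and outside observers never see them.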

This error message can be customized to be anything, but most importantly it ensures that no valuable reconnaissance information is leaked; the error is suppressed by ServerDefender VP and never sent to the client. This error handling technique takes away the first line of attack and means that hackers won’t be able to find clues that make it easier for them to hack you. We’ll leave you with one more piece of advice from Sun Tzu that sums up SDVP’s attitude toward data-hungry hackers: “Be extremely mysterious… thereby you can be the director of the opponent’s fate.”
