The End of Windows Server 2003 Support and How to Mitigate Risk If You Stay Past the Deadline

Posted: June 3rd, 2015
Filed under: IIS & HTTP
Tags:


July 14, 2015

That date spells the end of support for Windows Server 2003. July is quickly approaching (how is it already May?!) and for many this means some extra work is in order. Microsoft has been pushing migration from Windows Server 2003 for some time now, and undoubtedly there are millions of sites that have yet to be moved.

And we aren’t pointing blame or sitting on our high horse either. We are among those who are still using Server 2003, as some of our old sites still run on Server 2003 VMs. If we had our druthers, of course we wouldn’t be on 2003, but IT can be hectic and oftentimes other tasks take priority.

For some, July 14th won’t be a perilous drop-dead deadline. It will be a line in the sand that they’ll easily step over without much immediate penalty. While we are in the camp of urging people to migrate, we acknowledge many will let the deadline come and go without migrating. So how will those who don’t migrate be impacted?

One major impact is that they will be without the security bulletin updates that are crucial for keeping many organizations secure. In 2013, MSFT issued 37 critical security patches. And 2014 was especially serious, with major security issues such as the Sandworm vulnerability (CVE-2014-4114) and the Schannel vulnerability (MS14-066), for which MSFT put out patches. After July 14, those bulletin updates will be no more. But that’s not all.

The Trouble Ahead

Bye-bye Patches

The end of support means the end of security patches. This means that should a major vulnerability be found in the future, like the SSL 3.0 POODLE vulnerability or MS1305, Microsoft won’t be backing you up with an emergency security patch. Given the number of major vulnerabilities seen over the last year, particularly in widely and long-used pieces of code, it’s a worrying proposition to continue without Microsoft’s security patches.

Hello Breaches

When the next major vulnerability to impact all versions of Windows Server comes along and finds itself in the headlines of every tech and infosec blog, expect hackers to be out there sniffing for you. Sites whose response headers advertise that they’re running Windows Server 2003 will be guaranteed wide-open targets. It likely won’t take long after support ends for a vulnerability to emerge that leaves those still on Server 2003 in a dangerous position. For all we know, someone could have knowledge of an undisclosed vulnerability that’s waiting to be unleashed after support ends.
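A quick way to check what your own servers are advertising is to look at the Server response header. IIS 6.0 is the version that ships with Windows Server 2003, so a header like Microsoft-IIS/6.0 is a strong hint to attackers. Here is a minimal Python sketch of that check (the URL is a placeholder, not a real target):

import urllib.request

def server_header(url):
    # A HEAD request is enough; we only care about the response headers
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Server", "(not disclosed)")

banner = server_header("https://www.example.com/")  # placeholder URL
print("Server header:", banner)
if "Microsoft-IIS/6.0" in banner:
    print("IIS 6.0 usually means Windows Server 2003: consider masking this header and migrating.")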

No More Software Updates

At this point, other software you use on Windows Server 2003 may also stop receiving updates. When MSFT ends support for an OS, many developers follow suit soon thereafter. If your organization relies on (non-proprietary) software, you may find yourself unable to run its latest version because it won’t be released for Server 2003.

Compliance Issues

Certain compliance regimes or regulations may require systems to be supported, from the OS to the software on top of it. Running the out-of-support Server 2003 could put you in trouble with the governance bodies that oversee compliance and put your company in line for penalties. Furthermore, a lot of these security standards (e.g., HIPAA, PCI) exist to protect customer data and the like, making it that much worse to put customers at risk with an unsupported OS.

If You’re Temporarily Staying Put

First and foremost, you should understand the risks that come with staying on Windows Server 2003. As noted, it is a perilous choice that could have a range of impacts:

  • Data breaches
  • Business applications breaking/not functioning properly
  • Security compliance penalties

Hanging onto Windows Server 2003 until July 14 doesn’t necessarily spell the end of the world, but it should be understood that there are very real risks in doing so.

If you do stay on Server 2003 past July 14, you should ensure that you have a migration plan ready to act upon — or have one that is already being put into action.

ServerDefender

If you plan to migrate servers but won’t make the July 14 deadline, Port80 may be able to help. Using ServerDefender VP may help mitigate certain security risks until migration is complete. Every month, a range of vulnerabilities is disclosed in Microsoft’s security bulletin. For those vulnerabilities that can impact the application layer, ServerDefender can help to mitigate the risk of running an unsupported operating system until you migrate.

Protection While You Migrate and After

The ServerDefender Web application firewall will keep your sites and web apps running on Windows Server 2003 secure from hackers looking to take advantage of the unsupported operating system. And when you complete your migration, you can take your ServerDefender license with you for improved security.

Secure, Migrate, Move SD
Add SD to your Windows Server 2003 machine to secure your sites and web applications past the WS2003 end of life. With SD added, you can complete your migration from WS2003 with peace of mind. After your migration is complete, you can move SD from your WS2003 server to your new server for a fresh and secure start.

 

One last plea

Migrating to a newer Windows Server before July 14 has several advantages. But adding ServerDefender VP to the mix offers two key benefits:

  1. It allows your organization to stay compliant with security regulations
  2. It assists you in controlling your security

If you have a plan in place, even if it means toeing the line on the WS2003 deadline, you will be better prepared to weather any potential security storms. If you have any questions about how this works, please feel free to reach out to info@port80software.com, or share some general thoughts in the comments below.

 

No Comments »

The Problem with Signature-based Web App Security

Posted: January 29th, 2015
Filed under: IIS & HTTP


In the real world we have the benefit of being present and able to see and analyze scenarios in real-time. In the cyber world, we rely on code and algorithms to handle millions of complex tasks every day without much real-time human intervention. Unfortunately, one of the tasks that we leave in the hands of technology is web security.

This is a problem because, unlike humans, code and algorithms cannot decide what is good and bad. Not from a philosophical or moral perspective (humans still struggle with that), but from a security standpoint. In order for a web security tool to know if a user is doing something bad, it needs to be programmed to know what specifically to look for. The way that many tools detect malicious actions or activity is by using attack/threat signatures. This may make for a great business model for those who sell such products, since they can sell signature updates and the like, but it makes for a dangerous security model.

What are signatures, anyway?

So what are signature-based rules, exactly? One way to envision them is as a glossary of different threats that a security tool can reference to know whether or not it should take action against an input. A signature is typically based on an exploit that has already occurred and been documented. The signature details the way that the exploit works, using a series of parameters to indicate the specific actions that occur during the exploit.

This model depends on matching inputs to a specific signature in order to block them. This can be likened to taking a fingerprint of someone who wants access to something and comparing it against a database of fingerprints of known criminals. This will work great for stopping frequent criminals, but will never stop the first-time offenders.

Example rule:
SecRule REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|!REQUEST_COOKIES:/_pk_ref/|REQUEST_COOKIES_NAMES|ARGS_NAMES|ARGS|XML:/* "\bgetparentfolder\b" \
    "phase:2,rev:'2',ver:'OWASP_CRS/2.2.9',maturity:'8',accuracy:'8',capture,t:none,t:htmlEntityDecode,t:compressWhiteSpace,t:lowercase,ctl:auditLogParts=+E,block,msg:'Cross-site Scripting (XSS) Attack',id:'958016',tag:'OWASP_CRS/WEB_ATTACK/XSS',tag:'WASCTC/WASC-8',tag:'WASCTC/WASC-22',tag:'OWASP_TOP_10/A2',tag:'OWASP_AppSensor/IE1',tag:'PCI/6.5.1',logdata:'Matched Data: %{TX.0} found within %{MATCHED_VAR_NAME}: %{MATCHED_VAR}',severity:'2',setvar:'tx.msg=%{rule.msg}',setvar:tx.xss_score=+%{tx.critical_anomaly_score},setvar:tx.anomaly_score=+%{tx.critical_anomaly_score},setvar:tx.%{rule.id}-OWASP_CRS/WEB_ATTACK/XSS-%{matched_var_name}=%{tx.0}"
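Stripped of the transformations and anomaly scoring, the core of any signature engine is just pattern matching against a list of known-bad fingerprints. The following Python sketch is only a conceptual illustration of that model, not how any particular WAF is implemented:

import re

# A tiny "signature database": patterns derived from exploits that have already been seen
SIGNATURES = [
    (r"(?i)<script\b", "Cross-site Scripting (XSS) Attack"),
    (r"(?i)\bunion\s+select\b", "SQL Injection Attack"),
    (r"(?i)\bgetparentfolder\b", "Cross-site Scripting (XSS) Attack"),
]

def inspect(request_value):
    # Return the first matching signature message, or None if nothing matches
    for pattern, message in SIGNATURES:
        if re.search(pattern, request_value):
            return message
    return None  # a brand-new exploit produces no match at all

print(inspect("q=<script>alert(1)</script>"))  # blocked: matches a known signature
print(inspect("q=some-zero-day-payload"))      # None: unseen attacks sail through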

The problem with security tools that rely on fingerprint matching is that fingerprints are unique, and the fingerprints for attacks that haven’t happened yet don’t exist. That leaves a gaping hole in these tools’ ability to provide security.

Good luck stopping zero-days!

Since signatures only account for what has already been seen, they don’t do anything to account for what has yet to be seen. When a new, never-before-seen zero-day comes along, the tool won’t find any matches in the signature database. Only after the exploit has been observed in the wild will vendors update their signature lists and send out an update to customers, who then typically need to apply the update.

Yes, protecting against the known vulnerabilities that many script kiddies will try is indeed valuable. However, the script kiddies don’t pose the same threat as a well-trained hacker or even a hacker who possesses knowledge of a zero-day vulnerability. Without any protection against new or unseen attacks, signature-based tools leave a wide attack surface that needs to be accounted for in another way.

Signatures Everywhere!

Many organizations use automated security scanners to find vulnerabilities in their apps, and in turn can update their WAFs with the information learned from the scanner. So if the scan finds vulnerability ABC, then security rules for that vulnerability can be generated automatically for the WAF to import (this isn’t the case with every tool, but it is a feature that many tools highlight). The problem is that the scanner is using heuristics to find the vulnerabilities in the first place. Herein lies the vicious circle of signature-based security and the illusion of security.

 

Vulnerability scan run with signature-based tool > Rules created from scan > Rules imported into security tool > New scan rules released, prompting rescan > (and the cycle repeats)

 

With this type of system in place, you’re never really achieving protection against anything other than known vulnerabilities. It’s like basing all flight security on the do-not-fly list, but not knowing you can or should stop the person carrying a dangerous-looking item for, at the least, a further inspection.

Another issue with this approach is the constant need to update the rules. Not only does your security depend on a rule existing, but it also depends on you updating the rules (unless rules update automatically) immediately upon release. A breach could come down to no rules existing to stop an exploit, or to you not updating the rules in a timely manner.

A different approach

There are alternative ways to approach web application security, and not all vendors are using a signature-based model. Port80 Software takes an algorithmic and behavioral-based approach that combines whitelists and blacklists and completely ditches the signature model.

The signature approach is so common that when people come to evaluate our Web application firewall, ServerDefender, they often are confused by its non-use of signatures. People often ask, “Well, if it doesn’t use signatures, then how is it providing protection?” and “How often do you update signatures? And are they free?”


Curious to see what our approach is? Learn about our innovative – signature-free – approach to application security.

No Comments »

Exploring the LogViewer in ServerDefender VP

Posted: November 15th, 2014
Filed under: IIS & HTTP, Web and Application Security
Tags:


Security You Can See

For the last few years, we have been developing ServerDefender VP, an advanced Web application firewall for IIS. One of the features that has been evolving along with ServerDefender VP is the LogViewer. This is the hub of the WAF, where users can interact with and monitor malicious traffic hitting their site. Since there is so much to do within the LogViewer, it sometimes becomes easy for a feature or two to be missed, so we’ve decided to explain some of the cool tricks it’s capable of.

What is the LogViewer?

The LogViewer is a tool that visualizes events (blocked threats and errors) that occur in your application and allows you to take a variety of different actions on them with only a few clicks. When selecting an event, users can see an array of data that pertains to it, such as the referrer, user-agent, IP address, session ID, GET and POST data, and other critical information.

ServerDefender VP Web app firewall LogViewer

Click to enlarge.

What Actions Can I Take on an Event?

There are several different actions that a user can take on an event in the LogViewer. The primary actions are for security settings (blocking IP addresses and creating exceptions), forensic tools (viewing all events by IP, comparing a session against IIS logs), and exporting reports.

ServerDefender VP LogViewer Actions

Click to enlarge.

Adding Exceptions

One of the key actions available to users from the LogViewer is the ability to add an exception to an event, such as a false positive. Adding an exception on an event lets users specify new settings should the same event occur again. This means that users can tell a blocked action to be allowed and configure new rules for the future.

ServerDefender VP Input Exception

Click to enlarge.

Forensics

The LogViewer’s forensic tools enable users to gain further knowledge about an event and the session and IP behind it.

“View This Session in IIS Logs” displays the session logs with errors recorded by ServerDefender VP highlighted. This feature is useful for determining what occurred in a session prior to an error and for establishing the validity of an error, should there be any questions around it.

“View this IP Only” displays only the events in the LogViewer attributed to that IP address. This makes it easier to visualize the actions of a single IP address and understand its patterns, which can help users determine what action, if any, they should take against the IP.

Questions for Us? Ready to try?

The LogViewer is a powerful tool for viewing malicious traffic in your app and a way to quickly react to events. If there’s anything else you’d like to learn about the LogViewer – or ServerDefender VP in general – send us an email at info@port80software.com or Tweet us @port80software. If you’d like to enjoy a 30-day free trial, go ahead and download now.

No Comments »


Patch Now: Schannel Vulnerability Poses Huge Threat

Posted: November 13th, 2014
Filed under: IIS & HTTP


A critical vulnerability in Microsoft Schannel headlined the security bulletin released by Microsoft for November. The vulnerability is the latest in TLS vulnerabilities for 2014, and means that every major TLS stack has been impacted by a severe vulnerability this year alone, as reported by Ars Technica. The Schannel vulnerability is drawing comparisons to Heartbleed, as it similarly allows for remote code execution and data theft.

Needless to say, it is imperative that affected systems are patched immediately.


Microsoft Security Bulletin MS14-066 – Critical – Find & Install Patch


Secure Channel, also known as Schannel, is the standard security package used by SSL/TLS in Windows. The Schannel vulnerability impacts all versions of Windows dating back to Vista/Windows Server 2003.

After the Bash and POODLE vulnerabilities earlier this year, we should not be surprised to see a vulnerability that has gone unpatched for an extended period of time. This vulnerability just underscores the fact that even very mature software may have serious bugs from time to time.

The complete November security bulletin can be viewed here.

Looking for more details about the Schannel vulnerability (MS14-066)? Read More

No Comments »

POODLE SSL 3.0 Vulnerability: What it is and how to deal with it

Posted: October 17th, 2014
Filed under: IIS & HTTP


 

from our friends at Net-Square

A vulnerability known as POODLE, an acronym for Padding Oracle On Downgraded Legacy Encryption, is making headlines this week. The bug is a vulnerability in the design of SSL version 3.0, which is more than 15 years old. However, SSL 3.0 is still widely supported, and nearly 97% of SSL servers are likely to be vulnerable to the bug, according to an October Netcraft survey.

SSL v3.0 Vulnerable Servers

 

This vulnerability allows an attacker to intercept plaintext data from secure connections. Since SSLv3 has been so widely used over the last 15 years, it has put literally millions of browsers in jeopardy. As the chart above indicates, this is a vulnerability that has a sweeping impact.

How does it happen?

Though users are upgrading to the latest protocol versions (TLS 1.0, 1.1, 1.2), many TLS implementations remain backward compatible with SSL 3.0. Hence, when web browsers fail to connect over these newer versions (i.e. TLS 1.0, 1.1, or 1.2), they may fall back to the older SSL 3.0 connection for a smooth user experience.

The other possibility is that a user is forced to step down to SSL 3.0. An attacker who has successfully performed a man-in-the-middle (MITM) attack can cause connection failures, including the failure of TLS 1.0/1.1/1.2 connections. They can then force the use of SSL 3.0 and exploit the POODLE bug in order to decrypt secure content transmitted between a server and a browser. Because of this downgrade of the protocol, the connection becomes vulnerable to the attack, ultimately exposing the user’s private data.

Google’s Bodo Möller, Thai Duong, and Krzysztof Kotowicz published the complete security advisory which can be found on openssl.org.

Possible remediation

To avoid falling prey to attackers exploiting POODLE, avoid the use of public Wi-Fi hotspots when sending valuable information (using online banking, accessing social networks via a browser, etc.). This is always a risk, but the POODLE vulnerability makes it even more dangerous.

The other recommendation is to disable SSL v3 and all previous versions of the protocol, both in your browser settings and on the server side, which avoids the problem completely. SSL v3 is 15 years old now and has been superseded by the more up-to-date and widely supported TLS protocol, which most modern web browsers support.

DigiCert published a detailed step-by-step guide for disabling SSL 3.0.

Richard Burte also shared the command lines to disable SSL 3.0 on GitHub.
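For reference, on Windows the server-side change comes down to a Schannel registry setting (a reboot is required for it to take effect). The sketch below uses Python’s winreg module and must be run as Administrator; the same keys can of course be set with regedit or a .reg file, and you should test against your own environment first:

import winreg

# Schannel protocol settings live under this key; these values disable SSL 3.0 for incoming connections
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)            # Enabled = 0 turns the protocol off
    winreg.SetValueEx(key, "DisabledByDefault", 0, winreg.REG_DWORD, 1)  # keep it off unless explicitly requested

print("SSL 3.0 disabled for server-side connections; reboot for the change to take effect.")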

No Comments »

Zero-Day Vulnerability (CVE-2014-4114) in Windows Server Exploited by Russian Espionage Group “Sandworm”

Posted: October 14th, 2014
Filed under: IIS & HTTP, Web and Application Security


 

A Russian espionage group is exploiting a zero-day vulnerability in Windows Server 2008 and 2012, iSIGHT Partners reported on Tuesday. Microsoft is currently working on a patch for the vulnerability (CVE-2014-4114), but a number of targets have already been hit.

When exploited, this vulnerability allows an attacker to remotely execute arbitrary code, but requires a specially crafted file and use of social engineering to convince a user to open the file. iSIGHT noted specifically that PowerPoint files were used to exploit the vulnerability.

While there are specific targets that have been named, iSIGHT is also quick to point out that the visibility of the attack is limited and there is potential for broader targeting beyond this group of targets. The known targets include:

  • NATO
  • Ukrainian government organizations
  • A Western European government organization
  • Energy sector firms (specifically in Poland)
  • European telecommunications firms
  • A United States academic organization

The team behind the attacks was dubbed the “Sandworm Team,” based on encoded references in command and control URLs and malware samples that refer to the sci-fi series Dune. iSIGHT reported that it has been monitoring the Sandworm Team since late 2013, and believes they were formed sometime in 2009.

Geopolitical Tensions Creating Targets

The takeaway here seems to be that the attacks do not only target governmental entities. The Sandworm Team has instead targeted entities that are geopolitically relevant in a broader sense: energy, telecommunications, education.

This should serve as a sign of potential threats to come. Private sector businesses that are strategically sensitive in a geopolitical sense might be on some state’s list of targets. This means organizations that share information with, provide services to, or provide infrastructure utilized by governmental organizations may be at risk. State-sponsored attacks will focus on targets with strategic significance which can range from obvious ones like power grids and financial institutions to less obvious targets like research universities.

State-sponsored attacks are on the rise and the targets are becoming broader. Organizations who align themselves with sensitive entities should have a heightened sense of awareness and look to raise their defenses if needed.

We will update this post accordingly as the story continues to develop.

No Comments »

Takeaways and Questions from the Home Depot Data Breach

Posted: September 18th, 2014
Filed under: IIS & HTTP


One of the main goals of spending time and money to implement information security is to make it difficult for hackers to get in and data to get out. When ‘hackers’ compromised Home Depot and stole upwards of 60 million credit card numbers recently, it wasn’t all that difficult.

The breach, which could be the largest in US history, occurred after a piece of malware (possibly the Backoff malware) made its way onto the point-of-sale systems at numerous Home Depot stores. When customers swiped their cards at checkout, the card data was captured and sent back to a server. If this sounds familiar, that’s because this is the same technique that was used in the Target breach last December.

A line that is being repeated in news and blogs is that the hackers didn’t do anything terribly complicated or anything that required a ton of hacking skill. Lines like this usually only come out of incidents that were caused by carelessness or ineptitude. Hacking a major corporation’s POS systems shouldn’t be easy; it should be hard. Stealing 60 million credit card numbers shouldn’t be easy; it should be hard. We don’t yet know all the details behind the breach, but there are certainly some takeaways:

  • Malware is still a potent threat – Threat-signature-based antivirus is not capable of detecting new types of viruses or malware. Since antivirus and anti-malware depend on signature databases to detect and eliminate threats, new threats often go unseen until an incident occurs. This leaves a huge blind spot in organizations’ security infrastructure. However, this may not have been the case with Home Depot. As reported by ThreatPost, BackOff isn’t a complex Windows Trojan; it’s just re-purposed to run on a Windows-based POS and therefore should be detected by antivirus. This means that Home Depot either did not have antivirus in place or it was not updated – either scenario is bad. That leads us to our next takeaway.
  • We don’t learn – This same style of attack had just occurred at a major U.S. retailer and was all over the news. Everyone knew about this attack – especially IT and security people – and yet the same style of attack was even more successful in the Home Depot incident. The lessons learned from Target should have raised guards enough to at least make sure that antivirus was properly installed on the servers managing the POS machines and updated regularly. Symantec has specifically addressed how its software detects point-of-sale malware, and many antivirus vendors were quick to add signatures for BackOff variants after they were discovered. In this instance, the vendors appear to be doing their part, but Home Depot seems to have failed to protect itself.
  • No PINs stolen, but that doesn’t matter – In a report issued by Home Depot they stated: “While the company continues to determine the full scope, scale and impact of the breach, there is no evidence that debit PIN numbers were compromised.” But unfortunately that doesn’t matter. As Brian Krebs reported, the method of PIN reset is so out of date that even a stranger can reset your PIN with enough personal information simply by using the automated voice system:

“Countless banks in the United States let customers change their PINs with a simple telephone call, using an automated call-in system known as a Voice Response Unit (VRU). A large number of these VRU systems allow the caller to change their PIN provided they pass three out of five security checks.”

  • Where does cybersecurity insurance come into play? Business Insurance reported that Home Depot has $105M in cyber insurance to cover data breaches. Cyber liability insurance is a growing industry, with the threat of seriously damaging data breaches growing more and more. This raises the question: will organizations lean too heavily on insurance policies rather than implementing better security policies? That isn’t to say that Home Depot did this, but one has to wonder if cyber insurance will provide executives a level of comfort that will detract from investing in proper security.

Every breach that occurs is unfortunate, but it’s also a chance for everyone to learn and avoid potentially critical mistakes in the future. What do you think some of the major takeaways or questions coming out of the Home Depot breach are?

No Comments »

Why You Should Care About Bots, and How to Deal with Them

Posted: July 10th, 2014
Filed under: IIS & HTTP


Has your site recently been bogged down by thousands of rapid requests from a distant land you do no business with? Have you been seeing spam in your form responses or comments? Are you seeing requests for pages that don’t exist on your site? If so, you may have bots.

Don’t worry, we all have bots. It’s a normal part of a site growing up. One day you’re launching, and the next day skimmers, scammers, and scrapers are scouring your site for information or holes they can poke through.

Bots are a ubiquitous part of the web universe at this point, flying through the pipes of the internet looking for prey. Normal as they may be, there is reason to be concerned with bots. One of the most recent reports from Incapsula puts bot traffic at 61% of all web traffic. That number is nothing to sneeze at, mostly because sneezing at things is rude, but also because it’s a very big number. While this is significant, there is still some debate around whether or not this traffic is visible in web analytics.

What do bots actually do?

“Are you a good bot, or a bad bot?”

Well Dorothy, not all bots are bad. In fact, there are some good bots that do things like crawl your site for search engines, or monitor RSS feeds. These bots are helpful and you’ll want to make sure that they don’t encounter any obstacles when crawling your site. Bad bots are primarily used for reconnaissance, but can pose various degrees of threat.

Lesser Threat

  • Email scrapers – These bots will scour sites for email addresses to harvest, which can lead to lots of spam or potentially harmful bait emails (i.e. phishing attacks, or malware).
  • Link spam – Ever see spam in your comments or in your submission form results? If these form fields allow links to be submitted, they can cause a lot of trouble. A link to a site with malware on it in your comments could endanger your users.

Greater Threat

  • Link spam (part II) – Imagine a scenario where someone with admin privileges clicks a link from a form submission, or in a spam comment. Now imagine that the link is to a site that installs a key logger on the admin’s machine. Next time the admin logs into your site or server, the credentials are captured, and all the protection you’ve put in place is void.

Automated Exploits

  • (Aimlessly) Search and destroy – These bots can potentially do a lot of harm, if they find a vulnerability on your site. While these bots are dangerous, they also operate without any real direction. Armed with a list of popular known exploits, these bots will crawl the web and throw these known exploit tricks at every site they encounter. If they come across a site with a hole, a hook will add that site to a queue for further exploitation.
  • Targeted Search and destroy – The same as above, but with a targeted list of sites to crawl.

What’s the end game?

Bad bots are a way for an outsider to own your server. Once the server is controlled, the bad guys can do a range of things with it:

  • Steal sensitive data stored there (personal info, credit card numbers, etc.)
  • Steal account passwords
  • Send malicious emails through it
  • Attack other sites/servers with it

Why Stop Bots?

An overwhelming amount of bot requests can bog down a site and cause it to run very slowly, just like how a large amount of legitimate traffic can eat up resources and slow down a site. This is a problem for a couple of reasons:

  1. Slow site = unresponsive pages = unhappy customers = lost sales
  2. Slow site = SEO hit (site speed is a factor in SEO ranking)

Prevent heavy resource usage costs

What adds insult to injury after a slowed site prevents sales? A huge bill from your hosting provider! Yes, all those extra requests and all those extra resources being used typically cost money.

Prevent data theft

Guess what else will cost you money: data theft! Of course, this can also hugely damage public perception and reputation – which are invaluable. Not to mention the fact it could mean other people’s information, money, and identities are put at risk.

Signs You Have Bots

There are a number of ways to spot bot traffic in your logs, but if you don’t know what to look for, you will likely never know you have a bot problem. Here are a few tell-tale signs of bots hitting your site (a minimal log-scanning sketch follows the list):

  • Rapid requests – A normal user browsing a site won’t request 100 pages in a few seconds, as most internet users do not have super-human reading and clicking abilities.  However, bots do. And bots will make multiple requests per second by simply following links they find, or attempting to complete forms.
  • Lots of requests from the same IP address – all over the site – Aside from making a ton of requests in quick succession, bots can typically be spotted by a long trail of requests. No matter how interesting the content on your site, most real users won’t browse every page on the site – unless it happens to be a very small site. Bots will do this. Most real users will also usually be able to successfully submit a form on the first try or two – given they have an IQ higher than that of a lemming. However, a bot, which has no IQ, may not be able to do so. You may, in fact, see multiple failed attempts to submit a form, all from the same IP.
  • Requests without sessions – Real users browsing your site will normally accept cookies; bots often will not. Requests from IPs that don’t have sessions are likely bots.
  • Requests at odd times/locations – If you see requests at times or from locations that do not make sense for your business, then it could be a sign of bot traffic. For example, if you only do business in North America, but you see a number of requests from Eastern Europe in the middle of the night, then it’s definitely worth investigating.
  • Suspicious user-agents – A general way to spot suspicious user-agents is by looking for rare or infrequent user-agents that aren’t associated with a browser (or at least a well-known or popular one). Once you find them, take a look at their activity in your logs for anything suspicious. There are also lists of known dangerous bots that can be used for reference. Lastly, a simple Google search should indicate if they are known to be bad or not.
  • Bad data – You may be accustomed to seeing bad data (spam, empty) come through your forms, but be sure to look at it with a critical eye. Spam in your forms, or empty submissions can be dangerous.
  • Bad requests – Well-behaved users won’t typically type directories into the address bar when navigating your site; it’s much easier to navigate by clicking links on the site. So, if you see a bunch of requests for URLs with a .asp extension on your all-PHP site, then you may have a bot poking around for a known vulnerability.
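To make the first two signs concrete, here is a minimal Python sketch that tallies requests per client IP in a W3C-style IIS log and flags the noisiest addresses. The log path and the position of the c-ip field are assumptions; check the #Fields line in your own logs before relying on it:

from collections import Counter

def noisy_ips(log_path, threshold=1000):
    # Count requests per client IP in a W3C extended log and flag heavy hitters
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if line.startswith("#"):      # skip W3C directive/header lines
                continue
            fields = line.split()
            if len(fields) > 8:
                hits[fields[8]] += 1      # c-ip is field 9 in the default IIS layout; yours may differ
    return [(ip, count) for ip, count in hits.most_common() if count >= threshold]

# Hypothetical log file path
for ip, count in noisy_ips(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex150601.log"):
    print(ip, "made", count, "requests: worth a closer look")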

Stop Bots with ServerDefender VP

By now, you’re probably asking: How can I stop the bot uprising? Don’t worry, you won’t need John Connor for this mission. You can stop these bots much more easily, and without 3 sequels, mind you. Using ServerDefender VP, you can set up a bot policy in minutes and prevent the pests of the internet from causing you headaches.

1) Figure out your policy. A very strict bot policy will require sensitive security controls that have a very low tolerance for behavior that looks like bot behavior. This will keep bot traffic down, but could put you at risk of blocking good bots or legitimate traffic. Things to take into consideration here:

  • What does normal user behavior look like?
  • Keep in mind that if your site is very error prone (be honest with yourself here), you may want to be more lenient with your error thresholds to avoid blocking legitimate users

2) Launch ServerDefender VP’s settings manager and enter expert view. Under the Session Management tab, go to Bot Policy. Click the Configure button to launch the configuration panel.

3) Once you know how you plan to handle bots, you can jump into the configuration. Here’s a brief rundown of what each control does:

“Begin applying bot detection counters after ____ requests” – This tells ServerDefender VP when it should begin sniffing an IP for bot behavior. If you set this value to 1, ServerDefender VP will begin monitoring an IP’s requests for bot behavior after its first request. This essentially provides no leeway. You can provide just enough leeway for the good bots by easing up on when SDVP begins looking to detect bots. Providing some leeway isn’t necessarily a bad thing, as bad bots are likely to make many, many requests, not just a few.

“Maximum allowed errors per second” – As explained earlier, normal users don’t make hundreds of requests per second, and therefore they do not make hundreds of errors per second. Once the number of requests set in the previous control group is reached, then the max errors per second allowed configuration will kick in. This area will determine the strength of your bot policy, as this setting is really where you’ll trap your bots.

Setting this to a higher value provides some leeway for good bots to crawl your site without being penalized for errors. The lower you set this value, the more strict it is. Typically, setting this number to a single digit value should provide sufficient padding to prevent blocking users committing innocuous errors, while ensuring trouble making bots do not pass through.

“Percentage of requests allowed without referrer” & “percentage of errors allowed without referrer” – These are good to keep at 100%, as legitimate users do not always make requests with a referrer. Once the bot controls are in place, you can also configure the blacklist for user-agents. You can either add bad user-agents you’ve encountered in the past, or add them from a list of known bad user-agents. There are plenty of articles and lists of bad user-agents which you can pull from, if you choose to do so.
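Conceptually, the first two controls combine into a simple per-IP error-rate check. The Python sketch below is only an illustration of that logic, with made-up threshold values; it is not how ServerDefender VP is actually implemented:

import time
from collections import defaultdict

GRACE_REQUESTS = 25        # "begin applying bot detection counters after N requests"
MAX_ERRORS_PER_SECOND = 5  # "maximum allowed errors per second"

requests_seen = defaultdict(int)
error_times = defaultdict(list)

def record(ip, is_error):
    # Return True if this IP should now be treated as a bot
    requests_seen[ip] += 1
    if requests_seen[ip] <= GRACE_REQUESTS:
        return False                      # still within the leeway window
    if is_error:
        now = time.monotonic()
        recent = [t for t in error_times[ip] if now - t < 1.0]
        recent.append(now)
        error_times[ip] = recent
        if len(recent) > MAX_ERRORS_PER_SECOND:
            return True                   # too many errors in one second: flag as a bot
    return False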

Questions? Need Advice or Help?

We’re always glad to lend a hand. If you have any web app security questions or would like to try out ServerDefender VP for yourself, you can email us at support@port80software.com

 

Bonus Bot Info

NPR reported on the severity of the bot threat recently, bringing the conversation to the general public.

No Comments »

Port80 Donating $50,000 in Web Security Software to Secure Schools

Posted: April 9th, 2014
Filed under: IIS & HTTP


Big data breaches have been in the spotlight recently. You’ve likely heard of the ones happening at big corporations, but what about those happening at schools?

Educational institutions are at risk, and many don’t have the budget to implement proper security. At Port80, we’d like to do our part to help make education more secure. We will be awarding $50,000 worth of web security software to 25-50 educational organizations by September 15.

Does putting a piece of software in place make you automatically secure? Of course not, but for those who have vulnerable systems that cannot be quickly or easily fixed, we’d like to help.

Learn More | Apply Now
 

Please pass this message along to anyone you think may qualify. We hope that together we can make education a more secure place.

The Port80 Software Team

No Comments »

Breach Brief: Spec’s Wine, Spirits, & Finer Foods

Posted: April 7th, 2014
Filed under: IIS & HTTP


Data breaches. They don’t just happen to the retail big boys like Target and Neiman Marcus. They happen to big and small organizations, and every size in between. It was recently revealed that Texas liquor chain Spec’s Wine, Spirits, and Finer Foods fell victim to a serious data breach. Spec’s has 155 locations around Texas, ‘where everything is bigger’… Including the breaches!

Half a Million Victims

According to Spec’s statements, the breach affected fewer than 5% of their total transactions, or fewer than 550,000 customers. While half a million customers is a sizable number of victims, Spec’s may be counting themselves lucky, as the breach only affected 34 smaller neighborhood stores, rather than all of their locations. Information exposed during the breach may include bank routing numbers, as well as payment card or check information.

What Happened

Spec’s problems began on October 31, 2012, when one of their computer systems was compromised. When did the compromise end, you ask? The breach ended as late as March 20. For those counting, that’s nearly 17 months of uninterrupted access to data.

Spec’s spokeswoman Jenifer Sarver told the Houston Chronicle that the breach was, “a very sophisticated attack by a hacker … who went to great lengths to cover their tracks.” Sarver also went on to reveal that, “It took professional forensics investigators considerable time to find and understand the problem then make recommendations for Spec’s to fully address and fix them.”

What makes this breach newsworthy?

Every breach story is bad in some regard:

  • There are victims whose information is no longer private
  • There are mistakes made by staff
  • There are property/money losses

Some concerning points about this breach and why we think it’s relevant:

  • The breach went on for 17 months
  • The breach was first noticed by banking institutions when suspicious transactions began, not by Spec’s IT team
  • Evidence of the breach may have surfaced over a year ago, but no action was taken
  • Resolving this problem after discovery has taken considerable time

What we can learn from this breach

The Spec’s Wine, Spirits, and Finer Foods breach illustrates the need for a strong security posture, no matter the size of an organization.

One security tool that makes monitoring, identifying, and responding to attacks much simpler for small and medium sized organizations is ServerDefender VP. This powerful tool is easy to use and helps protect against more than just a list of known attack signatures.

No Comments »

Sochi: A Five-Ring Circus of Web Security Nightmares, or Just Another Day on the Wi-Fi

Posted: February 12th, 2014
Filed under: IIS & HTTP


 

A lot of people are talking about the web security concerns in Sochi. Have you heard the story about that guy immediately being hacked when booting up a laptop, or how everyone at Sochi will definitely be hacked because of the unsafe Wi-Fi networks there?

These incidents are not exclusive to Sochi: they are concerns for all open Wi-Fi networks. In fact, you may not be much safer on your local cafe’s public Wi-Fi. That guy sitting in the back corner sipping his latte isn’t working; he’s running Wireshark (or another sniffer tool) to steal information about your online banking session. You don’t even need much technical capability to do what he’s doing, just the willingness to perform an illegal activity.

Read the rest of this entry »

No Comments »

Preventing Cross Site Request Forgery Attacks

Posted: February 4th, 2014
Filed under: IIS & HTTP


What is a Cross Site Request Forgery Attack?

Cross-site request forgery (CSRF or XSRF) is an attack that has been in the OWASP Top 10 since its inception, but is not nearly as talked about as other OWASP lifers like XSS or SQL injection. We’ve decided to give CSRF some needed attention and discuss some ways to mitigate it.

Also known as a “one click attack” or “session riding,” CSRF is an exploit very similar to an XSS attack. Rather than an attacker injecting unauthorized code into a website, a cross-site request forgery attack only transmits unauthorized commands from a user that the website or application considers to be authenticated.

Certain websites and applications are at risk: those that perform actions based on input from trusted and authenticated users without requiring the user to authorize the specific action. These attacks are characteristic vulnerabilities of Ajax-based applications that make use of the XMLHttpRequest (XHR) API. A user that is authenticated by a cookie saved in his Web browser could unknowingly send an HTTP request to a site that trusts him and thereby cause an unwanted action (for instance, withdrawing funds from a bank account).
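The standard mitigation is a synchronizer token: the server stores a random per-session token, embeds it in every form, and rejects any state-changing request that does not echo it back. Here is a framework-agnostic Python sketch of that pattern (the session dictionary stands in for whatever server-side session store you actually use):

import hmac
import secrets

def issue_csrf_token(session):
    # Store a random token in the server-side session and return it for embedding in forms
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_request_allowed(session, submitted_token):
    # A state-changing request is allowed only if it echoes the session's token
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids leaking the token through timing differences
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")

session = {}                               # stand-in for a real session store
form_field = issue_csrf_token(session)     # rendered into the HTML form as a hidden field
print(is_request_allowed(session, form_field))  # True: legitimate same-session submission
print(is_request_allowed(session, "forged"))    # False: cross-site request without the token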

Read the rest of this entry »

No Comments »

PCI DSS Version 3.0: New and Important Requirements

Posted: November 27th, 2013
Filed under: IIS & HTTP


PCI DSS is a set of standards developed by major credit card companies to keep credit card information secure and reduce fraud. The standards apply to any organization that processes, stores, or transmits credit card data. Occasionally, the PCI Security Standards Council will announce updates to the standards, which may require approved companies to make changes to their security hardware, software, and practices. The latest updates (PCI DSS 3.0) were introduced a few months back, but recent documentation has shed a bit more light on what you need to do to start preparing for compliance once the new rules come into effect on January 1, 2014.

Read the rest of this entry »

No Comments »

4 Ways Reporting and Alerting Are Valuable to Web Security

Posted: October 30th, 2013
Filed under: IIS & HTTP


Despite our best preventive efforts and proactive measures, practices, and training, security breaches still happen. It is just a fact of life today. The most prepared CISOs could quickly handle a breach if they knew when it was going to occur. But there is no spidey sense that will tingle when a hacker makes his way into your database, or an alarm that will sound when a user’s session is hijacked and unauthorized permissions are obtained. Certain activities, however, can help you notice unusual and potentially dangerous activity happening around your web assets. They include:

  • Configuring alerts
  • Using reporting tools
  • Monitoring your app

These activities are vital for effectively dealing with an incident. Alerts (via email, SMS, etc.) offer the application and site owners a way to know that they are under attack. It’s as if the app is shouting, “Help! Something is wrong!” And regular use of reports gives site owners a way to monitor normal usage of the site and quickly recognize unusual activity. But the benefits do not end there.
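As a trivial example of the first item, an alert can be as simple as a threshold check that emails the site owner. The Python sketch below assumes a local SMTP relay on localhost and uses placeholder addresses:

import smtplib
from email.message import EmailMessage

def alert_if_needed(blocked_events_last_hour, threshold=50):
    # Send a plain-text email if blocked events exceed the threshold
    if blocked_events_last_hour < threshold:
        return
    msg = EmailMessage()
    msg["Subject"] = "WAF alert: " + str(blocked_events_last_hour) + " blocked events in the last hour"
    msg["From"] = "waf@example.com"       # placeholder addresses
    msg["To"] = "oncall@example.com"
    msg.set_content("Unusual volume of blocked events. Check the logs for details.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local SMTP relay is available
        smtp.send_message(msg)

alert_if_needed(120)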

Continue Reading

No Comments »

PCI DSS 3.0: What You Need to Know

Posted: October 3rd, 2013
Filed under: IIS & HTTP


This November, the Payment Card Industry (PCI) Security Standards Council will change the PCI Data Security Standard (PCI DSS) and the Payment Application Data Security Standard (PA-DSS). This means organizations that handle cardholder data will need to update their security to adhere to the new rules. We’ve read the initial documentation and have laid out some of the biggest changes to prepare for over the coming year.

Continue Reading

No Comments »

Using Third Party Apps? Make sure they are secure!

Posted: August 18th, 2013
Filed under: IIS & HTTP


Businesses today cannot ignore the varied creative spaces available for marketing their offerings. This is exactly the reason for the tremendous rise in third-party applications, be they standalone programs or small plugins that add functionality. This is a departure from the previous paradigm of companies depending heavily upon enterprise software providers and a few others for all their applications.

Organizations now want to have a go at everything that seems more convenient and helps them network. Employees can’t seem to live without social networking applications like Facebook, LinkedIn, Twitter, and various other applications offered by third-party providers, making them essential for today’s business. According to mobile market research and consultancy firm research2guidance, the market for app development services, including application creation, management, distribution, and extension services, will grow to $100 billion in 2015.

Although these applications and social networks are primarily intended for consumer use, companies are increasingly recognizing their business benefits. This creates a unique challenge for the IT department. In addition to the benefits, they can negatively impact productivity, network bandwidth, users’ privacy, data security, and the integrity of IT systems (via malware and application vulnerabilities). A lot of these applications come with severe vulnerabilities, and exposing business and personal data to them poses a high security risk. Previously, only malware was a major threat. But today, about 75% of cyber attacks happen due to vulnerabilities in third-party applications. The general perception amongst companies is that by investing in patch management and patching third-party applications, they will be safe. But there is more to it than just patch management.

During our network and application audits, we have observed that such patching devices, even if implemented and configured, fail to ensure 100% patch management. Also, enterprises are always at the mercy of third-party vendors for patching the flaws and preventing a software exploit. In some cases, patches are released months after a flaw has been detected, and in the meantime new flaws emerge. In order to be secure, third-party applications should be managed more proactively.

Some do’s and don’ts for third party apps:

  • Depending upon risk, companies should define and offer selective usage of these applications.
  • Frequent security audits of all third-party applications should be implemented. A good practice would be to incorporate a mandatory requirement for a security audit certificate in application procurement tenders. This would push software product companies to implement secure coding practices and get audited by an independent security firm.
  • Only implementing an automated patch management system will not help the cause. There has to be a team of knowledgeable people managing this system and ensuring patch adherence.
  • It is advisable to implement two-factor authentication for third-party applications. Two-factor authentication that uses out-of-band authentication, such as a PIN sent to a smart phone, requires a hacker to go to extensive lengths to beat it, and so adds an additional layer of protection.
  • Conduct security awareness training for business users, application IT teams, and infosec teams at regular intervals to educate and sensitize teams on ongoing attack trends and how they can prevent them.
  • Finally, even employees can ensure secure and safe usage by practicing a few things, like using different passwords for their personal and business accounts and regularly changing them. Define privacy settings in all social media applications such that personal information is not exposed. Immediately revoke access to third-party applications if employees sense anything fishy in their accounts. These are small steps, but they can go a long way in ensuring safe and secure usage!

-Hardik Kothari, Business Development, Net-Square Solutions

 

No Comments »

Happy Sysadmin Appreciation Day 2013!

Posted: July 26th, 2013
Filed under: IIS & HTTP


Happy Sysadmin Appreciation Day to all the great System Administrators out there! Today, we tip our caps to you for all you do to keep business up and running every single day.

Not familiar with Sysadmin Day? sysadminday.com, a site dedicated to this glorious holiday, sums it up well:

“What exactly is SysAdmin Day? Oh, it’s only the single greatest 24 hours on the planet… and pretty much the most important holiday of the year. It’s also the perfect opportunity to pay tribute to the heroic men and women who, come rain or shine, prevent disasters, keep IT secure and put out tech fires left and right.”


 

No Comments »

Log Management: The Benefits of Cloud Logging Tools

Posted: May 22nd, 2013
Filed under: IIS & HTTP


Logs are good for more than just taking up space on your hard drive. Logs are useful records of events that took place at a particular time and in a particular manner. Since it’s obviously impossible to see everything that is happening in your systems all the time, and certain events (like security incidents or application errors) may require forensic or debug data to assess, logging can be a critical piece of an IT infrastructure. Chances are, if you are a system administrator, you’ve come across one or two logs in your time.

Logs are typically generated for things like:

  • Security events
  • Uptime/downtime
  • Errors (5xx, etc.)
  • Traffic/usage

Why Logging is Useful

For some, logs may seem like a nuisance. For others, their purpose is a mystery.

Many organizations use logs to help with things like troubleshooting, monitoring and alerting, analytics, and application debugging. At Port80, we do a considerable amount of logging with our web application firewall, ServerDefender VP. Here we use logs to detail security events such as XSS, SQL injection, input validation, and buffer overflow attacks. They capture detailed information about who, where, and how the event occurred. For us, logs are a way to view events and perform an assessment.

We look to see if legitimate users are being inadvertently blocked by security controls by analyzing the logs of the user moving through the site. If we see lots of bad behavior (attempts to access certain pages, XSS exploits, etc.), we’ll know it is likely a malicious user and we might block their IP. If we see some harmless behavior that set off our web application firewall, we might want to adjust our controls for a particular field or page. But without this log data, we would have no way of knowing what type of action to take, because we would have no data to base our decisions on.
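As a rough illustration of that kind of triage, the sketch below groups exported security events by client IP and separates likely attackers from likely false positives based on how many distinct event types each IP triggered. The CSV file name and column names are hypothetical, not ServerDefender VP’s actual export format:

import csv
from collections import defaultdict

def triage(events_csv, attack_threshold=3):
    # Group WAF events per IP and suggest whether to block or to review the triggering control
    by_ip = defaultdict(set)
    with open(events_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_ip[row["client_ip"]].add(row["event_type"])  # hypothetical column names
    for ip, kinds in by_ip.items():
        if len(kinds) >= attack_threshold:
            print(ip, "triggered", len(kinds), "distinct event types: likely malicious, consider an IP block")
        else:
            print(ip, "triggered", sorted(kinds), ": possible false positive, review the control that fired")

triage("serverdefender_events.csv")  # hypothetical export file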

For another take on logging, information security video-blogger Javvad Malik produced a quick overview of log management that’s both educational and entertaining: 


Continue Reading

No Comments »

Third Party Apps: Secure Enough?

Posted: May 17th, 2013
Filed under: IIS & HTTP


The volatility in the current environment requires organizations to react very quickly to the changing business landscape. Consequently, this has to be done not only with speed but also under severe cost pressures. More and more IT teams are adopting third-party packaged solutions as their answer to the challenge of providing quick solutions to the business, as building proprietary solutions often satisfies neither time nor budgetary requirements.

This trend is growing fast in all organizations. Businesses are signing strategic IT sourcing deals whereby they hand over their entire IT support to an external vendor, giving the vendor responsibility for infrastructure and personnel. Or they are moving to an outsourcing model where they buy and customize solutions from a third-party vendor for their automation needs.

To cash in on this trend, some IT consulting companies have built products which they customize according to the client’s requirements and implement onsite. In this process, the one piece that gets neglected the most is security.

Most organizations don’t have processes or checks in place to ensure that third-party code is implemented securely. In our experience testing applications provided by third-party vendors, we have seen security flaws that could easily have been exploited by attackers to access highly confidential personal and financial data.

The story is no different in other verticals. In December 2012, an Egyptian hacker breached Yahoo!’s security systems and acquired full access to a Yahoo! database server. The SQLi attack was carried out on a Yahoo! Web application, which was a third-party application.

So how can organizations protect against this? We have mooted the idea of conducting regular security assessments with some of the firms that have many products. But given the extensive customization done to these products at the time of implementation, the best practice is to perform a periodic review of the code deployed at the client end.

The argument against this is that clients often feel helpless because they will not receive access to the code. Recently, though, there have been cases where clients have been able to get vendors to agree to provide access to the code for a security code review. But no matter what, when deploying or integrating a third-party application, ensure that you perform proper security checks and don’t just deploy the quickest and cheapest solution. Remember, you’re only as secure as your weakest link.

 

No Comments »

Hack Back?

Posted: May 9th, 2013
Filed under: IIS & HTTP


As discussed in a previous post on incident response, there really isn’t any form of authority one can call in the event of a hack attack. So, if we live in a world where we are left to fend for ourselves in cases of cyber criminality, what are we to do?

One potential course of action to take, in the absence of authorities or first responders, while under attack is to hack back. However, even in this regard there are not sufficient laws to help people and organizations defend themselves. In fact, if anything, there are laws that could land those who hack back in trouble.

For example, cybercrime laws in the US have extensive provisions for what constitutes cybercrime. However, none have provisions that define exceptions to the rule for cases of self-defense. If an organization or individual were to attempt to stop such an attack by attacking the machine(s) where the attack originated, they may not be able to plead “self-defense.” In fact, their efforts may be categorized as an attack, and they may face legal repercussions.

The issue of hacking back isn’t just one that has beleaguered technical people, it’s even become a debate for lawyers. To hear the legal side of things, the Federalist Society has a recorded discussion on the legality of hacking back between a group of lawyers.

Problems with Hack Back

Legality aside, there are other issues that arise when considering hacking back. For one, attackers often don’t just attack from their own machines, but from botnets or zombie machines (i.e. machines belonging to other unsuspecting individuals and organizations that the attackers have been able to virtually own). In a case like this, hacking back would really mean attacking and shutting down or damaging machines belonging to people who otherwise have nothing to do with the attack. This would really just make life miserable for the person or organization in the middle of it all, and make the person who thinks they are defending themselves somewhat of a bad guy.

But it’s Kind of Like the Real World…

Criminal laws in most countries have express clauses defining what constitutes self-defense and upholding the right of an individual to use force in order to defend his or her body and property. So let’s take an example.

If some thieves ride up on a stolen bike to steal money from someone traveling in a car, and in defending him or herself the individual in the car ends up damaging the bike, the owner of the bike cannot file a complaint against the person in the car. Isn’t this similar to what happens in the online world when attackers hijack machines and use them to attack others? In the absence of any specific protection in the laws concerning cybercrime, shouldn’t provisions from the criminal laws come to the aid of beleaguered organizations which, when under attack, can attack back to control the damage?

It is strange that while laws don’t protect individuals and organizations, nations have already started using “hack back” as a strategy to strike back. Recently, stories came out about the United States attempting to hack back China after numerous state-sponsored hacks originating from the Chinese. This could be interpreted in two ways: there is indeed a shadow cyberwar occurring, or it was a defensive technique.

So what can organizations do? In light of the confusion in the law and the fact that the business world is more globally connected, organizations need to focus on strengthening their own assets against attacks. Using a red team approach is a good way to evaluate preparedness to respond to any type of attack. The red team approach allows a team of crack, commando-style infosec analysts to attack corporate IT assets in order to gauge how well those assets withstand attacks and how effective the incident response process is. Preparedness and knowledge are traits that can better equip you to deal with hack attacks, in lieu of the existence of a dedicated cyber-authority.


-Hiren

No Comments »