
POODLE SSL 3.0 Vulnerability: What it is and how to deal with it

Posted: October 17th, 2014 | Filed under: IIS & HTTP

 

from our friends at Net-Square

A vulnerability known as POODLE, an acronym for Padding Oracle On Downgraded Legacy Encryption, is making headlines this week. The bug is a flaw in the design of SSL version 3.0, a protocol that is more than 15 years old. However, SSL 3.0 is still widely supported, and nearly 97% of SSL servers are likely to be vulnerable to the bug, according to an October Netcraft survey.

[Chart: SSL v3.0 vulnerable servers]

 

This vulnerability allows an attacker to recover plaintext data from secure connections. Because SSL 3.0 has been so widely used over the last 15 years, it puts millions of browsers in jeopardy. As the chart above indicates, this is a vulnerability with a sweeping impact.

How does it happen?

Though users have upgraded to newer protocol versions (TLS 1.0, 1.1, and 1.2), many implementations remain backward compatible with SSL 3.0. When a web browser fails to connect using one of these newer versions, it may fall back to an SSL 3.0 connection in order to preserve a smooth user experience.

The other possibility is that a user is forced to step down to SSL 3.0. An attacker who has successfully mounted a man-in-the-middle (MITM) attack can cause connection failures, including the failure of TLS 1.0/1.1/1.2 connections, and thereby force the use of SSL 3.0. The attacker can then exploit the POODLE bug to decrypt secure content transmitted between a server and a browser. This protocol downgrade leaves the connection vulnerable to the attack, ultimately exposing the user’s private data.

Google’s Bodo Möller, Thai Duong, and Krzysztof Kotowicz published the complete security advisory which can be found on openssl.org.

Possible remediation

To avoid falling prey to attackers exploiting POODLE, avoid using public Wi-Fi hotspots when sending valuable information (online banking, accessing social networks via a browser, etc.). This is always a risk, but the POODLE vulnerability makes it even more dangerous.

The other recommendation is to disable SSL 3.0 and all previous versions of the protocol, both in your browser settings and on the server side, which avoids the problem entirely. SSL 3.0 is 15 years old now and has been superseded by the more up-to-date TLS protocol, which is supported by most modern web browsers.

DigiCert published a detailed step-by-step guide for disabling SSL 3.0.

Richard Burte also shared the command lines to disable SSL 3.0 on GitHub.
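For applications that terminate SSL/TLS themselves rather than relying on IIS/Schannel settings, the same remediation applies: accept TLS, refuse SSL 3.0. Below is a minimal illustrative sketch using Python’s standard ssl module (not taken from the guides above); the certificate and key paths are placeholders.

```python
import ssl

def make_tls_only_context(certfile, keyfile):
    """Build a server-side SSL context that refuses SSLv2/SSLv3 handshakes.

    Illustrative sketch only; certfile/keyfile are placeholder paths.
    """
    # PROTOCOL_SSLv23 negotiates the highest protocol version both sides support...
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    # ...so explicitly forbid the legacy protocols.
    ctx.options |= ssl.OP_NO_SSLv2
    ctx.options |= ssl.OP_NO_SSLv3
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

A context built this way can be used with wrap_socket() or handed to any framework that accepts an SSLContext, so clients attempting an SSL 3.0 handshake are simply refused.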


Zero-Day Vulnerability (CVE-2014-4114) in Windows Server Exploited by Russian Espionage Group “Sandworm”

Posted: October 14th, 2014 | Filed under: IIS & HTTP, Web and Application Security

 

A Russian espionage group is exploiting a zero-day vulnerability in Windows Server 2008 and 2012, iSIGHT Partners reported on Tuesday. Microsoft is currently working on a patch for the vulnerability (CVE-2014-4114), but a number of targets have already been hit.

When exploited, this vulnerability allows an attacker to remotely execute arbitrary code, but requires a specially crafted file and use of social engineering to convince a user to open the file. iSIGHT noted specifically that PowerPoint files were used to exploit the vulnerability.

While there are specific targets that have been named, iSIGHT is also quick to point out that the visibility of the attack is limited and there is potential for broader targeting beyond this group of targets. The known targets include:

  • NATO
  • Ukrainian government organizations
  • A Western European government organization
  • Energy sector firms (specifically in Poland)
  • European telecommunications firms
  • A United States academic organization

The team behind the attacks was dubbed the “Sandworm Team,” based on encoded references in command and control URLs and malware samples that refer to the sci-fi series Dune. iSIGHT reported that it has been monitoring the Sandworm Team since late 2013, and believes they were formed sometime in 2009.

Geopolitical Tensions Creating Targets

The takeaway here seems to be that the attacks do not only target governmental entities. The Sandworm Team has instead targeted entities that are geopolitically relevant in a broader sense: energy, telecommunications, education.

This should serve as a sign of potential threats to come. Private sector businesses that are strategically sensitive in a geopolitical sense might be on some state’s list of targets. This means organizations that share information with, provide services to, or provide infrastructure utilized by governmental organizations may be at risk. State-sponsored attacks will focus on targets with strategic significance, which can range from obvious ones like power grids and financial institutions to less obvious targets like research universities.

State-sponsored attacks are on the rise and the targets are becoming broader. Organizations that align themselves with sensitive entities should have a heightened sense of awareness and look to raise their defenses if needed.

We will update this post accordingly as the story continues to develop.


Takeaways and Questions from the Home Depot Data Breach

Posted: September 18th, 2014 | Filed under: IIS & HTTP

One of the main goals of spending time and money to implement information security is to make it difficult for hackers to get in and data to get out. When ‘hackers’ compromised Home Depot and stole upwards of 60 million credit card numbers recently, it wasn’t all that difficult.

The breach, which could be the largest in US history, occurred after a piece of malware (possibly the Backoff malware) made its way onto the point-of-sale systems at numerous Home Depot stores. When customers swiped their cards at checkout, the card data was captured and sent back to a server. If this sounds familiar, that’s because it is the same technique that was used in the Target breach last December.

A line that is being repeated in news and blogs is that the hackers didn’t do anything terribly complicated or anything that required a ton of hacking skill. Lines like this usually only come out of incidents that were caused by carelessness or ineptitude. Hacking a major corporation’s point-of-sale systems shouldn’t be easy; it should be hard. Stealing 60 million credit card numbers shouldn’t be easy; it should be hard. We don’t yet know all the details behind the breach, but some takeaways have already emerged:

  • Malware is still a potent threat - Signature-based antivirus is not capable of detecting new types of viruses or malware. Since antivirus and anti-malware tools depend on signature databases to detect and eliminate threats, new threats often go unseen until an incident occurs. This leaves a huge blind spot in organizations’ security infrastructure. However, this may not have been the case with Home Depot. As reported by ThreatPost, BackOff isn’t a complex piece of malware; it’s essentially a Windows Trojan re-purposed to run on a Windows-based POS, and it therefore should be detectable by antivirus. This means that Home Depot either did not have antivirus in place or it was not updated – either scenario is bad. That leads us to our next takeaway.
  • We don’t learn – This same style of attack was just used against a major U.S. retailer and was all over the news. Everyone knew about it – especially IT and security people – and yet the same style of attack was even more successful in the Home Depot incident. The lessons learned from Target should have raised guards enough to at least make sure that antivirus was properly installed on the servers managing the POS machines and updated regularly. Symantec has specifically addressed how its software detects point-of-sale malware, and many antivirus vendors were quick to add signatures for BackOff variants after they were discovered. In this instance, the vendors appear to be doing their part, but Home Depot seems to have failed to protect itself.
  • No PINs stolen, but that doesn’t matter - In a report issued by Home Depot, the company stated: “While the company continues to determine the full scope, scale and impact of the breach, there is no evidence that debit PIN numbers were compromised.” Unfortunately, that doesn’t matter. As Brian Krebs reported, the PIN-reset process at many banks is so out of date that a stranger with enough personal information can reset your PIN simply by using the automated voice system:

“Countless banks in the United States let customers change their PINs with a simple telephone call, using an automated call-in system known as a Voice Response Unit (VRU). A large number of these VRU systems allow the caller to change their PIN provided they pass three out of five security checks.”

  • Where does cybersecurity insurance come into play? - Business Insurance reported that Home Depot has $105M in cyber insurance to cover data breaches. Cyber liability insurance is a growing industry, as the threat of seriously damaging data breaches continues to grow. This raises the question: will organizations lean too heavily on insurance policies rather than implementing better security practices? That isn’t to say that Home Depot did this, but one has to wonder whether cyber insurance will give executives a level of comfort that detracts from investing in proper security.

Every breach that occurs is unfortunate, but it’s also a chance for everyone to learn and avoid potentially critical mistakes in the future. What do you think some of the major takeaways or questions coming out of the Home Depot breach are?


Why You Should Care About Bots, and How to Deal with Them

Posted: July 10th, 2014 | Filed under: IIS & HTTP

Has your site recently been bogged down by thousands of rapid requests from a distant land you do no business with? Have you been seeing spam in your form responses or comments? Are you seeing requests for pages that don’t exist on your site? If so, you may have bots.

Don’t worry, we all have bots. It’s a normal part of a site growing up. One day you’re launching, and the next day skimmers, scammers, and scrapers are scouring your site for information or holes they can poke through.

Bots are a ubiquitous part of the web universe at this point, flying through the pipes of the internet looking for prey. Normal as they may be, there is reason to be concerned with bots. One of the most recent reports from Incapsula puts bot traffic at 61% of all web traffic. That number is nothing to sneeze at, mostly because sneezing at things is rude, but also because it’s a very big number. While this is significant, there is still some debate around whether or not this traffic is visible in web analytics.

What do bots actually do?

“Are you a good bot, or a bad bot?”

Well Dorothy, not all bots are bad. In fact, there are some good bots that do things like crawl your site for search engines, or monitor RSS feeds. These bots are helpful and you’ll want to make sure that they don’t encounter any obstacles when crawling your site. Bad bots are primarily used for reconnaissance, but can pose various degrees of threat.

Lesser Threat

  • Email scrapers - These bots scour sites for email addresses to harvest, which can lead to lots of spam or potentially harmful bait emails (e.g., phishing attacks or malware).
  • Link spam - Ever see spam in your comments or in your form submission results? If those fields allow links to be submitted, they can cause a lot of trouble. A link to a malware-hosting site in your comments could endanger your users.

Greater Threat

  • Link spam (part II) – Imagine a scenario where someone with admin privileges clicks a link from a form submission, or in a spam comment. Now imagine that the link is to a site that installs a key logger on the admin’s machine. Next time the admin logs into your site or server, the credentials are captured, and all the protection you’ve put in place is void.

Automated Exploits

  • (Aimlessly) Search and destroy - These bots can potentially do a lot of harm if they find a vulnerability on your site. While dangerous, they operate without any real direction: armed with a list of popular known exploits, they crawl the web and throw those exploit tricks at every site they encounter. If they come across a site with a hole, that site is added to a queue for further exploitation.
  • Targeted Search and destroy - The same as above, but with a targeted list of sites to crawl.

What’s the end game?

Bad bots are a way for an outsider to own your server. Once the server is controlled, the bad guys can do a range of things with it:

  • Steal sensitive data stored there (personal info, credit card numbers, etc.)
  • Steal account passwords
  • Send malicious emails through it
  • Attack other sites/servers with it

Why Stop Bots?

An overwhelming number of bot requests can bog down a site and cause it to run very slowly, just as a large amount of legitimate traffic can eat up resources and slow down a site. This is a problem for a couple of reasons:

  1. Slow site = unresponsive pages = unhappy customers = lost sales
  2. Slow site = SEO hit (site speed is a factor in SEO ranking)

Prevent heavy resource usage costs

What adds insult to injury after a slowed site prevents sales? A huge bill from your hosting provider! Yes, all those extra requests and all those extra resources being used typically cost money.

Prevent data theft

Guess what else will cost you money: data theft! Of course, this can also hugely damage public perception and reputation – which are invaluable. Not to mention the fact it could mean other people’s information, money, and identities are put at risk.

Signs You Have Bots

There are a number of ways to spot bot traffic in your logs, but if you don’t know what to look for, you will likely never know you have a bot problem. Here are a few tell-tale signs of bots hitting your site (a small log-scanning sketch follows the list):

  • Rapid requests - A normal user browsing a site won’t request 100 pages in a few seconds, as most internet users do not have super-human reading and clicking abilities.  However, bots do. And bots will make multiple requests per second by simply following links they find, or attempting to complete forms.
  • Lots of requests from the same IP address, all over the site – Aside from making a ton of requests in quick succession, bots can typically be spotted by a long trail of requests. No matter how interesting the content on your site, most real users won’t browse every page – unless it happens to be a very small site. Bots will. Most real users will also be able to successfully submit a form on the first try or two – given they have an IQ higher than that of a lemming. A bot, which has no IQ, may not be able to do so. You may, in fact, see multiple failed attempts to submit a form, all from the same IP.
  • Requests without sessions - Real users browsing your site will normally accept cookies; bots often will not. Requests from IPs that never establish a session are likely bots.
  • Requests at odd times/locations - If you see requests at times or from locations that do not make sense for your business, then it could be a sign of bot traffic. For example, if you only do business in North America, but you see a number of requests from Eastern Europe in the middle of the night, then it’s definitely worth investigating.
  • Suspicious user-agents - A general way to spot suspicious user-agents is by looking for rare or infrequent user-agents that aren’t associated with a browser (or at least a well-known or popular one). Once you find them, take a look at their activity in your logs for anything suspicious. There are also lists of known dangerous bots that can be used for reference. Lastly, a simple Google search should indicate if they are known to be bad or not.
  • Bad data - You may be accustomed to seeing bad data (spam, empty submissions) come through your forms, but be sure to look at it with a critical eye. Spam in your forms or empty submissions can be dangerous.
  • Bad requests - Well-behaved users won’t typically type directories into the address bar when navigating your site; it’s much easier to navigate by clicking links. So, if you see a bunch of requests for URLs with a .asp extension on your all-PHP site, you may have a bot poking around for a known vulnerability.
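As a concrete illustration of the first two signs above, here is a small, hypothetical Python sketch that scans an access log for IPs making an unusually large number of requests in a short window. The log path, the log-line pattern, and the thresholds are assumptions to adapt to your own environment; it is a starting point, not a complete bot detector.

```python
import re
from collections import defaultdict
from datetime import datetime

# Assumed (hypothetical) common-log-format line, e.g.:
# 203.0.113.7 - - [10/Jul/2014:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')
TIME_FMT = "%d/%b/%Y:%H:%M:%S %z"

def suspicious_ips(log_path, window_seconds=10, max_requests=100):
    """Return IPs that exceed max_requests within any window_seconds span."""
    hits = defaultdict(list)  # ip -> list of request timestamps
    with open(log_path) as f:
        for line in f:
            m = LINE_RE.match(line)
            if not m:
                continue
            ip, ts = m.group(1), datetime.strptime(m.group(2), TIME_FMT)
            hits[ip].append(ts)

    flagged = set()
    for ip, times in hits.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            # Slide the window forward until it spans at most window_seconds.
            while (t - times[start]).total_seconds() > window_seconds:
                start += 1
            if end - start + 1 > max_requests:
                flagged.add(ip)
                break
    return flagged

if __name__ == "__main__":
    for ip in sorted(suspicious_ips("access.log")):
        print("possible bot:", ip)
```

Run against a real log, a handful of IPs each making hundreds of requests within a few seconds is a strong hint that the traffic is automated.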

Stop Bots with ServerDefender VP

By now, you’re probably asking: How can I stop the bot uprising? Don’t worry, you won’t need John Connor for this mission. You can stop these bots much more easily, and without 3 sequels, mind you. Using ServerDefender VP, you can set up a bot policy in minutes and prevent the pests of the internet from causing you headaches.

1) Figure out your policy. A very strict bot policy will require sensitive security controls with a very low tolerance for bot-like behavior. This will keep bot traffic down, but could put you at risk of blocking good bots or legitimate traffic. Things to take into consideration here:

  • What does normal user behavior look like?
  • Keep in mind that if your site is very error prone (be honest with yourself here), you may want to be more lenient with error-based thresholds, since legitimate users will trip errors too.

2) Launch ServerDefender VP’s settings manager and enter expert view. Under the Session Management tab, go to Bot Policy. Click the Configure button to launch the configuration panel.

3) Once you know how you plan to handle bots, you can jump into the configuration. Here’s a brief rundown of what each control does:

“Begin applying bot detection counters after ____ requests” - This tells ServerDefender VP when it should begin watching an IP for bot behavior. If you set this value to 1, ServerDefender VP will begin monitoring an IP’s requests after its very first request, which provides essentially no leeway. You can give good bots just enough leeway by easing up on when SDVP begins looking for bots. Providing some leeway isn’t necessarily a bad thing, as bad bots are likely to make many, many requests, not just a few.

“Maximum allowed errors per second” - As explained earlier, normal users don’t make hundreds of requests per second, and therefore they don’t make hundreds of errors per second either. Once the request count set in the previous control is reached, the maximum-errors-per-second setting kicks in. This setting determines the strength of your bot policy; it is really where you’ll trap your bots.

Setting this to a higher value provides some leeway for good bots to crawl your site without being penalized for errors; the lower you set it, the stricter it is. Typically, a single-digit value provides enough padding to avoid blocking users committing innocuous errors, while ensuring trouble-making bots do not slip through.

“Percentage of requests allowed without referrer” & “percentage of errors allowed without referrer” - These are good to keep at 100%, as legitimate users do not always send a referrer with their requests. Once the bot controls are in place, you can also configure the user-agent blacklist. You can either add bad user-agents you’ve encountered in the past or pull them from one of the many published lists of bad user-agents, if you choose to do so.
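To make these thresholds more concrete, here is a toy Python sketch of the general pattern they describe: ignore an IP’s first few requests, then count its errors per second and flag it once the rate is exceeded. This is an illustration of the idea only, not ServerDefender VP’s actual implementation; the class and parameter names are hypothetical.

```python
import time
from collections import defaultdict

class BotPolicy:
    """Toy per-IP counters illustrating a 'grace requests + max errors/sec' policy."""

    def __init__(self, grace_requests=5, max_errors_per_second=3):
        self.grace_requests = grace_requests
        self.max_errors_per_second = max_errors_per_second
        self.requests = defaultdict(int)      # ip -> total requests seen
        self.error_times = defaultdict(list)  # ip -> timestamps of recent errors

    def should_block(self, ip, is_error, now=None):
        """Record one request; return True if this IP should now be blocked."""
        now = time.time() if now is None else now
        self.requests[ip] += 1

        # Don't apply bot-detection counters until the grace threshold is passed.
        if self.requests[ip] <= self.grace_requests:
            return False

        if is_error:
            window = self.error_times[ip] + [now]
            # Keep only errors from the last second.
            window = [t for t in window if now - t <= 1.0]
            self.error_times[ip] = window
            if len(window) > self.max_errors_per_second:
                return True
        return False
```

A real deployment would also expire stale per-IP state and whitelist known good crawlers, which is exactly the trade-off the settings above ask you to weigh.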

Questions? Need Advice or Help?

We’re always glad to lend a hand. If you have any web app security questions or would like to try out ServerDefender VP for yourself, you can email us at support@port80software.com.

 

Bonus Bot Info

NPR reported on the severity of the bot threat recently, bringing the conversation to the general public.


Port80 Donating $50,000 in Web Security Software to Secure Schools

Posted: April 9th, 2014 | Filed under: IIS & HTTP

Big data breaches have been in the spotlight recently. You’ve likely heard of the ones happening at big corporations, but what about those happening at schools?

Educational institutions are at risk, and many don’t have the budget to implement proper security. At Port80, we’d like to do our part to help make education more secure. We will be awarding $50,000 worth of web security software to 25-50 educational organizations by September 15.

Does putting a piece of software in place make you automatically secure? Of course not, but for those who have vulnerable systems that cannot be quickly or easily fixed, we’d like to help.

Learn More | Apply Now
 

Please pass this message along to anyone you think may qualify. We hope that together we can make education a more secure place.

The Port80 Software Team
