Posted: July 31st, 2014 | Filed under: Web and Application Security
from our partners at Net-Square
Criminals successfully bypassed an Android-based two-factor authentication system during their spear phishing and malware attacks. The malicious campaign, known as Operation Emmental, was discovered by a security software company earlier this year.
The criminal gang behind Operation Emmental used phishing attacks to gather bank customers’ personal information and other sensitive data.
They used this information to bypass bank authentication systems used by 34 different banks across four countries.
These attacks were first discovered about five months ago and have been actively targeting customers of financial services firms from Switzerland, Austria, Sweden and Japan. All of the targeted banks use a session-based token, sent via SMS, to act as a second factor for authenticating users before they’re allowed to log into their online bank account.
How Operation Emmental Was Executed
It all starts with a fake email that looks like it was sent by a legitimate and well-known entity. Then the cyber criminals serve malware attached to the email as an apparently harmless Control Panel (.cpl) file.
If users execute the malware, which may be disguised as a Windows update tool, the malware changes their system’s settings to point to an attacker-controlled Domain Name System (DNS) server. This allows the attackers to secretly observe and control all HTTP traffic. Next, a new root Secure Sockets Layer (SSL) certificate is installed, which looks legitimate and prevents web browsers from warning victims of a bad or insecure certificate as they normally would.
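One way defenders can spot this kind of tampering is to compare a machine’s configured DNS resolvers against a known-good list. Here is a minimal sketch of that check in Python; the allowlist and the configured-resolver list are hypothetical examples (on a real machine you would read the resolvers from the OS network configuration):

```python
# Flag DNS resolvers that are not on an approved list. A silently
# changed resolver is one indicator of the tampering described above.
APPROVED_RESOLVERS = {"8.8.8.8", "8.8.4.4", "1.1.1.1"}  # example allowlist

def unexpected_resolvers(configured, approved=APPROVED_RESOLVERS):
    """Return any configured resolvers that are not on the approved list."""
    return sorted(set(configured) - set(approved))

if __name__ == "__main__":
    # Hypothetical resolvers, as if read from the system's network settings
    configured = ["8.8.8.8", "203.0.113.66"]
    print(unexpected_resolvers(configured))  # -> ['203.0.113.66']
```

An empty result means every configured resolver is on the allowlist; anything else deserves a closer look.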
The malware then deletes itself, leaving behind only the altered configuration settings. This makes the attack difficult to spot: when users with infected computers eventually try to access the bank’s website, they are instead pointed to a malicious site that looks and works just like the real bank website.
The next phase of the attack occurs when users log into the fake banking site. Once logged in, users are instructed to download and install an Android app that generates one-time tokens for logging into their bank. In reality, it will intercept SMS messages from the bank and forward them to a command-and-control server or to another mobile phone number.
This means that the cybercriminals get not only the victims’ online banking credentials through the phishing website, but also the session tokens needed to bank online. The criminals end up with full control of the victims’ bank accounts. Because both the credentials and the authenticated session are compromised, everything appears normal to the bank, as if the user were merely conducting a typical financial transaction. In reality, the user’s bank account is potentially being drained without any of the typical banking warning flags going up.
What can you do to protect yourself and your users?
One recommendation comes from the researchers who first discovered this attack: improve the verification schemes for users and user transactions. If the verification process went beyond multi-factor authentication with session-based tokens sent via SMS, it could prevent this particular type of campaign.
In addition, banks should warn their customers never to click on links in emails, but instead to type or paste the bank’s address directly into the browser’s address bar.
The remediation report offers an additional recommendation: banks should implement open source Domain-based Message Authentication, Reporting & Conformance (DMARC) technology. DMARC helps verify an email’s origin and domain name, blocking many types of phishing attacks against customers. It is fundamentally important because it can ascertain whether an email claiming to come from a given domain is spoofed or impersonated.
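For illustration, a DMARC policy is published as a DNS TXT record at `_dmarc.<domain>`. A record like the following (the domain and reporting address here are examples only) tells receiving mail servers to reject messages that fail SPF/DKIM alignment and to send aggregate reports:

```
_dmarc.example-bank.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example-bank.com"
```

The `p=reject` policy is the strictest setting; organizations typically start with `p=none` (monitor only) and tighten the policy once the aggregate reports confirm that legitimate mail is passing.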
The lessons learned from Operation Emmental are ones that stretch beyond banking. These concepts can be applied to any type of app with a secure login. If you have any questions about testing or securing your app, please feel free to reach out to us!
Posted: July 10th, 2014 | Filed under: IIS & HTTP
Has your site recently been bogged down by thousands of rapid requests from a distant land you do no business with? Have you been seeing spam in your form responses or comments? Are you seeing requests for pages that don’t exist on your site? If so, you may have bots.
Don’t worry, we all have bots. It’s a normal part of a site growing up. One day you’re launching, and the next day skimmers, scammers, and scrapers are scouring your site for information or holes they can poke through.
Bots are a ubiquitous part of the web universe at this point, flying through the pipes of the internet looking for prey. Normal as they may be, there is reason to be concerned with bots. One of the most recent reports from Incapsula puts bot traffic at 61% of all web traffic. That number is nothing to sneeze at, mostly because sneezing at things is rude, but also because it’s a very big number. While this is significant, there is still some debate around whether or not this traffic is visible in web analytics.
What do bots actually do?
“Are you a good bot, or a bad bot?”
Well Dorothy, not all bots are bad. In fact, there are some good bots that do things like crawl your site for search engines, or monitor RSS feeds. These bots are helpful and you’ll want to make sure that they don’t encounter any obstacles when crawling your site. Bad bots are primarily used for reconnaissance, but can pose various degrees of threat.
- Email scrapers - These bots will scour sites for email addresses to harvest, which can lead to lots of spam or potentially harmful bait emails (i.e. phishing attacks, or malware).
- Link spam - Ever see spam in your comments or in your form submission results? If these form fields allow links to be submitted, they can cause a lot of trouble. A link in your comments to a site hosting malware could endanger your users.
- Link spam (part II) – Imagine a scenario where someone with admin privileges clicks a link from a form submission, or in a spam comment. Now imagine that the link is to a site that installs a key logger on the admin’s machine. Next time the admin logs into your site or server, the credentials are captured, and all the protection you’ve put in place is void.
- (Aimlessly) Search and destroy - These bots can potentially do a lot of harm if they find a vulnerability on your site. While these bots are dangerous, they operate without any real direction. Armed with a list of popular known exploits, they will crawl the web and throw those known exploit tricks at every site they encounter. If they come across a site with a hole, a hook will add that site to a queue for further exploitation.
- Targeted Search and destroy - The same as above, but with a targeted list of sites to crawl.
What’s the end game?
Bad bots are a way for an outsider to own your server. Once the server is controlled, the bad guys can do a range of things with it:
- Steal sensitive data stored there (personal info, credit card numbers, etc.)
- Steal account passwords
- Send malicious emails through it
- Attack other sites/servers with it
Why Stop Bots?
An overwhelming amount of bot requests can bog down a site and cause it to run very slowly, just like how a large amount of legitimate traffic can eat up resources and slow down a site. This is a problem for a couple of reasons:
- Slow site = unresponsive pages = unhappy customers = lost sales
- Slow site = SEO hit (site speed is a factor in SEO ranking)
Prevent heavy resource usage costs
What adds insult to injury after a slowed site prevents sales? A huge bill from your hosting provider! Yes, all those extra requests and all those extra resources being used typically cost money.
Prevent data theft
Guess what else will cost you money: data theft! Of course, this can also hugely damage public perception and reputation, which are invaluable. Not to mention that it could mean other people’s information, money, and identities are put at risk.
Signs You Have Bots
There are a number of ways to spot bot traffic in your logs, but if you don’t know what to look for, you will likely never know you have a bot problem. Here are a few tell-tale signs of bots hitting your site:
- Rapid requests - A normal user browsing a site won’t request 100 pages in a few seconds, as most internet users do not have super-human reading and clicking abilities. However, bots do. And bots will make multiple requests per second by simply following links they find, or attempting to complete forms.
- Lots of requests from the same IP address - all over the site – Aside from making a ton of requests in quick succession, bots can typically be spotted by a long trail of requests. No matter how interesting the content on your site, most real users won’t browse every page on the site – unless it happens to be a very small site. Bots will do this. Most real users will also usually be able to successfully submit a form on the first try or two – given they have an IQ higher than that of a lemming. However, a bot, which has no IQ, may not be able to do so. You may, in fact, see multiple failed attempts to submit a form, all from the same IP.
- Requests without sessions - Real users browsing your site will normally accept cookies; bots often will not. Requests from IPs that don’t have sessions are likely bots.
- Requests at odd times/locations - If you see requests at times or from locations that do not make sense for your business, then it could be a sign of bot traffic. For example, if you only do business in North America, but you see a number of requests from Eastern Europe in the middle of the night, then it’s definitely worth investigating.
- Suspicious user-agents - A general way to spot suspicious user-agents is by looking for rare or infrequent user-agents that aren’t associated with a browser (or at least a well-known or popular one). Once you find them, take a look at their activity in your logs for anything suspicious. There are also lists of known dangerous bots that can be used for reference. Lastly, a simple Google search should indicate if they are known to be bad or not.
- Bad data - You may be accustomed to seeing bad data (spam, empty) come through your forms, but be sure to look at it with a critical eye. Spam in your forms, or empty submissions can be dangerous.
- Bad requests - Well-behaved users won’t typically type directories into the address bar when navigating your site; it’s much easier to navigate by clicking links on the site. So, if you see a bunch of requests for URLs with a .asp extension on your all-PHP site, then you may have a bot poking around for a known vulnerability.
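Several of these signs can be spotted with a quick pass over your access logs. As a toy sketch (the log entries, the one-second threshold, and the list of bad user-agents below are simplified assumptions, not any particular server’s log format), here is how you might flag IPs that make rapid-fire requests or announce a known-bad user-agent:

```python
from collections import defaultdict

# Example bad user-agent strings; real lists are much longer
SUSPICIOUS_AGENTS = {"sqlmap", "masscan", "zgrab"}

def flag_bots(entries, max_per_second=10):
    """entries: iterable of (ip, unix_second, user_agent) tuples.
    Returns the set of IPs that look like bots."""
    per_second = defaultdict(int)
    flagged = set()
    for ip, second, agent in entries:
        per_second[(ip, second)] += 1
        if per_second[(ip, second)] > max_per_second:
            flagged.add(ip)            # rapid requests from one IP
        if agent.lower() in SUSPICIOUS_AGENTS:
            flagged.add(ip)            # known-bad user-agent string
    return flagged

if __name__ == "__main__":
    entries = [("198.51.100.7", 1000, "sqlmap")] + \
              [("203.0.113.9", 1000, "Mozilla/5.0")] * 12
    print(sorted(flag_bots(entries)))  # -> ['198.51.100.7', '203.0.113.9']
```

The same per-IP tally can be extended to the other signs above, such as counting failed form submissions or requests that never carry a session cookie.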
Stop Bots with ServerDefender VP
By now, you’re probably asking: How can I stop the bot uprising? Don’t worry, you won’t need John Connor for this mission. You can stop these bots much more easily, and without 3 sequels, mind you. Using ServerDefender VP, you can set up a bot policy in minutes and prevent the pests of the internet from causing you headaches.
1) Figure out your policy. A very strict bot policy will require sensitive security controls that have a very low tolerance for behavior that looks like bot behavior. This will keep bot traffic down, but could put you at risk of blocking good bots or legitimate traffic. Things to take into consideration here:
- What does normal user behavior look like?
- Keep in mind that if your site is very error prone (be honest with yourself here), you may want to be more lenient with your error thresholds so legitimate users aren’t blocked.
2) Launch ServerDefender VP’s settings manager and enter expert view. Under the Session Management tab, go to Bot Policy. Click the Configure button to launch the configuration panel.
3) Once you know how you plan to handle bots, you can jump into the configuration. Here’s a brief rundown of what each control does:
“Begin applying bot detection counters after ____ requests” - This tells ServerDefender VP when it should begin sniffing an IP for bot behavior. If you set this value to 1, ServerDefender VP will begin monitoring an IP’s requests for bot behavior after its first request. This essentially provides no leeway. You can provide just enough leeway for the good bots by easing up on when SDVP begins looking to detect bots. Providing some leeway isn’t necessarily a bad thing, as bad bots are likely to make many, many requests, not just a few.
“Maximum allowed errors per second” - As explained earlier, normal users don’t make hundreds of requests per second, and therefore they do not make hundreds of errors per second. Once the number of requests set in the previous control group is reached, then the max errors per second allowed configuration will kick in. This area will determine the strength of your bot policy, as this setting is really where you’ll trap your bots.
Setting this to a higher value provides some leeway for good bots to crawl your site without being penalized for errors. The lower you set this value, the more strict it is. Typically, setting this number to a single digit value should provide sufficient padding to prevent blocking users committing innocuous errors, while ensuring trouble making bots do not pass through.
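To make the interaction of these two settings concrete, here is a toy model of such a per-IP counter in Python. This is only an illustration of the general idea (a grace window of requests, then an errors-per-second limit), not ServerDefender VP’s actual implementation:

```python
class BotCounter:
    """Toy per-IP counter: start watching an IP after `grace` requests,
    then flag it once it exceeds `max_errors_per_sec` errors in a second."""

    def __init__(self, grace=5, max_errors_per_sec=3):
        self.grace = grace
        self.max_errors = max_errors_per_sec
        self.requests = {}   # ip -> total requests seen
        self.errors = {}     # (ip, second) -> error count

    def record(self, ip, second, is_error):
        """Record one request; return True if the IP should be blocked."""
        self.requests[ip] = self.requests.get(ip, 0) + 1
        if self.requests[ip] <= self.grace:
            return False                 # still inside the grace window
        if is_error:
            key = (ip, second)
            self.errors[key] = self.errors.get(key, 0) + 1
            if self.errors[key] > self.max_errors:
                return True              # too many errors: looks like a bot
        return False
```

With `grace=2` and `max_errors_per_sec=2`, for example, an IP’s first two requests are ignored entirely, and it is only flagged on its third error within a single second after the grace window ends.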
“Percentage of requests allowed without referrer” & “Percentage of errors allowed without referrer” - These are good to keep at 100%, as legitimate users do not always send a referrer with their requests. Once the bot controls are in place, you can also configure the blacklist for user-agents. You can either add bad user-agents you’ve encountered in the past, or add them from a list of bad user-agents. There are plenty of articles and lists of bad user-agents which you can pull from, if you choose to do so.
Questions? Need Advice or Help?
We’re always glad to lend a hand. If you have any web app security questions or would like to try out ServerDefender VP for yourself, you can email us at firstname.lastname@example.org
Bonus Bot Info
NPR reported on the severity of the bot threat recently, bringing the conversation to the general public.
Posted: April 9th, 2014 | Filed under: IIS & HTTP
Big data breaches have been in the spotlight recently. You’ve likely heard of the ones happening at big corporations, but what about those happening at schools?
Educational institutions are at risk, and many don’t have the budget to implement proper security. At Port80, we’d like to do our part to help make education more secure. We will be awarding $50,000 worth of web security software to 25-50 educational organizations by September 15.
Does putting a piece of software in place make you automatically secure? Of course not, but for those who have vulnerable systems that cannot be quickly or easily fixed, we’d like to help.
Learn More Apply Now
Please pass this message along to anyone you think may qualify. We hope that together we can make education a more secure place.
The Port80 Software Team
Posted: April 7th, 2014 | Filed under: IIS & HTTP
Data breaches. They don’t just happen to the retail big boys like Target and Neiman Marcus. They happen to big and small organizations, and every size in between. It was recently revealed that Texas liquor chain Spec’s Wine, Spirits, and Finer Foods fell victim to a serious data breach. Spec’s has 155 locations around Texas, ‘where everything is bigger’… Including the breaches!
Half a Million Victims
According to Spec’s statements, the breach affected fewer than 5% of their total transactions, or fewer than 550,000 customers. While half a million customers is a sizable number of victims, Spec’s may be counting themselves lucky, as the breach only affected 34 smaller neighborhood stores, rather than all of their locations. Information exposed during the breach may include bank routing numbers, as well as payment card or check information.
Spec’s problems began on October 31, 2012, when one of their computer systems was compromised. When did the compromise end, you ask? The breach ended as late as March 20. For those counting, that’s nearly 17 months of uninterrupted access to data.
Spec’s spokeswoman Jenifer Sarver told the Houston Chronicle that the breach was, “a very sophisticated attack by a hacker … who went to great lengths to cover their tracks.” Sarver also went on to reveal that, “It took professional forensics investigators considerable time to find and understand the problem then make recommendations for Spec’s to fully address and fix them.”
What makes this breach newsworthy?
Every breach story is bad in some regard:
- There are victims whose information is no longer private
- There are mistakes made by staff
- There are property/money losses
Some concerning points about this breach and why we think it’s relevant:
- The breach went on for 17 months
- The breach was first noticed by banking institutions when suspicious transactions began, not by Spec’s IT team
- Evidence of the breach may have surfaced over a year ago, but no action was taken
- Resolving this problem after discovery has taken considerable time
What we can learn from this breach
The Spec’s Wine, Spirits, and Finer Foods breach illustrates the need for a strong security posture, no matter the size of an organization.
One security tool that makes monitoring, identifying, and responding to attacks much simpler for small and medium sized organizations is ServerDefender VP. This powerful tool is easy to use and helps protect against more than just a list of known attack signatures.
Posted: March 6th, 2014 | Filed under: Web and Application Security | Tags: information security, infosec, sql injection vulnerabilities, vulnerability scanner, web app scanner, web application security, web application vulnerability scanner, web security, xss vulnerabilities
Many of our customers come to us asking how they can test their web applications for vulnerabilities. For an automated approach, there are numerous web application vulnerability scanners out there that can help detect vulnerabilities. With so many options, picking the appropriate scanner can be a little bit tricky. Which is most accurate? Which is the most thorough? The answer is rarely clear.
Lucky for us, the folks over at Security Tools Benchmarking recently assembled their yearly list of web scanners, aptly named “The Web Application Vulnerability Scanners Benchmark”. The list is very comprehensive and puts both open source and commercial scanners through a gamut of tests. The assessment looks at twelve different aspects of each tool to assist individuals and organizations in their evaluation of vulnerability scanners.
In total, 63 different web application vulnerability scanners were tested (we’d say that’s pretty thorough), with 49 of them being free or open-source projects and 14 being commercial.
The following features were assessed during the evaluation:
- The ability to detect Reflected XSS and/or SQL Injection and/or Path Traversal/Local File Inclusion/Remote File Inclusion vulnerabilities.
- The ability to scan multiple URLs at once (using either a crawler/spider feature, URL/Log file parsing feature or a built-in proxy).
- The ability to control and limit the scan to internal or external hosts (domain/IP).
You can organize the scanners by commercial or open source and see a quick comparison of each scanner’s features. From there you can dive into a detailed report for individual scanners.
View the full commercial comparison.
View the full open source comparison.
If you’re looking for a scanner, we encourage you to take a look at the complete report and evaluation criteria over at the Security Tool Addict blog. If you have questions about remediating or securing vulnerabilities after your scan, you can always contact Port80 Software for advice.