Posted: April 4th, 2013 | Filed under: IIS & HTTP
Ask any CISO what their topmost concern is right now, and 7 out of 10 will tell you it's bring your own device (BYOD). More than 70% of IT executives believe that companies without BYOD will be at a competitive disadvantage.
The importance of this topic can be gauged by the fact that in just the last week, we have received calls from five client CISOs who have asked for our opinion on the subject. During the same period we conducted a security review of two organizations on behalf of a client, and both of these organizations had a BYOD policy.
BYOD surely has its advantages. Employees are happy because it gives them the freedom to use their own devices, which increases flexibility, convenience, and productivity. Companies are happy because it cuts the cost of deploying and managing sometimes hundreds of devices. It's not surprising, therefore, that BYOD has become a natural favorite amongst both employees and employers. In fact, our President Hiren Shah was a visionary in this regard, as he implemented BYOD way back in 2006, when he released a policy allowing select models of mobile phones access to corporate e-mail.
Posted: February 28th, 2013 | Filed under: IIS & HTTP
From our partners at Net-Square
Two-factor authentication, as the name suggests, is a form of authentication that requires a user to present two or more of the three authentication factors: something the user knows (e.g. a password or PIN), something the user has (e.g. an ATM card or smart card), and something the user is (e.g. a biometric characteristic such as a fingerprint). It is not a new concept for the masses in this era of online banking and internet trading, but two-factor authentication has become increasingly important for online applications because of ever-increasing hacking attacks.
Recently, a US-based company offering free cloud storage services was hacked: a few accounts were broken into, and one employee account containing user data files was breached. This is dangerous, as hackers may obtain confidential data belonging to account owners. Since this service was being used worldwide, with companies storing important data online, the company decided to implement two-factor authentication to prevent such attacks.
Posted: January 8th, 2013 | Filed under: Web and Application Security | Tags: hackers, infosec, web security
From our partners at Net-Square
The year that was 2012 has ended, and it is time to start thinking about challenges that the New Year shall bring. As defenses get stronger, so do attacks. 2013 shall be the year of hybrid attacks – targeting man and machine together. The greatest challenge for 2013 shall lie in re-designing your Information Security strategy to measure up to heightened expectations. As you make your plans, let me share with you my top 5 thoughts for improving the maturity of your InfoSec program.
1. Plan on staffing a Red Team
A Red Team is “an independent group that seeks to challenge an organization in order to improve effectiveness”. Red Teaming has its origins in the military. In an InfoSec context, Red Teams serve as an “intelligence agency” to identify gaps, vulnerabilities and shortcomings in your organization’s IT infrastructure. The sole agenda of the Red Team is to find the holes before attackers do, while continuously coming up with new threat scenarios that impact the organization’s IT function.
2. Ensure that all IT purchases require InfoSec approval
There are few tasks more thankless than having to maintain security for an IT system that is defective by design. Talking to our clients revealed that 80% of all vulnerabilities fall under the “we know it already” category. “We have inherited a mess”. “We know it is broken, but what do we do?” Do these phrases sound familiar? Well then, make it a policy decision to evaluate and test all major IT requisitions before signing the cheque.
3. Insist upon pre-tested 3rd party developed software
The majority of the vulnerabilities we find lie in 3rd party developed software, or in heavily customized implementations of large packaged applications. Shouldn't the software vendor have their software tested for security vulnerabilities before selling it to your organization? It is time to insist on it during the procurement cycle, and I would add: insist on getting a white-box testing certification.
4. Publish a testing calendar for the entire year…and stick to it!
Announce all your vulnerability assessment and penetration testing schedules for the entire year at the very beginning of 2013. Schedule quarterly or half yearly tests for all critical applications, and at least annual tests for all others. Let all your developers and vendors know of the testing schedules. Do not let the testing schedule get sidetracked by release cycles. Software production shall always be delayed. Delaying your testing shall only prolong the agony.
5. Conduct at least one surprise attack on a critical application
Hackers aren’t going to wait until after your system migration is complete. Hackers aren’t going to spare you during peak transaction hours. Hackers will target your live systems, not your UAT systems. And your IT team will always be stressed – 365 days a year. That is reality. So why conduct fairy-tale penetration testing? As a leader of your InfoSec organization, plan on conducting a surprise attack on the production servers of your critical application during peak business hours. Let me just say that this shall be the shortest path to figuring out the biggest gaps in your organization.
As always, I would like to quote “that which does not kill you makes you stronger.”
Posted: December 12th, 2012 | Filed under: IIS & HTTP | Tags: application security, blacklist, information security, Port80 Software, web application security, whitelist
Does the Blacklist Approach Work?
Traditionally, IT security is thought of from a threat perspective. It always brings into focus thoughts of protecting applications, systems, and infrastructure from viruses, malware, and other threats posed to IT assets. Therefore one is always focused on identifying new threats and making sure they get integrated into the "Blacklist," an "allow all, except" list that is maintained to protect one's assets. This is the same principle on which many anti-virus, anti-malware, and other security product providers work: you update the signatures, the blacklist is updated, and you are protected from a certain threat, which, by the way, is out in the open and known to everyone. While we have our thoughts on whether this approach is truly effective in protecting against viruses and malware, our views on application security are very clear. The blacklist approach doesn't work, especially not today when attacks have become very sophisticated.
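To make the "allow all, except" idea concrete, here is a minimal sketch of a blacklist input filter (the patterns are illustrative, not a real product's signature set). Note how an attack variant that isn't on the list sails straight through:

```python
import re

# A naive "allow all, except" filter: reject anything matching a known-bad pattern.
BLACKLIST = [re.compile(p, re.IGNORECASE) for p in (
    r"<script\b",    # classic script tag injection
    r"javascript:",  # javascript: URL scheme
    r"onerror\s*=",  # one known event-handler attribute
)]

def blacklist_filter(value: str) -> bool:
    """Return True if the input is allowed (matches no known-bad pattern)."""
    return not any(p.search(value) for p in BLACKLIST)

# The pattern we thought of is blocked...
blacklist_filter("<script>alert(1)</script>")  # → False (blocked)
# ...but an equivalent attack using a different handler slips through.
blacklist_filter("<svg onload=alert(1)>")      # → True (allowed!)
```

Every new evasion technique requires another entry in the list, which is exactly the "how much will I filter?" treadmill described above.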
The Problem(s) with Blacklists…
For one, we are fast reaching a saturation point for the blacklist approach’s effectiveness, as the volume of blacklists that need to be maintained is large and ever growing. As one Senior IT Manager at one of our client’s organizations once put to us, “How much will I filter? There is no end to it.” This is not the first time we have come across this frustration. We recognize this challenge for the drivers of IT in an organization, as their core function is to improve productivity and drive innovation.
And second, because attack vectors have become complex and attackers more innovative and skillful at evading detection, the "Blacklist" approach will not work. I personally grappled with this challenge when we were working on putting together an anti-spam solution in my earlier stint. The sheer number of spam messages meant that some of them would inevitably filter through. Unfortunately, the same scenario is now playing out in the application vulnerability space, but with potentially disastrous implications.
So You’re Saying I should Whitelist?
So then what is the answer? Take the "Whitelist" approach. With the whitelist approach, you structure the application to accept only legitimate functionality and stop everything else. Some simplistically describe it as diametrically opposite to blacklisting, i.e. the "deny all, except" philosophy. In the past this approach has faced a roadblock because nobody wanted to take the chance of blocking a legitimate transaction. Recognizing this challenge, we are now helping our customers design applications by integrating the whitelist approach. What we do here is sit with the architecture or development team, review the business case for each user input, and then work out different ways of applying a whitelist to these inputs. We believe this approach works best because you now allow only legitimate functionality to be executed. What form does this whitelist approach take? It takes many different forms, like filtering input characters against an array of allowable characters, or comparing input values against legitimate values from the database.
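The two forms mentioned above can be sketched in a few lines (field names and rules here are hypothetical examples, chosen per input after reviewing its business case):

```python
import re

# "Deny all, except": each input must match an explicit definition of what is legal.

# Form 1: filter input characters against an array of allowable characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value: str) -> bool:
    """Allow only alphanumerics/underscore, 3-20 characters; everything else is rejected."""
    return USERNAME_RE.fullmatch(value) is not None

# Form 2: compare input values against a set of legitimate values
# (e.g. loaded from the database or configuration).
ALLOWED_SORT_COLUMNS = {"name", "created_at", "amount"}

def validate_sort_column(value: str) -> str:
    """Accept only a known-legitimate column name; reject anything unexpected."""
    if value not in ALLOWED_SORT_COLUMNS:
        raise ValueError("unexpected sort column")
    return value
```

Unlike a blacklist, there is nothing to keep chasing here: an injection payload fails not because we recognized it as an attack, but because it was never on the list of legitimate inputs in the first place.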
Using the blacklist approach is like chasing your tail. How long can you do it for before you exhaust yourself?
Until next time, stay safe!
Posted: November 15th, 2012 | Filed under: IIS & HTTP, Web and Application Security | Tags: cyber security, information security, microsoft iis, Port80 Software, serverdefender vp, web application firewall, web security
Security You Can See
For the last few years, we have been developing ServerDefender VP, an advanced Web application firewall for IIS. One of the features that has been evolving along with ServerDefender VP is the LogViewer. This is the hub of the WAF, where users can interact with and monitor malicious traffic hitting their site. Since there is so much to do within the LogViewer, it is easy for a feature or two to be missed, so we've decided to explain some of the cool tricks it's capable of.
What is the LogViewer?
The LogViewer is a tool that visualizes events (blocked threats and errors) that occur in your application and allows you to take a variety of different actions on them with only a few clicks. When selecting an event, users can see an array of data that pertains to it, such as the referrer, user-agent, IP address, session ID, GET and POST data, and other critical information.
What Actions Can I Take on an Event?
There are several different actions that a user can take on an event in the LogViewer. The primary actions are for security settings (blocking IP addresses and creating exceptions), forensic tools (viewing all events by IP, comparing a session against IIS logs), and exporting reports.
One of the key actions available to users from the LogViewer is the ability to add an exception to an event, such as a false positive. Adding an exception on an event lets users specify new settings should the same event occur again. This means that users can tell a blocked action to be allowed and configure new rules for the future.
The LogViewer’s forensic tools enable users to gain further knowledge about an event and the session and IP behind it.
“View This Session in IIS Logs” displays the session logs with errors recorded by ServerDefender VP highlighted. This feature is useful for determining what occurred in a session prior to an error and for establishing the validity of an error, should there be any questions around it.
“View this IP Only” displays only the events in the LogViewer attributed to that IP address. This makes it easier to visualize the actions of a single IP address and understand its patterns, which can help users determine what action, if any, they should take against the IP.
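For readers curious what this kind of per-IP filtering amounts to under the hood, here is an illustrative sketch (not ServerDefender VP's actual implementation) of pulling out a single client's entries from an IIS W3C extended log, where column names are declared by the `#Fields:` directive:

```python
def events_for_ip(log_lines, target_ip):
    """Yield W3C extended log entries whose client IP (c-ip) matches target_ip."""
    fields = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names follow the directive
            continue
        if line.startswith("#") or not line.strip():
            continue                           # skip other directives and blanks
        entry = dict(zip(fields, line.split()))
        if entry.get("c-ip") == target_ip:
            yield entry

# Example: two requests from different clients; keep only 192.168.1.9's activity.
sample_log = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2012-11-15 10:00:01 10.0.0.5 GET /index.html 200",
    "2012-11-15 10:00:02 192.168.1.9 GET /admin 403",
]
hits = list(events_for_ip(sample_log, "192.168.1.9"))
```

The LogViewer performs this correlation for you with a click; the value is the same either way: isolating one IP's activity makes its behavior pattern readable at a glance.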
Questions for Us? Ready to try?
The LogViewer is a powerful tool for viewing malicious traffic in your app and a way to react quickly to events. If there’s anything else you’d like to learn about the LogViewer – or ServerDefender VP in general – send us an email at email@example.com or Tweet us @port80software. If you’d like to enjoy a 30-day free trial, go ahead and download now.