GETVPN for MPLS WAN Encryption
March 5, 2009. Posted by mdmercurio in Stories.
Tags: encryption, getvpn, mpls, network security, technology
For many years there has been a trade-off between security and convenience on private WAN networks. For the most part, WAN connections have been considered private even though there are potential points throughout the path at which data could be compromised. In theory, a provider could tap into the data stream at any point within the data path. In addition, with solutions such as MPLS, the security of the VPN is entirely reliant upon the service provider and their configuration.
This trade-off has persisted given the complexity and overhead of trying to encrypt all WAN connections for a large organization. Creating individual IPSec tunnels to every endpoint in the WAN cloud is both cumbersome and a management nightmare. Quality of service is also difficult to maintain, as IPSec preserves only the ToS byte and encapsulates everything else. As a solution, Cisco has come up with Group Encrypted Transport VPN, or GETVPN for short. Catchy acronym that makes one wonder if some marketing person came up with the name before the technology was developed. The basics are simple: rather than creating individual point-to-point IPSec connections, GETVPN distributes a single agreed-upon group key to all the WAN endpoints. Additionally, the original IP header is preserved, which allows for quality of service and natural routing.
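The scaling argument can be sketched with some quick arithmetic. This is a back-of-the-envelope illustration, not GETVPN configuration: a full mesh of point-to-point tunnels grows quadratically with the number of sites, while the group-key model needs only one shared key regardless of site count.

```python
# Back-of-the-envelope illustration (not Cisco configuration): tunnel
# counts for a full-mesh point-to-point IPSec design vs. one group key.

def pairwise_tunnels(sites: int) -> int:
    """Point-to-point tunnels needed so every site reaches every other."""
    return sites * (sites - 1) // 2

for n in (10, 50, 200):
    print(f"{n} sites: {pairwise_tunnels(n)} tunnels vs. 1 group key")
```

For 200 sites that is 19,900 tunnels to build and maintain, versus a single group key pushed from a key server.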
Cisco is making an aggressive play here that many other router vendors will need to answer soon. As security auditors and assessors recognize there is an easy and cost-effective solution to the minimal risk of data compromise over the WAN, they may begin insisting such a solution be implemented. Right now, GETVPN is a Cisco-proprietary solution, so you need a completely Cisco WAN infrastructure to support it.
As with any security solution, there are some things to understand regarding this technology before implementing it. Jan Bervar has a good write-up regarding some of the potential security pitfalls with this solution. Even with these potential drawbacks, GETVPN is a good solution for fully meshed encryption across the WAN and should be a consideration for companies moving forward.
Compliance Is Not A Product
January 28, 2009. Posted by mdmercurio in Stories.
Tags: compliance, network security, PCI compliance, security risk, technology
I recently had a conversation with a colleague in which we discussed a solution for a customer based on PCI compliance needs. The solution was for remote offices that currently use Cisco 800 series routers to terminate VPN services to a corporate WAN and they wanted to meet PCI requirements for having a stateful firewall along with logging and monitoring. The comment from my colleague was something similar to, “That router won’t meet their PCI regulatory requirements, they need a real firewall.”
There are two major problems with this statement. The first is easily addressed: the Cisco ISR routers, including the 800 series, are all capable of meeting these PCI requirements when running the proper IOS and configured correctly. Cisco has a page dedicated to PCI and validated designs, and the ISR router is included in many of them.
The second problem is more important but sometimes a little harder to understand. A product, in and of itself, cannot be deemed PCI compliant or not. A product is only part of the solution; the total compliance picture includes the policy defining the device's purpose and use, the standard defining the device's configuration, and the procedures defining the device's administration. All of these together are needed to meet a particular PCI requirement, but none by itself does the trick.
Let’s take the above example. The PCI Data Security Standard defines the firewall requirements for public networks in section 1.3 and the need for proper monitoring and logging in section 10.2. It does not state that you must use a particular brand or model to meet those requirements, or that a specific model fails to meet them. With firewalls on public networks, the customer in question first needs a security policy statement dictating that all networks connected to the Internet must have a stateful inspection firewall. Very simple, and it seems redundant, but that is what is required to show the auditor you understand the regulatory need. The policy should be generic, both to ensure compliance with multiple regulations as needed and to give the IT department some leeway in which products to embrace.

The second part is the company standard for meeting that policy. This is where a particular product comes into play. A sample standard for this example would state that the company utilizes Cisco 800 series routers with the firewall feature set enabled. It would also detail the approved configuration for these devices to ensure egress filtering, NAT, and stateful inspection needs are met. After that, defined procedures would detail the daily administration, logging, and change management for the device in question. All of these documents would be presented as evidence to the auditor, who would review them for completeness and compare them to actual installed configurations to ensure compliance.
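The policy/standard/procedure trio can be pictured as a simple completeness check. The document contents below are invented for illustration and are not from any official PCI material; the point is that a requirement is only covered when all three pieces exist.

```python
# Hypothetical sketch of the evidence an auditor reviews per PCI
# requirement: a product alone is never "compliant"; the policy,
# standard, and procedure documents around it complete the picture.
# Entry text here is invented for illustration.

evidence = {
    "PCI DSS 1.3 (firewall on public networks)": {
        "policy": "All Internet-connected networks require a stateful inspection firewall",
        "standard": "Cisco 800 series router with firewall feature set; approved config "
                    "covers egress filtering, NAT, and stateful inspection",
        "procedure": "Daily administration, log review, and change management steps",
    },
}

def is_complete(entry: dict) -> bool:
    """A requirement is only covered when all three document types exist."""
    return all(entry.get(k) for k in ("policy", "standard", "procedure"))

for req, docs in evidence.items():
    print(req, "->", "complete" if is_complete(docs) else "incomplete")
```

A router with a standard but no policy or procedures would fail this check, just as it would fail the audit.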
I will leave you with one final comment. Compliance should not be a scary or complicated process. Vendors often use compliance as a marketing ploy with age-old scare tactics, but companies need to realize compliance is the minimum requirement for security. If you are scared about meeting minimal security requirements, your organization has bigger security issues to deal with.
Physical Security an IT Issue
January 22, 2009. Posted by mdmercurio in Stories.
Tags: network security, physical security, security risk, technology
The emergence of a number of next-generation technologies for access control and physical security is bringing the responsibility for physical security into the IT department. Nothing makes that more apparent than Cisco's entry into the field with a product line geared specifically to access control and physical security.
Traditional analog camera and coax control systems are quickly being replaced with IP-connected systems, bringing a whole new level of interoperability and new solutions. On a simple level, direct communication with the IP network by physical access devices allows for easier control of users: one user database can control both computer and physical access. Coordinated video surveillance will give first responders access to more information than ever before. Banks and retail outlets will have centralized high-definition video that is no longer grainy and can be pushed off-site immediately to avoid any tampering with the equipment.
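As a rough sketch of what that convergence enables (user names and record fields here are invented), a single directory record can gate both badge and network access, so deactivating one record revokes everything at once:

```python
# Illustrative sketch (invented data model): one user database drives
# both door access and network login, so a single "active" flag
# revokes physical and logical access together.

users = {
    "jsmith": {"active": True,  "doors": {"lobby", "server-room"}, "network": True},
    "former": {"active": False, "doors": {"lobby"},                "network": True},
}

def door_access(user_id: str, door: str) -> bool:
    u = users.get(user_id)
    return bool(u and u["active"] and door in u["doors"])

def network_access(user_id: str) -> bool:
    u = users.get(user_id)
    return bool(u and u["active"] and u["network"])

print(door_access("jsmith", "server-room"), network_access("former"))
```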
On the other hand, this places a whole new realm of responsibility on the IT staff and creates new potential risks to the organization. Could a remote attacker open a door for an accomplice? Even so, this promises to be a useful technology.
Shred-it Data Leakage
January 12, 2009. Posted by mdmercurio in Stories, Tips and Tricks.
Tags: data leakage, network security, security risk, technology
I’ve conducted many security assessments and like to feel I have a good understanding as to where to look for potential data leakage issues. Sometimes though, an imaginative mind comes up with something that you haven’t thought of before.
During a security assessment, an associate and I were conducting a walk-through of the office to look for obvious issues. As we walked by a Shred-it bin sitting in the office, my associate stuck his hand in and pulled out some papers. Sure enough, some of the data on the papers was confidential. I had walked by many of these bins during many assessments but had not thought of them as a data risk of this sort. While the slot on these bins is small, an overfilled bin can potentially cause a disclosure of confidential information.
Needless to say, for every assessment after that I have viewed these bins in a different manner. I always look to determine whether the bin is overflowing and often attempt to gather papers from them. Some data leakage methods, such as USB keys, CDs, lost laptops, and portable hard drives, are apparent. Others, like paper overflowing from a shred bin, are not.
The first lesson here is obvious: shredder services may pose a risk because intact data resides in these bins until the service comes to shred it. The second lesson, however, is much more important. Given the number of methods available for data to leave a business, it is important to keep an open mind as a security professional and think imaginatively.
What Comes After CAPTCHA?
January 6, 2009. Posted by mdmercurio in News.
Tags: CAPTCHA, network security, security risk, technology
Last year the security of CAPTCHA protection was called into question. Earlier in the year, Websense revealed that spammers were using a bot to break Microsoft's CAPTCHA defense on Live Mail, and just last month computer scientists revealed the ability to break audio CAPTCHAs with a high success rate.
The whole premise of a CAPTCHA is that it is easier for a human to interpret an image and determine the general form of a letter than for a computer to do it. Take the following image:
A human can easily distinguish the letter 'R' in the image, while a computer will have a much harder time. One reason for this is that we do not understand brain function well enough to fully mimic the reasoning behind our interpretation. That is changing. You'll notice the audio CAPTCHA finding was announced not at a security convention but at a conference on neural networks. Very briefly, neural networks embody the idea that rather than programming a system to mimic an action, the system is given boundaries in which to act and then learns through progressively corrected behavior. In the same fashion as a child learning their letters, a neural network can learn what a correct letter is through trial and error. As this technology advances, CAPTCHA will no longer be a valid method of distinguishing a human from a computer.
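To make the trial-and-error idea concrete, here is a toy perceptron, far simpler than the systems actually used to break CAPTCHAs, that learns to tell two invented 3x3 "letter" bitmaps apart purely by nudging its weights whenever it guesses wrong:

```python
# Toy sketch of learning through corrected behavior: no rules describe
# what each "letter" looks like; the perceptron adjusts its weights
# every time it misclassifies a training bitmap. The 3x3 bitmaps are
# invented stand-ins for real character images.

letter_a = [1, 1, 1,
            1, 0, 1,
            1, 1, 1]   # label 0

letter_b = [1, 0, 0,
            1, 0, 0,
            1, 1, 1]   # label 1

def predict(weights, bias, pixels):
    total = bias + sum(w * x for w, x in zip(weights, pixels))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=1.0):
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            err = label - predict(weights, bias, pixels)  # 0 when correct
            bias += lr * err
            weights = [w + lr * err * x for w, x in zip(weights, pixels)]
    return weights, bias

w, b = train([(letter_a, 0), (letter_b, 1)])
print(predict(w, b, letter_a), predict(w, b, letter_b))
```

After a handful of corrective passes the weights separate the two patterns, with no explicit description of either letter ever written down.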
Depending on the timing, the security impact could be enormous. Almost every site, from social networking to email systems, relies upon CAPTCHA technology to stem the flood of bogus accounts. Apart from spam, imagine a new breed of DoS in which millions of accounts were created to flood a website and bring the system to a halt.
We need to develop a new method before the CAPTCHA scheme is fully compromised, as it will take time for websites to embrace a new technology. One option for a new scheme is to use pictures instead of letters. Take the three images below.
Humans have no problem interpreting them as cars, but it would be very difficult for a computer. This could work better than letters: letters have a narrow visual limit before they become illegible, whereas an image of a car taken at almost any angle is still recognizable as a car. The major obstacle would be building a catalog of images that humans can readily distinguish but that is too large for an automated system to enumerate. Even so, this seems like a stopgap measure, as it will only be a matter of time before systems can recognize images as well as letters.
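The mechanics of such an image challenge are simple; a minimal sketch (file names and labels below are invented) might look like this:

```python
# Sketch of the image-based challenge described above: the server picks
# a random cataloged image, keeps the label server-side, and a response
# passes only if the user's answer matches. Catalog entries are invented.

import random

CATALOG = [
    ("img_0291.jpg", "car"),
    ("img_1044.jpg", "tree"),
    ("img_2210.jpg", "dog"),
]

def new_challenge(rng=random):
    image, label = rng.choice(CATALOG)
    return image, label          # label never leaves the server

def check_answer(expected_label, user_answer):
    return user_answer.strip().lower() == expected_label

image, label = new_challenge()
print("show user:", image)
```

The scheme's strength rests entirely on the catalog being too large to scrape and label wholesale, which is exactly the obstacle noted above.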
CAPTCHAs are used to quickly distinguish a human from a computer for convenience. In the early days of the web, it wasn't unheard of for a site to take a day or two to manually authorize your account. These days, registrations equal dollars. Websites with the most registered users are considered the most valuable, so making it easy for a user to register is a key motivation. If they cannot tell a true account from a bogus one, however, the accounts are meaningless. Maybe the time has come to enact measures such as a waiting period between registration and account activation and forgo a little convenience for the sake of security. With open standards like OpenID for cross-site registration becoming more prevalent, a waiting period could be less of a hassle.
Got an idea for a CAPTCHA replacement or a comment? Let me know.
Top 5 Security Issues for 2009
January 2, 2009. Posted by mdmercurio in News, Tips and Tricks.
Tags: network security, security issues, security risk, technology
Happy New Year! 2008 closed with some of the most advanced attacks ever found, so we can certainly look forward to an interesting 2009.
There is a lot of information in those reports, but the reports highlight one item: The attacks are getting more organized and focused. Given that, it is increasingly important to be diligent in security practices. Here is my list of the top five security issues to focus on for 2009:
- VoIP: Voice is now a data issue where once it was the realm of telecommunication technicians. RJ11 jacks and TDM systems are quickly being replaced with RJ45 jacks, TCP/IP packets, and MPLS WANs. There have always been attacks on voice, but now they affect the data network. VoIP is often installed by outside consultants, and internal staff need to come up to speed on it quickly. Voice is now data and needs to be treated as such, with the same precautions given any confidential data on the network. Security engineers need to get up to speed on VoIP and keep their ears open here.
- Data Leakage: OK, this is an overused term, but the threat still exists. The problem is that stopping data from leaving the network is next to impossible; it is just too big a job, there is always a way around it, and users find it. I worked for a company that forced laptop encryption (a good thing) to ensure stolen or lost laptops were not compromised, yet they didn't back up individual systems on the network because they didn't have the capacity. So they allowed users to back up to USB or external hard drives, which are not encrypted. Hello?
- Social Engineering: Criminals typically go after the low-hanging fruit. Why break into a house with the lights on, a security system, and a dog when the house next door is dark and inviting? Right now, remote attacks are quick, successful, and easily accomplished. As security measures against these attacks get more sophisticated, attackers will look for other low-hanging fruit. If the reports are correct and attacks are getting more organized and focused, I predict an increase in more personal social-engineering attacks. They're easy; attackers just need to figure out how to make them profitable.
- Recovery: Security professionals are much more likely to concentrate on technologies that prevent systems from being compromised or failing than on systems that help them recover quickly when a compromise occurs. Systems are much more likely to succumb to failing hardware than to just about any other attack. Yes, failing hardware IS an attack on the availability of your data systems. Technologies such as virtualization and imaging allow for quick recovery.
- Remote Users and Partners: More companies are allowing remote users and third parties into the network. This essentially extends the border of the network to the device that is attaching. There are many good technological controls to help lower the risk here, but in the ever-increasing push to allow data access to anyone, shortcuts are often taken. Security professionals need to review remote access carefully, and internal staff really need to review the methods. Do users truly need SSL VPN, IPsec VPNs, Citrix, and OWA? If possible, limit access to one standardized method for all.
- Wireless: It's been a security issue in the past and will continue to be in the future. Wireless extends connections to the network beyond the physical boundaries of the walls, and with that comes risk. Do you know what encryption, if any, is being used at your users' homes? Does it matter? It could.
- Rogue Devices and Software: When you can plug any device into any port, issues arise. Back in the Novell/IPX days, Doom servers could unintentionally slow down a network. These days it could be PlayStations and P2P software.
There are tons of other issues out there. Let me know what you are going to focus on.
Reducing a High Risk Finding
December 30, 2008. Posted by mdmercurio in Tips and Tricks.
Tags: Information Privacy, network security, risk assessment, security assessment, security risk, technology
When conducting a security assessment, the level of risk assigned to a resource varies according to the value of the data and the other protections in place. Unfortunately, audit tools cannot determine whether a level of risk is actually lower based on items the tool cannot measure. Those new to the security field are wary of reducing the risk level of a finding, and what often happens is a disproportionate number of findings carry a high level of risk.
Overall, this is a good thing. Security professionals should err on the side of caution when it comes to risk. That said, a good security professional should understand what it would actually take to exploit a vulnerability, as that helps determine the true level of risk.
About ten years ago I conducted my first security assessment. I was a newly trained security professional who had just gone through the Network Associates 'Total Network Security Professional' training and certification. I had developed an assessment offering for the company I worked for and was using the Network Associates product CyberCop Scanner. The report came out with a huge number of 'high' risks indicating SMB was enabled on the machines scanned. Almost every server on the network was afflicted with this so-called risk. I knew it was Windows file sharing, but the tool rated it a high risk, and into the report it went as a high risk. What I couldn't explain to the customer was why something as necessary as file sharing was a high risk to them and what they could do to mitigate it. What I learned afterward is that the risk would be considered high if SMB were open to the Internet. The customer had a firewall and was protected from outside attacks. They had proper password security on the designated shares, thus the risk was lower than reported.
I have seen many engineers I have trained go through this same process. Instead of trial by fire in front of the customer, however, I review the findings with them and ask a few questions about each:
1) What is the vulnerability found?
2) What would need to be done to exploit this vulnerability?
3) If the vulnerability was exploited, what would occur?
Risks need to be weighed in this fashion to be valid. I'll use an extreme example. If I find a wide-open anonymous FTP server, the immediate response is that it is a high-risk issue. Now consider that the server is on the DMZ of a firewall, is used as an SMTP relay only, is fully patched, and the firewall has egress and ingress filtering allowing only port 25 to the server. The risk has been reduced by other protections on the network that my scanner cannot weigh. To exploit the FTP server, I now need to compromise the server over the SMTP service and also compromise the firewall to allow the FTP port. If my firewall is compromised, an FTP server would be the least of my problems.
I tend to apply this general risk-weighing technique on a case-by-case basis: if a vulnerability can be exploited directly, the risk is higher; if more than one item must be compromised in sequence before the vulnerability can be exploited, it probably is not a high risk. If you have to jump on one leg, rub your tummy, pat your head, and recite the pledge all at the same time to make it happen, it likely won't.
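That rule of thumb can be encoded as a small helper. The downgrade-per-barrier scheme below is my own illustration, not a formal methodology:

```python
# Rough encoding of the case-by-case rule of thumb (thresholds are my
# own illustration): each independent protection an attacker must
# defeat in sequence lowers the effective risk one level.

def effective_risk(scanner_rating: str, barriers: int) -> str:
    """Downgrade a scanner's rating by the number of protections
    (firewall rules, patched services, etc.) standing in the way."""
    levels = ["low", "medium", "high"]
    idx = levels.index(scanner_rating)
    return levels[max(0, idx - barriers)]

# The anonymous-FTP example: the scanner says "high", but the DMZ
# firewall (port 25 only) and the hardened SMTP relay are two barriers.
print(effective_risk("high", 2))
```

A directly exploitable finding (zero barriers) keeps its scanner rating; the FTP example above drops all the way to low.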
Rootkit in Security Software
December 29, 2008. Posted by mdmercurio in News.
Tags: network security, rootkit, security risk, technology, virus
All they are saying is that the publisher is the same publisher of the Sony USB rootkit found in 2007. A little research turned up the name of the company: FineArt Technology Co.
While it is not a well-known brand of security products in the US, I'm putting up this notice regarding their EIS product since Trend is not. Given this company's history of issues, I am disappointed that Trend did not release this information themselves.