Earlier this year, MasterCard announced a somewhat radical change to their SDP program: Level 2 Merchants had to have an onsite assessment by a QSA by the end of 2010, as stated here:
http://www.mastercard.com/us/merchant/pdf/SDP_Program_Revisions.pdf
However, this hokey pokey move put the Level 2 Merchants "in" the same boat as Level 1 merchants and significantly upped the ante. First of all, it would certainly mean an increased cost for their PCI program by requiring an audit. But more importantly, many of these merchants would no longer be able to simply claim they are following PCI via an SAQ form but would now have to prove it to an auditor. That really was the alarming part.
Needless to say, MasterCard got a lot of complaints from merchants and even banks. So, they did the hokey pokey and put their Level 2 Merchants back "out" - at least partly. Now they have moved the date back to mid-2011, and they have conceded that the SAQ can still be done, only now the company must send an employee to training before that employee can perform the SAQ.
http://www.mastercard.com/us/sdp/merchants/merchant_levels.html
On a positive note, this was a wake-up call for Level 2 Merchants, reinforcing the understanding that no matter what size they are, compliance really means meeting the PCI DSS in full - not just filling out a form.
As for the banks, who really need to absorb these positions and communicate them back to the merchants, we are still waiting to see how they will consistently position this. For example, will the person trained also have to sign the SAQ? Are they liable for a breach? Will the annual test be the same as the QSA's? Do they need to carry cyber liability insurance like a QSAC does? How will there be enough training sessions to cover all the Level 2 Merchants?
Ultimately, this still leaves the merchants with the decision of whether it's more valuable to pay someone internally to go to training or engage a QSA to perform an onsite assessment. The burden to be compliant is the same, but clearly a QSA has more experience. Given the number of breaches that still occur, one would hope that merchants want their program evaluated with the most due diligence.
Read more!
Tuesday, December 29, 2009
Thursday, December 24, 2009
MS08-067 Strikes Again
Firewalls, anti-virus, and intrusion detection systems can protect you from many things threatening your information security. What about your employees? Can they be easily locked down? The quick answer is no. The human element is the weakest link, and the following story is no exception.
While performing a social engineering attack, we were given a list of users to impersonate. The goal was to retrieve passwords verbally for either email or a website. On the site, I noticed that if your account was locked you could get a PIN from IT to unlock it.
I placed a call to the IT department using a spoofed number and made sure the IT person, we'll call him Joe, knew I was pressed for time. I told him I needed a PIN because my account was locked and he asked me for my name first. The person I was impersonating had a difficult name to spell, but I attempted to spell it for him and made a little joke about how difficult the name was to spell. Joe chuckled a bit and said he could send the PIN to my voicemail.
I told Joe I was calling from my mobile phone and didn't have access to my voicemail at the moment. I reminded him that I really needed to get my PIN quickly so I could log-in and asked what my options were. Joe said that if I verified the employee number he could give the PIN to me over the phone. I told him I would have to look for it and to hold for a minute.
After thinking about what to do next, I figured all was lost. So I decided to have a little bit of fun first. After about a minute, I picked the call back up and said, "Okay, I got it. It is MS08-067."
There was a brief pause and Joe said, "Your new password is _____."
I was stunned and I didn't think he was being serious until I logged in successfully. I told him thank you and how much I appreciated his help. From there, I continued accessing everything that I could log-in to and retrieved corporate email, social security numbers, pay rates, and gained access to the employee database. While Joe probably felt pretty good about helping me, I was helping myself to all of his co-workers' and corporate information.
Experience teaches us that it is human nature to want to help someone in need. We all aspire to be a hero of sorts. It may be just helping out that person on the phone, or holding a door open for someone with their hands full. Just keep in mind that while your intentions may be good, theirs may not be.
Education is very important. Teach your employees that it is okay to question others. People who believe they can't be social engineered have most likely already fallen victim to it.
How do YOU protect yourself?
By Chris Murrey
Read more!
Labels:
Social Engineering
Friday, December 18, 2009
Are Your Applications PA-DSS Compliant?
The PA-DSS deadline is closer than you may realize, and there is bound to be a mad rush at the end. July 2010 is the deadline for Phase 5 of Visa’s “Payment Application Security Mandates”. By that date, Visa is requiring that acquirers certify that all their merchants and processors are using PA-DSS certified payment applications. Did you get that? Stop, back up, read it again. If you are using a purchased payment application (those that are sold, distributed or licensed to third parties), it had better be on the PA-DSS list.
PA-DSS is nothing new. It was introduced in 2007 as the successor to Visa’s Payment Application Best Practices (PABP), and it is intended to help software vendors and others develop secure payment applications, which could include anything from a POS system to online shopping cart software. PA-DSS requires that a payment application be assessed by a third party, pass a series of security tests, and adhere to leading practices before it can be distributed. If it fails any part of the assessment, it cannot be used as a payment application.
What does this deadline mean? First, it means that merchants’ days of using anything but compliant payment applications are nearing an end. Second, any new merchant applying for a merchant account will have to, as one of the steps to getting the account, show the acquirer that they are using a PA-DSS certified payment application.
It's unclear how hard a stance the payment brands are going to take on non-PA-DSS or non-PCI compliant payment applications. If they go the full mile, they could shut down any organization whose credit card processing isn't compliant. They could also hand down some major fines for non-compliance. No one really seems to know what those fines are, but obviously if credit card data is compromised while you are not PCI compliant, you could be subject to hefty fines. With the ability to accept credit cards at stake, trusting non-compliant applications hardly seems like a risk worth taking.
July will be here quicker than you might realize. Are you ready? It's quiet in here... can you hear the echo?
Read more!
Friday, December 11, 2009
Securing a PCI compliant vendor
Seven restaurants are suing Radiant Systems and Computer World for producing and selling insecure systems that led to security breaches, which then led to fines and other costs for the breached companies.
The restaurants claim that they were sold a product that was not PCI compliant, and the two vendors should be held responsible for the data lost and the money spent as a result of the breach.
Radiant Systems is a point of sale terminal company, and Computer World is the company that sold and maintained the Radiant Systems product. The question is: should the vendors or the restaurants be held responsible for the data breach? After reading a blog post that left the matter undetermined, I felt it was necessary to clear up the confusion.
First off, the restaurants must have signed off on the product in a ROC or SAQ. As a certified QSA, the first thing we would do is check the PCI list to ensure that the product is listed. A point of sale system is certified by version; while version 1.0 of a product may be certified, 1.5 may not be. The restaurants should have spent the time and money to determine that the vendors and the products purchased would keep their company data secure, and they clearly did not.
Anyone can say they are PCI compliant. It's a very lucrative business right now, and many people are falsely claiming compliance to make money. As a business that is interested in hiring a vendor to provide or implement a product, it is your responsibility to research the vendor and choose the best product. Ultimately, the fault falls upon the breached restaurants.
An easy way to prevent situations like this in your company:
- Look at the PCI list. PCI provides an Approved Service Provider list that includes products, product versions and the codes used. Before bringing any vendor into your company, check the list to be sure their product is PCI compliant.
Read more!
Labels:
pci compliance
Friday, December 4, 2009
iPhone App Developer Sued for User Phone Number Theft
As the saying goes, it’s not IF, but WHEN your personal information will be stolen in a data breach.
On October 8, 2009 I wrote a blog entitled “What’s the Value of Your Mobile Phone’s Address Book?” which highlighted the fact that iPhone applications have access to your phone's entire address book and that you are trusting the developer is (hopefully) not a rogue one. This has been known since at least January 26, 2009. The possible abuse my blog aimed to point out to readers recently came to light: a class action lawsuit has been filed against app developer Storm8, and the full details of the proceedings can be found here.
Read more!
Labels:
data breach,
iphone,
PII
Monday, November 9, 2009
Is Your Response Time Less Than 120 Days?
I recently read a blog about ChoicePoint and the ongoing coverage of their business, especially after 13,750 people had their personal information compromised. Tina Stow, who seems to represent ChoicePoint, left a comment on the blog stating:
“We have several monitoring tools and the one in question was not intentionally switched off. Due to human error for which the Company took appropriate action, one of our monitoring tools was temporarily and mistakenly turned off for a four month period. The other monitoring tools and our information security program were working. We have added redundancies to try to prevent future human error.”
4 months?!!? Holy cow! If a “monitoring tool” is unintentionally switched off for 1/3 of an ENTIRE YEAR and no one notices, I wonder what else has been going on that went unnoticed. No wonder they had a breach! That statement reminds me of clients who never see brute force attacks on their systems simply because they never review logs. Whatever the reason the system in question was not discovered to be down or malfunctioning, either a core deficiency exists within the system and/or the processes surrounding it, or the statement simply lacks validity.
With that being said, you can buy the latest, greatest, or most expensive tools, systems, and software out there, but unless they are installed, configured, and used properly, they do you little to no good. It’s like putting in a web application firewall without having it “learn” your web application, or dropping in a firewall with any/any rules in place; it will only get you so far. Unfortunately, it may have earned a “checkmark” or an opportunity to issue a press release saying “we did something”.
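The any/any point can be made concrete. Below is a minimal, hypothetical sketch (the rule format and addresses are invented for illustration, not taken from any real firewall) of first-match rule evaluation, showing why a permissive default rule turns the device into a checkmark rather than a control:

```python
# Hypothetical sketch: a firewall deployed with an any/any rule is a checkmark,
# not a control -- every packet matches the permissive rule.
ANY = "*"

def matches(rule, packet):
    # A rule field matches if it is a wildcard or equals the packet's value.
    return all(rule[k] in (ANY, packet[k]) for k in ("src", "dst", "port"))

def allowed(rules, packet):
    # First matching rule wins; default deny if nothing matches.
    for rule in rules:
        if matches(rule, packet):
            return rule["action"] == "allow"
    return False

any_any = [{"src": ANY, "dst": ANY, "port": ANY, "action": "allow"}]
tight = [{"src": ANY, "dst": "10.0.0.5", "port": 443, "action": "allow"}]

# An unsolicited RDP connection to an internal host.
attack = {"src": "198.51.100.7", "dst": "10.0.0.9", "port": 3389}
print(allowed(any_any, attack))  # True: the "firewall" stops nothing
print(allowed(tight, attack))    # False: only explicit traffic is permitted
```

Both rulesets pass the "we installed a firewall" checkmark, but only the restrictive one actually blocks anything.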
Read more!
Thursday, November 5, 2009
To Pen Test or not to Pen Test, that is the question…
Time and time again I am challenged by clients and security professionals alike on what the real benefit of penetration testing is. Though this seems like an age-old debate with many famous hackers and security professionals weighing in, I am not entirely sure I understand the argument undermining the importance and benefits of penetration testing. Below are arguments from both black and white hat security professionals:
White Hat – “Pen testing can show one of two things: your security sucks or your security is better than your pen tester”
Black Hat– “The very concept of "penetration testing" is fundamentally flawed. The problem with it is that the penetration tester has a limited set of targets they're allowed to attack, while a real attacker can attack anything in order to gain access to the site/box. So if a site on a shared host is being tested, just because site1.com is "secure" that does NOT in any way mean that the server is secure, because site2.com could easily be vulnerable to all sorts of simple attacks. The time constraint is another problem. A professional pentester with a week or two to spend on a client's network may or may not get into everything. A real dedicated hacker making the slog who spends a month of eight hour days WILL get into anything they target. You're lucky if it even takes him that long, really.”
Though I understand the point of the above arguments, I believe the logic behind these statements is fundamentally flawed. The point they are trying to make is that an organization is never going to be entirely secure and that an attacker with dedicated time and resources WILL in all cases break in, so performing a pen test is only validating something already known. I see penetration testing differently. Penetration testing is not designed to make an organization 100 percent secure but to make them MORE secure (assuming identified vulnerabilities are remediated) and MORE aware than they were before the penetration assessment was performed. It is also a good means to test current logical and physical security controls. From my experience, many security professionals responsible for an organization’s security do not understand the full ramifications of vulnerabilities and thus can become complacent in fixing them.
Case Study:
A vulnerability scanner returns one vulnerability on Company A’s external presence. The vulnerability identified was Cross Site Scripting or SQL Injection. A report is issued to Company A showing a “High” risk rating based on this vulnerability. Company A may not understand how a single vulnerability can translate into a “High” risk rating and thus chooses to ignore, or at least delay remediating, this vulnerability until time and resources become available. Does this mean Company A’s security is bad? I would say that if Company A had an external presence of 50 servers and 10 applications, and only one vulnerability was identified, the answer would be no. Company A may very well have good security, but as with everything else in life, mistakes happen. Now let’s assume a penetration assessment was performed on Company A’s external presence instead. Not only would the penetration assessment identify this vulnerability, it would attempt to exploit it. Let’s go on to say that this vulnerability is in fact exploitable and allows for full system compromise. What is a client going to react to more? A report stating they have one critical vulnerability and what could happen, or a report stating they have one critical vulnerability and, oh yeah, by the way, we compromised your entire domain controller? A client who just had their entire domain controller compromised is going to be more inclined to fix the vulnerability in a timely manner than one reading a report stating what could happen if it is not fixed.
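To make the case study concrete, here is a minimal sketch of why a single SQL injection finding can translate to full compromise. This is hypothetical illustration code (an in-memory SQLite table, not any real application) contrasting a concatenated query with a parameterized one:

```python
import sqlite3

# Hypothetical login table standing in for Company A's application backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation lets attacker-supplied input rewrite the query.
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized queries keep input as data, never as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The classic injection payload authenticates without knowing any password.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: full authentication bypass
print(login_safe("alice", payload))        # False: payload treated as literal text
```

One such flaw in one form field is all it takes, which is why a lone scanner finding can honestly carry a "High" rating, and why an exploited proof of concept is so much more persuasive than the rating alone.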
How can one argue that this penetration assessment was not beneficial to Company A? It effectively made Company A more aware as to the dangers associated with a critical vulnerability, which in turn made them take a proactive approach to fixing the problem almost instantaneously, thus reducing their overall risk rating.
This is one simple example; I could go on and on about the benefits of penetration testing. Security is about managing and reducing risk to an acceptable level. A penetration assessment isn’t intended to reduce an organization's risk to zero percent, but then again neither is any security assessment. Any time an organization connects a device to a network it assumes a certain amount of risk. It’s understood that zero-day vulnerabilities will always surface and cannot be prevented. So sure, a dedicated attacker could decide to spend 6 months developing an exploit for an unknown vulnerability; however, this would take a more sophisticated attacker, which makes it a less likely scenario.
A penetration assessment is simply a means to identify vulnerabilities and provide proof-of-concept examples of exploiting them. By doing so, it better explains the ratings associated with vulnerabilities, which in turn produces much more conscious and aware security professionals. A more aware security department will be better able to help reduce the overall risk for an organization.
Read more!
Friday, October 23, 2009
The Network Neutrality Debate: Good or Evil?
So for a long time now there has been a bill floating around Congress about Network Neutrality. Some people like it, some people don’t, others just don’t care. But who’s really looked into it? I mean, it sounds good. It sounds like it could help everyone out, right? It’s keeping the Internet neutral, right?
Well, for those of you who haven’t looked into Net Neutrality, it’s time you hear about it. Let’s look at the upside of this debate. The original idea was great: ensure that all traffic on the Internet is treated equally by all Internet Service Providers. Net Neutrality is supposed to mean no discrimination, and it tries to prevent Internet Service Providers from blocking, speeding up or slowing down Web content based on its source, ownership or destination. That sounds good, right? I like this original idea, but as with many ideas that get turned into legislation, the point gets missed, and in this case the point is being completely smothered.
Now that the government has its hands on it, Net Neutrality will go the way of all the other bills that have gone through Congress, with pork added by every congressman along the way. Net Neutrality line items state that every citizen in the US should be given free broadband Internet access. The proponents of this bill argue that if there is only one provider giving Internet access, that provider shouldn't be able to block content or stop end users from getting to the sites they want to view, so the government should intervene. They are of the mindset that if the government controls this, ISPs won't be able to implement a tiered Internet access model.
Others say that if Net Neutrality isn’t passed, companies will start to charge more to get to certain content on the Internet, or that Internet Service Providers (ISPs) could sign agreements with certain companies to give special access to those companies' websites. For instance, if I went to Google, but my ISP had signed a contract with Yahoo or Microsoft, I wouldn’t be able to get to Google, or the speed would be so slow I would have to use something else to search the Internet. People think that without Net Neutrality, ISPs could tax content providers for using the backbone of the Internet to move data, discriminate in favor of certain traffic, or block access to certain sites altogether. Again, let me stress how much sense this original idea makes and how much I agree with it at this point.
With the number of service providers out there, the scenarios mentioned earlier (blocking content and discriminating among data) would never happen, because if they did, people would just switch to another provider. Look at it this way: the Internet, in its current setup, has operated for over 20 years without regulation or government interference, and de facto Net Neutrality has existed for the Internet's entire history. Additionally, since its conception and the start of its mainstream use, the government has wanted to tax usage of the Internet. Back at the beginning of the Internet, a group of congressmen banded together and said "No" to taxing Internet usage. But now the government is trying to grab power from all over, and Congress feels that it should control, monitor, and secure the Internet as well.
And it doesn’t stop there; if Net Neutrality goes through, the government will not only do a power grab over the Internet, but include wireless phone companies too since they are also part of digital communications. The FCC would basically be able to moderate and know everything that is being transferred over the Internet or wireless phones. Security and Privacy would be thrown out the window in this scenario. The Internet has been the source of the highest levels of freedom the world has ever known. There have never been any restrictions on speech, religion, or information on the Internet (some sites have their own policies, but you can always find information out there somewhere).
Aside from the Internet being a place for freedom, think about what will happen when the government steps in and tries to regulate and monitor it. Think about anything the government tries to run; it gets clouded in paperwork and the service is degraded to a level no one wants. The phone companies are a prime example of this, the government stepped in at the state and federal level and the prices skyrocketed. But the market innovated coming up with VoIP and free phone servers that utilize the Internet. The free market is responsible for having such a vast, open set of connected networks that make up the Internet; it would do nothing but hurt companies that try to impede this open communication of all types of content.
So now for some added truth on this; Net Neutrality is going to essentially going to cause these things to happen, just from a different angle. Now that H.R. 3458 has been introduced and federal stimulus money has been part of the deal, the government is going to pork the bill up so much you won’t even recognize it right before it is voted on.
Let’s put it in perspective: Over the last 3 or 4 years the telecommunications industry has pumped over 100 billion dollars into the data backbone and it has resulted in blazing fast speeds, lower price per kilobyte of bandwidth, and provided a higher level of competition. Now think about this: the government stimulus package invested 7.2 billion dollars in to this Net Neutrality bill and they call that “just a down payment” according to the diversity czar Mark Lloyd. His opinion is that managing the media, control of it by the state, can help level the playing field for those that aren’t fortunate enough to get all the news. Now why would you want to pay for Net Neutrality when you already pay for the Internet? Just with the thought of the government stepping in the price has already gone up in the form of taxes.
Almost everyone pays for the Internet in some way, either in your cell phone bill, your cable bill, your land line phone bill, and your VoIP phone (in some cases). All this money pays to keep the Internet up and running. When you purchase Internet access you are expecting a certain level of quality and service from the provider you are paying; be it AT&T, Sprint, Verizon, Time Warner, Comcast, just to name a few. Basically your monthly bill on these services goes to keeping the Internet up and running (I say this because basically everything is transmitted digitally).
Mark Lloyd, Chief Diversity Czar of the Federal Communications Commission said, “It should be clear by now that my focus here is not freedom of speech or the press. This freedom is all too often an exaggeration. At the very least blind references to freedom of speech or press serves as a distraction from the critical examination of other communication policies. The purpose of free speech is warped to protect global corporations and block rules [by the government], fines, and regulations that would promote democratic governance.”
This statement is coming from a guy who is a devoted liberal progressive (AKA Marxist) looking to stifle your freedom of speech. Mark Lloyd, a disciple of Saul Alinsky and fan of Hugo Chavez, wants to destroy talk radio and says free speech is a distraction. Mark Lloyd also says Venezuela is an example we should follow and he feels that the government should control all media outlets. His statements also want to tax media outlets equal to that of their total operating cost to help subsidize public media. If he is willing to do that with media outlets, what is he willing to do with censoring the Internet?
Government's first duty is to protect the people, not run their lives. It is not to tax you for your freedoms, it is not to regulate the things you do in life, and it is not the goal for government to interfere with every aspect of the country. If the government takes control of the Internet the way they are planning in this Network Neutrality bill, I promise you that the quality and value of the Internet will degrade and it will be the start of the end of the Internet as we know it.
Throughout the bill there are statements like, “unfettered access,” “lawful usage, devices and services,” “severely harmed,” “economic interest,” and “prevention of unwanted content.” The problem with this is that they never state who will be monitoring this or setting the standards on the content, bandwidth, and what they consider to be lawful.
http://thomas.loc.gov/cgi-bin/query/D?c111:1:./temp/~c111u6UoXZ::
Ronald Reagan once famously said, "Government's view of the economy could be summed up in a few short phrases: If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it."
Let’s keep the Internet free and open as it was designed. And let’s also keep Net Neutrality exactly how it was designed; to protect the freedoms of the Internet.
Read more!
Well, for those of you who haven’t looked into Net Neutrality, it’s time you heard about it. Let’s look at the upside of this debate. The original idea was great: ensure that all traffic on the Internet is treated equally by all Internet Service Providers. Net Neutrality is supposed to mean no discrimination; it tries to prevent Internet Service Providers from blocking, speeding up, or slowing down Web content based on its source, ownership, or destination. That sounds good, right? I like this original idea, but as with many ideas that get turned into legislation, the point gets missed, and in this case it is being completely smothered.
Now that the government has its hands on it, Net Neutrality will go the way of every other bill that has gone through Congress, picking up pork from every congressman along the way. The bill’s line items state that every citizen in the US should be given free broadband Internet access. Its proponents argue that where there is only one provider of Internet access, the government should intervene to ensure that provider can’t block content or stop end users from reaching the sites they want to view. They believe that if the government controls this, ISPs won’t be able to implement a tiered Internet access model.
Others say that if Net Neutrality isn’t passed, companies will start charging more to reach certain content on the Internet, or that Internet Service Providers (ISPs) will sign agreements with certain companies to give special access to those companies’ websites. For instance, if I tried to go to Google but my ISP had signed a contract with Yahoo or Microsoft, I wouldn’t be able to reach Google, or it would be so slow that I would have to use something else to search the Internet. People think that without Net Neutrality, ISPs could tax content providers for using the backbone of the Internet to move data, discriminate in favor of certain traffic, or block access to certain sites altogether. Again, let me stress how much sense this original idea makes and how much I agree with it at this point.
With the number of service providers out there, the scenarios mentioned earlier (blocking content and discriminating against data) would never happen, because if they did, people would just switch to another provider. Look at it this way: the Internet, in its current setup, has operated for over 20 years without regulation or government interference, and net neutrality, as a norm, has held for that entire history without any law requiring it. Additionally, since its inception and the start of its mainstream use, the government has wanted to tax Internet usage. Back at the beginning of the Internet, a group of congressmen banded together and said “No” to taxing Internet usage. But now the government is trying to grab power from all over, and Congress feels that it should control, monitor, and secure the Internet as well.
And it doesn’t stop there; if Net Neutrality goes through, the government will not only make a power grab over the Internet but will include wireless phone companies too, since they are also part of digital communications. The FCC would basically be able to moderate and know everything being transferred over the Internet or wireless phones. Security and privacy would be thrown out the window in this scenario. The Internet has been the source of the highest levels of freedom the world has ever known. There have never been any restrictions on speech, religion, or information on the Internet (some sites have their own policies, but you can always find the information out there somewhere).
Aside from the Internet being a place for freedom, think about what will happen when the government steps in and tries to regulate and monitor it. Think about anything the government tries to run: it gets clouded in paperwork, and the service degrades to a level no one wants. The phone companies are a prime example; the government stepped in at the state and federal level, and prices skyrocketed. But the market innovated, coming up with VoIP and free phone services that utilize the Internet. The free market is responsible for the vast, open set of connected networks that make up the Internet, and it will do nothing but hurt companies that try to impede this open communication of all types of content.
So now for some added truth on this: Net Neutrality is essentially going to cause these very things to happen, just from a different angle. Now that H.R. 3458 has been introduced and federal stimulus money has become part of the deal, the government is going to pork the bill up so much you won’t even recognize it right before it is voted on.
Let’s put it in perspective: over the last 3 or 4 years, the telecommunications industry has pumped over 100 billion dollars into the data backbone, which has delivered blazing fast speeds, a lower price per kilobyte of bandwidth, and a higher level of competition. Now think about this: the government stimulus package put 7.2 billion dollars into this Net Neutrality effort, and according to diversity czar Mark Lloyd, that is “just a down payment.” His opinion is that managing the media, control of it by the state, can help level the playing field for those not fortunate enough to get all the news. Now why would you want to pay for Net Neutrality when you already pay for the Internet? At the mere thought of the government stepping in, the price has already gone up in the form of taxes.
Almost everyone pays for the Internet in some way, whether through a cell phone bill, a cable bill, a landline phone bill, or (in some cases) a VoIP phone bill. All this money pays to keep the Internet up and running. When you purchase Internet access, you expect a certain level of quality and service from the provider you are paying, be it AT&T, Sprint, Verizon, Time Warner, or Comcast, to name a few. Basically, your monthly bill for these services goes to keeping the Internet up and running (I say this because basically everything is transmitted digitally).
Mark Lloyd, Chief Diversity Officer of the Federal Communications Commission, said, “It should be clear by now that my focus here is not freedom of speech or the press. This freedom is all too often an exaggeration. At the very least blind references to freedom of speech or press serves as a distraction from the critical examination of other communication policies. The purpose of free speech is warped to protect global corporations and block rules [by the government], fines, and regulations that would promote democratic governance.”
This statement comes from a guy who is a devoted liberal progressive (AKA Marxist) looking to stifle your freedom of speech. Mark Lloyd, a disciple of Saul Alinsky and a fan of Hugo Chavez, wants to destroy talk radio and says free speech is a distraction. He also says Venezuela is an example we should follow and feels that the government should control all media outlets. He has even proposed taxing media outlets an amount equal to their total operating costs to help subsidize public media. If he is willing to do that to media outlets, what is he willing to do when it comes to censoring the Internet?
Government's first duty is to protect the people, not run their lives. It is not to tax you for your freedoms, it is not to regulate the things you do in life, and it is not government's place to interfere with every aspect of the country. If the government takes control of the Internet the way it is planning in this Network Neutrality bill, I promise you that the quality and value of the Internet will degrade, and it will be the beginning of the end of the Internet as we know it.
Throughout the bill there are phrases like “unfettered access,” “lawful usage, devices and services,” “severely harmed,” “economic interest,” and “prevention of unwanted content.” The problem is that the bill never states who will monitor this or set the standards for content and bandwidth, or what they consider to be lawful.
http://thomas.loc.gov/cgi-bin/query/D?c111:1:./temp/~c111u6UoXZ::
Ronald Reagan once famously said, "Government's view of the economy could be summed up in a few short phrases: If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it."
Let’s keep the Internet free and open as it was designed. And let’s also keep Net Neutrality exactly how it was originally conceived: to protect the freedoms of the Internet.
Labels:
Net Neutrality
Friday, October 9, 2009
The Louisville Metro InfoSec Capture the Flag
Just returned last night from the Louisville Metro Information Security Conference in Kentucky. I typically steer clear of the Capture the Flag events, as I'm usually networking with people or presenting. This year I decided (with a little nudge from a couple of friends) to participate in the Louisville InfoSec capture the flag. This year's CTF was designed and put on by Irongeek (http://www.irongeek.com), and you're always in for a blast with him.
Our team came in first place, and everyone on the team did an amazing job contributing. I have to give a shout out to Irongeek for his time and dedication to the CTF. It was truly a great experience. The ideas, twists, vulnerability linking, and creativity of the overall CTF made it a unique experience in itself. The hack with the rotating web cam to see a password written next to the computer is just a taste of the creativity Irongeek put into the CTF.
Overall, it was truly a great time and a great experience at the Louisville InfoSec Conference; I would highly recommend it next year!
Quick outline of how we got first place:
Machine 1:
MS08-067 with a Meterpreter payload, dumped the hashes, and used rainbow tables to crack the passwords. The fast rainbow tables didn't work, so we ended up using CUDA cracking power to get the password.
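As a rough illustration of why dumped hashes fall to precomputed lookups, here's a minimal Python sketch using SHA-1 as a stand-in (LM/NTLM use different, weaker algorithms, and real rainbow tables add a time/memory trade-off on top of this; the wordlist and password here are invented):

```python
import hashlib

# Hypothetical wordlist; real cracking runs use huge lists or rainbow tables
wordlist = ['password', 'letmein', 'Summer2009!']

# Precompute hash -> plaintext, the lookup idea behind rainbow tables
# (minus the chain-based time/memory trade-off real tables use)
table = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}

# Pretend this value came out of a hashdump
dumped = hashlib.sha1(b'Summer2009!').hexdigest()
print(table.get(dumped))  # Summer2009!
```

The precompute-once, look-up-many-times structure is why GPU (CUDA) cracking and rainbow tables beat brute-forcing each hash from scratch.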
Machine 2: Directory traversal to /etc/passwd revealed a user account from the Windows box with the same password on the *nix box. We pulled off an encrypted TrueCrypt volume. We also found a robots.txt with a disallow entry pointing to a config file that contained the MySQL database username and password. We connected to the MySQL machine and extracted a table that held the TrueCrypt password. Inside the volume was a password-protected 7zip file.
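The robots.txt trick is worth dwelling on: Disallow lines are meant to keep crawlers out, but they hand an attacker a list of paths the site owner wanted hidden. A quick Python sketch (the file body below is invented for illustration; a real one would be fetched from the target):

```python
# Invented robots.txt body standing in for the target's real file
robots_txt = """User-agent: *
Disallow: /admin/
Disallow: /includes/db_config.inc
"""

# Every Disallow entry is a path worth visiting by hand
leaked_paths = [line.split(':', 1)[1].strip()
                for line in robots_txt.splitlines()
                if line.lower().startswith('disallow')]
print(leaked_paths)  # ['/admin/', '/includes/db_config.inc']
```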
Machine 3: A web-based camera interface, with no default password, no published vulnerability, and no apparent easy way in. We performed ARP cache poisoning and obtained the credentials view/view, which were passed in the clear. This got us access to a web cam that could rotate. Rotating it from left to right revealed a piece of paper with a handwritten password that ultimately gave us access to the 7zip file.
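For the curious, the forged ARP reply at the heart of ARP cache poisoning is a tiny packet. Here's a minimal Python sketch of building one with struct (all MAC and IP values are made up; actually putting it on the wire needs a raw socket and root, which is omitted):

```python
import struct

def build_arp_reply(attacker_mac, victim_mac, spoofed_ip, victim_ip):
    # Ethernet header: dst MAC, src MAC, EtherType 0x0806 (ARP)
    eth = victim_mac + attacker_mac + b'\x08\x06'
    # ARP payload saying "spoofed_ip is at attacker_mac", sent to the victim
    arp = struct.pack('!HHBBH6s4s6s4s',
                      1,       # hardware type: Ethernet
                      0x0800,  # protocol type: IPv4
                      6, 4,    # MAC / IP address lengths
                      2,       # opcode 2 = ARP reply
                      attacker_mac, spoofed_ip,
                      victim_mac, victim_ip)
    return eth + arp

# Made-up addresses for illustration only
frame = build_arp_reply(bytes.fromhex('aabbccddeeff'),
                        bytes.fromhex('112233445566'),
                        bytes([192, 168, 1, 20]),
                        bytes([192, 168, 1, 30]))
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

Once the victim's ARP cache holds the forged mapping, its traffic for the spoofed IP flows through the attacker, which is how cleartext credentials like view/view get captured.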
Thanks again, guys, it was a blast. Nice job Pure-Hate, Archangel, and Titan. Bang-up job, team.
Irongeek's post about the CTF: http://www.irongeek.com/i.php?page=videos/louisville-infosec-ctf-2009
Thursday, October 8, 2009
What’s the Value of Your Mobile Phone’s Address Book?
Being a consultant, I travel a good amount around the United States for various engagements. In airports, hotels, and other public places that offer wireless communications, I often find myself amazed at the information out in the open.
Approximately 4 months ago, I sat at the airport gate awaiting my incoming flight, and a woman sitting next to me (2 seats down, or about 6 feet away) was talking on her cell phone about her travel reservations. Whomever she was speaking with apparently had Internet access, as she gave them instructions on how to open Internet Explorer, navigate to www..com, and log in with a username of and password of . She proceeded to instruct this person on how to search for a hotel and book a reservation, also giving her credit card number, expiration date, and CV2 code. After she hung up, I thought to myself that it would be very amusing to go thank her for her information and for paying for my flight/hotel. Of course, I did not, but I thought to myself how clueless this lady must be.
From airports to hotels, it is no surprise that there are always open file shares, shared iTunes libraries, and similar things readily available to people via Bluetooth or wireless communications. This is nothing new; however, the sensitivity of the information people purposely or inadvertently share varies. While doing a penetration test for a large health care organization, a coworker and I gained access to a Web site, indexed in Google, that provided us with a complete employee directory listing. The client was alarmed at what we found, as we could couple this gold mine of internal information with some XSS flaws to perform a large-scale phishing attack.
I pose this question: How important to your organization is your mobile phone’s phonebook?
More specifically, the Apple iPhone's use in the corporate world has been a topic of debate for some time. With the number of applications in the App Store, how well do you think Apple is doing at screening them all for rogue code? Just as an organization fears a time bomb planted by an ex-developer in one of its code bases, should iPhone users and enterprises be worried about the information on their iPhones? The answer is yes.
Code showing how to read not only your iPhone's number but also your entire address book has been published online for some time now. Additionally, the article claims that applications can obtain personal information from most of the iPhone's file system despite Apple having a developer sandbox in place. We've already seen the $999.99 “I Am Rich” app that tricked 8 people into paying its $1,000 price tag, so what else might exist in the thousands of other applications available? Do your C-level executives use an iPhone? Has your address book or theirs already been compromised? You may never know…
Labels:
addressbook,
iphone,
mobile device,
mobile phone,
Security
Tuesday, October 6, 2009
How a Simple Python Fuzzer Brought Down SMBv2 in 2 Seconds
If you haven't had a chance to check out the post by Laurent Gaffie (linked at the end of this blog), it's a really great read on how the latest SMBv2 zero-day got discovered.
Laurent used a simplistic packet-reconstruction fuzzer written in Python to ultimately discover what is now a remotely exploitable zero-day in SMBv2. Let's dissect the code a little bit:
from socket import *
from time import sleep
from random import choice
host = "IP_ADDR", 445
#Negotiate Protocol Request
packet = [chr(int(a, 16)) for a in """
00 00 00 90
ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00
00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00
00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50
52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d
41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66
6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e
31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c
41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20
30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00
""".split()]
while True:
    #/Core#
    what = packet[:]                    # fresh copy of the valid packet
    where = choice(range(len(packet)))  # pick a random byte offset
    which = chr(choice(range(256)))     # pick a random replacement byte
    what[where] = which
    #/Core#
    # send the mutated packet @host
    sock = socket()
    sock.connect(host)
    sock.send(''.join(what))
    sleep(0.1)  # don't flood it
    print 'fuzzing param %s' % (which.encode("hex"))
    print 'complete packet %s' % (''.join(what).encode("hex"))
    # When SMB or RPC dies (over TCP), the socket times out on the last
    # packet, so printing these things is more than useful
    sock.close()
Look at the #Negotiate Protocol Request portion: it simply rebuilds a dump of a valid SMB request, easily obtained through Wireshark or another sniffer. The rest of the fuzzer replaces a single randomly chosen byte with a random value on each iteration, like most dumb fuzzers do. The blog outlines how something like this could escape Microsoft's auditing and how easy it was for Laurent to find this bug.
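The mutation core is small enough to restate on its own. Here's a hedged Python 3 equivalent of the #/Core# section (operating on bytes instead of a list of one-character strings; the sample bytes are just the start of the negotiate request shown above):

```python
import random

def mutate_one_byte(packet: bytes) -> bytes:
    """Return a copy of packet with one randomly chosen byte replaced by a
    random value -- the same single-byte substitution the fuzzer loops on."""
    where = random.randrange(len(packet))
    which = random.randrange(256)
    return packet[:where] + bytes([which]) + packet[where + 1:]

original = bytes.fromhex('ff534d4272')  # first bytes of the SMB header above
mutated = mutate_one_byte(original)
print(len(mutated) == len(original))  # True: only content changes, never length
```

Keeping the length fixed matters here, because the NetBIOS length prefix at the start of the packet stays valid while every other field gets stressed one byte at a time.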
Also, if you haven't read the post on how this bug became exploitable using the trampoline method for reliable exploitation, take a read here: http://blog.metasploit.com/2009/10/smb2-351-packets-from-trampoline.html, written by Piotr Bania.
Using three stages and some division to calculate an INC ESI, POP ESI, RET (0x46, 0x5E, 0xC3) trampoline into our shellcode, the SMBv2 exploit is now a living, breathing remote exploit.
For more information and an explanation of how the exploit was discovered check out: http://g-laurent.blogspot.com/2009/10/more-explication-on-cve-2009-3103.html
Sunday, October 4, 2009
Patrick Swayze - Roadhouse Ramblings
I have always liked the movie Roadhouse. Patrick Swayze was an amazing actor (and had more range than he gets credit for; remember To Wong Foo?). Throw in Sam Elliott and I don’t see how you can go wrong. Before you decide that I have taken up blogging about cinema, let me say that in light of Swayze’s recent passing, I think we can learn a few things about information security from Roadhouse. We can also learn from the way hackers have exploited Swayze’s death to spread viruses.
In Roadhouse, Swayze is called in to clean up a bar, and thus a town, ravaged by criminals. These criminals steal from honest people and legitimate businesses to enrich themselves. In information security, we come in and clean up servers and networks ravaged by, well, criminals stealing from honest people and legitimate businesses. Remember the bartender, a distant relative of the main antagonist, stealing money from the register? He represents the threat organizations face from their own employees. Swayze threw him out; he cleaned up the bar and hardened it against attackers. While I don’t claim to look as cool as Swayze while neutralizing threats, we also spend our days identifying and removing threats. More about that in later blogs.
What I want to discuss here is how attackers use news events such as the death of Swayze to spread malicious software. E-mail claiming to contain photos of or links to stories about celebrities will often link to sites that install malicious software. The human element is regularly the weakest part of any security program. Rather than attack your hardened systems, attackers will work to gain the confidence of those who already have access to your systems: your employees. To be secure, it is important to have a culture of security. Every employee must understand the importance of their role in protecting systems and information. And every employee must be educated about the threats and techniques used by attackers. All the locks in the world won’t keep your information safe if your employees open the door every time a sympathetic character comes knocking. Sure, anti-virus and e-mail filtering can help, but employees need to know how to recognize suspicious e-mail, and they need to be educated never to open it.
We have a lot of success exploiting the human element. People have a natural inclination to be helpful, and curiosity is a big part of human nature. There are software tools and processes that can help combat social engineering, but until all your employees understand the risks, it is difficult to be secure. That is another area where we help our clients. Just like Dalton (Swayze) showed the other bouncers at the bar how to handle problems, we can show you how to educate employees and keep your environment safe.
I don’t want to take the analogy too far, to the point of ridiculousness (if it isn’t too late already), but we have found that sometimes the best way to articulate threats to information security is with analogies from the brick-and-mortar world. And in the electronic world, much like a roadhouse, there are all types of people, with all types of intentions. The first step in securing your information, or roadhouse, is an assessment. Then we can get to work on cleaning it up.
Read more!
In Roadhouse, Swayze is called in to clean up a bar, and thus a town, ravaged by criminals. These criminals steal from honest people and legitimate businesses to enrich themselves. In information security, we come in and clean up servers and networks ravaged by, well, criminals stealing from honest people and legitimate businesses. Remember the bartender, the distant relative of the main antagonist in the movie, stealing money from the register? He can represent the threat organizations face from their own employees. Swayze threw him out. Swayze cleaned up the bar, and hardened it against attackers. While I don’t claim to look as cool as Swayze while neutralizing threats, we also spend our days identifying and removing threats. More about that in later blogs.
What I want to discuss here is how attackers use news events such as the death of Swayze to spread malicious software. E-mail claiming to contain photos of or links to stories about celebrities will often link to sites that install malicious software. The human element is regularly the weakest part of any security program. Rather than attack your hardened systems, attackers will work to gain the confidence of those who already have access to your systems: your employees. To be secure, it is important to have a culture of security. Every employee must understand the importance of their role in protecting systems and information, and every employee must be educated about the threats and techniques used by attackers. All the locks in the world won’t help keep your information safe if your employees open the door every time a sympathetic character comes knocking. Sure, anti-virus and e-mail filtering can help, but employees need to know how to recognize suspicious e-mail, and they need to be educated to never open it.
We have a lot of success exploiting the human element. People have a natural inclination to be helpful, and curiosity is a big part of human nature. There are software tools and processes that can help combat social engineering, but until all your employees understand the risks, it is difficult to be secure. That is another area where we help our clients. Just like Dalton (Swayze) showed the other bouncers at the bar how to handle problems, we can show you how to educate employees and keep your environment safe.
I don’t want to take the analogy too far, to the point of ridiculousness (if it isn’t too late already), but we have found that sometimes the best way to articulate threats to information security is to use analogies based on the brick-and-mortar world. And in the electronic world, much like a roadhouse, there are all types of people, with all types of intentions. The first step in securing your information, or roadhouse, is an assessment. Then, we can get to work on cleaning it up.
Labels: Celebrity security
Monday, September 28, 2009
SMBv2 Exploit now in Metasploit as well as Screenshots!
As the stable 3.3 release comes near, H.D. Moore and the crew from the Metasploit team have released a couple of great new features in the 3.3 dev version. Most notable last night was the commit of an exploit for the latest SMBv2 remote code execution vulnerability, which specifically targets Windows Vista and Windows Server 2008 and is still unpatched!
The second awesome-looking feature is the capability to take screenshots of an already compromised system through Metasploit. After delivering the Meterpreter payload, you simply migrate to explorer.exe and type screenshot /yourdir/screenshot.bmp, after which the victim's screen is captured. Just another reason why the Meterpreter console is one of the best post-exploitation Swiss Army knives out there.
Stay tuned for more Metasploit additions!
Monday, September 21, 2009
Using SWOT to Evaluate Your Security Posture
Because the value of security is based on what is prevented, or doesn’t happen, it can be difficult to quantify. One simple way to evaluate your security needs is with a SWOT analysis modified for security. Almost all of us are familiar with the SWOT analysis: it is Business 101. For those who are not, it is an analysis of Strengths, Weaknesses, Opportunities, and Threats. When you are trying to get buy-in, and the resulting budget, for security initiatives, the SWOT analysis lets you speak in a language that executives understand.
The exercise is most effective when combined with the systems approach. Without getting into the details, basically you need to clearly define the security objectives before completing the SWOT analysis.
Objectives can be anything from supporting the organization’s financial goals to protecting client information. If you can put dollar amounts to the categories, you are a step ahead.
Strengths
It is helpful to start with the good. What are the organization's security strengths? For smaller organizations, a strength may be the very fact that they are small, and thus fairly easy to secure. Maybe the organization already has a “security culture” with security embedded in daily operations. Often, an organization’s strengths present an easy opportunity to increase security: by building on or positively modifying existing controls, security can usually be increased.
Weaknesses
Weaknesses can be broad or specific. A general lack of a security program or culture is a weakness, but it is not defined enough to guide action. Look for specific areas. For example, not having a patch management program in place is a definite weakness, but organizations lacking a robust patch management process have a great opportunity to increase security. We see organizations fall prey to vulnerabilities that could not have been exploited if a proper patch management program had been in place. Implementing one may involve spending some money, but it is a small price compared to remediating the damage caused by an exploited vulnerability.
Many weaknesses are highly technical in nature. Lack of logging or change management are weaknesses that will likely take significant effort to fix. Articulating the weakness presented by not having systems like these in place, and the strengths gained from implementing them, is important for getting management to approve the effort. Once again, putting dollar amounts on the cost of a successful exploitation is paramount.
Another weakness that is especially common in today’s economy is a lack of funds. It can be tough to get buy-in for initiatives without ample funding, but for organizations with cash flow problems, the expenses resulting from a security breach may be enough to put them out of business for good. That is a point that, if properly articulated, should sway even the most security-averse executive.
Opportunities
Opportunities are generally fairly distinct. Does the organization have funds for security allocated, but not spent? Are logging systems in place, but not used? Do robust security policies exist, but have never been distributed? Think of driving around without buckling the seatbelt: simply buckling it costs nothing, doesn’t take much effort, and instantly and vastly improves safety. Opportunities are low-hanging fruit that you can’t afford not to take advantage of. The best part is that taking advantage of most opportunities doesn’t require management approval or any significant spending.
Threats
Many threats, especially from a security perspective, are fairly easy to delineate. If the organization is subject to regulations such as PCI DSS, HIPAA, or SOX, the cost of non-compliance can be astronomical. The fines for non-compliance are stiff, and the cost of reputational damage often far outweighs them.
How do I get started?
A good template in MS Word format for a SWOT analysis can be found here: http://www.zeltser.com/sans/isc/swot-matrix-template.doc
The best place to start is with an assessment. Many times organizations have trouble assessing their own security because they are too close to it, or lack the time or expertise. An experienced outside assessor will have seen countless situations and levels of security, and will be able to help with all four areas of the SWOT analysis.
If an assessment is not possible, a brainstorming session can be good for getting most of the fields started. The more people you can involve in the brainstorming sessions, the better, and be sure to include front-line employees if possible. They are often aware of the issues that exist in an organization, but lack the proper channels of communication to reach executives.
This is definitely an exercise worth your time and effort. The sooner you get started, the sooner you can approach executive management and get started on increasing security.
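As a rough sketch of what the finished matrix might look like in a form you can actually total up, here is a minimal Python structure. All entries and dollar figures below are invented purely for illustration; your own assessment or brainstorming session supplies the real ones.

```python
# Each entry pairs a finding with an optional dollar figure, as the
# post suggests. Every item and amount below is a made-up example.
swot = {
    "strengths": [("small, easily secured environment", None)],
    "weaknesses": [("no patch management program", 250000)],
    "opportunities": [("allocated but unspent security budget", 50000)],
    "threats": [("PCI DSS non-compliance fines", 100000)],
}

def total_exposure(matrix):
    # Sum the dollar amounts attached to weaknesses and threats,
    # skipping entries that have no figure yet.
    return sum(amount or 0
               for category in ("weaknesses", "threats")
               for _, amount in matrix[category])
```

Attaching even rough numbers this way makes the executive conversation concrete: here the weaknesses and threats total $350,000 of potential exposure, which is the kind of figure that gets a budget approved.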
Labels: CISO, Information Security, SWOT Business
Thursday, September 17, 2009
Information Security's Silver Bullet: There Isn't One
Back on June 27, 2008, ComputerWorld published an article, "Web firewalls trumping other options as PCI deadline nears," just before the well-known June 30, 2008 PCI 6.6 deadline. In February of 2008, the PCI Council published clarification on PCI DSS section 6.6 and its intent. Over a year later, I frequently encounter Web applications that are far from compliant, and this is no surprise. What is (sort of) surprising is the still-ubiquitous false sense of security people have after completing their self-assessment questionnaire (SAQ) and dropping in a Web application firewall (WAF), thinking they are secure.
Year after year I interface with individuals who think there is a single silver bullet to solve their information security concerns. Have they been misled somewhere in the past? Are they simply uninformed about security and the attacks out there? In a past Web application review, my user ID was passed through the URL (example: http://www.website.com/index.php?uid=swhite) and I trivially changed it in my browser to "admin". This in turn allowed me to view over 11,500 files containing sensitive customer information. A Web application firewall would more than likely not have caught this request, since it looks perfectly valid, and the information I obtained could easily have enabled identity theft.
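The flaw above is a broken access-control check, exactly the kind of "perfectly valid request" a WAF waves through. A minimal sketch of the vulnerable pattern and its fix; the function names, data, and structure here are hypothetical illustrations, not the actual application I reviewed:

```python
# Hypothetical file store keyed by user ID, standing in for the
# 11,500 customer files in the real review.
FILES = {
    "swhite": ["my_statement.pdf"],
    "admin": ["customer_0001.pdf", "customer_0002.pdf"],
}

def list_files_vulnerable(request_params):
    # Vulnerable: whatever uid appears in the query string is trusted,
    # so ?uid=admin returns admin's files to anyone.
    uid = request_params.get("uid")
    return FILES.get(uid, [])

def list_files_fixed(request_params, session_user):
    # Fixed: the uid parameter must match the authenticated session
    # user; authorization happens server-side, not in the URL.
    uid = request_params.get("uid")
    if uid != session_user:
        raise PermissionError("uid does not match the logged-in user")
    return FILES.get(uid, [])
```

The point is that the fix is an authorization check in the application itself, something no perimeter device can supply for you.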
ComputerWorld's article makes me nod my head, but at the same time question the expertise of whoever wrote it. For example, it mentions that Web application firewalls can protect against things such as SQL injection, buffer overflows, and cross-site scripting. The OWASP Top 10 list (2007 and 2004) doesn't mention buffer overflows, and PCI DSS section 6.6 specifically calls out Web applications. Buffer overflows in Web applications themselves are very unlikely to be exploited beyond a denial of service; such attacks would more likely target the web server or another running service. As one who hosts Web applications, I would worry about injection flaws and XSS before buffer overflows. I'm not sure why the author included buffer overflows, other than that it is a buzzword that makes some people think "security".
As a security professional, I commonly have to describe very technical issues in "normal people" terms. In information security, a field with very technical aspects, non-technical individuals should understand that there is no single silver bullet for their security issues. Throwing a WAF in front of a Web application that handles cardholder data isn't the best (though perhaps the least expensive) approach to complying with PCI DSS 6.6, and seemingly simple, one-time solutions that write off security concerns are not in accordance with industry best practices.
Defense in depth should be employed so that your resources are protected when preventative measures fail, ensuring that you are covered from zero-day to patch day, or until controls are operating properly again. If at the end of the day you learn one thing, let it be that there is no single solution to information security; if there were, everyone would be using it and it would be spreading like wildfire.
Quick Trac Backup and Upgrade Script
Trac is an open source, web-based project management and bug-tracking tool. The program is inspired by CVSTrac, and was originally named svntrac due to its ability to interface with Subversion. It is developed and maintained by Edgewall Software.
When using Trac, you will often have to upgrade between versions, create backups, and everything else. I just wrote a quick little script to do it all for you. Enjoy.
#!/usr/bin/python
import subprocess
from time import time
# TRAC ENVIRONMENT LOCATION HERE
trac = "/var/local/trac/trac.website.com/"
# BACKUP DIR HERE
backup = "/home/yourname/tracbackups/"
# timestamp used to name this backup's directory
timesave = int(time())
print "Upgrading Trac..."
# easy_install Trac just in case it is missing (won't erase anything)
subprocess.Popen("sudo easy_install Trac", shell=True).wait()
# stop Apache
subprocess.Popen("sudo /etc/init.d/apache2 stop", shell=True).wait()
# back up the Trac environment with hotcopy
subprocess.Popen("trac-admin %s hotcopy %s%s" % (trac, backup, timesave), shell=True).wait()
# upgrade Trac
subprocess.Popen("sudo easy_install --upgrade Trac", shell=True).wait()
# start Apache again
subprocess.Popen("sudo /etc/init.d/apache2 start", shell=True).wait()
print "Finished upgrading Trac..."
Tuesday, September 15, 2009
Data Analytics to Detect Fraud
Some say that detecting fraud is like finding a needle in a haystack. Often this is true, but more often you don’t even know what the needle looks like or which haystack to look in. To overcome these obstacles, data mining techniques can be used. One very powerful technique is data visualization: using it, you can “see” the anomalies much more easily than by just staring at a list of numbers.
One of the earliest, but still powerful, analytics like this is “Benford’s Law”, also called “Digital Analysis”. The basic premise of this law is that the leading digits in any naturally occurring set of data will appear with a specific, non-uniform frequency. Anything outside that frequency indicates a non-compliant anomaly. For example, if an employee has an approval limit of $5,000, you might see a spike in the first two digits “48” or “49” beyond what Benford’s Law says it should be.
There are some great tools out there that let you apply this to your data, including MS Excel.
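Benford's Law predicts the probability of leading digit d as log10(1 + 1/d), so "1" should lead about 30.1% of the time and "9" under 5%. A minimal sketch of the comparison in Python (function names and the tolerance threshold are mine, not from any particular tool):

```python
import math
from collections import Counter

def benford_expected(digit):
    # Benford's Law: P(d) = log10(1 + 1/d) for leading digits 1-9
    return math.log10(1 + 1.0 / digit)

def leading_digit_frequencies(amounts):
    # Observed frequency of each leading digit 1-9 in a list of amounts
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    total = float(len(digits))
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

def suspicious_digits(amounts, tolerance=0.05):
    # Flag digits whose observed frequency exceeds Benford's expectation
    # by more than the tolerance, e.g. a spike at "4" from invoices
    # padded just under a $5,000 approval limit.
    observed = leading_digit_frequencies(amounts)
    return [d for d in range(1, 10)
            if observed[d] - benford_expected(d) > tolerance]
```

On a batch of amounts clustered just under $5,000, the digit 4 shows up far more often than its expected 9.7%, which is exactly the kind of spike worth pulling invoices for.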
Sunday, August 30, 2009
WPA Flawed, Not Broken.
Reading the headlines the past few days on Slashdot and other sites, you would think the world was ending. "WPA IS BROKEN JUST LIKE WEP" is typical of the headlines I've seen.
You can also see a Slashdot article (http://hardware.slashdot.org/story/09/08/27/180249/WPA-Encryption-Cracked-In-60-Seconds?from=rss) titled "WPA Encryption Cracked in Under 60 Seconds!".
From these articles you would believe that WPA is SERIOUSLY flawed, allowing an attacker to CRACK the encryption and ultimately read your sensitive data, recover your TKIP key, and own everything on your wireless network. Unfortunately, this is one whopper of a media-hype story. A great article that REALLY explains the vulnerability can be found here: http://wifinetnews.com/.
Ultimately, no keys are cracked and no encryption is "broken"; you are simply a little less safe with TKIP than you used to be. The vulnerability was actually introduced last year and was just improved upon by the Japanese researchers.
There are some serious implications with the improved attack, though. Most notably, if you are fairly close to the client, have a directional antenna, and can intercept some traffic, you can potentially poison the victim's ARP cache and have all traffic go through the attacker first. This has a wide spectrum of exposures, from sniffing traffic to running a fake DNS server and serving up bad pages.
A couple of other things worth mentioning: this only affects TKIP, because of its backwards compatibility with WEP's IVs; it does not affect AES. The TKIP key is NOT recovered; the attack only recovers the MIC key used for message integrity. And since the attack works only on short packets with known contents, there are only a select few avenues of attack (e.g., ARP).
Ultimately, while this is still a vulnerability that can have some serious repercussions, it is not the doomsday that everyone seems to be portraying.
Monday, August 17, 2009
Passing the PCI Buck
If you've been following any of the recent PCI (Payment Card Industry) breaches, you'll see two trends coming from the breached organizations. On one hand, they'll say that the PCI standard is flawed since they were compliant and still got hacked. On the other hand, they'll say the problem was that their QSA (Qualified Security Assessor) messed up. Although the standard could be better and a QSA could always be more thorough, it's time for organizations to stop passing the buck and admit that they screwed up.
Now I understand that there are certainly differences in quality among QSAs. I've seen everything from rubber-stamped reports that make you wonder if the QSA can spell 'PCI' to a tinfoil-hat QSA who uses the standard to create new, crazy requirements. But in the end, I sometimes wonder if organizations truly understand what makes a good QSA and what the QSA's role is. Ultimately, the QSA is supposed to be an auditor (yes, I know technically we are 'assessors'). That means the QSA is trying to make sure the processes the organization established are working properly. To do that, the QSA typically needs to sample the controls and do some digging around. This should allow the QSA to determine that a control is generally operating correctly. However, the QSA is NOT the master of the organization's security destiny. Heck, the audit is only once a year and cannot turn over every stone. The organization owns and is responsible for the processes, the controls, the monitoring, and the reacting. If they break down, that is the organization's problem.
Now, as far as the standard goes, I hear a lot of people (generally non-QSAs) stating that the PCI DSS is flawed. First of all, let's understand that the standard was written to push organizations to a certain level of compliance, not security, since security was severely lacking in most of them. Just to be clear, compliance is a hurdle, not a ceiling. What I don't understand is: what other framework or compliance standard is better than PCI? ISO 27001, for example, is still too general and based on business decisions. The PCI DSS is still the most thorough, prescriptive, and layered standard out there, not only for the controls within it but also for the testing. For example, testing applies both externally and internally and escalates from scans, to pentests, and even to web app reviews. For all the naysayers out there, I'd love to see a better version from them. Remember, PCI was the first, and is still the only one out there, that has pushed hard for web application security such as the OWASP Top 10. Now, for those who think the standard isn't detailed enough, give me a break. It isn't going to tell you exactly how to do X with product Y in a Z architecture. There are just too many variables.
So let's look at the latest details on the big breaches (http://www.wired.com/threatlevel/2009/08/tjx-hacker-charged-with-heartland/). We are finally starting to get enough details to understand what happened. Let's go with the assumption that the downfall of Heartland/Hannaford was SQL injection and see where the problem hypothetically lies. The standard clearly states that applications need to be properly coded in requirement 6.5.2. Keep in mind that a QSA will not, and may not even be able to, review all your code; code review is the organization's responsibility under 6.3.7. This also needs to be further tested through a web application review (a possible third-party problem) or prevented through a web application firewall (the organization's problem) under 6.6. Not to mention that this would, or should, be detected during vulnerability scanning (an ASV problem) and penetration testing (probably a third-party problem) under requirements 11.2 and 11.3. So how exactly did the standard fail here, when there are so many layers of controls and testing designed to stop this?
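The "properly coded" part of requirement 6.5.2 largely comes down to things like parameterized queries. A minimal sketch using Python's sqlite3 purely for illustration (the actual stack at the breached processors is not public, and the table and data here are invented):

```python
import sqlite3

# Toy database standing in for a cardholder data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111111111111111')")

def lookup_vulnerable(holder):
    # String concatenation: input like "x' OR '1'='1" rewrites the
    # WHERE clause and returns every row in the table.
    return conn.execute(
        "SELECT pan FROM cards WHERE holder = '%s'" % holder).fetchall()

def lookup_safe(holder):
    # Parameterized query: the driver treats the input strictly as
    # data, so the same payload matches nothing.
    return conn.execute(
        "SELECT pan FROM cards WHERE holder = ?", (holder,)).fetchall()
```

Feeding the classic `x' OR '1'='1` payload to the first function dumps the whole table, while the second returns an empty result set, which is exactly the layered point: the coding fix closes the hole whether or not a WAF is in front of it.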
This brings me back to yet another ranting point. It is unbelievable how many organizations are constantly trying to find the absolute cheapest vendor for any of the stuff we just discussed, including the QSA, the ASV, and other testing. Little effort goes into understanding the quality or qualifications of the vendors being selected. This absolutely leads to 'getting what you pay for' quite a bit. But remember, it is the organization's choice to pick a deficient vendor.
We also need to realize that even after the SQL injection worked, there were still plenty of other controls within PCI that could have stopped the breach. First of all, there is an anti-malware requirement. Yes, I know that only recently was it clearly required for all platforms, not just those 'commonly affected'. But we have seen Linux malware, and it is supported by all commercial anti-virus vendors and even free ones. So did the organization just not do it because it wasn't clearly dictated as a hurdle within the standard? I guess so. There is also a requirement to restrict firewall traffic both inbound AND outbound. So why was the malware allowed to talk to the Internet from payment systems? There is no good business reason there. So the standard certainly isn't as flawed as people want to assert, though it could be improved.
In the end, we can certainly conclude a few things. First, despite the fact that these organizations had been assessed as compliant, they clearly were not. Though it is possible that a QSA missed something, that is still the organization's problem. Also, the standard cannot be perfect, but it is very well built and, if followed, would have stopped these breaches. In security, as in any process, it may take only one flaw for a breach to occur, even if that flaw was in the standard or the QSA's work. That's why security has to be layered and has to be based on quality, not compliance. Regardless, the whole security process is owned by the organization, and it needs to take responsibility when that process breaks. Passing the buck doesn't undo the breach, but it does make for some interesting blamestorming to watch.
Read more!
Now I understand that there are certainly differences in quality among QSAs. I've seen everything from rubber-stamped reports that make you wonder if the QSA can spell 'PCI' to a tinfoil-hat QSA who uses the standard to create new, crazy requirements. But in the end, I sometimes wonder if organizations truly understand what makes a good QSA and what their role is. Ultimately, the QSA is supposed to be an auditor (yes, I know technically we are 'assessors'). That means the QSA is trying to make sure the processes the organization established are working properly. In order to do that, the QSA typically needs to sample the controls and do some digging around. This should allow the QSA to determine that a control is generally operating correctly. However, the QSA is NOT the master of the organization's security destiny. Heck, the audit is only once a year and cannot turn over every stone. The organization owns and is responsible for the processes, the controls, the monitoring, and the reacting. If they break down, that is the organization's problem.
Now as far as the standard goes, I hear a lot of people (generally non-QSAs) stating that the PCI DSS is flawed. First of all, let's just understand that it was written to push compliance, not security, to a certain level, since that level was severely lacking in most organizations. Just to be clear, compliance is a hurdle, not a ceiling. What I don't understand, then, is what other framework or compliance standard is better than PCI. For example, ISO 27001 is still too general and based on business decisions. The PCI DSS is still the most thorough, prescriptive, and layered standard out there. This applies not only to the controls within it but also to the testing. For example, testing applies both externally and internally and escalates from scans, to pentests, and even to web app reviews. For all the naysayers out there, I'd love to see a better version from them. Remember, PCI was the first and is still the only standard out there that has pushed hard for web application security such as the OWASP Top 10. Now for those who think the standard isn't detailed enough, give me a break. It isn't going to tell you exactly how to do X with product Y in a Z architecture. There are just too many variables.
So let's look at the latest details on the big breaches (http://www.wired.com/threatlevel/2009/08/tjx-hacker-charged-with-heartland/ ). Now we are finally starting to get enough details to understand what happened. Let's go with the assumption that the downfall of Heartland/Hannaford was SQL injection and see where the problem hypothetically lies. Well, the standard clearly states that code needs to be properly written per requirement 6.5.2. Keep in mind that a QSA will not, and may not even be able to, review all your code. Oh yeah, that is the organization's requirement in 6.3.7. This also needs to be tested further through a web application review (possibly a third party's problem) or prevented through a web application firewall (the organization's problem) per 6.6. Not to mention that this would, or should, be detected during the vulnerability scanning (the ASV's problem) and penetration testing (probably a third party's problem) from requirements 11.2 and 11.3. So how exactly did the standard fail here when there are so many layers of controls and testing designed to stop this?
This brings me back to yet another ranting point. It is unbelievable how many organizations are constantly trying to find the absolute cheapest vendor for any of the services we just discussed, including the QSA, the ASV, and other testing. Little effort goes into understanding the quality or qualifications of the vendors being selected. This absolutely leads to 'getting what you pay for' quite a bit. But just remember, it is the organization's choice to pick a deficient vendor.
We also need to realize that even if SQL injection had worked, there were still plenty of other controls within PCI that could have stopped the breach. First of all, there is an anti-malware requirement. Yes, I know that only recently was it clearly required for all platforms, not just those 'commonly affected'. But we have seen Linux malware, and it's supported by all commercial anti-virus vendors and even free ones. So did the organization just not do it because it wasn't clearly dictated as a hurdle within the standard? I guess so. There is also a requirement to restrict firewall traffic both inbound AND outbound. So why was the malware allowed to talk to the internet from payment systems? There is no good business reason there. So the standard certainly isn't as flawed as people want to assert, though it could be improved.
In the end, we can certainly conclude a few things. First of all, despite the fact that these organizations had been assessed as compliant, they clearly were not compliant. Though it is possible that a QSA missed something, that is still the organization's problem. Also, the standard cannot be perfect, but it is very well built and, if followed, would have stopped these breaches. In security, like any process, it may only take one flaw for a breach to occur - even if that flaw was in the standard or the QSA. But that's why security has to be layered and has to be based on quality, not compliance. Regardless, the whole security process is owned by the organization, and they need to take responsibility when it breaks. Passing the buck doesn't undo the breach, but it does make for some interesting blamestorming to watch.
Thursday, August 6, 2009
Knowing When You Don't Know Something...Expert Assistance
Many times I come across clients that think they can do everything. As we all know, no one knows everything, and sometimes we must consult subject matter experts (SMEs) in order to get things done. Knowing when you don't know something is one of the most important skills to have. For those of you that enjoy the "motivational posters", as I'll call them, that have black backgrounds usually accompanied by a picture with a witty caption, today's blog is for you.
Similar to Fail Blog[1], today's topic is a major fail, but in the information security world. I ran across a patch for some software to address "potential SQL injection"[2]. From the forum post, the "CEO" links to a page[3] with the fix. Here comes the fail(s). Under the section "What is an SQL injection attack", the first sentence reads: "SQL injection is also know as cross-site scripting". Wow, I bet OWASP would be surprised to know that. Secondly, the fix[4] simply sets maxQueryLength_ to 500 and checks for the string literal "DECLARE%20". This was the "patch" to prevent an advanced hex-encoded SQL injection attack that was in the wild. If none of this makes sense to you, that's fine - do what Mike Randolph should have done and consult expert assistance. Information security isn't something to take a gamble on, and when in doubt, ask for expert help!
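The robust fix here isn't a length cap or a keyword blacklist - it's parameterized queries, which neutralize injection no matter how the payload is encoded. A minimal sketch using Python's built-in sqlite3 (the table and values are made up for illustration, not from the patched software):

```python
import sqlite3

# Illustrative in-memory database; names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The driver binds the value; it is never spliced into the SQL text,
    # so keyword tricks and hex encoding arrive as harmless literals.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))                           # → [('alice',)]
print(find_user("x'; DECLARE @q varchar(8000)--"))  # → []
```

No length limit or "DECLARE" filter needed - the injection attempt is simply a user name that doesn't exist.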
Thursday, July 30, 2009
Conficker on the rise - again!
The infamous Conficker computer virus (also known as Downup, Downadup, and Kido) appears to be making a comeback. In the past week we've seen two clients from different industries attacked by this worm. The worm uses a combination of advanced malware techniques, which has made it difficult to counter, and it has since spread rapidly into what is now believed to be the largest computer worm infection since the 2003 SQL Slammer.
At a minimum, all your computer systems should be updated with the latest virus definitions. Microsoft has released a removal guide for the worm and recommends using the current release of its Windows Malicious Software Removal Tool to remove the worm, then applying the patch to prevent re-infection.
Labels: conficker, data forensics, vulnerabilities
Tuesday, July 28, 2009
Launching Exploits with Browser Detection
I recently published the latest Firefox 3.5 heap spray exploit to the public. This vulnerability took advantage of a font tag overflow which ultimately allowed the attacker to perform a heap-based spray and execute code on the remote system.
A little background on heap sprays: they are relatively simple to exploit. The attacker essentially fills the heap with tons of "nops" (no-operation instructions - in this case 0c0c0c0c, which also ends up being our return address) followed by the shellcode. When the return address is overwritten, it will point to a place in the heap (0c0c0c0c) that has a pretty good chance of being somewhere in our nops, so execution slides through the no-operations until we hit our shellcode. The reason these exploits are not 100 percent successful is that while we have a good chance (typically a 90-95% clip, sometimes smaller) of hitting our nops and ultimately our shellcode, sometimes we land in a part of the heap we haven't overwritten, or in the middle of our shellcode, which would produce a crash.
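The block layout described above can be sketched in a few lines of Python - this is illustration only, not a working exploit: a slide of 0x0c0c0c0c values with placeholder bytes standing in for shellcode, and an arbitrary block size and count.

```python
# Illustrative sketch of heap-spray block layout, not a working exploit.
SLIDE_UNIT = "\x0c\x0c\x0c\x0c"  # doubles as the "nop" slide and return address
FAKE_SHELLCODE = "S" * 32        # placeholder bytes; no real shellcode here

def build_spray_blocks(block_size=0x40000, count=200):
    # Fill each block with the slide, leaving room for the payload at the end,
    # so landing anywhere in the slide runs forward into the payload.
    slide_len = (block_size - len(FAKE_SHELLCODE)) // len(SLIDE_UNIT)
    block = SLIDE_UNIT * slide_len + FAKE_SHELLCODE
    return [block] * count

blocks = build_spray_blocks()
print(len(blocks), len(blocks[0]))  # → 200 262144
```

The more of the heap these blocks cover, the better the odds that the 0c0c0c0c return address lands inside a slide rather than in unsprayed memory - which is exactly the 90-95% figure above.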
Take a peek at the exploit code here: http://www.milw0rm.com/exploits/9181
Basically, the exploit will set up a webserver on port 80; you then connect a Firefox 3.5 browser (already patched, by the way), and it will trigger the overflow, spray the heap, and ultimately execute a bind shell on port 5500. This exploit code is inefficient in a major way because it will attempt the exploit on anyone connecting to it, regardless of browser, OS, or whatnot. More importantly, this exploit is ONLY for Windows-based systems. To have a little more fun with this one, I decided to craft it a little differently now that the patch is out. We can make this much more reliable and efficient by throwing in some simple browser JavaScript detection and make this more of a universal exploit for a variety of different operating systems. If you look at the latest Fast-Track commit:
http://svn.thepentest.com/fasttrack/bin/exploits/firefox35.py
I've taken a few steps to allow you to attack OS X, Windows, and Linux systems.
First let's take a peek at the first line:
// Initial detection for Firefox.
if (navigator.userAgent.indexOf("Firefox") != -1)
{
This will detect if Firefox is present; if so, continue on:
// Detect Windows and Linux
if (navigator.appVersion.indexOf("Win") !=-1)
If the OS is Windows, then set the payload and nops/return address for Windows-based systems.
// Detect OSX and Firefox
else if (navigator.appVersion.indexOf("Mac") !=-1)
If the browser reports "Mac", then load the payload and nops/return address for Mac-based systems.
// Detect Linux and Firefox
else if (navigator.appVersion.indexOf("X11") !=-1)
If the system is *NIX, then load the payload and nops/return address for *NIX-based systems. In this instance, it will only cause a Firefox crash.
Next:
else
{
window.location="about:blank"
}
If none of the criteria are met, then just load a blank page. This is useful when we have something like Internet Explorer or Safari that we know is not vulnerable - no need to actually execute the exploit, right?
Alternatively, doing this in Python is just as easy. I'll blog later on the method for detecting user agents within the HTTP server in Python and handling those requests to perform the same functionality.
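As a preview of that server-side approach, the same branching can be done on the User-Agent header inside a Python HTTP handler. This is a sketch under my own naming (and uses Python 3's http.server for readability), not the actual Fast-Track code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify_agent(ua):
    """Mirror the JavaScript checks: Firefox first, then the OS token."""
    if "Firefox" not in ua:
        return None
    if "Win" in ua:
        return "windows"
    if "Mac" in ua:
        return "osx"
    if "X11" in ua:
        return "linux"
    return None

# Placeholder pages keyed by platform; a real exploit would serve the
# platform-specific payload here instead.
PAGES = {"windows": b"win page", "osx": b"mac page", "linux": b"linux page"}

class DetectingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = classify_agent(self.headers.get("User-Agent", ""))
        body = PAGES.get(target, b"<html></html>")  # blank page if no match
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 80), DetectingHandler).serve_forever()
```

Same logic as the JavaScript, but nothing vulnerable-looking ever reaches a browser that isn't a target.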
Friday, July 17, 2009
Blogger/BlogSpot Cross-Site Scripting
So the other day I posted a unique blog on here relating to Cross-Site Scripting (XSS), and our account got suspended. I simply posted an image with the "onmouseover" attribute in the image tag to do some simple JavaScript alerts, notifying you that it could have been an XSS attack stealing your username/password, redirecting the page, logging your keystrokes, or even launching a buffer overflow against your browser. This is definitely not new news at all [1]; however, they appear to have tried to fix it (very poorly, may I add) sometime in the last year. Their attempt to filter certain things is a decent start, since I documented over a year ago (06-27-08) [2] that you could type in a simple <script>alert('xss')</script> and it would work. Either they have failed at fixing it, or they just don't care. Whether some of you reading our blog reported it as spam, or Blogger/BlogSpot noticed my proof of concept and didn't approve, let's just hope their approach to information security is not like that of the United Nations [3], who were apparently hacked in 2007 via SQL injection, had it publicly blogged about, and continued to be vulnerable as of 07-01-09 [4], as gathered from Google cache data. Either way, many people wonder why there are so many breaches, and penetration testers like myself know that the vast majority go unreported. With so many vulnerabilities indexed in Google, and a slow or poor response to fix them - or fixes attempted and done incorrectly - it is no doubt a matter of WHEN, not IF, your information gets compromised.
UPDATE: Apparently I was trying too hard with my XSS proof of concept the other day, and I stand corrected: a plain vanilla open and close script tag with an alert in it still works. From what I can tell, the input for the "Compose" view is HTML-encoded when published and the "Edit Html" view is published raw. Maybe after a few years of knowing about the problem and not doing anything about it, they should be nominated for a Pwnie Award[5]?
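The difference between the two views comes down to output encoding. A quick sketch with Python's standard library shows what the "Compose" path presumably does and what the raw path skips (my guess at their behavior, not their actual code):

```python
import html

user_input = "<script>alert('xss')</script>"

# The "Compose"-style path: entity-encode before publishing, so the
# browser renders the tag as inert text instead of executing it.
encoded = html.escape(user_input)
print(encoded)  # → &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;

# The raw "Edit Html"-style path publishes the input untouched,
# so the script tag executes in every reader's browser.
raw = user_input
print(raw == user_input)  # → True
```

One consistent encoding pass on output would have closed the vanilla script-tag hole years ago.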
Friday, July 10, 2009
Verizon/Cybertrust QSA in Jeopardy
Looking at the latest QSA list from PCI (https://www.pcisecuritystandards.org/pdfs/pci_qsa_list.pdf) shows Verizon/Cybertrust to be in a number of possible QSA violations and in failure to comply with a number of applicable QSA Validation Requirements. We've all known for some time that the larger-scale breaches (Hannaford, TJX, and others) occurred under the watch of Verizon/Cybertrust. While I don't think the blame should be placed solely on them, they are still under an obvious review that should have been done a long, long time ago by the PCI Council. That said, relying solely on a compliance standard to secure you is about as effective as buying some magic snake oil. It's still a start, and it's the most technical compliance standard out there, but it is still a compliance standard, not an information security program.
I do think this is a rude awakening for companies that maintain QSAs and don't perform quality work or perform the audits to the fullest extent of what the standard requires. I really do hope that companies will start working with their customers instead of rubber stamping because they send junior consultants or lack the expertise to complete the entire requirements list.
This should also keep several information security professionals up at night, knowing that they might not be as compliant as they originally anticipated.
One thing that has totally upset me about the entire PCI process is that once a breach occurs, guess who comes in to investigate? Would you be surprised if Verizon/Cybertrust comes back in to see how the breach happened? Does that seem completely crazy to anyone else but me?
Friday, June 26, 2009
iPhone 3G S Enterprise Ready?
With the latest release of Apple's iPhone, the question arises for most IT-savvy individuals: is it ready for our enterprise?
There is a slew of enhancements that Apple has specifically focused on in order to attract the attention of the private sector. Most people don't know that the new iPhone Configuration Utility 2.0 allows a laundry list of configurations that enhance overall security and control through policy. While these features are a great start, they still don't match up to BlackBerry's configuration overall - most specifically the BES server policy management and configurable options.
Some of the features that the iPhone does support are:
Password complexity
Remote Wipe
Hardware based encryption
VPN Access
Wireless Security Policies
and much more.
Overall, do I think the iPhone is enterprise ready yet? Most people say no. I would say, why not? If you can enforce security policy on the device, ensure that the information is encrypted, and protect your mobile information in a centralized manner... then what's the big quarrel? Is it as good as BlackBerry? No. But it IS manageable if you decide to move forward with it.
Some links:
http://www.apple.com/support/iphone/enterprise/
http://manuals.info.apple.com/en_US/Enterprise_Deployment_Guide.pdf
http://support.apple.com/downloads/iPhone_Configuration_Utility_2_0_for_Mac_OS_X
http://support.apple.com/downloads/iPhone_Configuration_Utility_2_0_for_Windows
Thursday, June 18, 2009
Check, Please!
One of the most intriguing stories in the security world today is the lawsuit Merrick v. SAVVIS, in which Merrick alleges that SAVVIS is liable for a lack of diligence on an audit of CardSystems, on which Merrick relied. This groundbreaking lawsuit could change the liability landscape, allowing assessors to be sued by indirect third parties. Prior to the more formal PCI DSS program, there were many suspicions of rubber stamp audits occurring. But even today, we see organizations pushing for the cheapest audit they can do and still get the passing “check” mark. It’s this minimum approach to security we’ve often called “malicious compliance” that leads to a lack of quality and a greater risk of breach.
Just for some more background, SAVVIS had certified CardSystems Solutions as compliant under the VISA CISP program (predecessor to PCI DSS). During a breach of CardSystems, approximately 40 million cardholder data records were compromised. At that time, this was one of the biggest breaches recorded. The forensics investigation concluded that the CardSystems firewall was not compliant and that records were not being encrypted as they should be. The big debate at this point will be to determine if SAVVIS did enough due diligence at the time or if CardSystems possibly withheld any information from the auditor.
I think that last sentence is really what irks me about the relationship between an auditor and an organization. I completely understand and agree for the need of ethical independence between both parties. But I am frustrated when it creates such a wall and lack of collaboration that the auditor just becomes someone to fill in check boxes. Then the organization takes an adversarial approach in which they ‘speak only when spoken to’ and hope the auditor doesn’t uncover the dirty little secrets of what isn’t working. It’s when something isn’t working right that a breach occurs.
So when looking for an auditor, I think it’s important that organizations make a conscious, formal decision about what they are looking for. It’s the old Chinese proverb, “Be careful what you wish for, since you might just get it”. If it’s a check-mark approach, then understand that may be all you are getting. You aren’t getting a consultant or an advisor. On the other hand, if you are looking for an auditor who isn’t just working off a checklist but is truly interested in your organization’s risk, then you can end up with a partner that provides a whole lot of value, not just for compliance but also for security, since the two aren’t the same.
I guess the whole point of this exercise is to step back and take a look at your organization’s approach to the quality of the security program. The most common approach follows the “Plan, Do, Check, Act” lifecycle. As a pure security assessment firm, we feel very strongly that there needs to be a big emphasis on the “Check” step, so much so that we put it first. No matter what step you are performing, you need to do it well if you want quality improvement. Security is not a very forgiving practice, as a misstep in quality can quickly lead to incidents, then the blame game, then someone’s job. That’s not to say that quality is costly. But cheap certainly is. So the next time you place an ‘order’ for an auditor, think twice when you ask for the check.
Labels: cardsystems, Merrick, pci compliance, SAVVIS, security assessments
The Human Exploit
So you're sitting at your desk and the phone rings. "Hey this is Mark from information security. We are noticing that your computer is creating a lot of traffic out to the internet. Are you noticing that anything on your computer is out of the ordinary lately?"
What would you say? Well, in the average Social Engineering test we perform, the answer is quite honestly a, "yeah my computer is slow... can you guys finally come and fix it?"
That’s when we say, "Sure! We’d be glad to *cough* help! Go here, download this patch, and run it..." and a couple minutes later we have fully compromised a system sitting behind a firewall in a corporate environment and easily getting past the antivirus software as well.
On average, we are able to get over 70% of end users to comply with anything we want them to do in "fixing" their computer, by just dialing their number and talking to them. How would you feel knowing that your end users are freely giving their computers and data away to attackers over the phone?
So what can you do to stop it? Well, a lot actually. Depending on your budget (which these days is low for everyone) you have the option to proxy all of your outbound connections, close down your firewall, install HIPS/NIPS protection, and the list goes on.
Sure you can do a lot to MASK the problem, but when are you going to stop the problem at its source? No, I am not advocating firing everyone you work with, but I am saying that there should be policies, procedures and MOST of all, end user training to teach people about these attacks.
People are almost always willing to help, lend a hand, and be polite and courteous to others on the phone. In reality, this type of attack could happen to virtually any company. In fact, the larger the company is, the easier it is to exploit.
The moral of the story is that unless you have some type of training involved for employees, they are very susceptible to Social Engineering. Even these days. Next time, it just might not be SecureState on the other end of the phone, it could be someone with a malicious intent.
Read more!
What would you say? Well, in the average Social Engineering test we perform, the answer is quite honestly a, "yeah my computer is slow... can you guys finally come and fix it?"
That’s when we say, "Sure! We’d be glad to *cough* help! Go here, download this patch, and run it..." and a couple minutes later we have fully compromised a system sitting behind a firewall in a corporate environment and easily getting past the antivirus software as well.
On average, we are able to get over 70% of end users to comply with anything we want them to do in "fixing" their computer, by just dialing their number and talking to them. How would you feel knowing that your end users are freely giving their computers and data away to attackers over the phone?
So what can you do to stop it? Well, a lot actually. Depending on your budget (which these days is low for everyone) you have the option to proxy all of your outbound connections, close down your firewall, install HIPS/NIPS protection, and the list goes on.
Sure you can do a lot to MASK the problem, but when are you going to stop the problem at its source? No, I am not advocating firing everyone you work with, but I am saying that there should be policies, procedures and MOST of all, end user training to teach people about these attacks.
People are almost always willing to help, lend a hand, and be polite and courteous to others on the phone. In reality, this type of attack could happen to virtually any company; in fact, the larger the company, the easier it is to exploit.
The moral of the story is that unless you have some type of training in place for employees, they are very susceptible to Social Engineering, even these days. Next time, it might not be SecureState on the other end of the phone; it could be someone with malicious intent.
Labels:
hacking,
lying,
phone calls,
Social Engineering
Thursday, May 28, 2009
Identity Theft: Duty of Care to a Non-Customer
Identity theft is big business, but it also makes finding the perpetrator of a crime that much more difficult. Financial and fraud investigators need to look at more than just the raw data; they need to get the whole picture and story before jumping the gun. As an example, the following linked article demonstrates how being a little too quick to identify the fraudster led to the wrong person: Identity Theft: Stutzman on a Bank's Duty of Care to a Non-Customer. It just goes to show that what appears to be a smoking gun isn't always the truth. Our Forensic Technology Team understands this and helps you work through these investigations methodically and with due care.
Labels:
case law,
credit unions,
data forensics,
identity theft
Wednesday, May 27, 2009
Core Network Security: A Seldom Used Bag-O-Tricks
Walk into 9 out of 10 organizations, ask them what security controls they have built INTO the network and you'll get responses like:
"We have 800 VLANs."
"We turn off ports in conference rooms."
"Who are you, and how did you get in my office?"
It really doesn't matter what core network vendor you've chosen (Cisco, Brocade, Juniper). You can drink any Kool-Aid you want and still have an arsenal of great core network security features or techniques at your disposal. These include: Dynamic ARP Inspection, DHCP Snooping, Identity Based Network Services (or any other name you want to give an 802.1x + Certificate Authority + RADIUS solution), Infrastructure Protection Access-Lists (iACLs), Router Neighbor Authentication, etc. The list is very long, most have been around for years, and many times we see NONE of them in place at organizations big or small.
Why not? Are they that hard to implement? Not really. They require planning, a critiqued design, and a phased implementation.
We forget that the network CONTROLS TRAFFIC. If you can stop malicious traffic through the system that controls the transport of data, you've leveraged a powerful system that most organizations naively think should only provide speed and performance. We also forget that the network can be sliced and diced for a thousand different purposes; when was the last time you had a VLAN design discussion solely focused on grouping systems based on risk and criticality to the business? Probably never, unless you're currently working on PCI network segmentation.
Ask these questions the next time you're in a network design meeting:
- How are we going to prevent unauthorized access to the network? Better yet, who's authorized and who's NOT authorized?
- How are we going to protect our internal core network from attack; as in, taking over specific networking services or performing covert man-in-the-middle attacks? (Hint: go play with Yersinia)
- How do we stop someone from plugging in a rogue DHCP server?
- How will we protect one VLAN from another? (They don't form shields around themselves, promise!)
- How will we protect our network from reconnaissance? (Someone sitting on your network, passively mapping everything!)
- How will we SECURELY and STRATEGICALLY manage our network devices? (Think: out-of-band management, management ACLs, secure protocols, SNMP restrictions)
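The first-hop protections mentioned above (DHCP Snooping feeding Dynamic ARP Inspection) boil down to a simple binding check: the switch records IP-to-MAC bindings learned from DHCP, then drops ARP replies that contradict them. Here is a toy Python sketch of that logic; the class and method names are mine, purely for illustration, not any vendor's API:

```python
# Toy illustration of the logic behind DHCP Snooping + Dynamic ARP
# Inspection (DAI). A real switch does this in hardware per port/VLAN;
# this only shows the binding-table idea.

class DaiTable:
    def __init__(self):
        self.bindings = {}  # ip -> mac, learned from snooped DHCP traffic

    def learn_dhcp_ack(self, ip, mac):
        """Record a binding when a DHCP ACK is snooped on an untrusted port."""
        self.bindings[ip] = mac

    def arp_reply_allowed(self, ip, mac):
        """Permit an ARP reply only if it matches the snooped binding."""
        expected = self.bindings.get(ip)
        return expected is not None and expected == mac

table = DaiTable()
table.learn_dhcp_ack("10.0.0.5", "aa:bb:cc:dd:ee:01")

print(table.arp_reply_allowed("10.0.0.5", "aa:bb:cc:dd:ee:01"))  # True: matches binding
print(table.arp_reply_allowed("10.0.0.5", "de:ad:be:ef:00:99"))  # False: spoofed reply dropped
```

A tool like Yersinia is effectively probing for networks where no such binding check exists anywhere between hosts.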
Even though the following links are from Cisco, you can apply most of the techniques across any major core networking vendor (sorry Netgear). Have a look...you'll find that most of the options found within aren't even discussed or mentioned by Sales Engineers or Professional Service firms that are looking to help you implement a network design. Demand it from them! Or better yet, design it yourself and learn a lot.
Cisco's SAFE Blueprint (Updated recently!)
www.cisco.com/go/safe
http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg.html
Dynamic ARP Inspection (DAI)
http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/dynarp.html
DHCP Snooping
http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/snoodhcp.html
Identity Based Networking Services (IBNS)
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6586/ps6638/Whitepaper_c11-532065.html
Tuesday, May 26, 2009
Defining Payment Card Industry (PCI) Attestation and Data Security Standard (DSS) Compliance
A PCI merchant is any business that accepts credit cards as a form of payment. A PCI service provider is any company that provides a service to merchants for any aspect of their PCI environment. For both merchants and service providers, it is important to understand the difference between attestation of compliance (attestation) and PCI DSS compliance (compliance).
The letter of attestation can be found at the following link: https://www.pcisecuritystandards.org/saq/index.shtml. Attestation is different from compliance, and most banks currently make that distinction and request validating documents separately. Attestation covers the following sensitive data, whether stored electronically or on paper: full magnetic stripe data, CAV2/CVC2/CVV2/CID, and PIN/PIN block. None of that data may be stored in any format after a credit card transaction has been authorized, i.e., post-authorization. To fill out the attestation form, a company must have adequately identified where any CVV information is located; a data discovery is the typical project associated with this step.
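To give a flavor of what a data discovery looks like in practice, here is a toy Python sketch: scan text for candidate card numbers and filter with the Luhn check to cut false positives. This is purely illustrative; a real discovery project also handles files, databases, encodings, track data, and far more:

```python
# Toy data-discovery sketch: find candidate PANs in text, then apply
# the Luhn check so random digit runs (phone numbers, IDs) are skipped.
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:  # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def find_candidate_pans(text: str):
    # 13-16 digits, optionally separated by spaces or dashes
    out = []
    for m in re.finditer(r"(?:\d[ -]?){13,16}", text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            out.append(digits)
    return out

sample = "order notes: card 4111 1111 1111 1111, phone 555-123-4567"
print(find_candidate_pans(sample))  # ['4111111111111111']
```

The phone number above is ignored both by length and by checksum, which is exactly why discovery tooling pairs pattern matching with validation.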
To reach compliance, a company needs to perform all twelve requirements listed in the latest version of the PCI DSS, which can be found here: https://www.pcisecuritystandards.org/security_standards/pci_dss_download_agreement.html. The PCI DSS includes the attestation requirements along with many other information security practices. To validate compliance, a company must submit a Self-Assessment Questionnaire (SAQ) or undergo an audit, which results in a Report on Compliance (ROC).
Review this blog if there is still confusion about a bank's letter asking for attestation and compliance with different dates and forms.
Friday, May 15, 2009
Security Assessments: Cheaper Not Always Better
In today's economy, money is obviously tight. As Ken pointed out in his blog post "Economy bad… breaches go up!", companies should be spending MORE money on assessments, not cutting back in this area. With that said, here is some insight from a penetration tester who hacks daily, on some of the things I've seen on the front lines...
So your company decides it is time to have a penetration test performed. Whether it is an annual pen test, a response to an RFP, or something else, the reason for performing one is outside the scope of this blog post. The number one factor for *many* people on anything is budget. Money makes this world go around; the root cause of cybercrime is money. Why do criminals steal SSNs and account numbers? To obtain money in the end. Since everyone seems to be short on budget these days, choosing the least expensive security assessor is not always the best way to go. In fact, I've seen it turn out poorly much of the time. Of course, many things factor into it, such as which assessors were considered for the work, how many were in the running, etc.
To put things into perspective, here are some of the things to consider when looking to have an information security assessment performed:
1. [insert large popular assessor name here] gives us a discount each year, and they told us we were good.
Does bigger always mean better? No. We can go back and look at several data breaches, especially those involving PCI data to see that large/well-known assessors had signed off on the companies that were broken into as being compliant despite not having fulfilled all of the requirements of the PCI DSS.
2. I used the local ISP, they are cheap and right here in my backyard.
That's great that they are local, but they are an ISP. They provide your Internet connection, and most likely don't specialize in security. At the end of the day, do you think they could tell you more about the latest attack trends out there or MPLS and OC45's?
3. We took the low bid on our RFP, and the assessor didn't break in for the internal penetration test...
This came from a multi-billion dollar company and the RFP included extensive internal penetration testing. If the penetration testing team does not break in on the internal network, their skills are quite questionable. The internal network is usually the squishy center of the network whereas the external facing portion is usually hardened. Unless it is a network with crazy security controls in place such as some government networks or other extremely confidential networks, there will be vulnerabilities present. The success rate of the assessor for breaking in might be a good thing to ask when shopping around.
4. For our last "penetration test", the vendor asked for a domain administrator account.
Are you kidding me?! Anyone can run Nessus with credentials and reformat the report to look different. A penetration test should be treated as if it were a real attack. What does this mean? It means that externally, the only information given should be the company or organization name; we can safely assume that, at the very least, that is what an attacker would have. For internal penetration tests, have the penetration testing team simply plug into the network and start from there. You shouldn't always have to give them an IP address; have them test your NAC (network access control), perhaps, or even have them do a physical pen test to attempt to physically penetrate the building, all as if they were a true criminal targeting the organization. Whatever the scenario, using a domain administrator account or any other credentialed automated scan is NOT a penetration test. A "vulnerability assessment" is running automated tools to identify vulnerabilities; when performing a penetration test, running such "noisy" tools will obviously get detected. A penetration test involves manual attacks targeting specific systems that, in the end, will allow for unauthorized access. Don't get me wrong, automated vulnerability scanners are great for, say, checking systems for the latest patches, but a vulnerability assessment does not equal a penetration test. I see the two conflated on many security assessors' websites, and I am in disbelief every time. Be aware of what the security assessor is proposing in their statement of work.
There are tons of assessors out there. How do you know which is the best choice? Obviously price is a major factor, but there are other things to consider too. What all does the vendor do? Do they perform penetration tests, sell products, do implementations, manage those solutions, and even support them? Can you trust them not to have a biased or swayed opinion or report if they have a financial interest in fixing what they found, selling you new products, and implementing and supporting them? If the vendor conducting your testing does "everything" for you, the end-all-be-all solution provider, they probably are not. Everyone says they can do everything; however, it has been shown that once you "do everything", you become a generalist and start to lose the expertise in what you used to be good at. Ask for resumes or individual expertise for the assessors who will be on the team performing work for you. Someone who hacks all day, every day, versus your local ISP is going to be a no-brainer on what the outcome will be. Your local computer shop probably advertises building PCs, wireless networks, doing security, etc. Again, you may want to question any vendor that says they can do everything, and limit your selection to vendors whose specialty is security assessments.
Whether you choose SecureState or another security assessor for your assessments, remember that cheaper does not always mean better.
Labels:
Information Security,
security assessments
Friday, May 8, 2009
Best Practice for Digital Forensics
We run into some very interesting situations with our clients. Sometimes you just can't make this stuff up. We've seen everything from former employees breaking back into systems to cause havoc, to covert data acquisition in the middle of the night on current employees suspected of wrongdoing. Oftentimes companies are left to balance the need to get to information against gathering that information in a manner that doesn't trample all over the evidentiary value of the data.
As an example, say you are laying off a key individual in your company and they have information on their laptop that you need. One approach would be just to have your technical support team come in and copy off the data via a Windows copy, or use Ghost to make a copy of the hard drive. These two options will get the data copied, but at what cost?
- Will you have access to deleted data?
- What if the data collected reveals criminal behavior or behavior that warrants litigation - do you have the data collected in a manner that can be used in court?
- Have you taken the steps to be able to show a clear picture of what occurred on the computer?
To avoid those pitfalls, follow some digital forensics best practices:
- Document everything.
- Never mishandle data. [case example]
- Never work on the original data.
- Never trust the custodian’s software/hardware.
- Maintain chain-of-custody throughout the process.
- Only use courtroom admissible and licensed tools. [see NIST CFTT]
- Be sure to be fully trained in the use of digital forensic tools.
- Don’t forget other devices such as PDAs, Blackberries, iPhones etc. [see Paraben]
- Use write-blocking hardware when doing physical acquisitions.
- Call an expert if you can't do any of the above!
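As a minimal illustration of the integrity side of these practices, here is a hedged Python sketch: hash the evidence at acquisition time and verify the hash again before analysis. The filename is a stand-in; real acquisitions go through write blockers and forensic imaging tools, and this only shows the integrity-check idea:

```python
# Minimal sketch: record a cryptographic hash of evidence at acquisition
# and re-verify it later, so you can demonstrate the data was not altered.
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large disk images don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a forensic image (a real case would acquire this through
# a write blocker, never by working on the original media).
with open("evidence.img", "wb") as f:
    f.write(b"raw disk image bytes")

# Record the hash at acquisition time...
acquired = sha256_file("evidence.img")
# ...and verify it again before analysis or presentation in court.
assert sha256_file("evidence.img") == acquired  # unchanged since acquisition
```

Logging that hash alongside the chain-of-custody record is what lets you answer "how do we know this wasn't tampered with?" later.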
Thursday, April 9, 2009
24: Reality TV?
The following article was published in SecureState’s Winter Newsletter. With the recent story that broke regarding international spies from Russia and China hacking into the United States’ electrical grid (http://www.msnbc.msn.com/id/30107040/from/ET/), it has become even more relevant. It is something SecureState has been preaching for quite some time… CIP is not strong enough…
Fox’s TV series 24 could very well become reality TV!
The reason is not that a simple device can be used to compromise our water, energy, transportation, and so on. It is that the Critical Infrastructure Protection (CIP) standard is not at the level it needs to be to protect our most critical infrastructure.
The biggest problem with the CIP standard is that it may not even be possible to be CIP compliant! The central issue the North American Electric Reliability Corporation (NERC) has with its CIP standard is that it does not deal with legacy systems, and NERC itself will not force vendors to upgrade their products to become compliant.
“Until vendors are forced to upgrade their products, there is not going to be much in the way of actual security,” says Matt Davis, Principal of Audit & Compliance at SecureState. “100% of these EMS and GMS systems that CIP deals with were designed to do one thing… and that is work!”
These systems that do not have the option of being upgraded are then pushed aside and not tested, therefore becoming exceptions to the standard. How good can a standard be if it is not testing all systems critical to the standard?
During several CIP engagements, SecureState found that most of the systems in scope for CIP had never been tested to the level they needed to be, nor could they stand up to simple tests such as vulnerability scans. In fact, CIP does not even require penetration testing - a test required by most standards, including PCI.
CIP Audits
All organizations connected to the nation’s energy grid are to begin reporting their compliance and activities this January, with audits beginning January 1, 2010.
The audits are to be performed by the seven regional NERC operators scattered throughout the country. This poses the question of how strictly each individual operator will audit the organizations in its region. It could cause some heat if one group realizes it got dinged on something another organization with the same system got away with. And you can bet they are going to share and compare report cards.
“You have to wonder how much these operators are going to let slide during these audits. Is the fact that certain systems cannot be upgraded going to make exceptions the rule? We will have to wait and see,” said Matt Davis, Partner at SecureState.
CIP Importance
The importance of the CIP standard goes far beyond that of any other security regulation currently in place. Yet CIP isn’t even as tough as PCI, for example. The net result is that there is better security in restaurants than in the grid.
“PCI, SOX, GLBA, HIPAA… they all have their place in protecting the United States,” said SecureState Senior Consultant Jason Leuenberger. “But if the power goes out… those standards become obsolete!”
And the importance stretches beyond just losing a modern convenience, because a failure in the country’s energy grid means a weakness in the country’s security!
By Matt Franko
Labels:
24,
china spies,
CIP,
contact securestate,
Electrical,
Ethics of Hacking,
grid,
hacker,
hacking,
NERC,
russia spies,
spies
Wednesday, March 25, 2009
Let's Get Ready to PCI Rumble!
I know this is going to sound kind of sad, but I am really excited to see what's going to happen with some of the latest PCI breaches and the organizations' responses. In two of the biggest recent ones, we see both Hannaford and Heartland stating their case that they were PCI compliant. To the average person, the initial response is going to be something like, "Then the PCI DSS sucks!" However, a recent statement from a VISA exec was more along the lines of, "We have yet to see a company that was breached that was compliant." So how do we judge this wrestling match?
First and foremost, we need to make a clear distinction about what it means to be PCI compliant. The position being taken by the organizations above needs to be clarified. What they mean is that they had successfully completed their compliance validation activities. More specifically, they had their audits performed and submitted along with the ASV scans. They met their due diligence obligation.
But now we need to reconcile that with the exec's position. What he was talking about is more akin to PCI Safe Harbor, which states that a company is 'safe' only if it was fully compliant with the PCI DSS at the time of the breach, as demonstrated during an audit with forensics. That, my friend, ain't easy. Compliance is something you will ultimately have to defend; it is not a plaque or certificate on the wall for what you did once a year.
So now we need to do some sort of prognostication here so you get my point. I don't have all the details for either breach, so I will make some assumptions. Let's assume both of them were breached due to malware on a system. I'll also assume they had anti-malware in place. What isn't an assumption is that it's possible to bypass anti-malware, so I am going to assume it did get bypassed. I am not sure if we'll ever know how the malware got there. Did that portion of PCI fail? Not at all. It's a good requirement and they did meet it. But PCI has many layers to it, as all security programs should.
If I had to guess what the final outcome will be, it will be something like this. From what we know, the big problem is that the data was breached, i.e., it left their network. So just how did the data get into the hands of the evildoers? I think that's the real question. Exactly what kind of firewall rules were in place that allowed the malware sniffers to send the traffic out of their network? Did the processor or the registers have direct access to the internet? I sure as heck hope not. If the attackers had some live connection into those systems, that's one thing. But I would suspect there is a lack of good egress filtering, not that the malware used some crazy cool method to bounce traffic out of their network. This kind of excessive egress would likely be caused by the lack of a good process to rein in the poor rules adopted through "business justification". Personally, I think the business case trumps way too often, as I have seen time and time again at clients.
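To make the egress point concrete, here is a toy Python model of default-deny egress filtering, the control that malware exfiltrating card data should have run into. The zone names, hosts, and ports are made up for illustration:

```python
# Toy model of default-deny egress filtering: outbound traffic is
# dropped unless a rule with a documented business justification
# explicitly permits it.

ALLOWED_EGRESS = {
    # (source zone, destination, port) tuples, each tied to a justification
    ("pos_segment", "processor.example.com", 443),  # register -> processor over TLS
    ("corp_lan", "proxy.internal", 8080),           # workstations -> web proxy only
}

def egress_permitted(src_zone: str, dest: str, port: int) -> bool:
    """Default deny: only explicitly justified flows leave the network."""
    return (src_zone, dest, port) in ALLOWED_EGRESS

# The register talking to the payment processor is allowed...
print(egress_permitted("pos_segment", "processor.example.com", 443))  # True
# ...but a sniffer shipping card data to an arbitrary host is not.
print(egress_permitted("pos_segment", "evil.example.net", 21))        # False
```

The point is the default: with a stance like this, "business justification" becomes a short, reviewable allowlist rather than a loophole.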
So who is to blame here? I think an easy, but not necessarily correct, choice is the assessor. An auditor's job is to assure formalized, good processes and perform some testing as well. It may not be possible to audit every rule of every firewall. Even so, the auditor has to be pretty strong to push back when "business justification" has been established. It means possible rumbling with their client and their bank. But yes, the auditor may not have been thorough and some rubber stamping occurred here.
Well, only time will tell if I should quit security and become a psychic. Speaking of, my money is on the VISA exec to win the match. Even if I am wrong, there is some lessons learned here. In reality, there needs to be a strong, collaborative relationship between the auditor and the organization to both agree on a defensible position in gray or soft areas of the PCI DSS. I often say you need to imagine standing at the podium and feeling confident as you describe that position to reporters and the world. And never forget, compliance does not equal security. Business justification is not a loophole. Or if it is, a lot more can slip through that hole like malware and fines.
Read more!
First and foremost, we need to make a clear distinction about what it means to be PCI compliant. The position taken by the organizations above needs clarifying. What they mean is that they had successfully completed their compliance validation activities: they had their audit performed and submitted it along with the ASV scans. In other words, they met their due diligence obligation.
But now we need to reconcile that with the exec's position. What he was talking about is more akin to PCI Safe Harbor, which says a company is 'safe' only if it was fully compliant with the PCI DSS at the time of the breach and that compliance can be demonstrated during a forensic audit. That, my friend, ain't easy. Compliance is something you will ultimately have to defend; it is not a plaque or certificate on the wall for something you did once a year.
So now we need to do some prognostication so you get my point. I don't have all the details for either breach, so I will make some assumptions. Let's assume both were breached via malware on a system, and that they had anti-malware in place. What isn't an assumption is whether anti-malware can be bypassed; it can, so I'll assume it was. We may never know exactly how the malware got there. Did that portion of PCI fail? Not at all. It's a good requirement and they met it. But PCI has many layers, as all security programs should.
If I had to guess what the final outcome will be, it's something like this. From what we know, the big problem is that the data was breached, i.e. it left their network. So just how did the data get into the hands of the evildoers? I think that's the real question. Exactly what kind of firewall rules were in place that allowed the malware sniffers to send traffic out of their network? Did the processor or the registers have direct access to the internet? I sure as heck hope not. If the attackers had a live connection into those systems, that's one thing. But I suspect the culprit is a lack of good egress filtering, not some crazy cool method the malware used to bounce traffic out of the network. That kind of excessive egress is usually caused by a lack of good process to rein in poor rules adopted under the banner of "business justification." Personally, I think the business card trumps way too often, as I have seen time and time again at clients.
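To make the egress-filtering point concrete, here is a minimal sketch of what a default-deny outbound policy for a register segment could look like with iptables. The subnet, interface layout, and processor address are all hypothetical placeholders, not details from either breach; treat this as an illustration of the principle, not a production ruleset.

```shell
# Hypothetical example: default-deny egress for a cardholder-data segment.
# Registers live on 10.10.20.0/24; the only sanctioned outbound flow is
# TLS to the payment processor at 203.0.113.10 (a documentation address).

# Drop anything not explicitly allowed to transit the firewall.
iptables -P FORWARD DROP

# Allow return traffic for connections we initiated.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# The one business-justified rule: registers -> processor, port 443 only.
iptables -A FORWARD -s 10.10.20.0/24 -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT

# Log everything else leaving the segment before the policy drops it.
# This is the visibility that would flag malware trying to exfiltrate data.
iptables -A FORWARD -s 10.10.20.0/24 -j LOG --log-prefix "EGRESS-DENY: "
```

The point of the sketch is the shape of the policy: a deny-all default, one narrowly scoped allow rule per documented business justification, and logging of denied egress so exfiltration attempts show up in review.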
So who is to blame here? An easy, but not necessarily correct, choice is the assessor. An auditor's job is to assure formalized, sound processes and perform some testing as well. It may not be possible to audit every rule of every firewall. Even so, the auditor has to be pretty strong to push back once "business justification" has been established; it means possible rumbling with the client and their bank. But yes, the auditor may not have been thorough, and some rubber stamping may have occurred here.
Well, only time will tell if I should quit security and become a psychic. Speaking of which, my money is on the VISA exec to win the match. Even if I am wrong, there are lessons to be learned here. There needs to be a strong, collaborative relationship between the auditor and the organization so both can agree on a defensible position in the gray or soft areas of the PCI DSS. I often say you need to imagine standing at the podium, feeling confident as you describe that position to reporters and the world. And never forget: compliance does not equal security. Business justification is not a loophole. Or if it is, a lot more can slip through that hole, like malware and fines.
Labels: hannaford, heartland, pci, Security Breaches