Wednesday, December 24, 2008

Security Stuck at the Kids Table?

Where does the Security Department reside at your organization?

I am sure many readers will first answer this question with, “What Security Department?” That is a fair answer for many organizations out there in the real world, and for those of you who answered the question that way, I feel sorry for you and your organization... It is only a matter of time before you end up on the front page of the newspaper with a headline reading something like, “Hacker Breaches Company ABC, Takes 100,000 Social Security Numbers” or “Insider Steals 20,000 Credit Card Numbers from Company XYZ.” Trust me, I have seen it before. It is only a matter of time.
For the rest of you, where does Security sit? Under the Director of IT? Under the Chief Information Officer? How about under the Audit Department? While there are advantages to each, the disadvantages far outweigh the benefits.

Let’s examine.

Under the Director of IT: Last time I checked, the IT department’s main concern is the availability of resources and data. As a security guy, I really don’t care about availability. If our network is unavailable, we are secure. As such, every decision I make in the best interest of security is going to be analyzed based on its effect on availability, and if those decisions conflict with IT’s goals, they are not going to go far.

Under the CIO: Same problems as above, plus a lack of funding, a lack of power (i.e. the ability to make decisions and have them implemented), and a lack of representation in senior management.

Under the Audit Department: Who is an auditor’s only friend? Another auditor! Okay, so that wasn’t a good joke, but there is truth to it. Most people don’t like auditors, and being stuck under that department makes others think Security is one of them. Therefore, everyone from the bottom to the top will be generally “on guard” when you come around and resistant to your goals because of it.

So where is the best place for Security to sit? At the same level as the CIO, but independent of them. Security should be its own department with its own voice in the senior management circle. It should have its own budget and the ability to defend all decisions made in the best interest of security without having them kiboshed before they make it to the C-level.

In a world with increasingly stringent regulations and compliance requirements (e.g. PCI, HIPAA, SOX, GLBA), and more sophisticated hackers and hacking techniques, it’s time to move Security where it rightfully belongs: at the adult table.


Tuesday, December 23, 2008

Economy bad… breaches go up!

Cut jobs, lay off people, hell, don’t buy coffee, but don’t spend less on assessments during a down economy!

Contrary to popular belief, it is crucial that companies maintain an assessment program during a down economy. As I noted in an article I recently wrote for law.com, when the economy is bad (which appears to be the case for 2009), the chance of theft of corporate assets increases. The fraud triangle below describes the three areas that, when aligned, make a person willing to steal, commit fraud, or worse.
  1. Rationalization - Employees start to rationalize from day one: “I worked all weekend and no one else was here… especially not my boss!”
  2. Pressure - Given the economy, this is an understatement; pressure is everywhere. With one in five homes being foreclosed on, it’s a safe bet that at least one of your employees is under financial pressure.
  3. Opportunity - Probably the only area that we can actually control. Taking away or reducing the opportunity is key, and assessments are the lowest-cost way to identify the risky areas.
Correct use of assessments is key; you need to spend money wisely. What is the best use of your money, and how do you maximize your return? You need to understand where your greatest risk is and apply more resources to that spot. Seems easy; however, most security professionals would rather secure the outside with a penetration test or scans. After spending $10k, did you actually identify the greatest risk? Probably not.

Getting budget gets tougher and tougher when you don’t know what the real risks are. Hence, next year (2009), spend money on a risk assessment. Yes, risk assessments cost more, but they identify more risk and, more importantly, map the business requirements to those risks. Now you are telling the board or CEO about the risks, not just the results of a penetration test. This is key; we as security practitioners do not want to hold the risk!

Over the past several years I have noticed an increase in January/February breaches and hacking activity. While I cannot statistically back up this observation, I can guarantee you that with a down economy and the holiday season, people will have more free time. For kids off from school especially, this is an ideal time to try out some new hacks, maybe with the latest version of Fast-Track.


Monday, December 1, 2008

e.Discovery Planning

E-Discovery is a topic that is quickly becoming a more common conversation point. The average person really does not know what E-Discovery is or why it is important. The same can be said for a large number of businesses out there today. E-Discovery is not a topic that most companies spend a lot of time talking about... that is, until they find themselves part of a situation that teaches them what E-Discovery is, and they soon realize the importance of having a plan for preserving and gathering electronic data that may be used as evidence in a legal proceeding.

What data would be important to preserve? The most common type of E-Discovery evidence is e-mail, since so many transactions and conversations take place over it. Companies such as internet service providers have more of a dilemma, as it is difficult for them to maintain large amounts of data, and they are often presented with requests to preserve information. Many ISPs only keep information for short periods of time due to storage space limits. Other types of data that could be of importance include database files, documents, pictures, audio, and video files. Many times there is more than is apparent on the surface of a file; the ability to hide incriminating evidence within an audio file or photo is not something that most people are aware of. Instant messaging data is also growing fast.

So it becomes apparent that as technology evolves and companies are involved in more legal proceedings, more and more emphasis is going to be placed on E-Discovery. Many companies will experience this first hand; some will learn from the mistakes of others. Most large companies could potentially have several legal proceedings going on at any given time. This makes it necessary to implement some type of strategic plan for discovering electronic evidence.

A plan for discovering electronic data or evidence is known as proactive E-Discovery. As mentioned earlier, companies are beginning to realize the importance of E-Discovery. Electronic data is one of the most difficult forms of evidence to destroy completely; as data makes its way from system to system, it creates another point of presence in which the data can be discovered or hidden. Data itself is growing rapidly, and this further complicates the matter. So why is it so important to have a plan in place for discovering electronic data? An answer that is becoming more and more a reality is "because you have to". A situation such as a legal proceeding could require the presentation or disclosure of relevant data in a very short time frame. According to the Federal Rules of Civil Procedure, data must be produced within 120 days. As part of the discovery phase of a legal proceeding, it is required that known information such as data be presented to the opposing counsel. If there is not a plan in place for retrieving this data, a company could face fines or sanctions for not complying or being unable to comply.

Having had the opportunity to begin studying this field as part of pursuing a bachelor's degree in computer and digital forensics, it has become clear to me how much focus the legal system is beginning to put on data that is stored electronically, and on the ability to quickly discover and present that data. As mentioned before, the Federal Rules of Civil Procedure have been modified, putting focus on the need for companies to know what they are storing and to have a procedure in place to quickly discover the necessary data and present it as evidence to legal counsel. Knowing what is stored is an issue companies will struggle with, as any data that is stored can potentially be called into question and become part of an ongoing investigation. A major driving force behind the scramble to implement E-Discovery plans is the potential for sanctions. Courts have a good amount of leeway when imposing sanctions, which can amount to million-dollar penalties that could potentially bankrupt an organization. In general, the sanctions are determined by the severity of the failure to comply and the actions taken by representatives to either help or hinder the discovery of data. The following article discusses the World Trade Center insurer, and its legal counsel, that were hit with E-Discovery sanctions: http://www.ediscoverylaw.com/2007/07/articles/case-summaries/wtc-insurer-and-its-counsel-hit-with-ediscovery-sanctions/

The E-Discovery business is the real deal, and as more and more companies are penalized for failing to comply, more and more companies will be looking to adopt policies and procedures that allow them to know what they are archiving and exactly how they will go about discovering and presenting that data. Their efforts will likely save them millions of dollars and the ugly embarrassment of having their names publicized as having violated E-Discovery requirements. E-Discovery compliance will also continue to grow and become more mainstream as society continues to move in the direction of electronic lifestyles. The more electronic data there is in the world, the more important E-Discovery will become.


Friday, November 21, 2008

Intrepid Metasploit Ruby 1.8.7 Fix

Just a quick one here: if you use Ubuntu and have updated to the latest Intrepid release, you undoubtedly know that Metasploit hoses over short-name constants. The fix has already been released to Jaunty (the next Ubuntu release, 9.04); however, it is still in intrepid-proposed for Intrepid. If you need Metasploit to work and can't wait for next week's release of the committed version to the normal repositories, the workaround below will get you going. If you're seeing the following exploit failure message in Metasploit, "Exploit failed: uninitialized constant Msf::ModuleSet::NDR", or variants of it, you have the issue. Additionally, when you load up Metasploit you may see the following message:

***********************************************************************
***
*** This version of the Ruby interpreter has significant problems, we
*** strongly recommend that you switch to version 1.8.6 until these
*** issues have been corrected. Alternatively, you can download,
*** build, and install the latest Ruby snapshot from:
***  - http://www.ruby-lang.org/
*** For more information, please see the following URL:
***  - https://bugs.launchpad.net/bugs/282302
***
***********************************************************************

Here's a workaround:

Go to the software updates tool under Administration, click the "Updates" tab, and select Proposed updates (intrepid-proposed).

If you don't want to install everything in intrepid-proposed, you can pin it selectively. To do this, create a new file under /etc/apt/ called "preferences" and add the following:

Package: *
Pin: release a=intrepid-updates
Pin-Priority: 900

Package: *
Pin: release a=intrepid-proposed
Pin-Priority: 400

From there, simply go into the Synaptic Package Manager, reload the packages, do a search for ruby1.8, mark it for upgrade, and install it; Metasploit should then be working without the short-name constant errors. The name of the new package is ruby1.8_1.8.7.72-1ubuntu1 (or 0.1).
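
If you prefer the command line to Synaptic, something like this should be equivalent (a sketch, assuming intrepid-proposed is enabled in your software sources; the -t flag tells apt to prefer that release for this one install):

sudo apt-get update
sudo apt-get install -t intrepid-proposed ruby1.8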

References:
https://bugs.launchpad.net/ubuntu/+source/ruby1.8/+bug/282302

Read more!

Wednesday, November 12, 2008

And You Thought Graphics Cards Were Just For Gaming

I have always been a fan of the latest and greatest hardware and have always been amazed at how fast new hardware is getting. Well, now the security field is going to have to start worrying about how this hardware is being leveraged to crack passwords. The Nvidia Corporation has harnessed the functionality of the C programming language and integrated it with their newest GPUs to form their CUDA technology.

In fact, even the Lenovo T60 and T61 laptops are loaded with Nvidia Quadro graphics cards that can run CUDA software. There are even Python bindings for CUDA, and many other languages may enter this arena. Applications for fluid dynamics, digital media, electronic design, finance, game physics, audio and video, and many more have already been developed, and more are on the way.

But back to what I mentioned before: information security. There is also software released to take advantage of “password recovery,” and it is stunningly fast. Modern dual-core CPUs such as the Intel Core 2 Duo and the AMD Athlon X2 are able to test approximately 2 trillion passwords in about 3 days, whereas CUDA-based “password recovery” software can do 55 trillion in the same time frame. That is over 25 times faster!
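
To sanity-check those numbers, here is a quick back-of-the-envelope calculation in Python (a sketch based only on the figures quoted above; real cracking rates vary widely by hash algorithm and hardware):

SECONDS_PER_DAY = 86400

# Throughput implied by the figures above
cpu_rate = 2e12 / (3 * SECONDS_PER_DAY)   # ~7.7 million guesses/sec
gpu_rate = 55e12 / (3 * SECONDS_PER_DAY)  # ~212 million guesses/sec
print("Speedup: %.1fx" % (gpu_rate / cpu_rate))  # 27.5x

# Days to exhaust an 8-character lowercase+digit keyspace (36^8)
keyspace = 36 ** 8
print("CPU: %.1f days" % (keyspace / cpu_rate / SECONDS_PER_DAY))  # ~4.2
print("GPU: %.2f days" % (keyspace / gpu_rate / SECONDS_PER_DAY))  # ~0.15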

The reason these new cards are able to run software like this is that these new-generation chips, named the G80 series, are able to compute fixed-point operations. The new Nvidia GTX 280 graphics card boasts 1GB of 1100 MHz GDDR3 memory on a 512-bit path, with 240 processing cores (actually called ALUs) running at 600-650 MHz, at a cost of $450 per card.

Nvidia says the card is able to reach close to 1 teraflop (a trillion floating point operations per second) of compute capability. The first supercomputer to reach the 1 teraflop barrier did so in December of 1997 and was the size of a mid-sized house: 76 computer cabinets holding 9,072 Pentium Pro processors (http://www.sandia.gov/media/online.htm). You can check http://www.top500.org/ for the fastest supercomputers in the world.

So with these desktop supercomputers doing tasks that multi-million dollar teraflop computers are capable of, what if someone found a way to harness this technology to crack passwords in your organization? Or what if they captured enough data and were able to decrypt classified information? How about AES-256 encrypted hard drives? Let’s look at it this way: most likely your company’s password complexity requirements are too weak. The only way to make cracking harder (still not impossible) is to force your users to use long pass-phrases, strengthen your domain policies, and provide user awareness training.
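
To see why pass-phrase length matters so much, here is another hypothetical sketch: the time for the GPU rate quoted above to exhaust the full 95-character printable ASCII keyspace at various password lengths (illustrative math only; real attacks use smarter strategies than pure brute force):

gpu_rate = 55e12 / (3 * 86400)  # guesses per second, from the figures above

for length in (8, 10, 12, 16):
    seconds = 95 ** length / gpu_rate
    years = seconds / (86400 * 365.25)
    print("%2d chars: %.3g years" % (length, years))

# 8 characters fall in about a year; 16 characters take quadrillions of years.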

The thing that makes this CUDA technology so fast is that threads are able to communicate. These 240 cores work in tandem using the Parallel Data Cache (a.k.a. shared memory; http://www.beyond3d.com/content/articles/12/3), which saves clock cycles since a core isn’t going all the way out to the card’s GDDR memory for additional data or temporary storage. Additionally, with the current Nvidia software and the proper hardware configuration, you can strap one to four of those cards to a quad-core CPU and have an absolutely amazing system that could reach the 2 teraflop range. And if that isn’t enough, Nvidia allows their consumers to overclock within the Nvidia Control Panel software.

One company has already gone the distance to “recover” passwords. Elcomsoft makes a software package that allows up to 10,000 distributed client workstations, each with up to 4 GPUs, to “recover strong encryption keys.” What government agency or research lab wouldn’t want something as powerful as that? The software is capable of “recovering” MS Office 97-2007 passwords, Zip and RAR passwords, MS Money, Open Document, and all PGP passwords, Personal Information Exchange certificates - PKCS #12 (.PFX, .P12), Adobe Acrobat PDF, Domain Cached Credentials, Unix passwords, Intuit Quicken passwords, MD5 hashes, Oracle passwords, and WEP, WPA, and WPA2 passwords (http://www.elcomsoft.com/edpr.html). Many of those operations are considered “GPU accelerated” options.

According to Elcomsoft’s own press release, "Elcomsoft Distributed Password Recovery allows using laptop, desktop or server computers equipped with supported Nvidia video cards to break Wi-Fi encryption up to 100 times faster than by using CPU only." The software is said to support ATI graphics cards early next year. I figure it is only a matter of time until the underground community uses this technology to crack DRM as well as other cryptographically protected media. (http://www.elcomsoft.com/pr.html)

Remember, this technology doesn’t have to be used just for password “recovery” (http://www.nvidia.com/object/cuda_home.html#). There is a huge range of science and technology that will benefit from this: mathematics, digital media, programming, and best of all, games.


Monday, November 3, 2008

Penetration Tests: Not just the normal cup of joe

Well, last week marked the ever-so-fun Information Security Summit in Independence, Ohio. I usually try to sit in on a few talks here and there to see what people are talking about. I must have sat in three different presentations where they preached that only high-level risk assessments could find the core deficiencies in a security program. While I tend to agree to an extent on this, they also made the bold claim that penetration testing cannot accomplish this and is only used for "technical" findings.

Every time I hear this at presentations, I wish Bobby Boucher (pronounced "Boo-SHAY") from The Waterboy would come out and pummel the guy on stage.



Unfortunately, nothing happens, and I have to continue to hope that one day Bobby or Terry Tate, Office Linebacker, answers my prayers.





I'm not discrediting risk assessments at all; in fact, they are a necessity. However, a penetration test can absolutely bridge technical and high-level findings and identify core deficiencies in current processes.

Let's take an example we see all the time during a penetration test. Let's say we exploit a buffer overflow in a third-party application; let's pick on HP's OpenView. We see that OpenView has buffer overflows all across the client's network. We also see that Veritas NetBackup has a ton of buffer overflows; let's see, so does VNC, and the list goes on.

One could logically deduce that third-party patch management in the organization is failing and that there is a process breakdown in the inventory and maintenance of these third-party applications.

Let's take this a step further: a client has multiple domains, Domain A and Domain B. Domain A requires 10-character passwords, locks accounts after 4 invalid attempts, logs everything, uses IPsec for all communications, uses NTLMv2 hashes, and does all that other great security stuff you can do through Group Policy. Domain B has a 3-character password requirement, unlimited attempts, no encryption, LM hashes, etc., etc., etc. We can deduce that hardening techniques are not uniformly applied across the entire organization.
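
As a toy illustration of that deduction (with hypothetical policy values, not from any real engagement), a tester can diff a domain's effective settings against the hardening baseline and flag every gap:

baseline = {"min_pwd_len": 10, "lockout_threshold": 4, "hashes": "NTLMv2"}
domain_b = {"min_pwd_len": 3, "lockout_threshold": 0, "hashes": "LM"}

for setting, expected in baseline.items():
    actual = domain_b.get(setting)
    if actual != expected:
        print("Domain B: %s is %r, expected %r" % (setting, actual, expected))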

My point to all of this is that penetration testing has evolved far beyond "fix this patch and you're golden," because we all now realize that if we fix this patch, there's another patch six months down the road that won't be applied, and we have the same risk. Instead of saying "fix this patch, fix this patch," we can say "fix your patch management process." Not many pentesting companies out there are doing this, nor do many even understand what I'm talking about here. It's always been challenging to teach the nerds how to understand the higher-level functions of security.

Risk assessments and penetration testing complement each other extremely well. When an organization says its patch management is an A+, but the pentesters come in, rip open all the third-party applications, and hand out a D-, that validation and testing offers an excellent understanding of the gaps within current policies, procedures, and standards.

To close this whirlpool of thoughts: penetration testing isn't just for the techno weenies to plug holes; it's to understand where the core deficiencies are within current processes in the organization. Don't get me wrong, patching those holes is important and wins the battle, but identifying the root cause of those problems will ultimately help you reach your secured state and win the war.


Wednesday, October 29, 2008

Red Flag Rules - The New Deadline

For the people who are not privy to the next deadline, there is one coming up this week. On November 1st, three rules come into play from the federal government - like we didn't have enough to deal with already!

Let me give a little background to this first.

The Fair and Accurate Credit Transactions Act (FACT Act or FACTA) was passed back in 2003 to try to get a better handle on identity theft - both on the monitoring and the potential prevention of the matter. The FACT Act amended the previously passed Fair Credit Reporting Act (FCRA), which dates way back to the 1970s. To put a little context to it, FACTA is the law that allows people to get a free credit report once a year from the credit trio (Equifax, Experian, and TransUnion).

Among the sections included within the FACT Act were three requirements for businesses to comply with regarding protecting against identity theft. These are better known as the Red Flag Rules.

The Rules

The Red Flag Rules have three primary sections, but I'm going to be focusing on the first and most broadly applicable area: implementing an Identity Theft Prevention Program.

As part of the new regulation, companies are now obligated to develop and maintain an Identity Theft Prevention Program. Well, what does that actually mean? According to the FTC, such programs have to include "reasonable policies and procedures for detecting, preventing, and mitigating identity theft". Additionally, you need to make sure they enable companies to:

1. Identify relevant patterns, practices, and specific forms of activity that are “red flags” signaling possible identity theft and incorporate those red flags into the Program;
2. Detect red flags that have been incorporated into the Program;
3. Respond appropriately to any red flags that are detected to prevent and mitigate identity theft; and
4. Ensure the Program is updated periodically to reflect changes in risks from identity theft.

Who needs to be compliant?

Since this is a law, the applicable entities are widespread. As defined by the law, it applies to all businesses that have "covered accounts": any accounts that carry a foreseeable risk of identity theft. This could include credit cards, monthly-billed accounts like utility and cell phone bills, social security numbers, drivers’ license numbers, medical insurance accounts, and possibly others. Obviously, it would be a shorter list to name the businesses that are not affected by these rules.

How do you become compliant?

Well that's the magic question!

As of right now, there is no real measurement of what is sufficiently compliant and what is not. Of course, since this is a regulatory law, the actual controls dictated are very general in nature, calling for broad programs that need to be in place within the organization.

The best advice for accomplishing this is to follow general security guidelines. Some important components to include are policies and standards, an incident response program, security incident and event monitoring (SIEM), and strong logical controls within your environment. Frameworks and guidelines, like ISO 27001 or NIST 800-53, can also give you guidance in developing the programs and controls.

So who's to enforce the law?

Since this is a federal business law, enforcement primarily falls under the Federal Trade Commission (FTC) - though there are provisions under which the National Credit Union Administration (NCUA) and the federal banking agencies can also enforce it. Does this mean that they are going to be performing audits against companies? Unlikely, but that doesn't mean they will not investigate organizations based upon reported incidents. In the infamous case of TJX, once the information from the breach was made public, the FTC came in and actually mandated controls be put in place. This included audits every two years and maintaining a "comprehensive information security program".

In the end, this just strengthens the need for organizations to develop an Enterprise Security Architecture (ESA). Even though this is yet another law bringing generic mandates to companies, following best practices and continually assessing and improving your security program will be more than enough to cover these added rules.



Friday, October 24, 2008

Building an Information Security Program, Step One

Put firewalls everywhere!

No.

Build an Incident Response Program!

Wrong.

Write Minimum Security Baselines!

I would disagree.

Classify your data?

Ding, ding, ding...we have a winner. If you guessed option four, please move to the head of the class.

When establishing a strong information security program, your primary focus is on securing data. The first step in securing data in an organization is to know and document what data you store, transmit, and receive. After going through the process of identifying the data, the next step is to classify it.

What data is public?

What data, if lost, would cause corporate heartburn? Would the loss hand outside competitors an advantage?

What data, if lost, would be catastrophic to daily business, and potentially cost the organization millions of dollars in recovery fees, fines, and lawsuits?

There are numerous high-level processes to use for classifying data. Most organizations can tell you within five minutes what their most critical data is. But do they know exactly where that data flows within the business, as well as externally? Usually not.

Only with data classification can you perform asset classification. If a host stores sensitive data, its criticality to the organization is raised. You then build further security controls around this sensitive host.
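
As a hypothetical sketch of that idea (the labels and assets below are illustrative, not from any real program), asset criticality can be derived mechanically once the data is classified:

# Illustrative classification levels, lowest to highest sensitivity
DATA_CLASSES = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# The inventory step: what data each asset stores, transmits, or receives
assets = {
    "web-server": ["public", "internal"],
    "mail-server": ["internal", "confidential"],
    "hr-database": ["internal", "restricted"],
}

# An asset inherits the sensitivity of the most sensitive data it touches
for asset, classes in assets.items():
    criticality = max(DATA_CLASSES[c] for c in classes)
    print("%s: criticality %d" % (asset, criticality))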

Only with data classification can you build an effective incident response program. How do you respond to an incident if you haven't classified and identified what data is located where? And only with data classification can you provide Service Level Agreements as PART OF the incident response program.

Everything stems from classifying your data, understanding where it flows and is stored, and then placing tactical and strategic security controls in place to mitigate or eliminate risk to the integrity or loss of data.

It's the core of all great information security programs; everything else is turn-key, so spend the appropriate amount of time and thought cycles on being thorough in this area.


Justin Leapline, Senior Consultant for SecureState's Audit & Compliance practice, contributed to this posting.


Wednesday, October 1, 2008

Avoiding Risk – Why would you blatantly put your company at risk?

American investor and businessman Warren Buffett once said, “Risk comes from not knowing what you are doing.” I say that risk is an inescapable part of nature, especially in Information Technology. Risk avoidance comes from not knowing what you are doing.

When risks are identified at your organization, there are typically four options to choose from. You can accept the risk, mitigate the risk, transfer the risk, or avoid the risk. For clarification, let’s define each:

Accept the Risk – Accepting the risk is a Senior Management decision that should be made by comparing the cost of mitigating the risk to the potential impact if that risk is exploited. For instance, you discover a web vulnerability that could allow a hacker to launch a Denial of Service attack on your system. After researching the issue, you determine the cost to mitigate this risk is $25k and the potential loss if this occurs is nominal. The determination can be made that the cost to mitigate is too expensive compared to what will happen if a DoS attack occurs. Therefore, Senior Management makes the decision to accept the risk.

Mitigate the Risk – Mitigating the risk is the act of lessening, reducing, decreasing, or eliminating the risk. Using our scenario above, imagine the cost to mitigate is $25k and the potential loss is millions of dollars. The best decision will be to spend the $25k and fix the identified risk.
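
The accept-or-mitigate comparison above can be made concrete with a little annualized loss expectancy (ALE) math. Here is a hypothetical sketch (the dollar figures and likelihood are illustrative only):

def annualized_loss(impact_dollars, occurrences_per_year):
    # ALE = single loss expectancy x annualized rate of occurrence
    return impact_dollars * occurrences_per_year

mitigation_cost = 25000
ale = annualized_loss(impact_dollars=2000000, occurrences_per_year=0.1)

# Mitigate when the expected annual loss exceeds the cost of the fix;
# otherwise accepting the risk is the defensible business decision.
print("mitigate" if ale > mitigation_cost else "accept")  # -> mitigate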

Transfer the Risk – Transferring the risk can occur in two different ways. You can outsource the function or process that is at risk to a third party, contractually making them responsible for that risk, or you can choose to get insurance.

Avoid the Risk – Avoiding the risk is the act of doing nothing.

Avoiding the risk, in my opinion, should not even be an option on the list of possible choices. Avoidance is what people do when they are too lazy, too inexperienced, or too stubborn to realize they have a problem and need a solution. Ignoring the issues does not make them go away. Over time, risks tend to have a snowball effect: they start out small and manageable, but as they roll downhill, their size becomes too enormous to handle. Now you are left extremely vulnerable, and you don’t have the capabilities, resources, or knowledge to fix the problem. The only thing left to do is sit back and pray that you don’t get breached.

In our line of business, we identify risks and offer solutions to our clients. Which option they choose is up to them. But avoiding the risks we have identified is not a solution; it only leaves them insecure and vulnerable. Why anyone would do this to their organization is beyond me.


Tuesday, September 30, 2008

Classic ASP SQL Injection Prevention

Undoubtedly one of the most common vulnerabilities that I run across during penetration tests or web application security assessments is SQL injection. The fix is very easy for most programming languages; however, one seems to be horribly neglected on the world wide web. If you search Google for SQL injection prevention along with a specific language, you will run across many forum posts suggesting fixes, many of which are incorrect or simply deterrents that don't fix the root of the problem. More specifically, there is a lack of examples online for PROPERLY preventing SQL injection on Classic ASP pages.

With that being said, simple filtering of certain characters and keywords, and other attempts to deter SQL injection, are many times quite laughable to a security professional such as myself who knows many ways to circumvent such countermeasures. Setting aside the feeble attempts at prevention I've seen, the end goal is to properly secure your resources regardless of the code written in the past. Given the lack of Classic ASP examples for properly preventing SQL injection, I am providing a simple example login page below showing how to correctly and incorrectly perform database queries using Classic ASP and VBScript. There are other methods than the ones shown below that work, but these seem to be the simplest. Enjoy!


<%@ Language = "VBScript" %>
<%
Option Explicit
Dim cnnLogin, rstLogin, strUsername, strPassword, strSQL
Const adCmdText = 1 'Evaluate CommandText as a textual definition
Const adCmdStoredProc = 4 'Evaluate CommandText as a stored procedure
Const adVarChar = 200 'Parameter data type: variable-length string
Const adParamInput = 1 'Parameter direction: input
%>
<html>
<head><title>Login Page</title>
</head>
<body bgcolor="gray">
<%
If Request.Form("action") <> "validate_login" Then
%>
<form action="login.asp" method="post">
<input type="hidden" name="action" value="validate_login" />
<table border="0">
<tr>
<td align="right">Login:</td>
<td><input type="text" name="login" /></td>
</tr>
<tr>
<td align="right">Password:</td>
<td><input type="password" name="password" /></td>
</tr>
<tr>
<td align="right"></td>
<td><input type="submit" VALUE="Login" /></td>
</tr>
</table>
</form>
<%
Else
Set cnnLogin = Server.CreateObject("ADODB.Connection")
cnnLogin.open "PROVIDER=SQLOLEDB;DATA SOURCE=localhost;UID=dbuser;PWD=dbpassword;DATABASE=test"

'============================================================================================
'BAD WAY WITH CONCATENATION - DON'T DO IT!!!
'(Left commented out so this page never executes the injectable query.)
'------------------------------------------
'strSQL = "SELECT * FROM users WHERE username='" & Request.Form("login") & "' AND password='"_
'    & Request.Form("password") & "';"
'Set rstLogin = cnnLogin.Execute(strSQL)
'============================================================================================

'CORRECT WAY - Parameterized query with dynamic SQL
strSQL = "SELECT * FROM users WHERE username=? AND password=?"
Dim cmd1
Set cmd1 = Server.CreateObject("ADODB.Command")
Set cmd1.ActiveConnection = cnnLogin
cmd1.CommandText = strSQL
cmd1.CommandType = adCmdText
'Create the parameters explicitly; the ? placeholders are bound in order
cmd1.Parameters.Append cmd1.CreateParameter("username", adVarChar, adParamInput, 50, Request.Form("login"))
cmd1.Parameters.Append cmd1.CreateParameter("password", adVarChar, adParamInput, 50, Request.Form("password"))
Set rstLogin = cmd1.Execute()

'CORRECT WAY (ALTERNATIVE) - Parameterized query with a stored procedure
'(Commented out; to use it, comment out the block above and uncomment this one.)
'Dim cmd2
'Set cmd2 = Server.CreateObject("ADODB.Command")
'Set cmd2.ActiveConnection = cnnLogin
'cmd2.CommandText = "login_sp"
'cmd2.CommandType = adCmdStoredProc
'cmd2.Parameters.Append cmd2.CreateParameter("username", adVarChar, adParamInput, 50, Request.Form("login"))
'cmd2.Parameters.Append cmd2.CreateParameter("password", adVarChar, adParamInput, 50, Request.Form("password"))
'Set rstLogin = cmd2.Execute()
If Not rstLogin.EOF Then
%>
<p>
<strong>Successfully Logged In!</strong>
</p>
<%
Else
%>
<p>
<font size="4" face="arial,helvetica"><strong>Login Failed!</strong></font>
</p>
<p>
<a href="login.asp">Try Again</a>
</p>
<%
'Response.End
End If

' Clean Up
rstLogin.Close
Set rstLogin = Nothing
cnnLogin.Close
Set cnnLogin = Nothing
End If
%>
</body>
</html>
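
For completeness, the login_sp stored procedure referenced above might look something like the following. This is a hypothetical sketch: your table and column names will differ, and a real application should store and compare salted password hashes, never plaintext passwords.

CREATE PROCEDURE login_sp
    @username VARCHAR(50),
    @password VARCHAR(50)
AS
BEGIN
    -- The parameters are treated strictly as data, never parsed as SQL text
    SELECT * FROM users
    WHERE username = @username AND password = @password
END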



Wednesday, September 24, 2008

Your WAN diagram better include Starbucks...

When performing a network architecture review (security-focused), we always ask for LAN/WAN diagrams. Some LAN diagrams are detailed to an OCDegree, but other times we get some pretty lame ones (or none at all - even in HUGE organizations).

I often wonder - why aren't all the extensions of the WAN documented in the diagrams as well? When a remote worker with a laptop connects to your corporate network via VPN, isn't that truly an extension of your WAN?

Yes, yes it is.

Shouldn't laptops that are, in effect, extending your WAN to Starbucks and Panera be treated as assets with a higher rate of compromise associated with them? Let us go on record: the days of solely relying on the Windows Firewall and anti-virus software for laptop protection in the volatile network soup known as the Internet are LONG GONE. When a laptop connects to an open wireless network at (name your coffee shop of choice), your organization is inherently ACCEPTING all of the network vulnerabilities of that hotspot. You can't control the hosts that reside on the same network as your laptop, and you can't verify that there isn't already malicious activity taking place on that network.

What you can do:

  • Use laptop images as a source for creating a Minimum Security Baseline, not just an administrative Easy Button.
  • When deploying VPN to remote employees, don't enable split tunneling. Seriously, just don't do it. Full tunneling or bust. And to top it off, a web proxy would be great.
  • HIPS or be square. That's right, Host Intrusion Prevention Systems. As more of the big guys implement HIPS into their anti-everything agents, the time has come to really look at implementing the technology. Steer clear of HIPS technology that is signature-based; it will never be as strong as something behavior-based. I've got my favorite, but we won't talk about that here.
  • Network Admission Control is a good idea; it just depends on how you deploy it. Enforcing security posture will always be better than what most are doing, which is nothing.
  • Don't allow employees to install full VPN clients on their home PCs for connecting back to your corporate network. Since when was "Barbie Horse Adventures" part of your trusted app list?

All I'm asking is that you be realistic. Include everything that extends your WAN beyond your border router - meaning anyONE or anyTHING connecting to your network from the outside. To name a few:

  • Site-to-site VPN connections
  • Remote-access VPN connections
  • PDAs with syncing capabilities

This might be stating the obvious, but asking the question, "Does this extend the boundaries of my WAN?" is a good exercise while designing the management, technical, and operational controls associated with devices that are "ridin' dirty".



Tuesday, September 9, 2008

"So What's Everyone Else Doing???"

As a security auditor, I can't tell you how many times I've been asked this when talking about compliance. If I had a nickel for every time someone asked me that question... well... I'd probably want to throw it at the person who just asked. This is such a bad question on so many levels, and it still frustrates me each time. That being said, I suppose I should answer it here so that maybe, just maybe, they won't ask next time.

My first response is: 'everyone else' is not doing a good job, not doing enough, and likely doing the wrong things. For example, take PCI compliance. Even after all this time, only 77% of Level 1 merchants are compliant. Now, if everyone is being as tough as they should be, the rest of those merchants are getting fined $25,000 per month and possibly paying a higher transaction rate. Compliance basically exists because when 'everyone else' was doing what 'everyone else' did, 'everyone' sucked! So somebody had to step in and raise the bar for them. It's like the flock needing a shepherd.

Now imagine that all of a sudden you get breached because your 'average' organization is doing things just like 'everyone else,' which isn't enough... do you really want to stand at the podium and state that you didn't do enough because others didn't either? Is that really a good, defensible position? On average, the average isn't good. So do you really want to measure yourself against it?

I also find it ironic that just prior to this question, the same person usually states, "Well, we're unique here at Company X." Of course you are! If you weren't, you wouldn't have differentiators. There is no reason why one of those differentiators can't be security; in fact, it's a pretty good reason to not be like 'everyone else.' I'm hoping the next time someone asks me this, it's because they want to use it to out-market 'everyone else.'


Thursday, August 28, 2008

Dear NERC, CIP needs a protein shake...

We've been posting a lot of information about compliance regulation lately, so I'll just add another scoop to this steamy pile...

The North American Electric Reliability Corporation (NERC) is a self-regulatory (non-governmental) organization subject to oversight by the U.S. Federal Energy Regulatory Commission (FERC). As of June 18, 2007, FERC granted NERC the legal authority to enforce reliability standards with all U.S. users, owners, and operators of the bulk power system, and made compliance with those standards mandatory and enforceable.


The preceding paragraph came pretty much verbatim from the NERC website. Now that we have a little insight into NERC, let's stop FERC'in around and talk about Critical Infrastructure Protection (CIP).

CIP was designed to protect the United States' critical infrastructure, with a heavy emphasis on safeguarding the critical cyber assets (CCA) that help run the systems generating electricity and controlling its transmission. The CIP standard is broken down into 8 individual requirements (CIP-002 through CIP-009) covering various areas of protection and security. Audits for NERC CIP begin July 1, 2009. You might recall a certain blackout in 2003 that affected a large number of northeastern states? Hmmmmmm?? This prompted the NERC CIP standard, much like Enron prompted SOX.

As assessors and auditors, our team works with many different standards and regulations, and we've done a lot of NERC CIP related work with our energy clients over the past year. We've heard multiple complaints from clients about the CIP standard being vague or hazy, and I tend to agree. The clarity on the expected levels of protection is muddy.

As far as standards go, CIP needs a protein shake. We're talking about a standard that's designed to protect some of the country's most critical systems. It NEEDS to be stronger.

And what's with the non-standard terms in the standard? "Cyber"? "Electronic Security Perimeter"?

Really? Who uses those?

Why don't they just throw in "microcomputer" or "World Wide Web"?

While other standards and compliance regs require penetration testing, CIP only requires vulnerability scanning. Scanning for modems is referenced quite a bit in CIP, but there's practically nothing related to wireless. Sure, there are tons of modems out there, especially in those sectors, but NERC needs to let go of 1996. Check out some of the latest breaches across the country - I can't remember the last time I read a story about a compromise being traced back to a dusty modem. (Calm down, calm down... I know it still happens, just not as frequently.) And what about the exception for nuke plants? Why can't you apply NERC CIP to nuke plants as well? Businesses have to deal with multiple compliance efforts ALL THE TIME. Why wouldn't you use CIP as a "second set of eyes" for those sites?

And one more before I move on to the positives of NERC CIP: the standard is barely a shadow of what other regulations like PCI are requiring. You mean to tell me that the standards for the companies that allow me to turn on my lights are weaker than those for the companies that want to swipe my plastic?

NEWS FLASH: If the power is off, no one cares about PCI, HIPAA, or SOX.

Why?

Because the 'puters, calculators, and credit card processors don't work so well without power.

On a positive note - NERC CIP outlines a great schedule for compliance, with different progression paths. It's very detailed and could be something that other regulations take note of. The standard also breaks down what can be used as measures to demonstrate compliance, as well as specific levels of non-compliance which act as a nice grading system.

All in all, the standard has some positives but plenty of negatives. In my opinion, it has a long way to go before I stop stocking up on candles.


Friday, August 22, 2008

Regulations Attack

I recently published the top eight trends for '08 (http://www.securestate.com/Pages/Top-8-In-08.aspx); however, one topic in particular has caught my attention: why are “Regulations” being attacked?

At DefCon 16 I had the opportunity to meet some really interesting people with different perspectives on security. And, for the first time in DefCon history (to my knowledge), “Compliance” standards opened the conference Friday morning. I was so excited to hear what the “hackers” thought about PCI, GLBA, HIPAA, etc. To my disappointment, the presenter ranted about how compliance doesn’t equal security… DUH! But what regulations do provide is value, and the value is called “doing something!” Hell, most companies (97%) won’t do anything at all until they are forced!

Even with these standards, millions of records are still being compromised. Let’s rant about companies losing our data, not about how bad the regulations are. Let’s face it: if companies were doing what they should, there wouldn’t be a need for regulations! I am writing an article for Information Week on “Malicious Compliance in Distress,” which addresses companies doing the bare minimum to become compliant instead of appropriately securing the data. If you use these regulations as a Minimum Security Baseline, you can always add additional layers of security on top of them. For example, PCI just calls out not using WEP, and mentions the ability to use WPA and WPA2… however, as security professionals we would consider WPA and WPA2 just as bad. So by PCI standards you can be compliant, yet no more secure than if you used WEP. Use the regulations to get a newer, stronger encryption protocol for your wireless environment.

Let’s not attack the regulations, but the reasons why they were developed! View regulations as the minimum standard. If you took a comprehensive approach to security, you would comply with all the regulations anyway (ISO 27001 & 27002). So instead of bitching about regulations… use them to get funding and do the right thing :-)


Monday, August 18, 2008

Undercover at Defcon

After having attended yet another Defcon, I find myself a little frustrated. While I am a geek at heart, I am not a Linux-chugging, code-puking, trench-coat-wearing, hair-dyed, multi-pierced hardcore guy like many. But then again, I am not alone. Though many like to think it’s still ‘underground’, it really hasn’t been for quite a while. Security isn’t just an IT thing any more, and it’s gaining ground in the business world. Hence there are many security professionals and vendors in attendance. So this year, I specifically set out to find that business side of security. As for being undercover, no, I would not be a winner in the ‘spot the fed’ contest. I am just a security auditor who was hoping to hang out with my coworkers, learn a few things, and do a little networking.

Now I have to preface my story with some important information. Every night typically ended with the sun rising, my buzz fading, and my alarm looming just a few hours away. So perhaps I was a little tired, hung over, and grumpy going into each morning – though I’m generally grumpy according to most anyway :) Still, I made my way to the conference, grabbed my new-fangled badge, and hit my first presentation. The abstract was very promising, as the presenter alluded to the fact that compliance != (does not equal) security. Certainly he had a strong starting point. But he tripped coming out of the blocks. The rest of the presentation turned into an angry IT guy condemning every standard and every certification, and pointing out how stupid and useless auditors are.

Now, I’ll be the first to say there are many auditors working in areas they should not be. I think we’ve all had to deal with the Big X auditor/kid straight out of college who can’t seem to discuss anything outside the verbiage in his checklist. But it’s just as annoying to have someone unqualified lecturing about compliance. It does not make any sense to judge the strength of a compliance framework based on the length of the standard. Nor should you compare an IT standard against a security standard. And you shouldn’t bring up standards when you don’t even know what the letters stand for. Again, I’ll be glad to raise my hand and tell you all the flaws with all the standards, like in my recent post on PCI. But I have at least actually had to work with those frameworks. I suppose it’s just a different view when you are subject to them.

The rest of my Defcon experience was also peppered with compliance bigotry, even from the likes of professors. But that’s not to say there weren’t some great presentations too. One was on a new tool to find and perhaps exploit Modbus TCP devices on SCADA systems. That certainly piqued my interest, given all the NERC CIP compliance work we are doing. There were a couple of different presentations covering problems with RFID, including devices that go beyond just cloning prox cards to brute-forcing site codes on common card formats. I think the best presentation was ours – only because I got to see our head geek get pummeled with lemons for his sins against humanity. Don’t ask :) After all, what happens in Vegas...


Friday, August 15, 2008

Elements of a Good Assessor

As assessors, there are some crucial elements that you need to incorporate into your style while you are in front of a client, whether it be the way you present yourself, the way you ask questions, or just the way you collect information. All of these issues can affect the quality of the assessment and how smoothly it goes.

The following are just some quick tips to consider as you are doing your assessment, making it as thorough and as painless as possible.

Be friendly but don’t be their friend.

This is one of the most helpful items that I have taken to heart. As an assessor, you want people to feel comfortable and divulge all the information you need from them. If they feel pressured or backed into a corner, you’ll get only short and sweet answers that, depending on the situation, will not get you the information you’re looking for. Try connecting with them at the beginning of the meeting. Ask them how long they have worked at the company and see where the conversation goes from there. Magically, a rapport starts to develop and the auditor wall will start to crumble.

Other things to bring up: weather, news (NOT politics), and opinions on technology. Showing a sincere interest in what they do at their job also helps. People love talking about themselves!

‘May I see an example?’ should be your motto.

People can be a great source of information, but the devil is in the details. Always maintain an inquisitive nature, and develop an uncomfortable feeling about any information you don’t have documentation to support.

This is especially important when the client states that they are accomplishing the control or have certain processes around it. Not always, but usually, you can trust employees to be honest when they are talking about deficiencies within their processes. The concern grows when they say that everything is fine and all of their controls are in place and working correctly. That is a clear sign that you need to gather documentation and further information on the status of findings.

If you get into an audit situation this becomes especially important, as everything typically needs some type of paper trail to confirm the control is functioning and in place.

If they push back, attack!

Honestly, pushback should be a red flag while assessing personnel. If you think you’re getting resistance, it could be one of two issues: they could feel uncomfortable about the situation, OR they could be concealing something. If they are concealing something, you need to dig even more, ask for examples, and confirm the content with others within the auditing scope.

Don't be afraid to ask the same question more than once. For example, asking the configuration manager about pushing code into production might reveal that they have a uniform configuration management tool - and that it's the only method of getting code there. But when talking to the software engineer about the same topic, they might reveal that they often put code into production in order to test it first.

I assume you know about assumptions!

Your whole job as an assessor is to gather facts and to interpret the results - no assumptions included. This is important even if you are familiar with the environment. Personally, I have to watch out for this when I'm involved with a follow-up assessment for an organization. It is very easy to fall into presumptive questions if you knew the answer last year. The problem is that you do not know whether the environment has changed within the last year, and injecting your own presumptions into the assessment could bite you in the end.

Try to look at each assessment engagement as a separate effort. Even if you are familiar with the organization, ask their personnel the questions again and let them answer.

Let them do the talking.

Bottom line – you don’t get any answers when you’re doing the talking. Set up questions that allow them to describe the situation or process. A closed question sets up a yes/no answer, like “Do you do this within your process?” Instead, you need to ask open questions that force them to describe the situation from their own point of view: “Can you walk me through how you would typically perform this process?”

If you get into a complicated section, use confirmation questions at the end, for example: “My understanding of the current situation is this. Am I correct?” You want to make sure that the findings you are putting down are as accurate as possible.

Don’t report the findings until the end.

I can’t tell you how many times I’m asked after an interview, “So how did I do?” The best strategy is to just say that you need to look at all of the information holistically before bringing out the findings. Let’s consider a couple of scenarios.

Scenario 1 - “Mr. Client, you’re great and I see nothing wrong out of this interview.”

The client is happy that they’ve done their job in your eyes. The person then goes to gloat to his boss about the fine work they’ve done. That is, until the next day, when you discover a gaping hole in their process that wasn’t apparent until you looked at the documentation or talked with another person involved. Now you have to retract the statement you made, the client has to retract their statement, and there is a bitter feeling in the air.

Scenario 2 - “Mr. Client, you have some major deficiencies because of the findings I saw in this particular area.”

Now the client could fight back and try to justify their position, why they didn’t implement certain controls, or why they think security is a joke! Additionally, if you have to go back to the person to gather more information, they are going to be a closed book.

Bottom line – save the findings until the end where you can present all of them in an orderly fashion.

Practice good meeting facilitation.

Lastly, you should always practice good meeting facilitation while you’re performing interviews. Some examples are introductions, setting the tone of the meeting, good time management, keeping proper focus on the objective, and closing the meeting. This is important to ensure that all of the necessary information is gathered within the appropriate time frame.

I’ll elaborate in a future blog post on the details of some of these elements and what I like to do to open and close a meeting.

--
Keep in mind that these are all recommendations and general guidelines for an assessment. When the actual work is being performed, you are the general on the ground, and no battle was ever won by following the plan to the letter. Adjust to the changes within the organization and environment, and everything will complete successfully!


Wednesday, August 13, 2008

Defcon – "And this is very illegal! So the following material is for educational use only."

I’m not a hacker, but I live with them. I took the pilgrimage to Defcon, attended by many of the world's best-known security experts, and felt much like the kid reporter in the movie “Almost Famous.” Among other (sometimes bewildering) presentations, Defcon showcases demonstrations of the latest discovered weaknesses in computer systems.

The big brouhaha this year was “The Anatomy of a Subway Hack,” a presentation on Boston’s T that got blocked: a federal judge ordered three college students to cancel the Sunday talk where they planned to show security flaws in the automated fare system used by Boston’s subway. I wouldn’t have thought this was any different from the presentation the SecureState team gave, where we released various new tools, including SA Exploiter. However, I guess when one of your slides proclaims, "And this is very illegal! So the following material is for educational use only," it draws attention to you.

At SecureState, we believe everyone (most especially those organizations trying to protect themselves) should have access to all available information. The belief is that hiding findings (zero-day exploits) is not going to stop the bad guys, who have the time and incentive to find the vulnerabilities themselves. Disclosure just keeps the good guys on the forefront.

Many organizations without the resources to properly research the latest and greatest vulnerabilities use penetration tests to get the results of that research, with the ability to see how it affects them specifically. Penetration tests are the foundation of security, since you don’t know what you don’t know. Thus, keeping security problems secret (the “security through obscurity” idea) doesn’t protect the businesses relying on those systems.

In short, our goal at SecureState is to make security better. We don’t look to disclose things that can hurt people, especially when there is nothing they can do about it. Releasing exploits and tools gives researchers and ethical hackers the opportunity to learn from the experience we have, and gives organizations a better idea of the attacks that are possible and the steps they need to take to prevent them. The bottom line is that while there are risks, the public good is better served by having knowledge freely available. Besides, H4CK3RS are people too.
