You can Google "trust, but verify" and turn up hundreds of articles about one of Ronald Reagan's signature catchphrases and its application to accountability, auditing, and more. It can also be considered the default credo of the auditing community. Regardless of where the phrase came from, and however overused it may be, it's what I live by, and it's a code that should be followed by anyone responsible for their company's compliance/governance programs and the security of sensitive data. Just about every regulation that deals with the protection of sensitive information requires some form of risk management and/or validation of controls. No compliance or risk management program will succeed without a high level of verification that proper security controls are in place and operating effectively.
If you are subject to the PCI DSS as a level one merchant or service provider, your QSA will thoroughly verify your compliance status annually. However, the QSA is only attesting to your compliance at a point in time; what happens between audits is the organization's responsibility. There should be no need to start prepping for an audit two months before the QSA comes on site, because being compliant means following the standard at all times. An organization must continuously verify its controls. That means following your internal change control process whenever changes are made to the cardholder data environment (PCI DSS 6.4), performing secure development and verification testing of applications (PCI DSS 6.5), running vulnerability scans (PCI DSS 11.2), and meeting numerous other requirements.
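To make "continuously verify" concrete, here's a minimal sketch of an automated verification gate: a script that reads a vulnerability scan report and refuses to let a change ticket close while unresolved high-severity findings remain. The report file name, JSON schema, and severity threshold are all hypothetical, not any real scanner's output format; your scanner, ticketing workflow, and policy (for example, gating on CVSS scores or PCI ASV pass/fail) will differ.

```python
#!/usr/bin/env python3
"""Sketch of a change-control verification gate (hypothetical schema).

Assumes a scanner that exports findings as JSON like:
    [{"id": "CVE-2012-0000", "severity": "high", "resolved": false}, ...]
"""
import json
import sys

# Severities that should block promotion to production (an assumption;
# substitute your own policy).
BLOCKING_SEVERITIES = {"critical", "high"}


def unresolved_blockers(report_path):
    """Return findings that are both severe and still unresolved."""
    with open(report_path) as f:
        findings = json.load(f)
    return [
        finding for finding in findings
        if finding.get("severity", "").lower() in BLOCKING_SEVERITIES
        and not finding.get("resolved", False)
    ]


if __name__ == "__main__":
    report = sys.argv[1] if len(sys.argv) > 1 else "scan_report.json"
    blockers = unresolved_blockers(report)
    if blockers:
        print(f"VERIFICATION FAILED: {len(blockers)} unresolved finding(s).")
        for finding in blockers:
            print(f"  - {finding.get('id', 'unknown')}: {finding.get('severity')}")
        sys.exit(1)  # non-zero exit keeps the change from moving forward
    print("Verification passed: no unresolved high-severity findings.")
```

The specific script doesn't matter; the point is that "done" gets backed by evidence a reviewer can re-run, not by a verbal yes.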
How many times have you sat in a status meeting, change control review, vendor meeting, or whatever, and just accepted a 'yes' or 'no' answer from someone, only to find out later that the task wasn't finished, was performed incorrectly, or that some functionality wasn't designed as expected? At the time you trusted that what the person said was true, but what rings true in their mind can mean something else entirely to you. Don't leave the task of verification up to an auditor who may show up once a year. I'm not saying that everyone is a liar or incompetent, just that not everyone interprets things the same way you do.
The latest example of a breakdown in the "trust, but verify" cycle is another highly publicized security breach at one of the country's largest health insurers. These guys fall under numerous regulatory requirements and may have exposed personally identifiable information and cardholder data belonging to close to half a million members. Why? A vendor 'mistakenly' told them it had completed its work and that all the security controls were operational (Indianapolis Star, 6/29).
Basically, someone trusted a yes-or-no answer from a vendor and left it at that. The high price of trust in this case includes the costs of breach notifications, potentially paying for identity/credit protection services, and, most importantly, a loss of trust in the company by anyone who hears about the breach. That loss of customer trust can result in high levels of member attrition, translating into lower revenue.
How can these situations be prevented? Trust, but verify! All parties should have had testing procedures in place to conduct application security testing and ensure the code functioned as it was supposed to. Let's reenact what may have happened and contrast it with what should have happened.
Instead of:
Vendor management (VM): "We're about at our deadline for that code; was the work completed?"
Vendor Developer (VD): "Yep."
VM: "Hey client, we wrapped up our work…here's the code."
Client Management (CM): "Awesome, we'll get this into production right away."
How about:
VM: "Show me the test procedures and results for that code before we notify the client we're done."
VD and QA: "Sure, here's the detailed report of what was tested and the results. As you can see, we found some issues, but got them fixed and re-ran our test procedures until it was clean."
VM: "Great! Everything seems to be in order; let's get this over to the client for their review." "Hey client, we wrapped up development and QA and are ready for your review."
CM: "Beautiful, we'll get this over to our internal QA personnel and verify that everything is working like you said."
VM: "Not a problem, we have an excellent quality assurance program. We develop and test based on OWASP best practices. If you find any issues, just let us know and we'll send it back through the change control process."
CM: "Hey QA, we've received the latest revisions from the vendor. Please send it through our change control process and get me the test results before we go live."
Client QA: "Will do. We'll take a look at the vendor's report and run a vulnerability scan along with internal and external penetration tests. We need to validate that this module is secure before our customers start using it."
CM: "Absolutely, we already had two previous breaches and I don't want a repeat performance. My butt is on the line. Let me know if you find any issues."
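To make the client QA step in that exchange concrete, here's a minimal sketch of the kind of automated security regression test a client team could run before go-live. The staging URL, the /login and /search endpoints, and the "Welcome" success marker are all hypothetical assumptions for illustration; real application security testing, per OWASP guidance, goes much deeper than two checks.

```python
"""Sketch of pre-go-live security checks a client QA team might run.

All endpoints and expected responses below are hypothetical; substitute
your own application and extend per OWASP testing guidance.
"""
import urllib.error
import urllib.parse
import urllib.request

STAGING = "https://staging.example.com"  # hypothetical environment


def post(path, fields):
    """POST form fields; return (status, body) without raising on 4xx/5xx."""
    data = urllib.parse.urlencode(fields).encode()
    req = urllib.request.Request(STAGING + path, data=data)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, resp.read().decode(errors="replace")
    except urllib.error.HTTPError as err:
        return err.code, err.read().decode(errors="replace")


def test_sql_injection_is_rejected():
    # A classic tautology payload should never produce a successful login.
    # "Welcome" as the success marker is an assumption about this app.
    status, body = post("/login", {"user": "' OR '1'='1", "password": "x"})
    assert status != 200 or "Welcome" not in body


def test_xss_payload_is_escaped():
    # Reflected input should come back encoded, not as live markup.
    payload = "<script>alert(1)</script>"
    status, body = post("/search", {"q": payload})
    assert payload not in body  # raw script tag must not be reflected
```

Run something like this with pytest against staging before the change goes live; a failing test sends the code back through change control instead of into production.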
Okay, so I oversimplified a bit, and we don't know exactly what happened or the circumstances that really led to the "mistake." I don't want to bash anyone without knowing all the details, but you get the point. If you are going to implement changes to production, public-facing systems that have access to sensitive information, verify those changes BEFORE an incident happens. "Trust, but verify" is a key component of any risk management program and compliance requirement. Yes, verification comes with a price, but it will more than likely be much lower than the price you pay in restitution and lost reputation if you rely on trust alone.