Implied Security Policies Create Added Risk

The US Supreme Court has overturned a lower-court ruling and concluded that management has a right to review employee text messages on company-issued devices.  If used as a precedent, this case may have far-reaching consequences for employee expectations of privacy in workplace communications.  However, the ruling should also serve as a wake-up call for organizations that do not have explicit written security and privacy policies.

The case stemmed from an incident in which management at the Ontario Police Department reviewed text messages to investigate compliance with bandwidth usage policies on company-issued devices.  The department had an informal policy that limited usage to a fixed amount each month.  Once several employees began exceeding these limits on a regular basis, management began a review and in the process discovered that some people were sending personal messages containing explicit sexual content.  After being sanctioned, the employees sued the department, claiming that their privacy had been violated.

Written or Implied Security Policies

A key issue in the case is that, while the Ontario Police Department did have a written Acceptable Use Policy, that policy did not cover the monitoring of text messages.  Instead, there was an “implied” policy that employee messages would not be audited if employees paid for their text message overages out of their own pockets.  This informal policy was enough for the 9th U.S. Circuit Court of Appeals in San Francisco to rule that the officers had a “reasonable expectation of privacy” in their text messages, and that their constitutional rights had therefore been violated.

The Supreme Court overturned this ruling, sending a clear message that organizations have a reasonable right to inspect employee communications while attempting to assess compliance with corporate policies.

The Good News

This ruling confirms a common security policy in place today in many organizations – namely that employees should not expect privacy when using company-issued equipment.  However, the fact that this case went all the way to the Supreme Court illustrates an important policy-related lesson for organizations.

The Policy Lesson

All security and privacy-related policies should be in written documents, not implied in any verbal or informal communication. In this case, having an “implied” policy, rather than a written one, set up a risky environment for the Ontario Police that landed them in the news and exposed the department to litigation at the highest level.


Five Reasons Why Security Policies Don’t Get Implemented

This article will explore five serious problems preventing information security policies from being implemented, even though these policies may have been written with the best of intentions. Cutting across all five of these causative factors is a theme involving a lack of understanding about the nature of policies. All too often policies are written in a rushed and narrowly-scoped effort, which is in turn responding to circumstances such as: (a) a request or complaint issued by a business partner or important customer, (b) an adverse audit finding, (c) a necessary step on the way to obtaining some type of security certification (such as the Payment Card Industry’s Data Security Standard), (d) a legal or regulatory requirement, or (e) a serious security breach or some other costly operational problem.

Information security policies are by their very nature cross-disciplinary, cross-departmental, cross-organizational, and cross-national. That means that, to be successful, they must embrace many different considerations. These considerations may on occasion be at odds with each other. For example, an access control policy at a bank may need to be rigorous, and may on one hand need to satisfy prevailing laws and regulations, keep fraud and other problems to a minimum, and allow participation in inter-bank networks. On the other hand, this policy must also be simple and easy to use, and not be so onerous that workers are prevented from efficiently doing their work, and not be so burdensome that staff at customer organizations accessing the bank’s systems are encouraged to do business elsewhere. So the best overall advice, to avoid implementation problems such as those discussed below, is to study the big picture in advance, to understand all the requirements that may have a potential influence on the final version of a policy.

While the performance of such a holistic review of requirements may at first seem economically infeasible, much of this information should already have been collected in the course of performing a risk assessment. Some other essential pieces of information, needed to paint a holistic picture of the environment into which a policy will be placed, will be gathered by well-run information security operations. For example, a loss history database, documenting the incidents occurring over time (and also categorizing these, analyzing these, and ranking these incidents), will in turn provide a perspective that is essential to understanding the real-world operating environment into which a new policy will be inserted.
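A loss-history database of the kind described above need not be elaborate to be useful. As a minimal sketch (the incident records and loss figures below are entirely hypothetical), it can start as a simple list of records that gets categorized and ranked by cumulative loss:

```python
from collections import defaultdict

# Hypothetical incident records: (date, category, estimated loss in USD)
incidents = [
    ("2010-01-14", "phishing",        12_000),
    ("2010-03-02", "lost laptop",      4_500),
    ("2010-06-30", "phishing",         8_000),
    ("2010-09-11", "insider misuse",  25_000),
    ("2010-11-05", "lost laptop",      3_000),
]

def rank_by_loss(records):
    """Aggregate losses per incident category and rank highest-loss first."""
    totals = defaultdict(int)
    for _date, category, loss in records:
        totals[category] += loss
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

ranking = rank_by_loss(incidents)
# The top-ranked categories are the real-world risks a new policy must address.
```

Even this crude ranking makes the operating environment concrete: the categories at the top of the list are where a proposed policy will meet its real-world test.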

(1) Failure To Explicitly Define Long-Term Implications: While keeping presentations about proposed policies both short and to the point is certainly desirable, sometimes information security specialists go overboard with this objective. They may then leave out the important long-term implications of adopting a specific policy. Management may later be dismayed to discover that these implications contradict, or are otherwise in conflict with, other organizational objectives. This description of the problem of course assumes that the information security specialist making a request for approval has himself or herself thought through these long-term implications; certainly there are many cases where this has not been done before a request is made of management. Either way, the net result is the same: management is angry or upset because they feel as though they have been led through a “bait and switch” process, where they thought they agreed to X, but really what they got was X, Y, and Z.

When adopting a policy, we information security folks need to be specific about what the policy means in the long-run. For example, if we suggest a wide-open bare-bones privacy policy because we don’t want to be bothered with all the controls that may otherwise get in our way, we may not have mentioned to top management that this same bare-bones policy may have adverse marketing implications.

This may have been the experience of Facebook when it came to light that third parties could do automated data mining employing the personal data supplied by Facebook users. While the privacy policy originally allowed Facebook to gather information about users from other sources, such as blogs, instant messaging services, and other Facebook users, this policy was later changed. The revised policy allowed users to remove content about themselves, and to further limit the visibility of their own personal information. This episode implies that Facebook staff did not fully investigate the long-term implications of their original privacy policy, and the original approach interfered with what appeared to be the firm’s intention: to establish a reputation of due diligence in the privacy area.

The top managers who approve policies should not be expected to ask about long-term implications. In many cases, because they are not technical, these managers cannot imagine the implications of proposed policies. Instead, the responsibility should rest on the shoulders of the middle manager proposing an information security policy. Even if this information is never mentioned in a presentation, it should nonetheless still be delivered to management, perhaps as an appendix to a written proposal to upgrade a policy.

(2) Full Cost Analysis Not Employed: With the severe economic pressure that so many IT shops are facing these days, it can be tempting to only request the first step in a long implementation process. For example, next year’s budget proposal, in support of a newly adopted policy, may request only the purchase of new security software and training for operations staff to use that same software. The requested budget may fail to mention the cost to handle frequent fixes and patches, on-going maintenance charges for the software, end-user training required in order to properly utilize the software, etc. Management is never going to be happy to hear that there are hidden costs, and this unhappiness may mean that the whole project gets dropped midway across the stream toward implementation.

While in some instances the requestor of funds may not have understood the full cost of a proposed policy, failure to employ a full-cost approach is clearly out of step with current IT management practices. For example, management is increasingly looking at TCO, or the total cost of ownership. Management really needs to know the long-term costs and benefits associated with a proposed policy. This includes the implications for many things including: production system downtime to install a new security system, future upgrade and scalability expenses associated with a new system, and the impact of the new system on response time.
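To make the point concrete, here is a hedged sketch of the TCO arithmetic (every figure below is invented for illustration): the recurring and hidden costs can easily double the first-year purchase price that a narrow budget request shows management.

```python
# Hypothetical cost model for a proposed security system (all figures invented).
purchase_price = 100_000        # what the first budget request typically shows
annual_maintenance = 18_000     # vendor support, fixes, and patch subscriptions
annual_user_training = 10_000   # recurring end-user training
install_downtime_cost = 15_000  # one-time production downtime during installation
upgrade_cost = 30_000           # mid-life upgrade / scalability expense

def total_cost_of_ownership(years):
    """Full life-cycle cost of the system, not just the initial purchase."""
    return (purchase_price
            + install_downtime_cost
            + upgrade_cost
            + years * (annual_maintenance + annual_user_training))

tco_5yr = total_cost_of_ownership(5)
# Under these assumptions, five-year TCO is well over double the purchase price.
```

The exact numbers matter less than the shape of the calculation: a budget request that presents only `purchase_price` invites exactly the mid-stream unhappiness described above.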

It is thus advisable that the requestor perform this research before a management request is made. The requestor should have identified the technology that can be used to implement a policy, before that policy is proposed. This will give the requestor an opportunity to talk with the vendor about full life cycle costs, as well as the other implications of adopting a particular approach to improve security or privacy. This background research will also prevent the occasionally occurring (and highly-credibility-eroding) situation, where it comes to light that there is in fact no commercially available software that can be used to implement an adopted policy, and that an in-house solution must instead be developed.

(3) User Training & Acceptance Efforts Not Undertaken:  Failure to convince those who will be affected by a policy that the new policy is indeed in their best interests is most likely going to be a serious problem. As much as those responsible for meeting a deadline might like to autocratically dictate that such-and-such a policy will be adopted by everyone — period, this approach sets up a resistance dynamic that will interfere with the consistent implementation of a new policy. Users need to be respected, and convinced that the new policy is in their best interests, and that it also protects the organization as a whole, and then they will be much more likely to go along with a changed operating environment.

Granted, sometimes user resistance is illogical or out-of-touch with the facts. But it will nonetheless be encountered more frequently if we do not take time to explain and sell new policies and the systems that implement these policies. For example, many years ago, a programmer didn’t like a new policy that required an automatic session time-out to kick in after a certain number of minutes had elapsed without any key on the keyboard being depressed. He was not part of the discussion about adopting this policy, and he hated having to sign in repeatedly after being signed out automatically. So, while at work (certainly not a good use of organizational resources), he wrote a small script that automatically issued a space whenever 14 minutes had elapsed without any typing on his keyboard (the auto-logout feature would kick in at 15 minutes). Because his job involved programming, and a random space interspersed into his code would not affect the operation of the code he was developing, his solution was, in its own way, an elegant one.
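The arithmetic behind that workaround can be sketched as a tiny simulation (the 15-minute timeout and 14-minute keep-alive interval come from the anecdote; the function and variable names are our own):

```python
def session_survives(timeout_minutes, keystroke_gaps):
    """Simulate an idle-timeout policy: the session survives only if no gap
    between consecutive keystrokes reaches the timeout threshold."""
    return all(gap < timeout_minutes for gap in keystroke_gaps)

# A real user who steps away for 20 minutes is logged out under a 15-minute policy...
user_gaps = [2, 5, 20, 1]

# ...but a script injecting a space every 14 minutes never idles long enough,
# defeating the control indefinitely.
keepalive_gaps = [14] * 100
```

The simulation shows why the control failed in practice: the policy measured keystrokes, not attention, so a one-line script satisfied the letter of the rule while defeating its purpose.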

It would have been much better for information security (especially because his workaround became a popular topic of internal discussion), if he had understood why an auto-logout feature was now being required. Those folks proposing policies really need to be intimately in touch with the user community, and they need to know how the proposed policies will affect the user community. To meet this objective, and to get the best results, have representatives from the user community provide input to the writing of policies, the implementation of policies, and the development of related user training.

(4) Discovery Of Unforeseen Implementation Conflicts: Failure to research the cultural, ethical, economic, and operational implications of the policy implementation process is often a problem. This is particularly serious in multi-national organizations where the local way of doing things may be at great variance from a newly adopted policy. For example, while consolidation of data centers may seem like a good idea from a cost containment standpoint, and may also provide new opportunities to consistently apply privacy measures to all human resources data from various countries, such an approach may run afoul of so-called trans-border data flow laws. More specifically, privacy laws in western Europe may prevent human resource data from leaving the country where it is now stored, because the government’s ability to oversee and regulate this data would then be eliminated. While there are some workarounds, such as the “safe harbor” agreements that permit the international movement of such data, this example illuminates what may become an unforeseen conflict blocking the implementation of a particular policy — the existence of a different local way of doing things.

This particular example also points to what has become one of the most complex areas of modern information security work: the harmonization of security and privacy laws and regulations across countries, so that organizations can have clarity about what is permitted and what is not, and can then go on to adopt a single policy across-the-board throughout the world. Being required to develop special approaches for users in certain countries, or for certain departments within an organization, violates the secure systems design principle of “consistent application.” Security is eroded each time an exception is made; to increase the level of security, all users should be required to abide by the same policy.

(5) Communications Gap Between Technologists & Management: In many instances, the information systems technical infrastructure is modified regularly to respond to new security threats, to adjust new software so that it operates better, to accommodate a new business partner, to improve user response time, etc. These changes don’t necessarily go through a formal change control process, instead being a reflection of the formal duties assigned to technical staff such as systems administrators.  While these technical staff may be diligently attempting to keep their own portion of the information systems infrastructure secure, reliable, and responsive, the changes that they adopt locally may close doors, and thus prevent the future changes required in order to implement an organization-wide policy.

The communications gap here is between the technical and administrative staff who are often running around handling problems, sometimes in crisis mode, and talking with each other. These people don’t necessarily discuss their activities with the top managers who approved a policy, who in turn may be occasionally talking to middle managers. This gap in internal communications can be especially problematic if the Chief Information Security Officer, the Chief Privacy Officer, or some other middle manager who ordinarily would be expected to act as a bridge, is non-technical in his or her orientation.

This communications gap may also come about when one department decides it needs a specific control, and it goes ahead and writes a local policy, and then unilaterally implements a control supporting that local policy. This local implementation may later be shown to be in conflict with an organization-wide policy and/or security architecture. For example, at one high-tech company where the author worked, the research & development group was very concerned about their latest product designs walking out the door on removable storage media. They accordingly adopted a policy mandating a digital rights management (DRM) system implemented on thin workstations, so that the sensitive information could only be accessed via those workstations physically located within the office. This approach, although effective in terms of meeting departmental goals, ended up being incompatible with the new encryption policy adopted for the organization as a whole.

Communication is Key: By paying attention to these five common mistakes, the information security practitioner can save himself or herself, and the organization, a lot of wasted time and effort.  As the reader will note, the common thread running through all five is communication.  More and more, the information security team must have both technical and interpersonal skills to be successful.


Security Policies to Address Internal Threat

We hear reports of new data breaches almost daily.  While most of them are fairly complex stories, they almost always begin at some point with a human “insider” making a mistake.  In fact, 2011 could be considered the “Year of the Insider.”  From the RSA hack and Sony PlayStation breach, to the Epsilon e-mail breach and the Oak Ridge Lab phishing attack, database breach announcements that started with insider mistakes have become common news. Malicious threats are also on the rise, as Bank of America was recently hit with over $10 million in losses due to a malicious insider.

But who IS the insider, and how can we implement controls to help stop them? In this new Information Shield white paper, The Insider Threat – Security Policies to Reduce Risk, we break down the various attributes of the insider threat, and suggest some information security policies that can help reduce the likelihood of current and former employees causing harm to the organization.  We illustrate some of these controls with sample policies from our security policy sample library.

Since the very notion of an insider threat involves the risk of people’s behavior, and since information security policies are designed to influence behavior, it makes sense to look at the problem of the insider threat from the perspective of the “lifecycle” of an employee’s access to information.  (This is represented in sections 8.1 to 8.3 of the ISO 27002 framework.)


One Security Policy Document Or A Series Of Documents?

Plan First: We all know that it’s advisable to create a plan before undertaking a large and complex project. For instance, most reasonable people would not consider building a modern residential house, with plumbing, heating, electrical, lighting, and communications systems, if they did not first have a clear and specific plan (aka blueprint). Of course, all of these systems could be added-on after the house was built, but the result will look jury-rigged, function much less efficiently, and be subject to breakdown in ways that would not plague a well-planned house.

The writer of new or substantially revised information security policies faces a similar situation. Yet many of these policy writers, especially the ones new to the task, think they can just slap a few sentences together (ideally sentences copied out of somebody’s book or another organization’s policy statement). After they do this, they think they’re done, uttering words like “there you go.” Those who have been down the security policy development road before, particularly those who have revised policy statements written by others, well … to be kind, let’s just say they know better. The surprisingly high level of complexity in the modern information security environment is revealed in these moments.

So here’s the practical advice: to keep your job as well as your good reputation, if you’re writing information security policies, be sure to sketch out a plan for the structure of the policies before the policies themselves get written. This structure for policy statements should define the titles of, scopes of, and release dates of specific areas to be addressed in the information security policies.
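As a minimal illustration of such a plan (the policy titles, scopes, and release dates below are invented), the structure can start as nothing more than a list of records that is easy to sort, report on, and show to management as evidence of progress:

```python
from datetime import date

# Hypothetical policy roadmap: title, scope, and scheduled release date.
policy_plan = [
    {"title": "Acceptable Use",     "scope": "all staff",         "release": date(2011, 3, 1)},
    {"title": "Access Control",     "scope": "all systems",       "release": date(2011, 5, 1)},
    {"title": "Third-Party Access", "scope": "vendors, partners", "release": date(2011, 8, 1)},
]

def release_schedule(plan):
    """Return policy titles in scheduled-release order, for a progress report."""
    return [p["title"] for p in sorted(plan, key=lambda p: p["release"])]
```

However it is recorded, the point is the same: titles, scopes, and dates exist before any policy prose is written.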

Risk Assessment Directs: There are of course a few types of security policies that apply to everybody, such as an access control policy based on the need to know. Beyond those, things get a lot more complicated. The types of policies that need to be written should be driven by a recent, broadly scoped information security risk assessment. If, for instance, the risk assessment speaks to risks associated with the release of classified government information, then a set of policies consistent with Department of Defense requirements would be appropriate. Alternatively, if the risk assessment talks about the risk of unauthorized software copying, then a separate set of policies — perhaps addressing access controls which prevent the downloading of any unauthorized software — would be appropriate.

A well-known intellectual property attorney recently mused about writing information security policies in a conversation with the author, saying: “The law doesn’t require that you be perfect, only that you’re taking prudent steps, and clearly making demonstrable progress.” To have a schedule for the release of various policies, with dates, and scope statements, and titles — that type of documentation can indicate that your organization is addressing what needs to be addressed, that it has a plan, and that it is making progress. In that respect, this plan for policies can at least partially document management’s due diligence process in the information security area.

Short & Modular: If the organization in question is very small, perhaps a startup cloud service provider with a handful of employees, then for a while, they can get away with a single document. But soon they will need to break their information security policy document into a series of segments. Larger and more complex organizations will find that short, modular, and tightly focused policy statements are easier to index, search, and update. Short and modular policy statements are also ideal for easy insertion into pop-up help screens, application system user manuals, and other system-resident text that is relevant to a specific task.

The title of, scope of, topics covered by, and scheduled delivery date of policy statements is additionally a function of the delivery systems to be employed. If more than one major delivery system is to be employed, the same ideas may need to be expressed in very different ways, compatible with the delivery systems involved. For example, if policy statements are slated to be delivered by broadcasters  on a periodic management satellite-broadcast TV show, to a worldwide network of sales offices, and if the recipients are non-technical, typically impatient, and not particularly motivated to pay attention, then very short abbreviated verbal policies would be appropriate. But if policy statements will be delivered via an intranet, and users will have a wide variety of automated tools at their disposal, such as key word search utilities, then a longer more literate text-oriented style can be, and probably should be used.

Know your Audience: The way in which policy statements should be scoped, and hopefully modularized into bite-sized pieces, is also a function of the audience to be receiving the information contained therein. Policy writers should define who the major audience members are. Three favorites of this author are: end-users, technical information systems staff, and management. Another example of audiences, for an Internet merchant, would be: customers, third party business partners, in-house technical staff, in-house marketing staff, and top management. Still another example for a multinational manufacturing firm would be: staff in countries where there are operations, headquarters staff, and IT staff (both in-house and outsourced).

Of course there will be some redundancy of the messages delivered across these audiences, but it is important for the policy writer to define, in advance of writing policies, just what messages need to go to which audiences. After these messages and recipients are defined, the policies will often — of their own accord — naturally fall into certain categories, and these categories can then be used as a guide to segment the policy statements into different documents.

Start with the Essentials: So for now, concentrate only on what’s essential, but map out how all the rest of the policies will be structured, what they will entail, when they will be delivered, and who will receive them. See the big picture now, but don’t issue too many policies all at once. The ability of users to metabolize information security policies is surprisingly limited. Care and feeding requires well-thought-out, bite-sized pieces that are well tailored to the needs of the organization, and clearly viewed as essential at the time they are published. A steady diet of this type of policy will gradually raise the awareness level of the audiences receiving the material. Overfeeding will result in indigestion and push-back, and the recipients will then be unwilling to receive more policy material for a considerable period of time.

So the answer to the question “one or several policies?” should almost always be: “never just one policy, and not just several policies either, but instead a regular stream of tasty interesting policy vignettes.” These policy vignettes should be supported by examples, delivered only to relevant recipients as required, and consist only of information that the recipients absolutely must have now in order to maintain good information security. If the policy writer consistently uses this approach, he or she will see that the recipient appetite for more remains strong, and the credibility attached to each policy vignette likewise remains high.


Charles Cresson Wood, MBA, MSE, CISA, CISM, CISSP, is an independent technology risk management consultant with InfoSecurity Infrastructure, Inc., in Mendocino, California. In the field since 1979, he is the author of a collection of ready-to-go information security policies entitled Information Security Policies Made Easy. His latest book is entitled Kicking The Gasoline & Petro-Diesel Habit: A Business Manager’s Blueprint For Action (see www.kickingthegasoline.com). He can be reached via www.infosecurityinfrastructure.com.


Selling Management On Information Security Policies

Laws & Regulations: This post is for organizations that could use help raising the level of management awareness and support for information security policies. From the get-go, let’s be clear that this post is not for established organizations that are already far along when it comes to their information security efforts. They will have long ago sold management on the importance of, and in fact, on the critical nature of, information security policies. But small and mid-sized organizations, especially newly formed ones, often don’t yet have information security policies, nor does management in those organizations necessarily consider policies to be a priority.  The first hurdle to jump over with top management involves the erroneous notion that information security policies are optional. Perhaps that was the case in certain industries back in 1980. But that’s unquestionably no longer the case. So it’s up to us technologists to show top management what they’re required to do when it comes to information security policies. Reflecting this “no question about it” status, in many situations, written security policies are now required by laws and regulations. For example, if your organization is a financial services firm in the USA, then the Gramm-Leach-Bliley Act (GLBA) requires it to have a privacy policy.

So, if you have not already done so, this is a good opportunity to speak with your organization’s lead attorney about information security. In that conversation, see if you can identify all the laws and regulations that your organization must comply with, the ones that mandate certain information security measures. Many of these laws and regulations will require policies, deeming them one of the most fundamental information security control measures that an organization can adopt. This author suggests a spreadsheet as a quick-and-dirty way to organize the investigation. Some vendors also sell ready-to-go templates that give you a quick overview of the relevant laws and regulations. But even if you buy these templates, nonetheless be sure to have a conversation with the lead attorney, just to make sure that all the bases are covered.
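A minimal version of that quick-and-dirty spreadsheet can even be generated programmatically. In this sketch, the rows are illustrative examples only (not legal advice), and the column layout is our own invention:

```python
import csv
import io

# Hypothetical compliance-matrix rows: law/regulation, who it applies to,
# and whether a written policy is required.
rows = [
    ("GLBA",    "US financial services firms",   "yes - privacy policy"),
    ("HIPAA",   "US health care entities",       "yes - privacy and security policies"),
    ("PCI DSS", "merchants handling card data",  "yes - documented security policy"),
]

def build_matrix(records):
    """Render the compliance matrix as CSV text, ready to open as a spreadsheet."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["Law / Regulation", "Applies To", "Written Policy Required?"])
    writer.writerows(records)
    return buffer.getvalue()
```

The resulting CSV becomes the agenda for the attorney meeting: every row is a question to confirm, not a conclusion to rely on.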

So, let’s assume that there’s no law or regulation in your country that requires organizations in your industry to have an information security policy. What do you do then? Or what if there is a law or regulation that applies to your organization, which requires a policy statement, but top management at your organization still believes that it’s unimportant to have an up-to-date information security policy? What next in those situations?

Standard Of Due Care: The next conversation that you need to have with top management has to do with the legal notion of the standard of due care. Mind you, this author is not an attorney, so to prepare for your top management meeting, you should once again go back and see the lead attorney at your organization. In this attorney meeting, you should discuss the principles of liability, specifically what would make management liable for not having adequate information security measures. You should also attempt to define the information security related standard of due care for your organization, in your country, at this point in time. The standard of due care defines what a prudent manager is expected to do, at a minimum, or from another vantage point, what is legally required of all well-managed organizations.

Beyond statutory laws and regulations, there are a number of ways to go about illuminating the standard of due care, and all of these should be pursued, with the hope that at least one of them will end up being convincing to management. You can for example reference case law, which unfortunately is not as well developed as many of us would like (the dearth of case law reflects the fact that information security is still a relatively embryonic field). By the way, one of the classic cases in this area is T. J. Hooper v. Northern Barge. On a similar note, regulatory agency guidelines or policies may make a point of requiring information security policies at the organizations they regulate.

Well-known international information security standards, such as ISO 27001, are also a good reflection of what’s generally accepted, and what goes into the prevailing standard of due care. Policies are for example a key part of the “information security management system” defined in ISO 27001. A few highly respected books, used as references by practitioners in the field of information security, can also serve as an authoritative source of information defining the standard of due care. In this category we will for instance find the Information Security Management Handbook, edited by Hal Tipton and Micki Krause (Sixth Edition, 2009). This book likewise defines policies as an essential part of every information security management effort. Published legal books that address the requirements for information security also fit into this category. In the latter group we find Readings & Cases In Information Security: Law & Ethics by Michael Whitman and Herbert Mattord (2010). Again, security policies are highlighted as essential.

Your organization may also have an industry association that writes information security related technical standards. For example, the American Bankers Association publishes a great deal of material dealing with information security. In one of their sponsored webinars, for example, well-known information security consultant Peter Browne spoke on the foundations of information security, and information security policies showed up there as a key ingredient of a successful information security effort. Likewise, if government agencies have issued books or pamphlets about information security, these too will often cite the need for information security policies. For example, the Federal Deposit Insurance Corporation (FDIC) in 2002 gave a presentation about e-Banking Information Security Guidelines, and that too cited the need for written policies.

Professional associations in the information security field, such as the Information Systems Audit & Control Association (ISACA), have also issued relevant publications, such as the COBIT framework for IT governance. This highly respected reference again makes the case that information security policies are an essential component of all successful information security efforts. There are other definitive sources you could consult, such as a list of security requirements that all organizations must meet in order to join a multi-organizational business network. Dig around, and you will often find that information security policies are a requirement for joining such automated business networks. Keep going with the reference gathering effort, because sometimes management will only be convinced when a long list of these references is presented to them.

Role Of Security Policies: While it is beyond the scope of this posting to go into the many and varied roles to be played by information security policies (see for example the post entitled “The Security Policy Hierarchy: A Governing Policy & Subsidiary Policies”), it is important that management understand how critical information security policies are. For example, they need to know how policies are at the apex of a pyramid of documents that guide and focus internal efforts. They need to know how policies can help save their neck when there is an allegation of unfair treatment after someone was fired because they violated a security-related rule. So make a long list of how policies support and buttress information security work, and show how policies are on the critical path to moving ahead with many other related efforts. For example, if policies have not yet been written, it will be very hard for management to successfully negotiate an information systems outsourcing contract with a third party service provider, because written policies will need to be incorporated into the agreement with an outsourcing firm.

Risk assessment: While there are other ways to convince management to support and fund an information security policy development effort — ways that go beyond the space available in this post — this author will mention just one more approach. This involves performing an internal risk assessment, where all the major risks and vulnerabilities are examined. By performing such a risk assessment, top management obtains a clear snapshot of what the story is, right now. If policies have not yet been prepared, no doubt that fact will be highlighted in the risk assessment. You can then elaborate on the findings of the risk assessment by writing a memo about what would happen if policies are not promptly written and disseminated via awareness-raising efforts. Both of these documents put management “on notice” (a legal term): they are now in receipt of a report about a serious problem, and they need to do something about it. Doing something might be deciding that they aren’t going to do anything, but that’s still a decision. You have put them on the spot, and they can’t ignore the matter any longer. You have gotten it in writing so that there’s no dispute about it if (heaven forbid) you should ever be up on the witness stand. You have passed the buck, and management should be uncomfortable about that — that is, until they move ahead with the policy development and dissemination effort.

Still Required: In 2011, it’s surprising that there are still many organizations that don’t have an information security policy that is both responsive to their current situation and up-to-date. With all that we know about information security risks, this should be a no-brainer. Hopefully staff at these organizations will soon convince top management to support and fund an information security policy development, dissemination, and implementation effort.


Charles Cresson Wood, MBA, MSE, CISA, CISM, CISSP, is an independent technology risk management consultant with InfoSecurity Infrastructure, Inc., in Mendocino, California. His latest book is entitled Kicking The Gasoline & Petro-Diesel Habit: A Business Manager’s Blueprint For Action (see www.kickingthegasoline.com). He can be reached via www.infosecurityinfrastructure.com.


The Shared Password Strikes Again!

One of the most intriguing cyber-security stories ever is the recent hack and public humiliation of information security firm HBGary by the hacker group Anonymous.  The incident relates to the WikiLeaks scandal, and the ongoing fear that major corporations might be the next victims of embarrassing document leaks.  Tech writers Michael Riley and Brad Stone provide a detailed account of the entire episode in Bloomberg Businessweek.

But in a story packed with egos, headline-grabbing hacks, political connections, law firms and finger-pointing, one of the most interesting facts was buried deep in the details: what could have been a relatively harmless hack turned into a PR nightmare because the executives of HBGary failed to follow one of the most basic information security policies — don’t share passwords between systems.

The shared password is becoming like the germ that killed the invading Martians in The War of the Worlds.  The tiny, invisible bug is able to quickly spread a vulnerability from one system to many others.  Here is a group of established security professionals, with stellar credentials and the ability to hack into complex systems with ease.  And yet when it comes down to a simple rule like not sharing user IDs and passwords between two systems, they are just like the rest of us.  Convenience trumps security.

Over a year ago, we published an updated policy in our PolicyShield library that went something like this:  “Users must not reuse company login credentials on social networking sites.”

This security policy was basically an extension of a much older policy (the prohibition on sharing passwords between systems) into the realm of the internet.  The basic premise is that reverse engineering a password is much easier using all of the information available on social networking sites like Facebook.  Indeed, within a few months after we published the policy, a real incident occurred in which a compromised Facebook account led to a network intrusion.
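To make the premise concrete, here is a minimal sketch (in Python, with all class and system names invented for illustration) of how a password-change routine could enforce this kind of policy by rejecting a candidate password that is already in use on another system:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with SHA-256; the iteration count is illustrative only
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class CredentialStore:
    """Toy store of salted password hashes for a user's known accounts."""

    def __init__(self):
        self._records = {}  # system name -> (salt, digest)

    def set_password(self, system: str, password: str) -> None:
        # Reject the change if the candidate matches a password already in
        # use on any other system -- exactly the reuse the policy prohibits.
        for other, (salt, digest) in self._records.items():
            if other != system and hmac.compare_digest(
                    hash_password(password, salt), digest):
                raise ValueError(f"password already in use on {other}")
        salt = os.urandom(16)
        self._records[system] = (salt, hash_password(password, salt))

store = CredentialStore()
store.set_password("corporate-vpn", "S3cret!hunter2")
try:
    store.set_password("social-network", "S3cret!hunter2")
    reuse_blocked = False
except ValueError:
    reuse_blocked = True
```

Note that the check can only be performed at password-change time, since that is the one moment the plaintext candidate is available for comparison against the stored salted hashes.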

The tale of HBGary and Anonymous worked in reverse.  By hacking into a web-based application, Anonymous was able to gain access to user IDs and passwords that were re-used on social networking sites like Twitter — enabling Anonymous to send fake tweets and other offensive messages while posing as the team from HBGary.

So is there a lesson in this?  It might be that when it comes to information security policies – we really do have to sweat the small stuff.  We always need to be on the lookout for the newest, most complex threats.  But we still cannot forget the basic foundations of information security.


A Security Policy Standard of Due Care

Divergent Directions: Looking back over the last 30+ years of my work in information security, I see two diverging trends when it comes to defining the information security-related standard of due care. By the “standard of due care,” in this column I mean the actions that management needs to take (for instance the controls that need to be deployed), in order to avoid legal problems such as charges of negligence.

The first of these two directions involves the definition of a set of controls that all organizations should subscribe to, across the board.  Examples include the ISO 27002 information security standard or the recommended controls from NIST SP800-53.  The components of this set are for the most part already defined, but the size of the set is still expanding slowly over time. The second of these directions involves situation-specific requirements. The components in this set are for the most part still being defined, and the size of this set is rapidly expanding over time. In the long run, most information security requirements will be situation-specific requirements. This is so because the information security measures expected in banking would not necessarily also be expected in manufacturing (more about this below).

Management is not at liberty to choose one or the other set of requirements. Instead, in the future, they will be expected to meet both sets of requirements. This column explores some examples of the emergent situation-specific standards of due care, which of course should be expressed in an information security policy.

Evolving Legal Requirements: Unfortunately, decades of experience have shown that many top managers won’t spend money on, or devote significant attention to, information security — unless they are forced to do so. I won’t name names, although it would be easy to do so. Top management at many large and reputable organizations has been taking an amazingly lax attitude toward information security. For example, a few years ago, a large French bank was hit by a €4.9 billion computer-assisted fraud perpetrated by a rogue trader. In an effort to prevent such amazingly large losses, a variety of new information-security-related laws have been passed. For example, the Sarbanes-Oxley Act of 2002 (aka SOX) and the Federal Information Security Management Act (FISMA) both mandate a higher level of organizational vigilance, as well as a higher level of management accountability for information security. These laws are an example of the first category of requirements, defining an across-the-board standard of due care. There are many others that could have been mentioned here, but this column is focused on the second category of requirements.

Besides the new laws and regulations, case law is defining the ways that the Board of Directors and top management must be involved with information security matters. For example, the 1996 Caremark International case establishes that directors have a duty to monitor corporate compliance programs, a duty that extends naturally to information security controls, to make sure that controls are operating as they should be. In that case, directors were held liable because they “should have known” that Caremark staff were violating Federal anti-kickback laws. Likewise, the 2003 Walt Disney case further clarified the standard of oversight that directors must exercise. In that case, the directors allowed a top executive to walk away with a $140 million golden parachute deal, even though he had been on the job little more than a year. Again, the directors “should have known” that these things were going on. The court decided that the directors did not act in good faith, and that they had a conscious disregard for matters to which they should have paid attention.

Risks Define Policies: The information security risks facing a bank are really quite different from the risks facing a manufacturing firm. The former is very concerned about fraud and privacy, whereas the latter is very concerned about business interruption and quality control problems. Yet, because the information security field is still in such a young state, most firms are being subjected to a “one size fits all” approach. Granted, certain fundamental management duties associated with the information security function (sometimes called a “baseline”) can be defined across all firms. One function that goes into such a baseline is the performance of a regular risk assessment. In fact, ISO 27001 has defined such a baseline applicable to all organizations. But when it comes to the specific controls to be adopted, that conversation will often take us in a very different direction because controls must be a function of the risks, and the risks will be different from organization to organization.

Robert Courtney used to be head of information security for IBM. In a discussion with him years ago, he said, “you cannot determine whether a specific system is secure if you look only at the technology.” What he meant by that was that we, the assessors of the level of information security, must take the whole situation into consideration, not just the technology. For example, we must ask: “what are the business risks, the legal risks, the financial risks, and the other circumstantial factors in this particular environment?” Only when these factors are collectively considered can an assessor give any sort of a meaningful opinion about the prevailing level of security.

This situation-specific viewpoint is manifest in a host of new computing-environment-specific standards that are cropping up. Consider the Trusted Cloud Initiative. This new standard defines what controls cloud service providers should be providing to their customers. It focuses on the nature of the relationship between customers and cloud service providers, notably the need for transparency, so that customers can understand what cloud service providers are doing. The Trusted Cloud Initiative also focuses on the integration of security systems between customers and providers, for instance in the identity and access management area. Thus this set of information security requirements is largely defined by the nature of the outsourcing relationship and the new technology that goes along with that relationship.

The situation-specific viewpoint is furthermore manifest in requirements that define the controls relevant to a specific type of information. For example, the Payment Card Industry Data Security Standard (PCI-DSS) defines the controls that merchants — and also third-party transaction processors who are handling credit card data — must deploy. Among other things, PCI-DSS discusses how encryption must be used in order to protect credit card data. Here we see that the nature of the information involved (valuable, critical, and/or sensitive) defines the controls that should be deployed.
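As an illustration of the kind of control PCI-DSS is driving at, here is a toy tokenization sketch (the class and its design are this author's invention for illustration, not the PCI-DSS specification itself): downstream systems keep only an opaque token plus a masked card number, while the real primary account number (PAN) stays in a vault that, in a real deployment, would also be encrypted at rest:

```python
import secrets

class CardVault:
    """Toy tokenization vault: the real PAN never leaves the vault, and
    downstream systems store only the token and a masked display form."""

    def __init__(self):
        # token -> PAN; a production vault would encrypt this mapping at rest
        self._tokens = {}

    def tokenize(self, pan: str) -> tuple[str, str]:
        token = secrets.token_hex(16)        # random, carries no card data
        self._tokens[token] = pan
        masked = "*" * (len(pan) - 4) + pan[-4:]   # show last four only
        return token, masked

    def detokenize(self, token: str) -> str:
        # Only tightly controlled systems should ever be allowed to call this
        return self._tokens[token]

vault = CardVault()
token, masked = vault.tokenize("4111111111111111")
```

The design choice here is the same one PCI-DSS rewards: any system holding only the token and the masked number falls out of scope for most card-data controls, because nothing it stores can be reversed into a usable PAN.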

The situation-specific definition of controls is additionally now evident in a number of high-risk environments. For instance, about a decade ago, a handbook called OCC 99-9 was released by the Office of the Comptroller of the Currency (OCC). This handbook defined how banks should be handling information security. The handbook goes into a number of specific controls, such as which unauthorized access attempts should be reported to the Federal Bureau of Investigation. In a similar way, the Gramm Leach Bliley Act (GLBA) also defines specific information security requirements for financial institutions. These requirements include an information security plan and policies detailing the ways that financial institutions are going to protect restricted-access personal information. Here we see situation-specific controls defined on an industry-by-industry basis.

The Best Approach: The military provides us with a phrase which defines the best approach to defining the standard of due care, as it will be observed within a specific organization. That phrase is “system high.” In the military, those words mean that the level of security is a function of the most sensitive piece of information resident on a certain system or network. For example, if the most sensitive type of information is only “confidential,” then the whole system or network must operate with confidential security measures. But if a new piece of information comes onto that system or network, and it is “top secret,” then the security on this system or network must be upgraded, so that it will then be operating according to a more stringent standard.

The system high approach can, and in many instances should be, applied to the definition of an organization-specific standard of due care. Your organization will find that it is subject to information security requirements defined by different entities such as government regulators (at different levels of government), courts issuing case law in the contract and tort areas, industry associations, and information security community groups. The system high approach dictates that all those relevant requirements be combined in a patchwork way, so that the most stringent of these then collectively make up the baseline, the minimum standard to which your organization should subscribe. This baseline would of course be explained in your information security policy document, and (if yours is a larger organization) probably an information security architecture document as well.
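The system high merging of requirement sets can be sketched in a few lines of Python. The control names and numbers below are purely illustrative, and the sketch assumes controls for which a larger value is the more stringent one (retention periods, key sizes, minimum lengths):

```python
# Each source of authority states minimum control parameters; "system high"
# means the combined baseline takes the most stringent value for each one.

def system_high(*requirement_sets: dict) -> dict:
    """Merge requirement sets, keeping the strictest value per control."""
    baseline = {}
    for reqs in requirement_sets:
        for control, minimum in reqs.items():
            baseline[control] = max(baseline.get(control, minimum), minimum)
    return baseline

# Hypothetical requirements from three different authorities
regulator = {"min_password_length": 8,  "log_retention_days": 365,
             "min_key_bits": 128}
industry  = {"min_password_length": 12, "min_key_bits": 256}
internal  = {"log_retention_days": 730, "min_key_bits": 192}

baseline = system_high(regulator, industry, internal)
```

For controls where smaller is stricter (say, a session timeout), the merge rule would flip to taking the minimum, but the principle is the same: no authority's requirement is ever undercut by the combined baseline.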

Part of the reason why we must go with the most stringent of the requirements is that there is not a direct match-up between legal and regulatory jurisdictions on one hand, and the scope and breadth of organizational or multi-organizational networks and operations on the other hand. For example, the requirements defined in European data protection laws are not found in many non-European countries. At the same time, multinational business operations are generally not restricted only to western European countries with these privacy laws.

To assure continued business operations without the need for special silos or separate collections of data, and without special content filters or walls to block the exchange of data, your organization should go with the system high approach. Yes, this will initially cost more, but in the long run it will probably not cost as much as you might at first blush believe. This approach brings many benefits, such as reducing costs because: (1) organizational-wide vendor discounts are obtained, (2) staff needs to be trained in fewer approaches to security, and (3) the computing environment is thereby simplified and standardized.

Legal Collaboration: To come up with an approach that makes sense for your organization, this author recommends that you discuss the matter with your organization’s attorney early in the development process, not just later on after you’ve got a specific proposal. If you collaborate in this way, the law can be used as a compelling force driving greater top management engagement with information security, and also clearly defining the information security related situation-specific standard of due care.


Charles Cresson Wood, MBA, MSE, CISA, CISM, CISSP, is an independent technology risk management consultant with InfoSecurity Infrastructure, Inc., in Mendocino, California. In the field since 1979, he is the author of a collection of ready-to-go information security policies entitled Information Security Policies Made Easy.  His latest book is entitled Kicking The Gasoline & Petro-Diesel Habit: A Business Manager’s Blueprint For Action. He can be reached via www.infosecurityinfrastructure.com.


The Information Security Policy Hierarchy

Developing A Governing Policy & Subsidiary Policies

A Maturing Field: As the discipline of information security becomes more sophisticated, codified, standardized, and mature, it is not surprising that the old-fashioned approach to information security policy writing is no longer appropriate. We are talking here about the “one-size-fits-all” information security policy that is supposed to apply to all workers in a specific organization. Different people within an organization have different things that they need to know from an information security policy. This diverse set of readers should not be required to wade through a lot of irrelevant material in order to find the sought-after information.

More and more organizations are breaking their single information security policy document into various information security policies. What we often see is an umbrella information security policy relevant to all readers, accompanied by policies intended for specific readers only. In the latter category we see policies for systems developers, quality control engineers, and other functional groups. Most readers do not need to read the latter type of narrowly scoped policies, so it’s best if this information is separated from the main “everyone has to read this” material.

Getting User Friendly: After this separation between an umbrella policy and subsidiary policies, on a level of sophistication scale, the next stage is breaking down information security policies by job title. A very large American bank did this with great success via an intranet, and the workers really appreciated knowing what exactly they were responsible for, and also what they didn’t have to worry about. Using a more progressive perspective, it would be better to structure this document breakdown by specific cross-departmental business process. As information security policies continue to expand in size, and as they become ever more detailed, this type of audience targeting is increasingly necessary.

A still more sophisticated level, one to which very few organizations have presently gone, is a breakdown of policies into very brief statements relevant to a specific task. For example, if someone wanted to gain access to a new computerized business application, a privilege that they did not currently have, the organization could build a series of web forms that such a person fills out to submit a request. Pop-ups would appear, instructing them as they fill out these forms. On selected pop-ups, and also available as links (to be followed as desired), they would see a paragraph or two of information security policy material, but only material relevant to this specific task. Unfortunately this approach takes a lot of effort, is rather time consuming, and in some instances can’t be done at all (if off-the-shelf packages are used, for example). Nonetheless, this approach integrates information security with automated processes, and in that respect is desirable because it communicates the message that “information security is a normal part of how things are done around here.”

This last approach eliminates the whole question of people ignoring information security policies, for example because they failed to consult the policies that were found in a separate place. Instead the policies are merged with business processes, and compliance is achieved via one or more action-forcing mechanisms. An example might be a digital signature from a departmental manager being required before access to a specific system privilege is granted. A systems administrator would be blocked from changing the privileges for a normal end user unless that digital signature has first been obtained and confirmed as legitimate.
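That action-forcing mechanism could be sketched as follows. For simplicity, an HMAC over the request stands in for a true digital signature scheme, and every name and key value is hypothetical:

```python
import hashlib
import hmac

# Illustrative only: a real deployment would use asymmetric signatures,
# with the manager's private key never visible to the admin tool.
MANAGER_KEY = b"demo-key-held-by-departmental-manager"

def approve(manager_key: bytes, request: str) -> str:
    """The departmental manager 'signs' a specific access request."""
    return hmac.new(manager_key, request.encode(), hashlib.sha256).hexdigest()

def grant_privilege(request: str, approval: str) -> bool:
    """The admin tool refuses a privilege change without a valid approval
    that matches this exact request string."""
    expected = hmac.new(MANAGER_KEY, request.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, approval)

request = "grant:alice:accounts-receivable:read"
approval = approve(MANAGER_KEY, request)

granted = grant_privilege(request, approval)                    # valid
tampered = grant_privilege("grant:alice:payroll:admin", approval)  # rejected
```

Because the approval is bound to the exact request string, a systems administrator cannot reuse one manager sign-off to grant a different, broader privilege.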

Long Term View: If you use an umbrella information security policy, sometimes called a governing policy, and then develop specific policies that fall under that umbrella policy, you will find that maintenance and updates are considerably easier. Approval of a short and narrowly-scoped document will be conceptually easier for many people, especially non-technical managers. Breaking things down in this way also supports making a clear and sharp distinction between different types of information security documentation, for instance distinguishing between policies, guidelines, procedures, technical standards, and contingency plans. Clearly differentiating between these document types allows information security policies to be kept at a high level of abstraction, and thus at least potentially to remain in force for five years without modification (although policies should be reviewed annually for relevance and needed changes).

In an umbrella policy, we will typically see a statement of objectives for information security, which explains how these objectives support organizational goals. We would also typically see a statement from the CEO stating his or her expectation that everyone working at the organization comply with all information security requirements (policies being just one of these). We would additionally expect to see human resource related matters applicable to all readers. For example, a discussion of the disciplinary actions that will be taken in response to a violation would be found in an umbrella policy. Training and awareness matters, such as a required annual refresher course, those too would be addressed in an umbrella policy. Structures used in other policies, ways of looking at information security that everybody needs to understand, such as the user-custodian-owner model, these would also typically be explained in an umbrella policy.

Links to Specific Policies: In an umbrella policy, we would furthermore expect to see links to more specific policies, sometimes called technical policies. These more specific policies could address generic areas like access control, user authentication, system logging, physical security for computers, and encryption. Although some organizations have chosen to organize their more specific policies along the lines of vendor technologies, like the Windows operating system, this author recommends against such an approach. Information is not confined to only one operating environment, and a consistent approach to security is needed across all vendor technologies, across all operating systems, and for that matter across all organizations that have access to the information in question. It is far better to take an information sensitivity oriented approach, for example breaking statements about required controls down by a data classification system. This approach reflects the perspective that technology should follow business needs, and of course information security is a business need.

So when structuring the subsidiary policies, you can use the traditional role-based approach, you can use a business process based approach, you can use an information sensitivity based approach, or you can use an issue based approach. With the latter approach, in one subsidiary policy document, we would for example talk about what to do after there has been an intrusion. That document would address who makes executive decisions such as shutting the affected system down, what information about the intrusion must be recorded for legal and insurance purposes, how to gather and properly store evidence, who acts as a public spokesperson, etc.

Subsidiary Components: More specific subsidiary policies should for example address matters such as: (a) who is responsible for buying, renting or leasing new systems, (b) who must approve of the security measures on new production systems, (c) who is responsible for managing the security on these systems, and (d) how these systems must comply with standardized configurations. The operating system configurations can and should be defined in separate documents, for example a set-up procedure for systems administrators. In general, if a separate person is going to handle the more detailed matters, such as configuring a new computer, then this is a good point at which to have a separate document.

Using this — or a similar — rigorous top-down hierarchical approach will bring a discipline to information security policy writing that will be much appreciated down the road. This author has seen far too many cases of spaghetti-style policy writing, where everything is hopelessly interconnected and overlapping, and it’s very hard to figure out what the policies actually require. Of course, in the latter case, update and maintenance is a nightmare. Often, the lowest-cost and most-expedient approach is to replace the whole spaghetti-style document with an entirely new set of clearly-structured documents.

A Bonus: Another significant benefit of breaking things down as suggested here is that access control, based on the principle of “least privilege” can easily be maintained. For example, if a contractor is going to help with the systems design of the accounts receivable system, then only the security policies applicable to the accounts receivable and collections areas need to be disclosed to him. And for many other people too, both inside and outside an organization, both separation of duties and dual control can be better supported if we have a combination of multiple narrowly-scoped policy documents, and access control restrictions at the document level.
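A minimal sketch of this document-level least privilege idea (all document and role names invented for illustration): each policy document carries an audience scope, and a reader sees only the subset matching their roles.

```python
# Policy documents mapped to the audiences allowed to read them.
# "all" marks umbrella material every worker must be able to see.
POLICY_AUDIENCES = {
    "governing-policy": {"all"},
    "accounts-receivable-policy": {"accounts-receivable", "collections"},
    "sysadmin-hardening-policy": {"systems-administration"},
}

def visible_policies(reader_roles: set) -> set:
    """Return only the policy documents this reader is entitled to see."""
    return {
        doc for doc, audience in POLICY_AUDIENCES.items()
        if "all" in audience or audience & reader_roles
    }

# An accounts-receivable contractor sees the governing policy plus the
# one subsidiary policy relevant to the work, and nothing else.
contractor_docs = visible_policies({"accounts-receivable"})
```

The same lookup, driven by the organization's real document management system, is what makes narrowly-scoped policy documents so much easier to protect than one monolithic policy that everyone must be allowed to read.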

Whatever your reasons for structuring information security policies the way you do, make sure they reflect the business needs of the organization in question. More specifically, before you make a decision about the appropriate policy document structure, make sure your organization has a recent risk assessment that talks about the most pressing information security issues confronting the organization. It’s important to use that information to then create a structure for policy documents that fits the prevailing organizational structure, vulnerable business processes, and important information security tasks.


Levels Of Maturity In The Security Policy Development Process

Litmus Test: One high-tech company that this author was working with recently was considering the acquisition of another high-tech company. In order to gauge the sophistication of the information security effort at the target company, top management at the acquiring company requested a copy of the information security policy. The policy document in that moment became the litmus test, the single method to quickly measure the sophistication of the target organization’s efforts. Top management at the acquiring firm was surprised at how backwards and old-fashioned the target was. They got to thinking about the extent to which the target firm would be able to safeguard their valuable intellectual property, and it did not look encouraging. For the time being, the merger is off. It was simply too scary for the acquiring managers to proceed.

This sequence of events reveals one role that information security is increasingly playing in a stressed and financially uncertain economy: assurance of “mission integrity.” By mission integrity, this author means helping to guarantee that the organizational mission will be fulfilled. In other words, information security is a critical supporting effort, a critical supporting infrastructure and business function, without which the organization’s mission could not be successfully achieved. In the example just cited, the target organization’s mission was impaired because information security was not, or more specifically the information security policy was not, up to snuff.

Different Uses: The example cited here shows how information security policies are increasingly coming to be recognized as calling cards indicating just where an organization stands when it comes to information security. While a policy document is the most prominent and best-known information security document, the same can be said these days about the collection of information security documents found within an organization. The collective status of these documents reveals what has gotten attention and what has not, in addition to in what areas the organization is leading, and in what areas it is lagging.

Some time ago (Note #1), this author defined an information security sophistication curve. This hypothetical curve has discrete data points indicating the existence of, and level of refinement of, internal information security documents, which collectively indicate where a particular organization stands. For example, if no Information Security Officer has been designated within the organization, if no job description for such a person has been developed, then the organization is most likely way down on the curve, in other words its information security effort is still embryonic. But if an employee has been assigned this role, and their job description reflects that, and if there is a budget line item for information security, then the organization is getting a lot more sophisticated when it comes to information security. And taking the analogy one step further, if an organization has a refined process for managing information security risks, perhaps even with a formal documented risk management process that would be eligible for ISO 27001 certification — a process that names an information security management committee and other responsible managers — then the firm is very far along when it comes to the sophistication curve.

Levels Of A Policy: If we narrow our discussion, and apply this same notion of a sophistication curve to an information security policy document only, then three distinct hypothetical sophistication levels can be defined. Mind you, this author has no empirical evidence to back these assertions up. They are based only on his 30+ years of working in the field, and what he has observed when doing consulting work for 110+ organizations around the world. His viewpoint is also likely to be skewed because he has been working primarily in certain industries (notably finance and high-tech). But the reader can hopefully take the basic idea discussed here, and use it in a conversation with, or presentation delivered to management. Mentioning this notion of sophistication levels to management could get managers to seriously think about whether they have allocated sufficient resources to information security, and whether the organization is in fact doing all that is expected of an organization in a similar position.

So let’s define three data points on the sophistication curve. At an organization low on the curve, the policy document (assuming one has been published) would typically be primarily an Internet Acceptable Use Policy. This document would most likely cover downloading porn, playing computer games while at work, using the systems for personal purposes, and related topics. The document may include words about reporting problems, specifically what needs to be reported and to whom the reports should be directed. Typically a rudimentary access control policy based on the need-to-know will also be included in this short policy document. In many cases these policies were adopted because an auditor or business partner said it was necessary. Management’s begrudging attitude toward information security is also often evident in the policy document, because whole clauses have been copied wholesale from other firms’ policy documents (or perhaps copied directly from this author’s policies book). Unfortunately, these clauses are often written in different styles, and/or make reference to terms that are defined nowhere else in the document.

As a second data point, consider an organization that has done some significant work and is closer to average in terms of sophistication. In their information security policy, we would expect to see a data classification policy, including how to mark paper documents with different classification levels. Such an organization would also have developed a rudimentary information security architecture, which would be referenced in a policy document, and which would be designated as the authoritative source of guidance in terms of building information systems. As a more specific example, the technical components to be used in all in-house desktop computers would be defined in such an architecture. Likewise, the policy document would make reference to a systems development process, and how information security is considered in this process (for example in the requirements definition phase). An organization like this would also most likely have developed several teams, for example for incident handling, and separately for electronic discovery orders. A policy of this nature would additionally be expected to include a documented process for getting management approval to access different application systems, servers, networks, and the like.

As a third data point, a highly sophisticated organization would be expected to have a document management system into which versions of policies would be logged, and in which reviewers and approvers would be noted. This document control system would be part of a formal quality control process. The QA process would likewise show up in the testing routines used for in-house software. Such a firm may even have adopted ISO 9000 style documentation management procedures and applied them to the information security function. In such a sophisticated organization, the policy would reflect a high level of granularity in privileges for different types of workers, such as outsourcing firm personnel, business partner staff, contractors, consultants, and temporaries. This policy would also make reference to sophisticated automated controls such as encryption, digital certificates, and digital signatures. The policy would also show that the fixed-password access controls of yesteryear have given way to more sophisticated extended user authentication, such as hand-held tokens with passwords that change once a minute. This policy would also make reference to separate, extensively developed documents for both business continuity and disaster recovery. There would be more, but the reader hopefully gets the gist of what would be included.

How To Use These Levels: After reviewing these three levels of sophistication for an information security policy document, the reader might pause to discern into which category their organization is likely to fall. The important question to ask within that firm: “Does our information security policy reflect the work that has been done, and is it suitably sophisticated for our organization?” If not, perhaps it’s time for you to request some money to update the information security policy.

Of course, these three levels of sophistication are just snapshots. Many other considerations could have been mentioned. And the exact nature of the policy document’s contents will be fluid, varying from organization to organization. These contents will change based on organizational size, home country, industry, technology employed, management’s attitude toward risk, and other factors.

In a more general sense, this author urges the reader to initiate an information security document audit at their organization. Such an audit can be used to determine what information security documents exist, and what documents need to exist in order to fully support current information security activities. Typically such an audit reveals the need for more awareness and training material, among many other things. An audit of this nature can also show management that it is unreasonable to expect certain results when the supporting documentation has not been prepared. For example, why would management expect systems administrators to consistently and securely configure their systems if the organization had not yet issued any explicit written guidance on how to do this?
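At its core, the document audit described above is a gap analysis: list the documents the information security effort needs, list the documents that actually exist, and report the difference. A minimal sketch in Python, where the required-document names are purely illustrative assumptions and not a prescribed catalog:

```python
# Hypothetical sketch of an information security document audit as a gap
# analysis. The document names are illustrative, not a definitive list.

REQUIRED_DOCUMENTS = {
    "information security policy",
    "acceptable use policy",
    "incident response procedure",
    "system hardening standard",
    "awareness and training material",
}

def document_gap(existing_documents):
    """Return the required documents that have not yet been prepared."""
    return sorted(REQUIRED_DOCUMENTS - set(existing_documents))

# Example: an organization that has only published two of the documents.
gaps = document_gap(["information security policy", "acceptable use policy"])
```

The output of such an audit is exactly the list you would take to management to show where supporting documentation is missing.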

Updates Required: The preparation of information security documents has been erroneously viewed by many to be a one-time activity. Instead, organizations need to realize that the preparation and periodic revision of policy documents is an on-going activity essential to information security success. In recognition of the many changes taking place in both the business and technical environments, information security documents need to be periodically reviewed to determine whether they are relevant, accurate, and responsive to an organization’s needs. A document audit can help organizations achieve these aims.

Note #1: See “An Overview Of Information Security Documents,” by Charles Cresson Wood and Juhani Saari, published in the Auerbach journal Information Systems Security, Summer 1992.


Charles Cresson Wood, MBA, MSE, CISA, CISM, CISSP, is an independent technology risk management consultant with InfoSecurity Infrastructure, Inc., in Mendocino, California. In the field since 1979, he is the author of a collection of ready-to-go information security policies entitled Information Security Policies Made Easy.  His latest book is entitled Kicking The Gasoline & Petro-Diesel Habit: A Business Manager’s Blueprint For Action. He can be reached via www.infosecurityinfrastructure.com.


Using Security Policies As Catalysts For Internal Change

Security Quality Control: There is much to recommend about the ISO 9000 quality control approach as it applies to the discipline of information security. In fact the ISO 27001 standard, entitled Information Security Management System (ISMS), in large measure reflects that same methodology. In other words, ISO 27001 suggests a continuous improvement approach to information security, where processes are documented and standardized, where adherence to these processes is regularly measured, and where improvements are made over time to the processes themselves. The standard makes reference to the “plan, do, check, act” approach, also known as PDCA, which is another way to describe this same continuous improvement quality control approach.

If you’re approaching information security in this manner at your organization, your organization’s information security function is in a relatively evolved state. In contrast, many organizations are still trying to get the budget to occasionally perform credible risk assessments. Likewise, many organizations are still desperately seeking more management attention, trying to get enough budget to adequately accomplish the basics, like user awareness and training — in many cases things that should have been completed ages ago.

So how can information security staff seeking to raise the security level at their organizations accelerate the evolution of security? How can they catalyze the change process? One highly recommended way is to start putting things down on paper much more seriously. To start this process, they can write a variety of information security policies.

Importance Of Security Policies: But before you run off and start writing a bunch of policies (or, better yet, tailoring a template of already-written policies), there are some important initial steps. You or an independent third party should have performed a recent risk assessment, and this report should make it clear that substantial improvement is still needed in the information security area. This puts top management legally on notice that they need to do something in the information security area.

With top management, you should also talk about the importance of policies in the information security area. Tell them how policies are like the backbone on which many other documents are built, and from which general direction for an information security effort will come. Talk about how policies are one of the most basic pieces of evidence that auditors look for when they determine the sophistication of an organization’s information security effort. Make reference to the group of laws and regulations (relevant to your organization, of course) that all require the existence of an information security policy document. For example, if your organization works with medical information, the Health Insurance Portability & Accountability Act (HIPAA) requires a privacy policy. Thus management will come to appreciate the centrality and importance of putting things in writing, specifically in the form of policy documents.

If it doesn’t already exist within your organization (and it’s best to piggyback on an existing process if you can), you should create a formal documented risk management process. Implementing the PDCA approach mentioned above, this procedural approach provides a mechanism to hold people’s feet to the fire, to get them to act, and to regularly report back to management if and when they don’t act. This same risk management process checks to see if people are in compliance with policies and other internal requirements. Policies are of very little use if they are never enforced. In fact, they could do more harm than good if everybody knows that policies are simply a smokescreen obscuring the fact that top management doesn’t care about information security.

Provocation Begins: The beneficial provocation, which catalyzes the evolutionary process, can for example take place when there is a budget showdown, where management must decide between an information-security project and a non-information-security project. If you’ve documented an information security requirement in the form of a policy, gotten top management’s written approval, and you’ve sent it out to the whole organization, maybe even distributed it to customers and business partners, you should stress that management must now follow through. Emphasize how they must establish supporting infrastructure, and consistently maintain a system that supports this requirement.

This is a great time to talk about compliance and how important it is to get consistent compliance, from all people, from all systems all the time. This is also potentially an important teaching moment with top management. They may not like the fact that they are required to spend money on information security. But if you’ve done your job well, specifically if you’ve documented everything and tied it all back to legitimate and authoritative sources, then you’re standing on firm ground. Of course, management can always decide to take a risk, and not to fund the information security project implementing this policy’s requirement, but if they do, at least you’ve got it in writing that the onus of responsibility is on their shoulders.

Another great provocation approach, an approach that prods management to take more action, is to spend some time with your internal legal counsel. Speak to your attorney about what it means for top management to have a fiduciary duty to protect assets, and how top management is legally bound to protect assets (which of course include both information and information systems). Talk to them about negligence, professional duty, due diligence, and related legal concepts that squarely place the ultimate responsibility for information security on top management’s shoulders. If you can, reference specific laws and regulations, and/or case law (although the latter is hard to come by because information security is still such a new field). For example, if you work in the US government, you could make reference to FISMA, the Federal Information Security Management Act, which requires every Federal agency to develop and document an information security effort.

Building A Security Pyramid: So what we’re doing here is collecting a bunch of authoritative sources that say that you must document, document, and then document some more the requirements that go into your information security program. Then we’re showing those sources to management, and impressing upon them that it is in their best interests to document, document, document the information security function. For example, top management can demonstrate that they have exercised due diligence in the information security area if the organization has adopted authoritative new information security policy documents. So in the process, we’re building a pyramid of documents, a hierarchy of documents with policies at the top.

Some readers may think this strategy seems manipulative. But I suggest that’s an unnecessarily negative spin. Think of it instead as the establishment of action-forcing mechanisms. I’m not asking you to do anything that isn’t absolutely tied in with fundamental business needs. What’s new, and perhaps at least in the beginning a bit hard for some readers to follow, is understanding how very successful a documentation-oriented approach can be when it comes to forcing management to do what they have all along been required to do. So tie all this documentation in well with business requirements [who knows, sometimes management justifies this important kind of work for the wrong reasons — for example as a way to get a new customer to sign up for their service — but don’t complain, just go with whatever works]. At any rate, this hierarchy of documents will define what needs to be done, and the risk management process mentioned above (aka PDCA) will then start to illuminate the fact that actual conditions are a significant distance away from the documented requirements. This gap needs to be closed, and that means that we need to go back to management to ask for more budget dollars.

In these tough financial times, going back to management and asking for more money can admittedly be hard to do. But it’s important, so do it anyway. You should feel good about the fact that all we’re doing here is getting management to do what they are legally, ethically, and prudently required to do. What’s different about this documentation-based approach is that it’s codifying the information security function; it’s establishing the information security function as an on-going organizational function like accounting and marketing. What’s different about this approach is that we’re moving information security out of the project-to-project mode, and we’re creating on-going business processes that top management must engage with periodically. For example, as part of the documentation approach, this author recommends the establishment of an Information Security Management Committee, which meets quarterly to supervise and oversee information security activities.

Nobody’s Irreplaceable: One of the great parts about this documentation-oriented approach is that it helps address the complexity management problem in information security. If we don’t extensively use documentation, then we can’t really understand what’s happening in the information security area. If we don’t understand what’s happening, then we can’t successfully manage information security.

As one example of this point, consider how this documentation approach creates enough documentation in the pyramid (standards, guidelines, procedures, contingency plans, etc.) that no one individual can unexpectedly leave an organization and thereby interfere with the continuation of the work as it should be performed. Among its other design objectives, the system of documentation and related organizational structure should be created so that it is not dependent on any one individual. That is both good information security practice and good human resource management practice.

A highly publicized example of what can go wrong when this good practice isn’t pursued was recently revealed by a court case involving Terry Childs and the City of San Francisco. Childs was a systems administrator who had extensive privileges over one of the City’s internal networks. He changed the password, locked people out of the network, and then refused to divulge the password; he was later convicted of a type of denial of service attack under California law. Because no peer at the City knew the password, and because there was no work-around that could be employed, Childs could effectively hold the City hostage. Although this author doesn’t know about the status of documentation at the City, it appears that if excellent documentation had in fact been prepared and used, this excessive reliance upon a single individual would have stuck out like a sore thumb, and it would have been promptly remedied.

You should talk about cases like this with management, and help them to understand how having good checks and balances, that are in turn well documented, will in fact help to prevent problems such as this. And of course, as you prepare this documentation, you will also be building the documentation pyramid mentioned above, and coming that much closer to having information security be recognized as a legitimate on-going organizational function, just like accounting and marketing.




Security Policy Lessons from SCADA Attacks

Reports from the last few months have generated another wake-up call for those concerned with the security of the nation’s critical infrastructure.  In addition to audit reports of widespread vulnerabilities among agencies managing the infrastructure, the first malicious software was discovered “in the wild” that specifically targets the SCADA systems employed to manage these networks.

While there have been many warnings in the past about vulnerable systems, this was the first attack targeted at software used to manage large-scale industrial control systems used by manufacturing and utility companies.  The malware was unique in both its sophistication (combining multiple vulnerabilities) and specificity (targeting specific systems and software). Several governments (including Iran) confirmed that their networks had been infected.

What went Wrong?

While it might be tempting to write off these incidents as specific to SCADA systems, doing so would miss a great learning opportunity.   In fact, the vulnerabilities that were exploited in these attacks are quite common in many production applications and can teach us some valuable lessons about the importance of security policies.  So let’s walk through a few of the lessons as they relate to these SCADA attacks.

Policy Lesson 1: Identify Critical Systems – It appears that the computers running control system software generally had lax security controls.  While there is no way to know for certain, we can assume that the infected organizations either (1) did not know that these systems were critical or (2) failed to identify them as part of a risk-assessment process.  Because of their ability to control large-scale systems, it is obvious that these systems are business critical.  The more likely scenario is that they were not identified as needing any specific security policy controls.

Q: Do you have any critical systems lurking in your network that have not been identified and rated?  Do your written policies require a critical system inventory and risk assessment?
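A critical-system inventory of the kind this lesson calls for can start very simply: record each system, its owner, and a business-impact rating, then filter for the systems that warrant extra policy controls. A minimal sketch in Python, where the field names, rating scale, and example systems are illustrative assumptions:

```python
# Hypothetical sketch of a critical-system inventory with risk ratings.
# The 1-5 impact scale and the example entries are illustrative only.

from dataclasses import dataclass

@dataclass
class SystemRecord:
    name: str
    owner: str
    business_impact: int  # 1 (negligible) .. 5 (severe) if compromised

inventory = [
    SystemRecord("plant-control-scada", "operations", 5),
    SystemRecord("marketing-wiki", "marketing", 1),
]

def critical_systems(records, threshold=4):
    """Return the names of systems whose compromise would be high-impact."""
    return [r.name for r in records if r.business_impact >= threshold]
```

Systems returned by `critical_systems` are the ones a written policy would then subject to specific controls such as the USB restrictions in the next lesson.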

Policy Lesson 2: Disable USB Drives to Critical Systems – The primary attack “vector” for this malicious software was propagation through USB drives.  The simplest policy control is to disable USB drive access for critical systems.  A less-restrictive policy is to require scans of all connected drives for malicious software.

Q: Do your security policies address the issuing, labeling and control of portable storage?
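The less-restrictive option above (scanning connected drives for malicious software) can be sketched as a hash check against a known-bad list before the drive's contents are used. This is a simplified illustration, not a substitute for real anti-malware tooling; the single digest in the blocklist below is just the SHA-256 of empty content, standing in for feed data:

```python
# Hypothetical sketch: check files on a mounted removable drive against a
# list of known-bad SHA-256 digests before the drive is put to use.

import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    # Placeholder entry (the SHA-256 of zero bytes); real entries would
    # come from an anti-malware or threat-intelligence feed.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of the given bytes, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def flag_suspect_files(mount_point):
    """Return paths under mount_point whose digest is on the blocklist."""
    suspects = []
    for path in Path(mount_point).rglob("*"):
        if path.is_file() and sha256_hex(path.read_bytes()) in KNOWN_BAD_SHA256:
            suspects.append(str(path))
    return suspects
```

A policy would pair a check like this with procedural controls: who may connect portable storage, and what happens when a scan flags a file.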

Policy Lesson 3: Prohibit Fixed Passwords in Software – One of the key elements of the attack was that the malicious software searched for specific versions of the SCADA software (written by Siemens).  The software employed an embedded and fixed password for access to the SQL system database.  The great tragedy of this approach is that the logical fix (change the password) will likely break the software.  (In fact, Siemens admitted this in their instructions to customers.)

Q: Does your organization develop or buy custom software?  If so, do you have a secure application development policy that prevents these problems?
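The policy fix for embedded credentials is to require that software obtain secrets at runtime, from the environment or a secrets store, so a password can be rotated without modifying or breaking the code. A minimal sketch, where the variable name `SCADA_DB_PASSWORD` is an illustrative assumption:

```python
# Hypothetical sketch: fetch a database password from the environment at
# runtime instead of embedding a fixed password in the source code.

import os

def database_password():
    """Read the password from the environment; fail loudly if it is unset."""
    password = os.environ.get("SCADA_DB_PASSWORD")
    if not password:
        raise RuntimeError("SCADA_DB_PASSWORD is not set; refusing to start")
    return password
```

Failing at startup when the secret is missing is deliberate: it is far easier to diagnose than a hardcoded credential that silently stops working when someone finally rotates the database password.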

Policy Lesson 4: Watch for your Software IP in the wild – Apparently this fixed password had been discovered and published on the internet for some time.  That enabled the malware authors to craft the specific attack.  Several releases ago we included several security policies that enable organizations to perform basic searches to see whether their software source code has been compromised and made available on the web (a technique called “Google Hacking”).  Organizations that develop mission-critical software can implement these policies with a minimum of effort to reduce the likelihood of vulnerabilities being publicly posted.

Q: If your organization does develop custom software, do you know whether it has been compromised and made available online?
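The basic searches described in this lesson amount to querying the web for distinctive strings from your own code base (unique identifiers, internal file names) and checking whether they appear on public sites. A minimal sketch that only builds the exact-phrase query strings; the marker strings and the `site:` target are illustrative assumptions, and actually running the searches is left to your search engine of choice:

```python
# Hypothetical sketch: build "Google hacking"-style queries for distinctive
# strings from your own code base, to check for leaked source.

def leak_queries(markers, site=None):
    """Return one exact-phrase search query per distinctive code marker."""
    queries = []
    for marker in markers:
        query = f'"{marker}"'
        if site:
            # Optionally restrict the search to a specific site.
            query += f" site:{site}"
        queries.append(query)
    return queries

# Illustrative markers: an internal identifier and an internal file name.
queries = leak_queries(["AcmeScadaAuthToken", "acme_plant_ctl.sql"],
                       site="pastebin.com")
```

Running such queries on a schedule, against paste sites and code-hosting sites, is a cheap early-warning control for the kind of credential leak that enabled this attack.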

Policy Lesson 5: Don’t focus only on Regulations – This attack is a classic example of how focus on “compliance” can lead organizations to miss a large number of “persistent” threats.   Both FISMA (NIST) and NERC-CIP have been recently criticized for focusing more on compliance documentation while not addressing evolving threats.   That is why we incorporate lessons from real-world incidents into our security policy libraries.  In some cases (like this one) most of the policy controls already existed.  But in many other cases these attacks lead us to specific policies that would never be identified by focusing on regulatory compliance.

Q:  Is your security policy program designed to incorporate new controls to address the latest technologies and real-world threats?

The GOOD News

Looking at this attack in some detail, the good news is that any one of these existing security policies could have helped eliminate this threat.  In fact, each of these topics has already been addressed within our PolicyShield Security Policy Subscription.  That is why our mission is to help organizations get security policies implemented and updated as effectively as possible.  While there are certainly many legacy systems in production that were developed without these security principles, we know enough today to build the secure applications of the future.
