Friday, January 7, 2011

Mistaken assumptions about authorized users constrain the trustworthiness of information systems

The National Institute of Standards and Technology (NIST) released an updated guide to its Risk Management Framework (RMF) in December when it published the final public draft of Special Publication 800-39. Among the several areas where the document has changed substantially from the previous draft is its treatment of trust and trustworthiness: a revised section now addresses the trust and trustworthiness of both organizations and information systems, and a newly added appendix describes several trust models, or approaches to establishing trusted relationships between organizations. The choice of which model (or models, as they are generally not mutually exclusive) to use depends on a variety of factors about the context in which trust is sought and the nature of the entities that need to trust each other. By separately addressing trust between organizations and trust between systems, SP 800-39 illustrates the different types of factors used to assess the trustworthiness of other entities and, perhaps unintentionally, highlights some of the limitations inherent in trusted computing models that can lead to unintended gaps in security.

No recent incident highlights this problem more effectively than the leak of hundreds of thousands of State Department cables and other documents, apparently downloaded and exfiltrated without detection by an authorized user of the Net-Centric Diplomacy database. The system holding the classified information was deployed on a secure, access-controlled military network, but with no individual user-level authentication or authorization mechanisms and no capability to monitor the activity of users accessing the database. Considered from the perspective of a computer security assurance model like ISO/IEC 15408 (Common Criteria), even in this vulnerable implementation the system satisfies one of the two properties used to establish assurance levels: the security functionality provided by the system works as specified. Where the system appears to come up short on assurance is with respect to the second property: the system cannot be used in a way such that the security functionality can be corrupted or bypassed. In hindsight it seems fair to suggest that the security requirements for this system were not well thought out; an argument can be made that since the deficiencies in its security posture result from the failure to implement some types of controls, rather than from any malfunction or evasion of the controls that were implemented, the system might actually qualify as "trusted" at the lower evaluation assurance levels. The most relevant weakness in the system as implemented concerns not the trustworthiness of the system, but that of the users authorized to access it.
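
To make the missing controls concrete, here is a minimal sketch of the kind of per-user authorization check and audit trail that a system like this could layer on top of network-level access control. It is purely illustrative: the names (check_access, CLEARANCE_LEVELS, and so on) are hypothetical and do not describe the actual Net-Centric Diplomacy implementation.

```python
import logging
from datetime import datetime, timezone

# Hypothetical clearance ordering; real classification schemes are more nuanced.
CLEARANCE_LEVELS = {"CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def check_access(user_id, user_clearance, user_compartments,
                 doc_classification, doc_compartment):
    """Authorize a single document request and record the decision.

    Network-level access control alone answers none of these questions;
    each check here is a control enforced per user, per request.
    """
    cleared = (CLEARANCE_LEVELS.get(user_clearance, 0)
               >= CLEARANCE_LEVELS.get(doc_classification, 0))
    need_to_know = doc_compartment in user_compartments
    allowed = cleared and need_to_know

    # The audit trail makes bulk or anomalous retrieval detectable later.
    audit_log.info("%s user=%s doc_class=%s compartment=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user_id,
                   doc_classification, doc_compartment, allowed)
    return allowed
```

Even a gate this trivial changes the picture: an insider can still misuse documents he is entitled to see, but no longer silently and without leaving a record.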

Authorized use of the Department of Defense's Secret Internet Protocol Router Network (SIPRNet) is limited to users with security clearances sufficient to access classified information. Receiving such a clearance requires a fairly extensive investigation, and the result is that individuals deemed qualified for a secret (or higher) clearance are considered trustworthy. Few systems that contain or process sensitive classified information rely only on a user's security clearance for authentication and authorization, but for the Net-Centric Diplomacy database, it would seem that if a user could log on to SIPRNet, the user could get access to information stored in the database. The trustworthiness of a system envisioned for this kind of use should be directly tied to the trustworthiness (or lack thereof) of the system's authorized users, but in this case, either no such personnel-level evaluation occurred, or the assumption of trustworthiness associated with clearance holders resulted in a gross underestimation of the threat posed by authorized insiders. The consequences are now readily apparent.

As NIST defines it in SP 800-39, trustworthiness "is an attribute of a person or organization that provides confidence to others of the qualifications, capabilities, and reliability of that entity to perform specific tasks and fulfill assigned responsibilities." For people or organizations, trustworthiness determinations may take into account factors beyond competency, such as reputation, risk tolerance, or the interests, incentives, or motivation of the person or organization to behave as expected. There is, of course, no expectation that every situation requires entities to demonstrate trustworthiness, or to be trusted at the same level across different contexts or purposes. With respect to information systems, among the factors NIST cites as contributing to determinations of trustworthiness are the security functionality delivered by the system and the assurance, or grounds for confidence, that the security functionality has been implemented correctly and operates effectively. Unsurprisingly, this perspective fits nicely with the concepts of minimum security control requirements and standard control baselines that NIST also uses, in Federal Information Processing Standard 200 and Special Publication 800-53, respectively. IT trust frameworks focused on system assurance defined in terms of predictability, reliability, or functionality tend to equate assurance with trustworthiness, characterizing trusted systems in a way that dissociates the system from those that use, operate, or access it. Such an approach ignores the constraints on trustworthiness that might apply if the system were evaluated in concert with the non-system actors (organizations and people) that have the capability to influence the system's behavior or the disposition of the information the system receives, stores, or disseminates. A highly trusted system (in the Common Criteria sense) that is designed to be used in a particular way can be, and sometimes is, misused. Data exchanged with trusted organizations or trusted systems is only as secure or private as the authorized users within the organization, or with access to the system, choose to keep it. This is why the provision of organizational capabilities to monitor user actions on trusted systems should be an essential prerequisite to establishing a trust relationship, particularly when those relationships are negotiated only at the organization-to-organization or system-to-system level.
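
As a companion to the authorization sketch above, the following fragment suggests how even rudimentary monitoring over an audit log could surface the kind of anomalous, high-volume retrieval at issue in the cables leak. Again, this is a hedged illustration; the event format and thresholds are assumptions, not a description of any deployed system.

```python
from collections import Counter

def flag_bulk_downloaders(access_events, baseline=50, multiplier=10):
    """Flag users whose retrieval volume far exceeds a typical baseline.

    access_events is an iterable of (user_id, doc_id) pairs drawn from
    an audit log; baseline and multiplier are illustrative, not tuned.
    """
    counts = Counter(user for user, _doc in access_events)
    return {user: n for user, n in counts.items()
            if n > baseline * multiplier}
```

A user who pulls hundreds of thousands of records would stand out immediately against peers retrieving dozens, which is precisely the organizational capability the preceding paragraph argues should precede trust negotiated at the system-to-system level.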

Wednesday, October 20, 2010

Trust enables, but is not required for, both cooperation and collaboration

Opinions vary widely on the most effective means to foster or achieve cooperation between organizations, but trust is one of several mechanisms often suggested as having an enabling effect on cooperation, alone or in the context of related factors such as reciprocity, negotiation, reputation, or a historical relationship based on prior interactions among the parties. Even without specifics as to the precise role trust plays in facilitating cooperative interaction between entities, much of conventional management theory implicitly or explicitly accepts the assertion made by Kenneth Arrow (1974) that "trust is an important lubricant of a social system," despite its somewhat intangible nature. This wording came to mind when reading a tweet from Jill Wanless during last week's Collaborative Culture Camp in Ottawa that "Trust is the oil that greases the wheels of collaboration." The event, organized by Library and Archives Canada, focused on sharing ideas and fostering discussion on ways in which the government can engender greater levels of collaboration, both within government and between government and the public and other outside entities. True to its name, the event was about collaboration, not about trust, but based on comments and notes from attendees like Tanya Snook, trust is considered a prerequisite for collaboration in many contexts.

Much of the empirical research on cooperation makes no assumptions about the presence or absence of trust ex ante, and to the extent that decisions to cooperate are explained in terms of the parties’ self-interest (Axelrod, 1984), institutional incentives to cooperate (Farrell, 2009), or even simply in terms of mutual assurances (Blackburn, 1998), there appears to be little need to introduce trust into the discussion, other than to suggest that repeated cooperative exchanges among two or more parties may, over time, produce trust that facilitates future interactions. While successful acts of cooperation can often be explained without reliance on trust as a factor, trust can facilitate decisions to cooperate; cooperation, however, can and does occur in the absence of trust, or with insufficient trust.
 
Collaboration and cooperation are similar concepts, but from the perspective of trust and establishing trust-based relationships, the terms differ in important ways. Collaboration involves the joint effort of two or more parties to accomplish or produce something, where each party makes some contribution to the collective outcome. Because successful collaboration depends on everyone doing what they are supposed to do (or what they commit to do), collaboration often involves cooperation. The contributions made by collaborators may or may not be uniquely provided by the entities that participate in a collaborative activity, so, for instance, if one party fails to deliver as expected, the intended outcome may still be realized through the compensating action of other parties. Cooperation also involves coordinated, sequential, or reciprocal action by two or more parties, but in general the outcomes sought through cooperation are ones that could not be achieved by one party acting alone. In this way cooperation differs fundamentally from collaboration, in that the failure of one party to a cooperative relationship to fulfill its expectations or obligations typically results in the failure to realize the optimal outcome of the cooperative effort (Axelrod, 1984).

Despite the value of trust in enabling both collaborative and cooperative activities, one similarity between collaboration and cooperation is that trust is not strictly required for success, at least where sufficient common interests exist among the parties to the relationship in question. Much of the commonly accepted theory on cooperation suggests that outcomes resulting from the pursuit of self-interest alone are sub-optimal, leaving room for improvement where reciprocal consideration for the interests of the other parties is taken into account. This idea is reflected in scholarly literature and popular management guidance alike, and supported by observations on the role of trust in collaboration by data-quality guru Jim Harris. In collaborative environments both within and across organizations, it seems trust is a powerful enabler, but also something that takes time to develop, to sustain, and, where failures against expectations occur, to rebuild.

References:

Arrow, K.J. (1974). The limits of organization. New York, NY: W.W. Norton.

Axelrod, R. M. (1984). The evolution of cooperation. New York, NY: Basic Books.

Blackburn, S. (1998). Trust, cooperation, and human psychology. In V. Braithwaite & M. Levi (Eds.), Trust and governance (pp. 28-45). New York, NY: Russell Sage Foundation.

Farrell, H. (2009). The political economy of trust: Institutions, interests, and inter-firm cooperation in Italy and Germany. Cambridge, England: Cambridge University Press.

Tuesday, October 19, 2010

Decisions to trust others are both personal and subjective

One of the more challenging aspects of addressing organizational trust (whether between individuals and organizations or between two or more organizations) is the inherent subjectivity involved in determining the trustworthiness of organizations and in making decisions to act (or refrain from acting) on the trust one party has in another. Differences in the relative willingness of people to trust seem to derive from a variety of factors, particularly influences on individuals from family, community, and culture, and experiences built up over time from interactions with others. Encountering a situation in which someone else exhibits trusting behavior outside the norms we have come to expect can bring these differences into sharper focus, as evidenced in a story recounted by my colleague Sara Peters from a recent visit to the island of Iona in western Scotland. When visitors attempted to make a credit card purchase at a local shop that doesn't accept credit cards, and lacked sufficient cash at hand, the shop owner offered to provide a self-addressed envelope so that they could mail the money to her later. This gesture is of course a business decision, but it also reflects a willingness on the part of the shop owner to place her trust in people who are essentially complete strangers to her. The scenario neatly illustrates the central characteristics of trusting relationships — whether interpersonal or otherwise — both in the asynchronous nature of the proposed transaction and in the explicit willingness of the truster to make herself vulnerable to the actions of the trustee. Historically, there has been some debate about what characteristics of the exchange must be in place to invoke trust as opposed to, say, a purely economic or probabilistic calculation comparing potential loss with potential gain. Where the trusting party appears, as in the case of the Iona shop owner, to be chiefly concerned with something other than transaction cost economics, the trust involved goes beyond expectations about another's intentions and crosses over into the realm of character, both for the truster and potentially the trustee.

Morton Deutsch considered trust primarily for its role in cooperation and, specifically, the need for trust and trustworthiness between parties to cooperative exchanges, which in Deutsch's view require "mutual trust," particularly because the benefits each party to cooperation receives are not realized at the same time (Deutsch, 1960). The key aspect of trust-based decisions, as distinct from other types of decisions where risk or uncertainty exists regarding the outcome, is the relative difference between the potential positive and negative outcomes. Deutsch posits that trust is only involved in situations where a party holds an expectation about an event in which the potential detriment if the expectation is not fulfilled is greater than the potential gain if the expectation is met (Deutsch, 1958). In situations where the calculus is reversed, and the upside outweighs the downside, Deutsch characterizes a choice to cooperate as "gambling" rather than a decision based on trust, and extends the formulation generally to decisions in the face of risk, in that when a party trusts, the risk of what can be lost is relatively large compared to what can be gained (Deutsch, 1960). Deutsch places the idea of confidence firmly within his conceptions of trust and of risk-taking behavior, repeatedly characterizing, for instance, the willingness to participate in a given cooperative action as a function of the "confidence that his trust will be fulfilled" (Deutsch, 1958, p. 269). This formulation seems to commingle not only trust with confidence, but also trust with expectation.
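
Deutsch's criterion can be restated schematically. The following is my informal rendering, not Deutsch's own notation; both arguments are magnitudes of the possible outcomes, and the function name is mine.

```python
def classify_choice(potential_gain, potential_loss):
    """Schematic restatement of Deutsch's (1960) distinction.

    A decision involves trust only when what can be lost exceeds what
    can be gained; when the upside dominates, the choice is a gamble.
    """
    return "trust" if potential_loss > potential_gain else "gamble"

# classify_choice(potential_gain=10, potential_loss=100)  -> "trust"
# classify_choice(potential_gain=100, potential_loss=10)  -> "gamble"
```

On this account it is the asymmetry between loss and gain, not the mere presence of risk, that marks a decision as one of trust.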

Francis Fukuyama's widely read work on trust (1995) focuses on the social and cultural underpinnings of trust, stressing the importance of trust not only for the overall benefit and sustainment of society, but for enabling economic success as well. Fukuyama squarely positions organizations and institutions as embedded in the societal contexts in which they operate, suggesting not only that social norms and cultural characteristics are significant factors influencing the economic performance of these organizations, but that prospects for economic success are predicated on the levels of trust, and the factors contributing to its development, present in a given society or cultural environment. This explains in part Fukuyama's belief that declining levels of societal trust, and the corresponding failure to embrace the cultural elements that help define a society, represent a state of crisis for the United States and other "low-trust" countries (Fukuyama, 1995, pp. 310-311). For many of us, it requires little more than anecdotal evidence to support the idea that "doing the right thing" is more inherent in some cultures, countries, or communities than in others.

References:

Deutsch, M. (1960). The effect of motivational orientation upon trust and suspicion. Human Relations, 13, 123-139.

Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2(4), 265-279.

Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity. New York, NY: Free Press.

When does technical competence trump historical performance?

The joint announcement last week by the Department of Homeland Security (DHS) and the Department of Defense (DoD), formalizing a cooperative relationship between the two agencies to provide coordinated cybersecurity operations to protect government computing networks, directs the National Security Agency (NSA) to furnish personnel, equipment, and a variety of support services to DHS. The collaboration between DHS and NSA -- and in particular, the NSA's role in government network monitoring given its recent practice of warrantless surveillance -- has raised concerns in some circles as to whether the efforts under the joint initiative will be conducted with sufficient consideration and protection of privacy. One response to these concerns sidesteps the privacy issue somewhat and suggests that the technical expertise the NSA possesses is sufficiently valuable that its know-how should trump any lingering worries about safeguarding civil liberties. Leaving the details of the argument aside (and without judging its validity one way or the other), this line of reasoning can be seen as an exercise in trust, or more specifically, in how to properly assess the trustworthiness of those involved in the joint cybersecurity initiative.

While the NSA's warrantless electronic surveillance program was halted in 2006 following a federal court ruling that found the program's activities to be unconstitutional, the policy and legal debates about the NSA's actions are far from settled: the government continues to justify the program on national security grounds and to argue that federal officials should not be held accountable (the federal courts have, to date, held the activities to be illegal while declining to find government employees liable). The fact that the legal issues surrounding the NSA's program remain unresolved (one of the key cases is currently on appeal to the Ninth Circuit), with the government persisting in its assertion that individual victims of surveillance shouldn't be able to bring challenges (or even to know the full details of the program), helps explain why the NSA is not widely viewed as trustworthy with respect to looking out for the interests of the citizens the government ostensibly represents and protects. There is little debate over the position the NSA enjoys as the government's premier agency where cybersecurity matters are concerned, so the implicit argument that goes along with placing trust in the NSA is that its superior technical expertise outweighs any misgivings about whether it will respect privacy or otherwise behave in ways that take individual interests into account.

Illustrating different applications of the concept of trust

While the core topic of this blog is managing trust, one recurring theme that serves as a sort of preliminary consideration to trust management is making sense of trust as a fundamental concept and, especially, understanding the differences among the many ways in which the term is applied in general and scholarly usage. The recent appearance of several independent written items — each of which emphasizes trust, but from a very different perspective — provides a helpful illustration of some of the more common ways trust is perceived in ordinary usage, including trust in technical competence, trust in the intentions of others, and trust as a moral, cultural, or societal characteristic. These perspectives are often considered together as complementary conceptions, but in some cases they seem to compensate for one another, such as when evaluating the trustworthiness of a given entity.

A preliminary challenge to discussing trust at all is arriving at a suitable conception of the term, appropriate for the context and consistent with the prevailing theories on trust and the definitions of the word that these theories assign. Definitions of trust vary widely among scholarly treatments of the term and in familiar business usage, leading many authors to characterize a common definition of trust as elusive (Gambetta, 1988; Kramer, 2006). Providing a semantically precise definition of trust can prove especially challenging given the tendency to substitute attributes of organizational trust such as confidence, predictability, or reliability (Mayer, Davis, & Schoorman, 1995; Luhmann, 1988) for trust itself. Trust has been defined as tantamount to placing a bet (Coleman, 1990; Sztompka, 1999); as confidence in the expectation of future actions (Misztal, 1996), whether attributed to a given party as trustee or to society overall (Barber, 1983); or, in the context of mutual exchanges, as simply a set of expectations shared by all parties (Zucker, 1986). The terms reliability, confidence, and expectations feature prominently in numerous conceptions of trust, both within and outside organizational contexts, but these definitions offer no insight into the relational nature of trust between truster and trustee, or into the basis of that trust, and so are insufficient as workable definitions of trust in any of the disciplines in which the concept is studied. A key distinction articulated repeatedly in research and theories on trust is that risk must be present for trust to exist, and more specifically that trust is the willingness to take risk (Mayer, Davis, & Schoorman, 1995).

If no single definition seems able to satisfy the many contexts in which trust is brought into play, then perhaps multi-faceted presentations of the concept can provide the flexibility lacking in more succinct but ultimately unsatisfactory characterizations. Bernard Barber offers a concise general definition of trust as "expectation of the persistence and fulfillment of the natural and the moral social orders" (1983, p. 9). Beyond this overarching statement, Barber offers two additional definitions that he suggests have more explicatory value in social relationships: first, "trust as the expectation of technically competent role performance," and second, "the expectation that some others in our social relationships have moral obligations and responsibility to demonstrate a special concern for others' interests above their own" (Barber, 1983, p. 14). These two themes, separately or together, have been applied in subsequent sociological discussions of trust, including the central place of role expectations in Adam Seligman's examination of trust as an essential public good in modern society (Seligman, 1997). Barber's multi-part definition of trust is reflected in some popular management literature, such as Reina and Reina's separation of transactional trust into competence, contractual, and communication dimensions — what the authors respectively term "trust of capability," "trust of character," and "trust of disclosure" (2006, p. 14). Concern for the truster's interests, which Barber terms "fiduciary obligation" (1983, p. 15), has antecedents in many early organizational management concepts and occupies a significant position in modern business principles such as principal-agent theory and stakeholder theory. Barber's separate consideration of a trustee's performance and its motivation for the actions it chooses also broadens the applicability of his conceptions of trust, with the potential to explain the sort of institutional or system trust, reliant on expert knowledge, that Luhmann (1979), among others, set apart from interpersonal trust.

References:

Barber, B. (1983). Logic and the limits of trust. New Brunswick, NJ: Rutgers University Press.

Coleman, J. S. (1990). Foundations of social theory. Cambridge, MA: Belknap Press.

Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity. New York, NY: Free Press.

Gambetta, D. (1988). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213-237). Oxford, England: Basil Blackwell.

Kramer, R. M. (2006). Organizational trust: Progress and promise in theory and research. In R. M. Kramer (Ed.), Organizational trust (pp. 1-17). Oxford, England: Oxford University Press.

Luhmann, N. (1988). Familiarity, confidence, trust: Problems and alternatives. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 94-107). Oxford, England: Basil Blackwell.

Luhmann, N. (1979). Trust and power: Two works by Niklas Luhmann. Chichester, England: John Wiley.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709-734.

Misztal, B. A. (1996). Trust in modern societies. Cambridge, England: Polity Press.

Reina, D. S., & Reina, M. L. (2006). Trust and betrayal in the workplace (2nd ed.). San Francisco, CA: Berrett-Koehler Publishers.

Seligman, A. B. (1997). The problem of trust. Princeton, NJ: Princeton University Press.

Sztompka, P. (1999). Trust: A sociological theory. Cambridge, England: Cambridge University Press.

Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure, 1840-1920. Research in Organizational Behavior, 8, 53-111.

Wednesday, October 13, 2010

Evaluating technical tools and services as an exercise in trust

People often seek out tools and technology services to help protect the security and privacy of information, but when evaluating such technical tools, it can be equally important to consider the source of the tool in order to determine whether you can have sufficient confidence that the tool will do what it purports to do with respect to security, and will not expose vulnerabilities of its own. This sort of thinking is seen in the recommendation (reported last week in The Washington Post) that AT&T received from the National Security Agency (NSA) to avoid sourcing telecommunications equipment from Chinese manufacturer Huawei, due to concerns that the company might embed capabilities that would enable the equipment to be used for eavesdropping. Chinese companies in general and Huawei in particular have established a successful market presence internationally, including in the U.S., but at least where the prospect of the company's equipment being deployed by AT&T in support of its government infrastructure operations is concerned, being an established provider apparently does not translate into being trusted.
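
One concrete, if partial, verification practice that follows from this line of thinking is checking a delivered artifact against a digest its supplier publishes out of band. The sketch below is generic; the file path and digest are hypothetical placeholders, not references to any real vendor artifact.

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Compare a downloaded artifact's SHA-256 digest to a published value.

    This verifies integrity against the publisher's claim; it says
    nothing about whether the publisher itself deserves to be trusted.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Hypothetical usage:
# verify_sha256("vendor-firmware.bin", "9f86d081884c7d65...")
```

Note that hash checking only pushes the trust question back a level, to the party publishing the hash, which is exactly the point: confidence in a tool cannot be separated from confidence in its source.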

On a somewhat smaller scale, some initial excitement in the blogosphere over the email address shortener scr.im was quickly tempered by the realization that the online service had flaws in the way it implemented security features like captchas, leaving it quite vulnerable to attacks that would compromise the pseudonymity of its users. Reactions to the service, which offers users a way to "share your email in a safe way," were cited as an example of the need to "trust, but verify" when it comes to technology, including security technology. The underlying message may be appropriate, but invoking the phrase popularized by Ronald Reagan in the context of computing systems results in an overly narrow connotation of the word trust, in this case meaning confidence that a system will perform as expected. As I have argued previously in this space, substituting the word trust where "reliability" or specific functionality is all that can be expected stops far short of the criteria that might actually need to be satisfied to establish the trustworthiness of a system, a service provider, or the parties behind them.
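
Without speculating about scr.im's actual code, a generic sketch can show two properties a captcha check needs if it is to resist the kind of automated guessing these critiques described: server-side, single-use validation of each challenge, and throttling of repeated attempts. Everything here (names, limits, storage) is an assumption made for illustration.

```python
import secrets
import time

_pending = {}    # token -> (expected_answer, expiry)
_attempts = {}   # client_ip -> list of attempt timestamps

def issue_captcha(expected_answer, ttl=300.0):
    """Server generates an opaque, single-use token bound to the answer."""
    token = secrets.token_urlsafe(16)
    _pending[token] = (expected_answer, time.time() + ttl)
    return token

def verify_captcha(token, answer, client_ip, max_per_minute=5):
    """Validate once, then discard; throttle repeated guesses per client."""
    now = time.time()
    recent = [t for t in _attempts.get(client_ip, []) if now - t < 60]
    if len(recent) >= max_per_minute:
        return False
    _attempts[client_ip] = recent + [now]

    expected, expiry = _pending.pop(token, (None, 0))  # single use
    return expected is not None and now < expiry and answer == expected
```

A service that skips either property invites exactly the sort of enumeration that undermines whatever "safe way" it promises, which is why verifying such claims matters before trust is extended.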

Thursday, September 16, 2010

Trustworthy organizations do what they should even in the absence of legal enforcement

Joseph Conn of Modern Healthcare called attention in a blog post yesterday to the almost complete absence of civil penalties imposed against violators of the HIPAA Security and Privacy rules, pointing out that without some credible evidence of enforcement, regulations such as HIPAA are an empty threat. In his post, he points to the frequently repeated public emphasis on privacy and security, and their essential role in engendering trust among patients and other health care stakeholders, as incongruous with the "friendly persuasion" approach to HIPAA enforcement employed by the HHS Office for Civil Rights during both the current and previous administrations, basically concluding that the only way to achieve better compliance with the law is to strengthen enforcement. The statistical highlights provided by OCR itself regarding HIPAA complaints, investigations, and negotiated settlements and other resolutions certainly seem to suggest that non-compliance is a widespread issue, but in arguing that legal requirements will be ineffective without more substantial enforcement, Conn implies that at least a significant subset of HIPAA-covered entities and business associates consider the lack of enforcement an invitation to violate the law. Whether or not you agree with this specific argument, if its reasoning is correct, then the recommended corrective action (stronger and more proactive enforcement measures) on health care privacy and security cannot produce the trust that the government appears to be seeking. In an environment where individuals or organizations can be expected to behave as they should only because of the presence of legal or other sanctions, the participants cannot be considered trustworthy, and therefore should not expect to be trusted by those they interact with, whether individual patients, peer organizations, or government regulators. It seems entirely likely that relationships between different health care stakeholders — perhaps especially between health care entities and their regulators — are marked by distrust rather than trust, and that current government-led efforts to put in place effective governance, oversight, and enforcement mechanisms under the rubric of "trust frameworks" are more characteristic of distrusting relationships than of trust.