02 May 2014

Big Privacy?

In conjunction with the PCAST 'Big Data' report noted earlier today, the White House has released a report [PDF] titled Big Data: Seizing Opportunities, Preserving Values.

The report features recommendations of particular interest in relation to Australian law reform -
  • Advance the Consumer Privacy Bill of Rights. The Department of Commerce should take appropriate consultative steps to seek stakeholder and public comment on big data developments and how they impact the Consumer Privacy Bill of Rights and then devise draft legislative text for consideration by stakeholders and submission by the President to Congress. 
  • Pass National Data Breach Legislation. Congress should pass legislation that provides for a single national data breach standard along the lines of the Administration’s May 2011 Cybersecurity legislative proposal. 
  • Extend Privacy Protections to non-U.S. Persons. The Office of Management and Budget should work with departments and agencies to apply the Privacy Act of 1974 to non-U.S. persons where practicable, or to establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality. 
  • Ensure Data Collected on Students in School is Used for Educational Purposes. The federal government must ensure that privacy regulations protect students against having their data being shared or used inappropriately, especially when the data is gathered in an educational context. 
  • Expand Technical Expertise to Stop Discrimination. The federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law. 
  • Amend the Electronic Communications Privacy Act. Congress should amend ECPA to ensure the standard of protection for online, digital content is consistent with that afforded in the physical world—including by removing archaic distinctions between email left unread or over a certain age.

PCAST Big Data Report

The US President’s Council of Advisors on Science and Technology (PCAST) Big Data and Privacy Working Group has released a report [PDF] titled Big Data and Privacy: A Technological Perspective.

The release coincides with the White House's Big Data 'Opportunities & Values' report noted here.

PCAST states that it
examined the nature of current technologies for managing and analyzing big data and for preserving privacy, it considered how those technologies are evolving, and it explained what the technological capabilities and trends imply for the design and enforcement of public policy intended to protect privacy in big-data contexts. 
Big data drives big benefits, from innovative businesses to new ways to treat diseases. The challenges to privacy arise because technologies collect so much data (e.g., from sensors in everything from phones to parking lots) and analyze them so efficiently (e.g., through data mining and other kinds of analytics) that it is possible to learn far more than most people had anticipated or can anticipate given continuing progress. These challenges are compounded by limitations on traditional technologies used to protect privacy (such as de-identification). PCAST concludes that technology alone cannot protect privacy, and policy intended to protect privacy needs to reflect what is (and is not) technologically feasible. 
In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the “what” rather than the “how,” to avoid becoming obsolete as technology advances. The policy framework should accelerate the development and commercialization of technologies that can help to contain adverse impacts on privacy, including research into new technological options. By using technology more effectively, the Nation can lead internationally in making the most of big data’s benefits while limiting the concerns it poses for privacy. Finally, PCAST calls for efforts to assure that there is enough talent available with the expertise needed to develop and use big data in a privacy-sensitive way.
The report offers several recommendations -
R1. Policy attention should focus more on the actual uses of big data and less on its collection and analysis. By actual uses, we mean the specific events where something happens that can cause an adverse consequence or harm to an individual or class of individuals. In the context of big data, these events (“uses”) are almost always actions of a computer program or app interacting either with the raw data or with the fruits of analysis of those data. In this formulation, it is not the data themselves that cause the harm, nor the program itself (absent any data), but the confluence of the two. These “use” events (in commerce, by government, or by individuals) embody the necessary specificity to be the subject of regulation. By contrast, PCAST judges that policies focused on the regulation of data collection, storage, retention, a priori limitations on applications, and analysis (absent identifiable actual uses of the data or products of analysis) are unlikely to yield effective strategies for improving privacy. Such policies would be unlikely to be scalable over time, or to be enforceable by other than severe and economically damaging measures. 
R2. Policies and regulation, at all levels of government, should not embed particular technological solutions, but rather should be stated in terms of intended outcomes. To avoid falling behind the technology, it is essential that policy concerning privacy protection should address the purpose (the “what”) rather than prescribing the mechanism (the “how”). 
R3. With coordination and encouragement from OSTP [ie White House Office of Science and Technology Policy], the NITRD [Networking and Information Technology Research and Development program] agencies should strengthen U.S. research in privacy‐related technologies and in the relevant areas of social science that inform the successful application of those technologies. Some of the technology for controlling uses already exists. However, research (and funding for it) is needed in the technologies that help to protect privacy, in the social mechanisms that influence privacy‐preserving behavior, and in the legal options that are robust to changes in technology and create appropriate balance among economic opportunity, national priorities, and privacy protection.
R4. OSTP, together with the appropriate educational institutions and professional societies, should encourage increased education and training opportunities concerning privacy protection, including career paths for professionals. Programs that provide education leading to privacy expertise (akin to what is being done for security expertise) are essential and need encouragement. One might envision careers for digital‐privacy experts both on the software development side and on the technical management side. 
R5. The United States should take the lead both in the international arena and at home by adopting policies that stimulate the use of practical privacy‐protecting technologies that exist today. It can exhibit leadership both by its convening power (for instance, by promoting the creation and adoption of standards) and also by its own procurement practices (such as its own use of privacy‐preserving cloud services). PCAST is not aware of more effective innovation or strategies being developed abroad; rather, some countries seem inclined to pursue what PCAST believes to be blind alleys. This circumstance offers an opportunity for U.S. technical leadership in privacy in the international arena, an opportunity that should be taken.
Those recommendations reflect the assessment provided in the report's summary -
The term privacy encompasses not only the famous “right to be left alone,” or keeping one’s personal matters and relationships secret, but also the ability to share information selectively but not publicly. Anonymity overlaps with privacy, but the two are not identical. Likewise, the ability to make intimate personal decisions without government interference is considered to be a privacy right, as is protection from discrimination on the basis of certain personal characteristics (such as race, gender, or genome). Privacy is not just about secrets. 
Conflicts between privacy and new technology have occurred throughout American history. Concern with the rise of mass media such as newspapers in the 19th century led to legal protections against the harms or adverse consequences of “intrusion upon seclusion,” public disclosure of private facts, and unauthorized use of name or likeness in commerce. Wire and radio communications led to 20th century laws against wiretapping and the interception of private communications – laws that, PCAST notes, have not always kept pace with the technological realities of today’s digital communications. 
Past conflicts between privacy and new technology have generally related to what is now termed “small data,” the collection and use of data sets by private‐ and public‐sector organizations where the data are disseminated in their original form or analyzed by conventional statistical methods. Today’s concerns about big data reflect both the substantial increases in the amount of data being collected and associated changes, both actual and potential, in how they are used. 
Big data is big in two different senses. It is big in the quantity and variety of data that are available to be processed. And, it is big in the scale of analysis (termed “analytics”) that can be applied to those data, ultimately to make inferences and draw conclusions. By data mining and other kinds of analytics, non‐obvious and sometimes private information can be derived from data that, at the time of their collection, seemed to raise no, or only manageable, privacy issues. Such new information, used appropriately, may often bring benefits to individuals and society – Chapter 2 of this report gives many such examples, and additional examples are scattered throughout the rest of the text. Even in principle, however, one can never know what information may later be extracted from any particular collection of big data, both because that information may result only from the combination of seemingly unrelated data sets, and because the algorithm for revealing the new information may not even have been invented at the time of collection. 
The same data and analytics that provide benefits to individuals and society if used appropriately can also create potential harms – threats to individual privacy according to privacy norms both widely shared and personal. For example, large‐scale analysis of research on disease, together with health data from electronic medical records and genomic information, might lead to better and timelier treatment for individuals but also to inappropriate disqualification for insurance or jobs. GPS tracking of individuals might lead to better community‐based public transportation facilities, but also to inappropriate use of the whereabouts of individuals. A list of the kinds of adverse consequences or harms from which individuals should be protected is proposed in Section 1.4. PCAST believes strongly that the positive benefits of big‐data technology are (or can be) greater than any new harms. 
Chapter 3 of the report describes the many new ways in which personal data are acquired, both from original sources, and through subsequent processing. Today, although they may not be aware of it, individuals constantly emit into the environment information whose use or misuse may be a source of privacy concerns. Physically, these information emanations are of two types, which can be called “born digital” and “born analog.” 
When information is “born digital,” it is created, by us or by a computer surrogate, specifically for use by a computer or data processing system. When data are born digital, privacy concerns can arise from over‐collection. Over‐collection occurs when a program’s design intentionally, and sometimes clandestinely, collects information unrelated to its stated purpose. Over‐collection can, in principle, be recognized at the time of collection. 
When information is “born analog,” it arises from the characteristics of the physical world. Such information becomes accessible electronically when it impinges on a sensor such as a camera, microphone, or other engineered device. When data are born analog, they are likely to contain more information than the minimum necessary for their immediate purpose, and for valid reasons. One reason is for robustness of the desired “signal” in the presence of variable “noise.” Another is technological convergence, the increasing use of standardized components (e.g., cell‐phone cameras) in new products (e.g., home alarm systems capable of responding to gesture). Data fusion occurs when data from different sources are brought into contact and new facts emerge (see Section 3.2.2). Individually, each data source may have a specific, limited purpose. Their combination, however, may uncover new meanings. In particular, data fusion can result in the identification of individual people, the creation of profiles of an individual, and the tracking of an individual’s activities. More broadly, data analytics discovers patterns and correlations in large corpuses of data, using increasingly powerful statistical algorithms. If those data include personal data, the inferences flowing from data analytics may then be mapped back to inferences, both certain and uncertain, about individuals. 
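Data fusion is easy to make concrete. A minimal sketch, assuming two hypothetical and individually innocuous feeds (every identifier, name and record below is invented):

    # Two limited-purpose data sources: a transit log and a retail loyalty feed.
    transit_log = [  # (card_id, station, timestamp)
        ("C-1042", "Central", "2014-04-01T08:02"),
        ("C-1042", "Harbour", "2014-04-01T17:45"),
    ]
    loyalty_feed = [  # (card_id, name, home_suburb)
        ("C-1042", "J. Citizen", "Newtown"),
    ]

    # Fusion step: join the feeds on the shared identifier.
    names = {card: name for card, name, suburb in loyalty_feed}
    profile = [(names[card], station, ts)
               for card, station, ts in transit_log if card in names]

    # Each source had a specific, limited purpose; their combination tracks
    # a named individual's movements.
    for name, station, ts in profile:
        print(name, "was at", station, "at", ts)

Neither feed raises obvious concerns on its own; the join is where the new fact (a named person's movements) emerges.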
Because of data fusion, privacy concerns may not necessarily be recognizable in born‐digital data when they are collected. Because of signal‐processing robustness and standardization, the same is true of born‐analog data – even data from a single source (e.g., a single security camera). Born‐digital and born‐analog data can both be combined with data fusion, and new kinds of data can be generated from data analytics. The beneficial uses of near‐ubiquitous data collection are large, and they fuel an increasingly important set of economic activities. Taken together, these considerations suggest that a policy focus on limiting data collection will not be a broadly applicable or scalable strategy – nor one likely to achieve the right balance between beneficial results and unintended negative consequences (such as inhibiting economic growth). 
If collection cannot, in most cases, be limited practically, then what? Chapter 4 discusses in detail a number of technologies that have been used in the past for privacy protection, and others that may, to a greater or lesser extent, serve as technology building blocks for future policies. 
Some technology building blocks (for example, cybersecurity standards, technologies related to encryption, and formal systems of auditable access control) are already being utilized and need to be encouraged in the marketplace. On the other hand, some techniques for privacy protection that have seemed encouraging in the past are useful as supplementary ways to reduce privacy risk, but do not now seem sufficiently robust to be a dependable basis for privacy protection where big data is concerned. For a variety of reasons, PCAST judges anonymization, data deletion, and distinguishing data from metadata (defined below) to be in this category. The framework of notice and consent is also becoming unworkable as a useful foundation for policy. 
Anonymization is increasingly easily defeated by the very techniques that are being developed for many legitimate applications of big data. In general, as the size and diversity of available data grows, the likelihood of being able to re‐identify individuals (that is, re‐associate their records with their names) grows substantially. While anonymization may remain somewhat useful as an added safeguard in some situations, approaches that deem it, by itself, a sufficient safeguard need updating. 
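The mechanics are worth spelling out: most re-identification is a join on quasi-identifiers. A minimal sketch with invented records, in the spirit of the well-known postcode/birth-date/sex linkage attacks:

    # "De-identified" records keep quasi-identifiers; a public register shares
    # them. Joining on the combination re-associates records with names.
    deidentified_health = [
        {"postcode": "2000", "dob": "1970-01-01", "sex": "F", "condition": "X"},
    ]
    public_register = [
        {"name": "A. Example", "postcode": "2000", "dob": "1970-01-01", "sex": "F"},
    ]

    QUASI_IDENTIFIERS = ("postcode", "dob", "sex")

    def key(record):
        return tuple(record[k] for k in QUASI_IDENTIFIERS)

    register_index = {key(r): r["name"] for r in public_register}

    for record in deidentified_health:
        name = register_index.get(key(record))
        if name is not None:  # the "anonymised" record is re-identified
            print(name, "->", record["condition"])

As the size and diversity of available registers grows, the chance that such a key is unique, and hence linkable, grows with it.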
While it is good business practice that data of all kinds should be deleted when they are no longer of value, economic or social value often can be obtained from applying big data techniques to masses of data that were otherwise considered to be worthless. Similarly, archival data may also be important to future historians, or for later longitudinal analysis by academic researchers and others. As described above, many sources of data contain latent information about individuals, information that can be known only if the holder expends analytic resources, or that may become knowable only in the future with the development of new data‐mining algorithms. In such cases it is practically impossible for the data holder even to surface “all the data about an individual,” much less delete it on any specified schedule or in response to an individual’s request. Today, given the distributed and redundant nature of data storage, it is not even clear that data, even small data, can be destroyed with any high degree of assurance.
As data sets become more complex, so do the attached metadata. Metadata are ancillary data that describe properties of the data such as the time the data were created, the device on which they were created, or the destination of a message. Included in the data or metadata may be identifying information of many kinds. It cannot today generally be asserted that metadata raise fewer privacy concerns than data.
Notice and consent is the practice of requiring individuals to give positive consent to the personal data collection practices of each individual app, program, or web service. Only in some fantasy world do users actually read these notices and understand their implications before clicking to indicate their consent.
The conceptual problem with notice and consent is that it fundamentally places the burden of privacy protection on the individual. Notice and consent creates a non‐level playing field in the implicit privacy negotiation between provider and user. The provider offers a complex, take‐it‐or‐leave‐it set of terms, while the user, in practice, can allocate only a few seconds to evaluating the offer. This is a kind of market failure. 
PCAST believes that the responsibility for using personal data in accordance with the user’s preferences should rest with the provider rather than with the user. As a practical matter, in the private sector, third parties chosen by the consumer (e.g., consumer‐protection organizations, or large app stores) could intermediate: A consumer might choose one of several “privacy protection profiles” offered by the intermediary, which in turn would vet apps against these profiles. By vetting apps, the intermediaries would create a marketplace for the negotiation of community standards for privacy. The Federal government could encourage the development of standards for electronic interfaces between the intermediaries and the app developers and vendors. 
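The intermediary model is straightforward to sketch. A minimal illustration, with hypothetical profile names, manifest fields and app (PCAST does not prescribe any particular format):

    # A user picks a privacy profile; the intermediary vets each app's declared
    # data practices against it before listing the app.
    PROFILES = {  # data uses each profile forbids
        "strict": {"location", "contacts"},
        "balanced": {"contacts"},
        "permissive": set(),
    }

    def vet(app_manifest, profile_name):
        """True if the app's declared uses are permitted under the profile."""
        forbidden = PROFILES[profile_name]
        return not (set(app_manifest["declared_uses"]) & forbidden)

    app = {"name": "ExampleApp", "declared_uses": ["location", "diagnostics"]}

    print(vet(app, "strict"))      # False: the app wants location data
    print(vet(app, "permissive"))  # True

The standardised electronic interface PCAST mentions would sit between the manifest and the vetting step.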
After data are collected, data analytics come into play and may generate an increasing fraction of privacy issues. Analysis, per se, does not directly touch the individual (it is neither collection nor, without additional action, use) and may have no external visibility. By contrast, it is the use of a product of analysis, whether in commerce, by government, by the press, or by individuals, that can cause adverse consequences to individuals. 
More broadly, PCAST believes that it is the use of data (including born‐digital or born‐analog data and the products of data fusion and analysis) that is the locus where consequences are produced. This locus is the technically most feasible place to protect privacy. Technologies are emerging, both in the research community and in the commercial world, to describe privacy policies, to record the origins (provenance) of data, their access, and their further use by programs, including analytics, and to determine whether those uses conform to privacy policies. Some approaches are already in practical use. 
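A use-based regime of this kind reduces, technically, to propagating provenance through analysis and checking it at the point of use. A minimal sketch (the tags and policy table are my own invention, not PCAST's):

    # Every dataset carries the provenance of its inputs; a proposed use is
    # checked against policy when the data are used, not when collected.
    class Dataset:
        def __init__(self, name, provenance):
            self.name = name
            self.provenance = frozenset(provenance)

    def fuse(a, b, name):
        # Provenance propagates: the product of analysis inherits both sets.
        return Dataset(name, a.provenance | b.provenance)

    # Which provenance tags each purpose may touch (hypothetical policy).
    POLICY = {"advertising": {"clickstream"},
              "research": {"clickstream", "medical"}}

    def check_use(dataset, purpose):
        return dataset.provenance <= POLICY.get(purpose, set())

    clicks = Dataset("clicks", ["clickstream"])
    records = Dataset("records", ["medical"])
    fused = fuse(clicks, records, "fused")

    print(check_use(fused, "advertising"))  # False: medical provenance present
    print(check_use(fused, "research"))     # True

The point of the sketch is the locus of control: nothing is forbidden at collection or fusion; the check happens at use.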
Given the statistical nature of data analytics, there is uncertainty that discovered properties of groups apply to a particular individual in the group. Making incorrect conclusions about individuals may have adverse consequences for them and may affect members of certain groups disproportionately (e.g., the poor, the elderly, or minorities). Among the technical mechanisms that can be incorporated in a use‐based approach are methods for imposing standards for data accuracy and integrity and policies for incorporating useable interfaces that allow an individual to correct the record with voluntary additional information.
PCAST’s charge for this study did not ask it to recommend specific privacy policies, but rather to make a relative assessment of the technical feasibilities of different broad policy approaches. Chapter 5, accordingly, discusses the implications of current and emerging technologies for government policies for privacy protection. The use of technical measures for enforcing privacy can be stimulated by reputational pressure, but such measures are most effective when there are regulations and laws with civil or criminal penalties. Rules and regulations provide both deterrence of harmful actions and incentives to deploy privacy‐protecting technologies. Privacy protection cannot be achieved by technical measures alone.

Terrormetrics

'Terrorism and Freedom of Expression: An Econometric Analysis' by Wah-Kwan Lin offers what for me is a distinctly underwhelming 74-page analysis -
We apply empirical methods to examine the relationship between the frequency of terrorist incidences and freedom of expression, which we measure using Freedom House's Freedom of the Press report, and access to landline telephones, mobile phones, and the Internet. Using standard ordinary least squares econometric methods, we ultimately find that greater mobile phone access tends to increase the frequency of terrorist incidences at a diminishing rate and that greater Internet access tends to decrease the frequency of terrorist incidences at a diminishing rate. On average, the frequency of terrorist incidences is highest when societies have access to 137.89 mobile phone subscriptions for each 100 people, and lowest when the percentage of people with Internet access is at 64.79%. Based on our results, we cannot conclude that either freedom of the press or landline telephone access have any statistically significant influence on the frequency of terrorist incidences.
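Those headline turning points are simply what a quadratic OLS specification implies. A minimal derivation, assuming Lin's model takes the standard form (the symbols below are mine, not the paper's), with T the frequency of terrorist incidences, M mobile subscriptions per 100 people and X the controls:

    T = \beta_0 + \beta_1 M + \beta_2 M^2 + \gamma' X + \varepsilon

    \frac{\partial T}{\partial M} = \beta_1 + 2 \beta_2 M = 0
    \;\Longrightarrow\;
    M^* = -\frac{\beta_1}{2 \beta_2}

With \beta_1 > 0 and \beta_2 < 0 ("increases at a diminishing rate"), the turning point M^* is a maximum, which is presumably where the reported 137.89 subscriptions per 100 people comes from; with the signs reversed, as for Internet access, the analogous turning point is a minimum, matching the 64.79% figure.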
Lin concludes -
The purpose of this study is to employ empirical methods to examine the relationship between terrorism and freedom of expression. Using standard ordinary least squares econometric methods, we ultimately find that greater mobile phone access tends to increase the frequency of terrorist incidences at a diminishing rate. We also find that greater Internet access tends to decrease the frequency of terrorist incidences at a diminishing rate. On average, the frequency of terrorist incidences is highest when societies have access to 137.89 mobile phone subscriptions for each 100 people. Additionally, the frequency of terrorist incidences is lowest on average when the percentage of people with Internet access is at 64.79%. Based on our results, we cannot conclude that either Freedom of the Press or telephone access have any statistically significant influence on the frequency of terrorist incidences. 
We may apply the framework of repression, release, rebellion, and replication, as initially represented in Figure 3.1. By combining our calculated results and the proposed framework, we conclude that increases in mobile phone access tend to shift societies from either a state of repression or rebellion towards a state of replication, where higher allowances for freedom of expression result in greater occurrences of terrorist incidences. Greater levels of Internet access tends to shift societies from either a state of repression or rebellion towards a state of release, where higher allowances for freedom of expression result in fewer occurrences of terrorist incidences. These conclusions are summarized in graphical form in Figure 5.1. The existing literature on terrorism and its root causes minimally discusses the relationship between mobile phone access and terrorism. In most cases, discussion of the connection between modern technologies and terrorism is largely anecdotal in nature and typically do not present comprehensive theories of the relationship between access to modern technologies and terrorism. The literature on terrorism that do mention mobile phones do little more than indicate that mobile phones were used in some capacity to orchestrate or implement terrorist attacks. For instance, the National Commission on Terrorist Attacks Upon the United States report on the September 11 attack is replete with details of how the terrorist hijackers utilized phone communications throughout their planning process to bring their attack to fruition. Accounts of the March 11, 2004 Madrid train bombings reveal that mobile phones were used as remote triggers for explosive devices. Similarly, the official House of Commons report on the July 7, 2005 London bombings provide details of how the various bombers utilized mobile phones to communicate amongst themselves. In almost all similar accounts of major terrorist incidences, there is some indication that mobile phones were utilized to some extent to coordinate or implement terrorist attacks. Based on the accounts of how terrorists have used mobile phones, we can surmise that mobile phones and the frequency of terrorist incidences share a positive relationship because mobile phones serve as an operational enabler for terrorists. In a manner, the availability of mobile phones makes terrorism easier. New technologies, such as mobile phones, do not directly cause more terrorism; rather, terrorist groups are often adept — perhaps more so than law enforcement forces that attempt to thwart terrorism — at leveraging new technologies as they arise to serve terrorist agendas more effectively.
There is a comparatively richer body of literature that discusses the relationship between Internet access and terrorism. However, much of the existing literature pre-supposes a positive relationship between Internet access and terrorism. Certainly, there is a bounty of evidence that suggests that the rise of the Internet has been a great boon for terrorism. Terrorist movements, favoring such characteristics of the Internet as its unregulated nature and its potentially broad audience, have increasingly relied on the Internet to engage in such diverse activities as fundraising, propagandizing, data mining, and coordinating terrorist operations. Prominent terrorist movements like al-Qaeda strongly believe that the Internet is an effective platform to disseminate radicalizing propaganda and to build connections between individuals who are prone to supporting terrorist causes. Such figures as Anwar al-Awlaki, a senior al-Qaeda official, lauded the potency of the Internet, stating, “The internet [sic] has become a great medium for spreading the call of Jihad and following the news of the mujihadeen.”
The results of this study are contrary to the conventional wisdom that greater Internet access leads to more terrorism. This study indicates that to a statistically significant degree, greater Internet access actually reduces the frequency of terrorist incidences. There are a number of possible explanations for why Internet access might reduce the frequency of terrorist incidences, including providing a social release valve through which societies can peacefully voice their discontent in place of engaging in violent outbursts, allowing for the rise of multiple accounts and views that can undermine the credibility of terrorist narratives and thus mitigate the processes of radicalization, and facilitating access to social and economic opportunities that can dampen perceived grievances that often promote terrorism.
Perhaps as notable as what we find to be statistically significant is what we do not find to be statistically significant. Our study does not provide sufficient evidence to lead us to conclude that Freedom of the Press and telephone access are statistically significant influencers on the frequency of terrorist incidences. We are especially surprised by the lack of a statistically significant effect of the Freedom of the Press measure on terrorism. This result may be explained by the possibility that qualities that are frequently associated with environments of high freedom of expression are reflected in certain control variables utilized in this study, such as the measure of democracy, or it may very well be that the qualitative nature of the environment of freedom of expression is not a factor significantly related to terrorism.
One important policy implication of these findings relates to the importance of Internet access in challenging terrorism. Our results lend greater support to efforts intended to expand Internet access, a trend that is already well underway around the world and which has demonstrated significant impacts. Though we do find a positive relationship between mobile phone access and terrorism, curtailing mobile phone access for the express purpose of challenging terrorism may not be advisable as it would represent an effort to resist technological progress that has broad social and economic benefits. 
We acknowledge potential deficiencies that might impair the external validity of our conclusions. One prominent deficiency is the fact that the two primary components that we attempt to relate — terrorism and freedom of expression — are inherently difficult to define, and our attempts to define terrorism and freedom of expression are to a certain extent subjective. The manner in which we define terrorism and freedom of expression may not be universally accepted, and may actually be contrary to certain cultural and societal norms. Additionally, the data sources we utilize for the independent variables are characterized by their respective deficiencies, such as the subjective aspects of the measure of the qualitative characteristics of freedom of expression, the possibility of perception biases in the measure for corruption, and the limited numbers of observations in the measure of income inequality, amongst other potential deficiencies. Furthermore, despite our best efforts to account for significant systemic variables in our model, there inevitably exists the risk of omitted variable bias. Nonetheless, we have made an effort to address the possible deficiencies in the study design and implementation, and we believe that alternative methods would not necessarily result in more reliable outcomes. 
This study is an effort to fill a conspicuous void in the existing body of terrorism scholarship. It represents a reasoned attempt at applying empirically driven methods at examining the relationship between terrorism and freedom of expression. While a great deal of research and writing has already been produced on the linkage between terrorism and freedom of expression, until this writing, no attempt has been made to examine the relationship between terrorism and freedom of expression quantitatively. We hope that this study ultimately proves to be beneficial to understanding the underlying risk factors of terrorism.

Pragmatism

Legal pragmatism, Jim, but not as we know it?

'Holmes, Cardozo, and the Legal Realists: Early Incarnations of Legal Pragmatism and Enterprise Liability' by Edmund Ursin in (2013) 50 San Diego Law Review comments that
Enterprise liability is a term associated with the tort lawmaking of the liberal “Traynor era” California Supreme Court of the 1960s and 1970s. Legal pragmatism, in turn, is associated with the conservative jurist Richard Posner. This manuscript examines the evolution of each of these theoretical movements from Holmes’s great 1897 essay, “The Path of the Law,” to the present day. Its focus is on the great judges and scholars whose views have shaped our own: Holmes, Cardozo, the Legal Realists Leon Green and Karl Llewellyn, Traynor, and Posner. 
Stated simply, the shared jurisprudential view of these great judges and scholars is that in our system judges are legislators as well as adjudicators — and policy plays a role in their lawmaking. In the common law subjects, in fact, judges are the primary lawmakers. In constitutional adjudication they are also lawmakers but lawmakers aware of the general need for deference to other branches. No fancy formulas such as “neutral principles” or “original meaning” can capture this role. Indeed, the leading academic theorists of the past century — and today — have been out of touch with the reality of judicial lawmaking as it has been expressly articulated by these great judges. We also see in the works of these judges and scholars the origins of the enterprise liability doctrines that the pragmatic Traynor era court of the 1960s and 1970s would adopt, including the doctrine of strict products liability and expansive developments within the negligence system.
Australian readers might gain more value from Richard Posner, ‘What Has Pragmatism To Offer Law?’ (1990) 63 Southern California Law Review 1653; Alfonso Morales (ed) Renascent Pragmatism: Studies in Law and Social Science (Ashgate, 2003); Margaret Radin, ‘The Pragmatist and the Feminist’ (1990) 63 Southern California Law Review 1699; Michael Sullivan, Legal Pragmatism: Community, Rights, and Democracy (Indiana University Press, 2007); Thomas Grey ‘Holmes and Legal Pragmatism’ (1989) 41 Stanford Law Review 787; Ronald Dworkin, ‘Pragmatism, Right Answers and True Banality’ in Michael Brint and William Weaver (eds) Pragmatism in Law and Society (Westview Press, 1990) 359; Frederic Kellogg, ‘American Pragmatism and European Social Theory: Holmes, Durkheim, Scheler, and the Sociology of Legal Knowledge’ (2012) IV(1) European Journal of Pragmatism and American Philosophy 107;  Brian Tamanaha, ‘Pragmatism in U.S. Legal Theory: Its Application to Normative Jurisprudence, Sociolegal Studies, and the Fact-Value Distinction’ (1996) 41 American Journal of Jurisprudence 315; Susan Haack, ‘On Legal Pragmatism: Where Does ‘The Path of the Law’ Lead Us?’ (2012) 3(1) Pragmatism Today 8; John Diggins, The Promise of Pragmatism: Modernism and the Crisis of Knowledge and Authority (University of Chicago Press, 1995); and R George Wright, ‘The Consequences of Contemporary Legal Relativism’ (1990-91) 22 University of Toledo Law Review 73.

US Whistleblowing Law

'Four Signal Moments in Whistleblower Law: 1983-2013' by Geoffrey Christopher Rapp in (2013) 30 Hofstra Labor and Employment Law Journal identifies
four signal legal changes in the law governing whistleblowers between 1983 and 2013. Three of these are well known and easily identified -- the amendments to the federal False Claims Act enacted in 1986, the Sarbanes-Oxley whistleblower protection scheme enacted in 2002, and the Dodd-Frank securities fraud whistleblower bounty program enacted in 2010. Equally important may prove the Deficit Reduction Act of 2005 (actually enacted in 2006), which created an unusual carrot for state law whistleblower reward and protection reform. 
After discussing the impact of each signal change, I discuss the future of whistleblowing -- with national security whistleblowing, internal corporate programs, on-line whistleblowing, and continued doctrinal development being important developments to watch.
Rapp comments
Consumer activist Ralph Nader is sometimes credited with having coined the term “Whistle Blower” in the early 1970s. In the decades since, much has changed. The term lost one space – becoming “whistleblower” – but came to occupy a new space in the public’s understanding of the best ways to root out fraud and criminality in a wide range of activities and organizations. Whistleblowers helped end a war and bring down a United States President; changed the landscape of environmental protection; exposed fraudulent practices at tobacco companies; and, in more recent memory, highlighted patterns of fraud in publicly traded companies  and helped destroy one of America’s most beloved sports icons, cyclist Lance Armstrong. 
The legal landscape relating to whistleblowers has changed dramatically as well. In 1983, the Supreme Court dismissively made reference to “so-called ‘whistleblowers.’” In the years since, however, the whistleblower has been elevated to a far more prominent position. A whistleblower is typically (though not exclusively) an employee of a corporation, government agency, or educational institution, who comes into possession of information on an ongoing criminal, fraudulent, unsafe or otherwise questionable practice. The whistleblower reports information or makes allegations, sometimes to external regulators, watchdog groups, or the media, and sometimes internally (though often outside of the chain of command). From the beginning of the practice we now call whistleblowing, such employees have faced retaliation – the threat of which, along with other incentives, favored remaining silent. The law, thankfully, has evolved to provide both protection and positive incentives for whistleblowers.
This Article identifies four signal legal changes in the treatment of whistleblowers that have helped propel those who speak out into their current prominent role in policy discussions. The first moment was the passage of amendments, in 1986, to the Federal False Claims Act (FCA). These amendments created a structure by which whistleblowers in the federal procurement context could claim a share of recovered funds connected with fraud perpetrated against the government. Importantly, the FCA amendments gave whistleblowers increased control over litigation involving fraud against the government. In addition to helping the government recover large sums of money, the 1986 FCA amendments provided financial resources to an emerging plaintiff’s whistleblower bar. The lawyers who got rich using the newly strengthened FCA provisions became a powerful force for policy advocacy in contexts outside of the somewhat narrow purview of the FCA. 
The second signal moment for whistleblower law was the passage of the Sarbanes-Oxley Act of 2002 (SOX), which, for the first time, provided seemingly uniform protections for whistleblowers who raised concerns about accounting fraud in publicly held companies. Although many scholars have deemed SOX’s whistleblowing provisions ineffective, the statute itself represented an important shift in national thinking about the best ways to uncover financial fraud. Whistleblowers, long second fiddle to private securities lawsuits, came to be recognized as an important avenue for detecting fraud. 
The third signal moment was perhaps the least noticed. In a section of the Deficit Reduction Act of 2005 (DRA), the federal government deployed a rather novel carrot for stimulating state law change. The DRA gave states that enacted their own false claims acts a significant windfall in terms of their split of recoveries in federal False Claims Act cases involving joint state-federal Medicaid expenditures. As a result, in just a few short years, the number of states with such statutes rapidly increased, and state false claims acts have been a hot area of litigation in the ensuing time.
Finally, in reaction to the financial market meltdown of 2008-2009, Congress imported the FCA’s bounty model for stimulating whistleblowers to the SOX context, as a number of scholars, including myself, had argued was needed. Although Dodd-Frank’s whistleblower provision, like SOX’s before it, suffers from important limitations, it represents a major shift in direction toward empowering and incentivizing whistleblowers in the financial arena. 
After discussing each of these signal moments, I speculate about the future of whistleblower law. Important questions remain to be answered. Will Congress respond to Dodd-Frank’s invitation to consider allowing securities whistleblowers, like FCA plaintiffs, to pursue an action on their own? Will companies adjust their historical reluctance to whistleblowing and begin to implement their own internal whistleblower reward systems? Will protection for whistleblowers on the federal and state level help stimulate changes in the common law’s treatment of whistleblowers, which may have lagged behind? Only time will tell, but one thing is certain: the prominence of whistleblowers, for better or for worse, is here to stay.

Online Marks and Secondary Liability

'Secondary Liability for Online Trademark Infringement: The International Landscape' by Graeme B. Dinwoodie in (2014) 36 Columbia Journal of Law & the Arts comments
 In U.S. law, the term “secondary liability” is an umbrella term encompassing a number of different types of trademark infringement claim. Its essence is that liability does not turn on the defendant itself using the plaintiff’s mark, but rather on the defendant being held responsible for the infringements occasioned by the use of the plaintiff’s mark by a third party infringer. Secondary liability claims might be strategically preferable to trade mark owners over bringing actions against the primary infringer. The advent of the Internet has only enhanced some of these strategic benefits. However, secondary liability also creates the spectre of highly intrusive regulation of the business of intermediaries operating in the online environment. In these cases, we must balance the rights of the mark owner with enabling legitimate development of innovative technologies that allow new ways of trading in goods. 
The online context of many contemporary uses of marks has also, as in other areas of intellectual property law, prompted a demand for international solutions to the potential liability of online intermediaries for secondary trademark infringement. And indeed most countries have long recognised a cause of action for engaging in conduct that U.S. law would call secondary trade mark infringement. This Article assesses the international treatment of these causes of action, first by looking at international law principles conventionally understood, namely, state to state obligations regarding the content of domestic law. There is little of relevance to the secondary liability question if the international landscape is understood in those terms. Thus, the Article proceeds also to analyse commercial practices that are contributing to soft transnational law and to compare the regimes adopted by the United States and the European Union as leading examples of approaches to the secondary liability question. 
The Article focuses on parallel fact patterns that have been litigated to appellate level in a number of countries. Thus, the paradigmatic cases considered are (1) claims brought against online auction sites, each essentially alleging that the auction site could have done more to stop the sale of counterfeit and other allegedly infringing items by third parties on its web site; (2) claims brought against search engines alleging that the sale of keyword advertising consisting of the trademarks of parties other than the mark owner resulted in infringement (normally, by causing actionable confusion).

Multicard

The Office of the Australian Information Commissioner has found that Multicard, the company handling the Maritime Security Identification Card (MSIC), has been responsible for a data breach, with the personal information of some 9,000 people (including first and last names, date of birth, addresses, partial credit card numbers and expiry dates, and photographs) accessible via a Google search.

Multicard failed to take reasonable steps to ensure the security of data it held and disclosed personal information other than for a permitted purpose.

Problems with the MSIC have been noted in this blog over several years. The Card is a border protection mechanism in the form of a national photo ID card issued to people who have passed background checks. It signifies that the holder has met the minimum security requirements necessary to work unescorted or unmonitored in a maritime security zone.

The breach occurred after Multicard stored the information on a publicly accessible web server without appropriate security controls to prevent unauthorised access. The personal information was discoverable via Google search over a four-month period. As a result, unauthorised parties accessed and downloaded the information.

The OAIC investigation found that Multicard failed to implement several basic security measures, resulting in a large amount of personal information being exposed. The Privacy Commissioner commented that "It was disappointing to find that, amongst other issues, there was no requirement for a password, username or other authenticator to establish the identity of the user before the information could be accessed."
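
The missing control is not exotic. A minimal sketch of password-gating a document endpoint, assuming a small Python (Flask) application with a stand-in credential store (a real deployment would use hashed credentials and TLS):

    from flask import Flask, Response, request

    app = Flask(__name__)
    USERS = {"operator": "example-secret"}  # stand-in; store hashes in practice

    @app.route("/records/<doc_id>")
    def record(doc_id):
        auth = request.authorization
        if not auth or USERS.get(auth.username) != auth.password:
            # Unauthenticated requests (including search engine crawlers)
            # receive a 401 challenge rather than the document.
            return Response("Authentication required", 401,
                            {"WWW-Authenticate": 'Basic realm="records"'})
        return "record %s" % doc_id  # placeholder for the protected document

Even a crude gate like this means a crawler indexes nothing; the Commissioner's point is that Multicard had not taken even that step.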

Multicard "acted appropriately" to contain the data breach by immediately disabling its website and restricting access once the breach was known.

The OAIC has requested that the independent auditor engaged by Multicard certify the company has implemented the planned remediation steps. Multicard is to provide the OAIC with that certification and a copy of the independent auditor's report on its information holdings and security systems by 30 June.

Bodies dealing with the MSIC and other border protection cards might reasonably be expected to gain and maintain such certification.

30 April 2014

Noise

In Durney v Victoria University & Ors [2014] VSC 161 the Victorian Supreme Court has found that the University denied procedural fairness to a law student in excluding him from its premises.

Plaintiff Paul Durney, a law student at Victoria University, made "numerous complaints" from 2008 onwards about noise in the University Law Library. The Victoria U administration "considered that the manner of his complaints and conduct" was "disruptive and disturbing for university staff and students", reflected in a decision to exclude him from "being on, in or using any or all premises of the University". Durney successfully challenged that decision, claiming a lack of procedural fairness, failure to afford natural justice, bias, unreasonableness, and ultra vires.

In August 2012 the University wrote to Durney stating that it was very concerned about his health, wellbeing and recent behaviour. It recommended that Durney seek assistance from qualified professionals.

The expression of concern reflected indications that Durney had "behaved inappropriately", including a complaint to the Law Librarian that "caused the Librarian to sound the duress alarm and call security staff with Mr Durney ultimately being removed by police". Presumably the complaint was somewhat more forceful than a brisk unemotive statement about chatter or the need for peers to use headphones. Durney shouted at a professor - often quite tempting - and on another visit to the Library shouted about the noise. He twice stated that he intended to commit suicide. A subsequent email to Law School and Library staff communicated his intention to conduct a "hunger strike" outside the Law Library until the conditions in his email were met or until he died. He later lay on the footpath outside the main entrance of the Law School in view of students until police and ambulance personnel attended. In September Durney reportedly entered the "secure, staff only area at the Footscray Park campus library and refused to leave when asked to do so". The University claimed that Durney subsequently made an attempt to hang or choke himself outside the Law School, again in view of students and staff, some of whom we might infer would be distressed. Police and ambulance attended the scene. Later in the month he visited the Footscray Park campus library once again and indicated that he wasn't leaving until the librarian met to discuss providing a silent library space.

In responding to a letter from the Vice-Chancellor reiterating concern regarding Durney’s health and noting the importance of a safe environment for staff and students, Durney indicated that he was suffering from significant health issues.

He went on to state that
  • Victoria U continued to refuse to meet obligations under its Student and Library Charters to provide silent study areas for private study and "a traditional library function"
  • the University failed to meet the requirements of a disability assessment for Durney
  • the University was refusing to meet its obligations to students and silencing him
  • Victoria U was in breach of several of its own policies
  • there was no lawful authority or ground for his exclusion, and no legitimate case for exclusion (e.g. he had not been charged with or found guilty of any disciplinary or criminal offence).
A memorandum by staff to the Vice-Chancellor (accompanied by documents that were not provided to Durney) indicated that his behaviour had "caused very significant concern and distress to University staff over recent months", his demands and complaints had been disruptive and difficult, his "disruptive and disturbing" behaviour "posed a threat to the mental health of staff", his behaviour was erratic and apparently growing more and more extreme, an increased security presence had to be maintained at the libraries and the Law School, and staff were "in a constant state of tension and unease" wondering when Durney would "next cause a distressing incident".

The Vice-Chancellor appears to have been persuaded and excluded the unhappy student.

Durney was advised that a Faculty representative would be in contact to make arrangements for him to sit his exams. Durney would continue to "have access to WebCT, Lectopia and all the online resources of the Victoria Law Library", along with email access to lecturers and tutors. The exclusion would be revoked
at such time as I can be provided with some independent assurance that your behaviour does not pose a risk to staff and students, or indeed to yourself. This would need to take the form of a report from a medical practitioner, psychiatrist, or Forensic Psychologist of the university’s choice, at the university’s expense. ...   
I once again entreat you to obtain professional assistance. I reiterate that Dr Darko Hajzler, the University’s Manager of Counselling Services is available to assist you, and can be contacted on  …
Durney contended that procedural fairness and natural justice were not observed by the University when the exclusion decision was made.
One concern is that he was given no notice at all of the October 2012 incidents relied on by the Vice-Chancellor in the exclusion decision, and no opportunity to address them. 
A second concern is that he was not provided with most of the documents relied on by the Vice-Chancellor when he made the exclusion decision, including a copy of the joint memorandum. 

29 April 2014

Pharmaceutical Regulation, Costs and Torts

‘Liability risk in the pharmaceutical industry: Tort law in the US and UK’ by Kristina Lybecker and Lachlan Watkins in (2014) Social Science Journal examines
the extent to which product liability risk contributes to the high costs of pharmaceuticals in the United States relative to prices in the United Kingdom. Research on pharmaceutical prices rarely accounts for the impact of liability risk, and none that we are aware of compares the United States and United Kingdom. Drawing on a dataset of 77 brand name drugs sold in both the US and the UK, we analyze relative manufacturers’ factory prices in each nation. We utilize several proxies for liability risk including drug litigation history, the percentage of plaintiff wins, and controlled substance classification. Importantly, under US law there are no caps on the amount that can be awarded to a plaintiff claiming economic losses in the US. However, payouts in the UK are limited. Accounting for market differences and regulatory environments, we find liability risk can account for a portion of the price differential that exists between the US and UK, warranting further investigation.
The authors conclude -
Few studies account for differences in product liability environments between countries, and none to our knowledge account for non-tort liability in the form of civil litigation. This study demonstrates the potential impact that product liability risk, under tort law, plays in the pricing of pharmaceuticals in the United States. Our findings suggest that it may be wise to investigate how product liability environments contribute to drug price differentials across nations. Further study is clearly warranted to comprehensively investigate how a wide range of liability risk, including non-tort law, can affect drug prices. Using the data from a GAO study comparing the prices of pharmaceuticals in the U.S. and U.K., this study controls for a portion of liability risk under tort law in the U.S., demonstrating the important role liability risk plays across markets. 
Our results indicate that some portion of the high observed drug price differential between the U.S. and U.K. is derived from the potential litigation costs that firms face when selling pharmaceuticals in the U.S. market. The high potential liability costs are directly associated with the legal structure of state and federal tort law in the U.S. U.S. tort law is stricter than in the U.K., as well as most other industrialized nations, helping to explain why U.S. pharmaceutical prices are the highest in the world. 
With the passing of the Patient Protection and Affordable Care Act (PPACA), comparisons of the costs of pharmaceuticals and healthcare across nations will become even more important. The PPACA, as well as the Healthcare and Education Reconciliation Act of 2010, will make drastic changes to the healthcare system of the United States beginning in 2014. Continued concern for escalating healthcare costs will ensure that policymakers will look to the pharmaceutical industry for reform. This study demonstrates that high pharmaceutical prices are, in part, derived from the prospective costs of litigation in the U.S. As such, future studies should focus on the impact of tort law reform as a potential means of reining in the high costs of pharmaceuticals in the United States.
They note that
Few studies consider the impact of tort reform on pharmaceutical prices. The first work on this topic was done by Manning (1994, 1997, 2001) who examined how the liability environment may impact prices and whether reducing liability costs can lead to lower prices for both prescription drugs and vaccinations. Helland, Lakdawalla, Malani, and Seabury (2011) confirms that liability risk increases drug prices but that it also reduces medical side effects. Their analysis shows that liability reduces supply but enhances expected safety and thus increases demand. Overall their results suggest that tort liability improves social welfare.  Beyond the impact on pharmaceutical prices, Garber (2013) analyzes the economic effects of several kinds of legal liability for pharmaceutical companies, focusing on three forms of economic efficiency: increasing the population-level health benefits, reducing the social costs of drug-related injuries, and decreasing the transaction costs of legal disputes. Garber examines product liability as well as three other types of litigation related to safety, efficacy, and marketing and promotion. Safety and product liability risk are described by Offit (2005) as reasons why pharmaceutical manufacturers are abandoning vaccine production. 
More generally, other studies find that tort law reform actually decreases both the number of lawsuits and amount of damages awarded in tort liability cases. Rubin and Shepherd (2007) argue that tort reform could decrease expected liability costs for firms thus resulting in lower prices allowing consumers to purchase more risk-reducing products like pharmaceuticals. According to Rubin and Shepherd, as the prices for goods covered by tort law increase, consumers will be less willing to pay for these goods. Thus reform that increases tort liability may increase risk and keep the prices of pharmaceuticals high. Alternatively, they argue that tort reform that decreases tort liability may make pharmaceuticals and other risk-reducing products more affordable and available to consumers. 
In a similar vein, Browne and Puelz (1999) analyze the effects of tort reform on total claim severity as well as on reducing the likelihood that an injured party will seek litigation. Their study suggests that tort reform has a statistically significant effect in reducing economic and non-economic damage awards. In contrast, in an analysis of the effect of noneconomic damage caps for awards in the U.S., Durrance (2009) finds no evidence that these caps would decrease malpractice litigation frequency. Danzon (1986) finds that tort reforms capping awards reduced the severity of awards by 23%, while two additional studies (Thorpe, 2004; Viscusi & Born, 2004) show that states that enacted damage award caps experienced a significant decrease in loss ratios – the ratio of losses to insurance premiums – and in the costs of insurance premiums. Yoon (2001) also finds that several types of tort reform decreased both the number of lawsuits filed and the damages awarded, by lowering liability costs. This is echoed in a study by Viscusi, Zeckhauser, Born, and Blackmon (1993), which analyzes the impacts of 1980s tort reforms and finds that states that enacted tort reforms between 1985 and 1988 restrained the growth of liability costs. The study also finds that tort reforms reduce the cost of insurance premiums. Finally, Lasker and Womeldorf (2013) analyze the risk pharmaceutical firms face of potentially massive punitive damages sanctions in prescription drug product liability litigation for conduct taken in full compliance with FDA regulations. They describe a recent Supreme Court decision in which the court “laid the groundwork for significant protections for FDA-compliant companies, both by strengthening the hands of individual state legislators to establish the scope of punitive damages liability for pharmaceutical companies located in their states and by highlighting the reasons why the imposition of punitive damages for FDA-compliant conduct improvidently intrudes upon FDA’s plenary enforcement authority” (p. 35). 
In the context of liability risk in other healthcare settings, several studies examine the effect of liability reform on physician behavior and costs (Helland, Lakdawalla, Malani, & Seabury, n.d.; Kessler & McClellan, 1996, 2002a, 2002b; Shapiro, McGarity, Vidargas, & Goodwin, 2012). Helland et al. (n.d.) examine product liability risk and show that while liability reform does not change the safety of drugs produced by global firms, it does reduce the liability faced by doctors, who respond by prescribing more drugs. Kessler and McClellan have written several papers on liability and tort reform. In a 1996 study, they analyze the effects of malpractice liability reforms and find that they directly reduce provider liability, resulting in reductions of 5–9% in medical expenditures without substantial effects on mortality or medical complications. Another Kessler and McClellan study (2002a) considers elderly Medicare beneficiaries and establishes that liability-reducing tort reforms reduce defensive practices in areas with both high and low managed care enrollment. This work is extended in another 2002 study in which they find that direct tort reforms improve medical productivity through reduced malpractice claim rates and a reduction in the time spent defending such claims. Finally, in contrast, Shapiro et al. (2012) argue against malpractice reform, claiming that torts have little impact on medical decisions or medical practice costs. 
While pharmaceutical liability risk is largely overlooked by existing studies, other forms of pharmaceutical regulation, specifically market regulations, feature prominently in recent examinations of international price differentials. Sood et al. (2009) analyze pharmaceutical regulations in nineteen developed countries from 1992 to 2004, focusing on how different regulations impact pharmaceutical revenues. They find both that there has been a trend toward increased regulation and that the majority of regulations reduce pharmaceutical revenues. Moreover, in the context of a largely unregulated market such as the United States, the introduction of new regulations could significantly reduce pharmaceutical revenues. Their analysis demonstrates that if the United States implemented price controls and negotiations similar to those in other developed countries, U.S. pharmaceutical revenues would fall by as much as 20.3%. Their results show that the introduction of price controls and other regulations in largely unregulated markets will significantly reduce costs, but perhaps at the cost of future innovation.
Sood et al. is 'The Effect Of Regulation On Pharmaceutical Revenues: Experience In Nineteen Countries' by Neeraj Sood, Han de Vries, Italo Gutierrez, Darius N. Lakdawalla and Dana P. Goldman in (2009) 28(1) Health Affairs w125-w137.

The authors comment
 We describe pharmaceutical regulations in nineteen developed countries from 1992 to 2004 and analyze how different regulations affect pharmaceutical revenues. First, there has been a trend toward increased regulation. Second, most regulations reduce pharmaceutical revenues significantly. Third, since 1994, most countries adopting new regulations already had some regulation in place. We find that incremental regulation of this kind had a smaller impact on costs. However, introducing new regulations in a largely unregulated market, such as the United States, could greatly reduce pharmaceutical revenues. Finally, we show that the cost-reducing effects of price controls increase the longer they remain in place. 
If the United States implemented price controls and negotiations similar to those in other developed countries, U.S. revenues would fall by as much as 20.3%. 
Rapid growth in pharmaceutical spending is a worldwide phenomenon. According to a recent study, spending on pharmaceuticals in Organization for Economic Cooperation and Development (OECD) countries has increased by an average of 32% in real terms since 1998, reaching more than U.S.$450 billion in 2003. However, there is wide variation in the growth of pharmaceutical spending across countries. For example, during this period, the annual growth rate of pharmaceutical spending in the United States (9.6%) was nearly triple the growth rate of spending (3.5%) in Germany. 
This growth in pharmaceutical spending has increased demands for regulating pharmaceutical markets by imposing limits on prices, profits, or total spending on pharmaceuticals. The likely effect of such regulations on social welfare is a contentious and much-debated public policy issue. On the one hand, regulations curb expenditures and thus potentially improve the welfare of the current generation. On the other hand, such regulations limit incentives for research and development (R&D) and thus might hurt future generations by reducing the pace of innovation. In addition, some argue that pharmaceutical regulations might also have negative consequences for consumers today. For example, price regulation can lead to less competition in markets for generic drugs, delay launch and limit availability of new drugs, and could lead firms to follow costly strategies to “game” regulations. 
Pharmaceutical regulation thus involves a potential trade-off between curbing costs today and having fewer drugs to treat current and future generations. Thus, the first step in examining this trade-off is estimating the effect of regulations on pharmaceutical revenues. However, there is little consensus about whether or not real-world pharmaceutical regulations have any impact on revenues. Some believe that these regulations have little “bite,” especially over time, as pharmaceutical firms learn to work their way around them. Others believe that regulations have big impacts on revenues and consequently limit the pace of innovation. 
The lack of consensus is driven in part by the lack of systematic information about current trends in pharmaceutical regulation and its effect on revenues. One strand of existing studies compares pharmaceutical prices or spending across regulated and unregulated markets. For example, a recent study by the U.S. Department of Commerce reviewed pricing in eleven OECD countries and found that for patented drugs that were best sellers in the United States, the prices in other OECD countries were 18–67% less than U.S. prices, depending on the country. The study concluded that price deregulation in these countries would increase pharmaceutical revenues by 25–38%. In general, these studies are limited by their reliance on cross-sectional variation in revenues or prices and by their resulting vulnerability to heterogeneity across countries in type of regulation and other determinants of prices. 
There are some studies that address the heterogeneity problem by analyzing longitudinal data and comparing pharmaceutical spending before and after policies take effect. For example, Nina Pavcnik estimated a 10–26% decrease in drug prices as a result of a reference pricing policy introduced in Germany after 1989. However, most of these studies only examine the effects of a limited range of regulations in one country or a small group of countries. 
In this paper we attempt to fill this gap in the literature by characterizing the pharmaceutical regulatory environment in nineteen developed countries over a thirteen-year period. We take advantage of the substantial variation in pharmaceutical policies within a country to identify the causal effect of a wide variety of pharmaceutical regulations on revenues. We also examine the extent to which different regulations complement or substitute for each other. For example, to what extent are the effects of a particular policy on drug revenues determined by other regulations that were already in place before the new policy was introduced? Finally, we examine whether the effects of regulations change over time.

Performativity

‘"Prince Harry": performing princeliness’ by Oliver Watts in Peer Reviewed Proceedings of the 4th Annual Conference, Popular Culture Association of Australia and New Zealand (PopCAANZ), Brisbane, Australia, 24-26 June 2013, 249-258 is characterised as combining
visual studies and jurisprudence in a reading of popular images of a princely body. Last year Prince Harry threatened to bring the Royal Family into disrepute after a certain night in Las Vegas. Grainy phone images, and the accounts of Olympic swimmers and party girls all drew a picture of debauchery and orgiastic display. On the other hand Prince Henry (Harry), like certain Shakespearean Henrys, is the perfect warring prince, leading his men in Iraq in an outpouring of princely virtue and Renaissance civic humanism. The mass media have publicised shots of Harry in his white steed, an Apache helicopter. The sovereign body has recently been revisited in jurisprudential scholarship from Agamben to Pierre Legendre. The sovereign body is a site of refusal or failure of symbolic interpellation, like the grand criminal or terrorist. The sovereign body marks the mythical site of law’s authority. One small group of performance artists, the English royal family, live the fantastical structures of the law as their quotidian reality. In a democracy such an icon of transcendental founding of the law in sovereignty is suggested but often repressed by the ‘post-ideological’ trappings of the administered or disciplinary society. What Harry does, like Agamben and Legendre, is to uncover this Renaissance (Romano-Christian) image magic in modernism.
Watts comments that what Harry
highlights perhaps most clearly of any member of the Royal family, is how desire is part of our subjection to and belief in the law. Law and sovereignty are not only constitutions and speeches but parties and beautiful Royal bodies. So when I recently saw an E Entertainment special, The two faces of Prince Harry, I thought: that is exactly right. The law has two faces, its public face, and a private, more transgressive and desirous face. The subtitle was: A look at the two sides of Prince Harry examines his most controversial moments, along with his charity work and his life in the military. What they failed to describe was how these two sides are actually quite natural to the sovereign body. The sovereign body, as Giorgio Agamben has suggested, is an exception to the law; it is paradoxically the place marker of (public) law and always somehow outside of the law. As the public law the English Royal Family is all charity and army. Prince Henry (Harry), like certain Shakespearean Henrys, wanted to lead his men in Iraq in an outpouring of princely virtue and Renaissance civic humanism. On the other side, like the great criminal, they are the subject of Hollywood fascination. As someone who, we imagine, is not obligated to the law in the way we are as citizens, we want to see the royal family act out again like a Renaissance King. So, for example, when Harry was found in naked pictures in Las Vegas last year, this is part of what we expect and want to see of someone not beholden to the law as a citizen but outside the law as sovereign. Even his tricks with an Apache helicopter this month at an air show showed an incredibly free and excitingly transgressive life. Like a Renaissance king riding a white steed at a joust, our Prince Henry throws a multimillion dollar Apache around, again here not in combat but in showy play. Don’t forget either my favourite story of how Prince William picked up Catherine for a date in a military helicopter as he ‘got in flying hours’. Although in all these cases the media brought down harsh criticism and called for sanctions, my argument is that in each the performance of the sovereign is enacted. The sovereign is connected to desire and transgression. 
Following Lacan I treat the sovereign body as an icon of law, a sublime object of ideology, and this functions in the same way whether you are in a democracy, a communist state or a constitutional monarchy. Whether sovereignty is denoted by the monarch or by the sovereignty of the people, the authority of the state is still a sacred, unrepresentable object. Importantly, the process of ‘personifying’ the void still occurs through ‘bodies’ and images, which can be explained empirically or through Lacan. Michael Walzer concisely writes, ‘[Because] the state is invisible...it must be personified before it can be seen, symbolised before it can be loved, imagined before it can be conceived’ (Walzer 1967: 194). In the 2004 work Now, Maurizio Cattelan presents an effigy for democracy; John F. Kennedy (JFK) lies barefoot like a prophet, perfectly recomposed from his trauma, a sublime object of ideology for democracy. This analysis helps to explain why the symbolic body has survived in contemporary politics as a fantasy structure connected to what Lacan sees as a ‘master signifier’, the cipher. In this case Harry is one of these bodies that covers over the gap, an effigy. …
This Lacanian reading is also explained by Žižek who suggests that there is an obscene hidden supplement to the law, where we enjoy the law, and relate to it through jouissance. Is not love, looking from the other side, always about power and sovereignty? (Kristeva 1987: 125). 
So on one level this obscene hidden supplement to the law can express itself as profane love, as it does here, or as transgressive, deviant, almost criminal behaviour. This gap in the power structures interrupts the big Other itself and opens up this space of jouissance. There is something always outside the Law, the element of the ‘real’ exterior to it; to reiterate Žižek's term following Lacan, this is the ‘master signifier’, here the concept of sovereignty. In reality this real exterior to the political may be the founding violence of colonial power, the founding violence of revolution, repression, or capitalist power. The big Other is split at the point of the subject but also within itself. 
Žižek argues that, ‘every power structure is necessarily split, inconsistent, there is a crack in the very foundation of its edifice– and this crack can be used as a lever for the effective subversion of the power structure…’ (1996: 3). The split in the law’s edifice – this lack – creates an interesting connection to the law. The outcome is that we are not merely ruled by the law but we also desire it. For Lacan, as we are castrated, or submit to interpellation, we must give up our desires, the object petit a. There is a fantasy that the big Other has held this lost object of desire almost on trust and will at some point give it back. Our relationship to the big Other or the Real Thing is then mediated by desire. 
Žižek's contribution is once again helpful. He shows a supplement to the public law that is a hidden secret law whose overarching power is one of jouissance and superego injunctions. So that, for example, our truest relationship with the ‘nation’ may be at a sporting event, the Olympics or even a music festival. In Tarrying With The Negative Žižek suggests as a master signifier, the subject may not ‘know’ the nation (as an object of empirical knowledge) but they ‘enjoy (jouis) their nation as themselves’ (1993: 200-238). The Lacanian term Jouissance is usually translated from French as ‘enjoyment’ – as opposed to the English idea of ‘pleasure’ – implying jouissance as a sexualised, transgressive enjoyment at the limit of what subjects can experience or talk about in public. Here I suggest the body of Harry is one of these sites of desire (of nation and of law).

28 April 2014

Audit of Medicare Data Integrity

The Australian National Audit Office has released a 106 page report on Integrity of Medicare Customer Data (ANAO Audit Report No. 27 of 2013–14).

ANAO comments that
Medicare is Australia’s universal healthcare system, which provides people with access to free or subsidised health and hospital care, with options to also choose private health services. Medicare is one of a range of Australian Government health programs administered through the Department of Human Services (Human Services).
In its 2012–13 Annual Report, Human Services reported that as at 30 June 2013, there were 23.4 million people enrolled in Medicare, including 618,533 new enrolments. For an individual to enrol in Medicare, they need to reside in Australia and be either an Australian or New Zealand citizen; a permanent resident visa holder; or an applicant for a permanent resident visa (excluding a parent visa). Australia has Reciprocal Health Care Agreements with 10 countries and visitors from these countries may also be eligible to enrol. Some eligibility types, for example, visitors from Reciprocal Health Care Agreement countries, are only eligible to use Medicare for a limited period of time.
In 2012–13, Human Services processed payments totalling $18.6 billion for over 344 million Medicare services. Expenditure under Medicare is expected to continue to grow, with payments estimated to reach $23.7 billion by 2016–17.
In administering Medicare, Human Services collects personal information from customers at the time of their enrolment and amends this information to reflect changes in their circumstances. The main repository for this data is the Medicare customer record database, the Consumer Directory.
Maintaining the integrity of customer data assists to mitigate key risks associated with Medicare, including access to benefits by ineligible people who are enrolled without an entitlement or who are enrolled for a period beyond their entitlement. There is also a risk that ineligible people may obtain an active Medicare card and use it fraudulently to access services and/or make fraudulent claims. In addition, fraudulent use of Medicare cards as a form of identification is a risk to Medicare and the broader community.
Customer data integrity assists in mitigating these risks and contributes to the effective and efficient administration of Medicare. To maintain data integrity, Human Services has implemented both ‘upstream’ controls at the enrolment stage, and post-enrolment measures to manage updates to its records arising from changed customer circumstances. The department has also implemented measures to protect the privacy and security of customer data.
The report notes that
a range of businesses rely on Medicare cards to help satisfy personal identity requirements, including banks and telecommunications companies. Human Services advised the ANAO that it does not endorse this practice.
The aim of the ANAO audit was 
to examine the effectiveness of Human Services’ management of Medicare customer data and the integrity of this data.
To assist in evaluating the department’s performance in terms of the audit objective, the ANAO developed the following high level criteria:
  • Human Services has adequate controls and procedures for the collection and recording of high quality customer data; 
  • Medicare customer data as recorded on Human Services systems is complete, accurate and reliable; and 
  • customer data recorded on Human Services systems is subject to an effective quality assurance program and meets relevant privacy and security requirements.
The audit scope focused on the integrity of Medicare customer data and included related testing of all Medicare customer records. It did not examine Healthcare Provider Information, the allocation or management of Individual Healthcare Identifiers (IHI) or the operation of Personally Controlled Electronic Health Records.
The audit also considered the extent to which Human Services had implemented the six recommendations from ANAO Performance Audit Report No. 24 of 2004–05 Integrity of Medicare Enrolment Data.
The report provides an "overall conclusion" - 
Medicare has been in place for 30 years and is accessed by almost all Australians and some visa holders and visitors. In 2012–13, Human Services reported over 23 million people enrolled in Medicare, including 618,533 new enrolments. The department’s administration of Medicare is supported by a long-established database, the Consumer Directory, which contains all Medicare customer records. As the repository of a large and evolving data set incorporating, on an ongoing basis, both new enrolments and changes to customer information, the Consumer Directory requires active management to maintain the integrity, security and privacy of customer data; essential prerequisites for the effective administration of Medicare.
Human Services’ framework for the management of Medicare customer data, including procedures and input controls for the entry of new enrolment information and changes to customer information, has not been fully effective in maintaining the integrity of data in the Consumer Directory.
ANAO analysis of the department’s Medicare customer data holdings identified:
  • at least 18,000 possible duplicate enrolments—an ongoing data integrity issue in the Medicare customer database; 
  • active records for customers without an entitlement as well as inactive records and some with unusual activity; and  
  • records which had customer information inconsistently, inaccurately and incompletely recorded.
In addition, the department advised the ANAO of instances where the records of two different customers are combined (‘intertwined records’), giving rise to privacy and clinical safety risks. 
While the number of compromised records held in the database is not significant given the scale of the department’s data holdings, the data integrity issues referred to above indicate that departmental procedures and key elements of the data input control framework require management attention to improve operational efficiency, better protect customer privacy and clinical safety, and reduce the risk of fraudulent activity. The extent of the data integrity issues highlighted by the audit and the length of time these issues have been evident also indicate a need for the department to periodically assess the underlying causes of data integrity issues and implement necessary treatments.
The audit identified that additional attention should be given to: the tightening of data input controls, including the full and accurate completion of mandatory data fields in accordance with system and business rules; the adequacy and consistency of staff training and written guidance; addressing duplicate and ‘intertwined’ records; and undertaking data integrity testing on a targeted risk basis. Further, Human Services’ procedures for managing the security of Medicare customer data do not comply fully with some mandatory requirements of the Australian Government’s Information Security Manual (ISM), significantly reducing the level of assurance of the relevant systems’ ability to withstand security threats from external and internal sources. The department should implement whole-of-government requirements in relation to system security.
ANAO offered some positive comments - 
Positive elements of Human Services’ approach to managing Medicare customer data include: unique customer reference numbers within the Consumer Directory, which have a high degree of integrity; a well-developed privacy framework which contributes to maintaining the confidentiality of sensitive Medicare customer records; and a Quality Framework comprising a daily program of random checks on completed transactions by customer service officers. As discussed however, a fully effective approach to managing the integrity of data holdings requires that attention be given to the development and consistent implementation of the full suite of procedures and controls.
However, "the department has foregone an opportunity to enhance its performance by implementing a number of the earlier ANAO recommendations". ANAO therefore makes five recommendations,to improve training and guidance for customer service officers, address  data integrity issues and their causes, and comply with the mandatory requirements of the ISM.

The key findings are - 
  • Medicare customer data, with the exception of claims, is captured mainly when customers enrol in Medicare and when they amend their details. Customer service officers are mostly responsible for entering and updating customer information in Medicare’s customer record database, the Consumer Directory. The collection of accurate, complete and reliable customer data supports the efficient and effective administration of Medicare. 
  • Customers enrol in Medicare using one of three main forms. There is an opportunity for Human Services to improve the efficiency of the enrolment process by amending the Medicare Enrolment Application form to better specify the documentation that visitors are required to provide in support of their enrolment. 
  • There are a range of channels for customers to amend their data, including over the phone, in person, in writing and through self-service options such as Medicare Online Services and the Medicare Express Plus mobile phone application. Customers would benefit from Human Services listing all of these channels on its webpage, Keeping up to date with Medicare.
  • To assist customer service officers to enrol customers and amend their personal information, Human Services provides training and guidance on its intranet. While the online training covers the essentials of enrolling customers, it does not include complex enrolment examples. Further, there are inconsistent instructions in and between the training and guidance. For these reasons, Human Services should review its staff training and guidance, in respect to enrolling customers and amending their information, for completeness and consistency. 
  • As a further means of collecting and amending customer information, Human Services conducts data matching with other Australian Government departments and state and territory agencies. Customer records are updated with dates of death through an automated monthly process that matches a Fact of Death Data (FODD) file compiled from state and territory registries of births, deaths and marriages. This process was introduced by Human Services in 2005 in response to Recommendation No. 5 of the ANAO’s performance audit ...
  • When customer information is recorded—at the time of enrolment and if subsequently amended—it is subject to system controls, including address matches with the Postal Address File; BSB validation checks; and field controls. These controls are intended to ensure that data is complete, accurate and reliable. The ANAO’s testing of mandatory customer data indicates that some of these controls are not operating effectively. (An illustrative sketch of this kind of field-level validation appears after this list.)
  • To further support the collection and amendment of Medicare customer data, Human Services has a Quality Assurance Framework that includes a daily check of randomly selected completed transactions. In 2012–13, 26.8% of these daily checks of Medicare transactions were of customer enrolments and information amendments. The results of these daily checks are reported to the Human Services Executive and stakeholders on a monthly basis and a sample is also reviewed annually for accuracy. For the enrolments and data amendments checked in 2012–13, Human Services reported a 96.3% accuracy rate, which was slightly below the key performance indicator of 98%. 
  • Unique customer reference numbers are used to identify individual customers and to protect their privacy and clinical safety. Customers enrolled in Medicare are assigned four unique reference numbers in Human Services’ records: a Consumer ID (record identifier); a Personal Identification Number (PIN) (Medicare enrolment identifier); a Medicare Reference Number (card identifier); and an IHI (identifier within the ‘eHealth’ environment).
  • These numbers are used to identify customers and their records and link their information between Human Services’ various Medicare databases. The ANAO tested all 29.3 million Medicare customer records in the Consumer Directory. No duplicate unique reference numbers were identified apart from one Medicare Reference Number shared by two different records. Human Services investigated this duplicate Medicare Reference Number and found that it had been mistakenly issued by a customer service officer to two different family members sharing the same Medicare card in 1996, using the Medicare Enrolment File (the predecessor of the Consumer Directory). The testing indicates that unique customer reference numbers have a high degree of integrity. 
  • Duplicate customer enrolments mean that customers have more than one of each of these unique customer reference numbers. Consequently, customer information is fragmented across more than one record, posing a risk to the accuracy, completeness and reliability of their personal and health information. 
  • Duplicate customer records have been an ongoing data integrity issue in Medicare customer record databases. The ANAO’s 2004–05 performance audit recommended that Human Services address duplicate enrolments prior to migrating Medicare customer data to the Consumer Directory. Human Services advised that it implemented this recommendation but this could not be verified by the ANAO without supporting documentation. 
  • ANAO’s testing of all 29.3 million Medicare customer records used varying matching criteria and identified at least 18,000 possible duplicate records. Testing included matches based on names, name initials, dates of birth, addresses and gender, as well as varying combinations of these criteria, for example, matches on name and address with a different birth day or month. As part of a continuous improvement approach to managing data in the Consumer Directory, Human Services should consider ways to: better identify duplicate enrolments which take into account these types of variances; investigate the underlying causes of duplicate enrolments; and apply appropriate treatments to address duplicate enrolments. (An illustrative sketch of duplicate matching over such variant keys appears after this list.)
  • Data integrity can also be weakened by intertwined records, which are single records shared by more than one customer. Intertwined records are created when customer service officers incorrectly enable two customers to use the same PIN—customers’ unique Medicare enrolment identifiers. Human Services advised that it has recorded 34 intertwined records since 2011–12, when it commenced recording identified instances. These records pose a risk to the privacy and clinical safety of affected customers as their recorded health information does not accurately reflect their individual circumstances. Human Services has established a working group to address intertwined records. The department should also introduce guidelines to ensure risks are mitigated when these types of records are resolved—which could form part of the work of this group.
  • To assist with recording accurate and complete customer data, there are controls in the Consumer Directory including mandatory fields and system rules. Mandatory personal data fields include family name, first name, date of birth and most address fields. Mandatory eligibility fields include eligibility document type, a document reference date or number, and an entitlement end date for relevant entitlement types. ANAO tested these mandatory fields and identified that not all mandatory fields had been completed. Further, ANAO’s testing found Medicare customer data which was inconsistently and inaccurately recorded, and which contravened system and business rules. 
  • One consequence of errors or omissions in customers’ personal data is that existing customer records may not be identified in the customer enrolment search which could result in duplicate enrolments. 
  • Of greatest concern are the consequences of incomplete, inaccurate and unreliable eligibility data, which can include payments to ineligible persons. ANAO identified some active customer records with invalid entitlement types which had recent associated claims. Further, some customer records did not contain sufficient information to support customers’ eligibility for Medicare (for example, there were 34,129 records for permanent resident visa holders which did not have a reference to at least one of the eligibility documents required to support enrolment recorded), or did not reflect an entitlement period consistent with the customer’s entitlement type, including not having an entitlement end date recorded despite the customer having a limited entitlement (for example, there were 2,743 records for visitors which had no entitlement end date recorded). 
  • Human Services should implement controls to ensure that: all mandatory data fields are completed; recorded data is consistent with business and system rules; and customer access to Medicare benefits is consistent with their entitlement. Human Services should also review all customers accessing benefits without a valid entitlement type, to confirm their eligibility.
  • ANAO tested date of death data and found 40,541 records for customers over 85 years old which did not have an associated claim in the 12 months prior to testing. The absence of claiming activity on these records suggests that these customers may be deceased. ANAO also identified a customer aged approximately 143 years who had made a claim in the six months prior to testing. Human Services’ investigation of this record showed that the affected customer’s date of birth had been incorrectly recorded, and the department advised ANAO that it has subsequently corrected the record. Human Services does not currently undertake data integrity testing. The department should undertake some risk-based, targeted data integrity testing to assist with the identification of records that require review. (An illustrative sketch of such an age and claims-activity test appears below.)
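As an aside on the mechanics: the input controls described in the findings above (mandatory fields, format rules such as BSB checks) amount, in implementation terms, to ordinary field-level validation. The following Python sketch is purely illustrative rather than the department's actual implementation; the field names are hypothetical, and the BSB check is structural only (three digits, hyphen, three digits), whereas a real check would also verify the number against the published directory of issued BSBs.

    import re

    # Hypothetical subset of the Consumer Directory's mandatory fields.
    MANDATORY_FIELDS = ("family_name", "first_name", "date_of_birth", "postcode")
    # Structural BSB check only: three digits, a hyphen, three digits.
    BSB_PATTERN = re.compile(r"^\d{3}-\d{3}$")

    def validate_record(record):
        """Return a list of rule violations for one enrolment record."""
        errors = [f"missing mandatory field: {field}"
                  for field in MANDATORY_FIELDS if not record.get(field)]
        bsb = record.get("bsb")
        if bsb and not BSB_PATTERN.match(bsb):
            errors.append(f"malformed BSB: {bsb!r}")
        return errors

    # A record missing two mandatory fields and carrying a malformed BSB.
    print(validate_record({"family_name": "Smith", "date_of_birth": "1970-03-07",
                           "bsb": "06200"}))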
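The duplicate-enrolment testing is, in effect, record linkage over variant composite keys: records that agree on some fields while varying in others (such as a transposed birth day and month) are grouped as candidate duplicates. The report does not describe the ANAO's tooling; this Python sketch, again with hypothetical field names, simply shows how keying on an order-insensitive day/month pair surfaces that particular variance.

    from collections import defaultdict

    def candidate_duplicates(records, key_fn):
        """Group records by a composite key; groups of two or more are candidate duplicates."""
        buckets = defaultdict(list)
        for record in records:
            buckets[key_fn(record)].append(record["consumer_id"])
        return [ids for ids in buckets.values() if len(ids) > 1]

    def name_address_dob_variant_key(record):
        # Same name and address, same birth year; day and month form an unordered
        # pair, so '1970-03-07' and '1970-07-03' produce the same key.
        year, month, day = record["date_of_birth"].split("-")
        return (record["family_name"].lower(), record["first_name"].lower(),
                record["address"].lower(), year, frozenset({month, day}))

    records = [
        {"consumer_id": 1, "family_name": "Smith", "first_name": "Ann",
         "date_of_birth": "1970-03-07", "address": "1 High St"},
        {"consumer_id": 2, "family_name": "Smith", "first_name": "Ann",
         "date_of_birth": "1970-07-03", "address": "1 High St"},
    ]
    print(candidate_duplicates(records, name_address_dob_variant_key))  # [[1, 2]]

In practice each matching pass would use a different key function (exact name and date of birth, name initials and address, and so on), with each pass contributing its own set of candidates for review.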
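Finally, the date-of-death finding suggests the shape a risk-based data integrity test might take: flag records whose recorded age is implausible, or whose claiming pattern is inconsistent with the customer still being alive. This too is a sketch under assumed field names and thresholds, not the ANAO's or the department's method.

    from datetime import date, timedelta

    def integrity_flags(records, as_at, age_threshold=85, max_age=115,
                        inactivity=timedelta(days=365)):
        """Flag records for review: implausible ages, or very old customers with no recent claims."""
        flagged = []
        for record in records:
            age = (as_at - record["date_of_birth"]).days / 365.25
            if age > max_age:
                flagged.append((record["consumer_id"],
                                "implausible age; review recorded date of birth"))
            elif age > age_threshold and (record["last_claim"] is None
                                          or as_at - record["last_claim"] > inactivity):
                flagged.append((record["consumer_id"],
                                "aged over threshold with no recent claim; possibly deceased"))
        return flagged

    records = [
        # Recorded age of roughly 143 years: almost certainly a data-entry error.
        {"consumer_id": 10, "date_of_birth": date(1871, 5, 1), "last_claim": date(2014, 1, 10)},
        # Over 85 with no claims on record at all.
        {"consumer_id": 11, "date_of_birth": date(1925, 2, 2), "last_claim": None},
    ]
    for consumer_id, reason in integrity_flags(records, as_at=date(2014, 4, 28)):
        print(consumer_id, reason)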
In discussing privacy the report states that
Human Services has legislative obligations to protect the privacy of customer data and has a well developed framework to meet its obligations. The central element of its framework is the ‘Operational Privacy Policy’ which sets out relevant privacy requirements for all staff in an accessible form and provides links to appropriate supporting documentation on protecting privacy. There are policies and processes in place as well as guidance to assist staff to understand their privacy responsibilities, including reporting privacy incidents and complaints, and completing privacy awareness training. 
Human Services has adopted better practice in requiring Privacy Impact Assessments for new projects. There is an opportunity, however, for Human Services to more consistently apply this requirement to fully realise the benefits of this approach. 
Human Services is required to comply with the Privacy Commissioner’s Privacy Guidelines for the Medicare Benefits and Pharmaceutical Benefits Programs, including the submission of a Technical Standards Report which outlines its management of Medicare customer databases. In 2009, Human Services implemented Recommendation No. 6 of the 2004–05 ANAO audit—to produce and submit a Technical Standards Report—approximately four years after the ANAO’s report was tabled. The guidelines also require that Human Services lodge variation reports to the Technical Standards Report. The current Technical Standards Report does not reflect current arrangements and there is an opportunity for Human Services to implement a process to review and update this report and to lodge variation reports in a timely manner.
Security? The report notes that
Human Services is subject to the ISM, issued by the Australian Signals Directorate, which outlines standards to assist agencies in applying a risk-based approach to protecting their data and ICT systems. 
Human Services undertakes security initiatives outlined in the ISM but falls short of complying fully with the standards outlined. In particular, Human Services is not compliant with two of the mandatory requirements of the ISM. The department has not completed all of the mandatory security documentation required by the ISM for the systems that record, process and store Medicare customer data. Further, it has not completed the certification and accreditation processes for these systems or most of the infrastructure that supports them, as required by the ISM. Fulfilling these requirements would assist Human Services to identify and mitigate risks to the security and confidentiality of Medicare customer data. 
There is also scope for Human Services to improve its implementation of:
  • risk management activities for ICT systems and services, by ensuring that controls and treatments to mitigate risks are in place; 
  • active security monitoring, by addressing identified vulnerabilities associated with new ICT systems and taking a risk-based approach to monitoring potential threats to systems; and 
  • user access management, by monitoring and reporting on access to the Medicare Data Warehouse, which contains a copy of Medicare customer data. 
Human Services has also identified areas for improvement in its self-assessment against the Australian Government’s Protective Security Policy Framework and is taking action to meet its security awareness and training responsibilities. Further, the department is undergoing an organisation-wide process to develop business continuity plans which address identified critical functions. There would also be benefit in Human Services completing disaster recovery plans in relation to its identified critical functions.