01 March 2014

Environments

'Theorising International Environmental Law' by Stephen Humphreys and Yoriko Otomo in Florian Hoffmann and Anne Orford (eds) The Oxford Handbook of International Legal Theory (Oxford University Press, 2014)
sketches some early lines of inquiry towards a theoretical understanding of international environmental law. 
As the body of international law regulating human interaction with the natural world, one might expect this branch of law to be a cornerstone of the international system. Yet in practice, international environmental law’s reach is strikingly circumscribed. Little of the governance of natural resources, for example, is ‘environmental’. Subsisting at the periphery, environmental law focuses on conserving particular (rare, exotic) species and ‘ecosystems’, and curbing certain kinds of pollution. Its principles are vague, peppering the margins of rulings within other judicial fora: it is quintessential soft law.
In this paper, we suggest that international environmental law’s dilemmas are due to two competing heritages. On one hand, this law enshrines the peculiar pantheism of the European romantic period, positing the ‘natural world’ as sacred, inviolable, redemptive. On the other, its main antecedents are found in colonial era practices, which provided the data for the earliest environmental science and a laboratory for prototypical attempts at conservation and sustainable development. Caught between irreconcilable demands, international environmental law struggles today to avoid utopian irrelevance or nugatory paralysis.
They comment that
International environmental law raises a paradox. As the body of international law that regulates ‘the environment’, one might expect international environmental law to be a cornerstone of the international legal system. What, after all, is more fundamental to the constitution of the world than the human relation to nature? And yet it is striking how little international environmental law does, in fact, regulate. The global food regime, for example, mostly escapes it: agricultural practices and the slaughter of animals for food (or otherwise) are largely beyond its remit. Those phenomena referred to as ‘natural resources’ are generally managed under separate headers or, more often, private arrangements. Instead we find international environmental law at the margins of these concerns, dealing with the ‘conservation’ of certain plants, certain animals, certain ‘ecosystems’. Marginalia complemented by effluvia: as a matter of treaty law, international environmental law also aims to curb certain forms of pollution. In keeping with this general peripherality, the key environmental cases have arisen at the edges of other bodies of law. International environmental law is generally characterised as quintessential ‘soft law’: general principles and aspirational treaties with weak or exhortatory compliance mechanisms, often dependent on other disciplines altogether—science and economics—for direction and legitimacy. At the same time, the problems it is called upon to deal with are immense, frequently catastrophic, and global in nature: climate change, species extinction, increasing desert, disappearing rainforest. 
Despite or because of all this, international environmental law, more than most bodies of law, has many of the trappings of a faith. It derives its effect largely from its affect: international environmental law stages a kind of global moral authority, premised on an aesthetic ideal and an ethical disquiet. For its acolytes, its essence lies in a series of general principles: the do-no-harm principle, the precautionary principle, the polluter pays principle, the principles of equity and ‘common but differentiated responsibilities’, and of course the über-principle: ‘sustainable development’. Interposed into the practices of international commerce and diplomacy, as its advocates demand, these principles promise radical reshaping of ‘business-as-usual’. In vain, it seems: for, again more than most areas of international law, this is law crying in the wilderness. 
The little sustained theoretical attention this body of law has attracted to date has concentrated in the main on its relationship with property law—posited as one of mutual constraint. While we touch on this important question, in this chapter we direct our principal focus elsewhere, situating international environmental law with regard to the constituent conceptual elements that generate its specific energy and propel its contradictions today. We find this energy and tension in two principal historical sources: first, the romantic movement of the late eighteenth/early nineteenth centuries; second the evolution of colonial governance practices through to the mid-twentieth century. 
As to the first of these, it is through romantic philosophy and poetry that contemporary ideas about ‘nature’ became firmly established. This influential movement, as political as it was artistic, implanted lasting notions of the beauty of ‘unspoilt’ wilderness, imbued with a profound moral significance, that have endured to the present and provide the ideational backdrop specific to this body of international law, as we will show. In this venture, we will be aided by what is by now a significant body of work investigating the intellectual origins of modern environmentalism. 
As to the second source, from the outset, administrators in colonial territories found themselves grappling with concrete questions on the management of territorial, natural, and livestock resources. These included: a demand for immediate returns on the significant investments of colonial enterprise; a belated preservationist impulse emerging from the burgeoning aestheticisation of colonial landscapes; and a drive to ensure sustainable long-term access to the resources that increasingly fuelled a global economy. In examining the competing discourses of colonial resource management, we will be drawing on a second literature that has recently flowered: that of environmental history. 
In this chapter, therefore, we will tentatively open up some new theoretical perspectives on a body of law that (perhaps surprisingly for such an epistemologically rich subject) has been subjected to little theoretical speculation. After this introduction, we begin by posing a question of terminology—why ‘international environmental law’? Then, following sections on the romantics and the colonials, we return to the present in our conclusion to show how international environmental law’s origins in the confluence of the romantic and the colonial explains the apparent mismatch between its ambitious stated objectives and its muted regulatory provisions—and how this tension continues to inform its functioning today.
'Re-Examining Acts of God' by Jill Fraley in (2010) 27 Pace Environmental Law Review comments
For more than three centuries, tort law has included the notion of an act of God as something caused naturally, beyond both man’s anticipation and control. Historically, the doctrine applied to extraordinary manifestations of the forces of nature, including floods, earthquakes, blizzards, and hurricanes. Despite the significance of the doctrine, particularly in large-scale disasters, scholars rarely engage the act of God defense critically. However, recently, the doctrine has received more substantial criticism. Denis Binder argued that the doctrine should be repudiated as merely a restatement of existing negligence principles. Joel Eagle criticized the doctrine, suggesting that it should not exclude liability for damages resulting from Hurricane Katrina, but his argument rested more on an issue of fact (whether the hurricane was foreseeable) than on a critique of the doctrine itself. 
With so little attention given to this ancient doctrine, scholars have yet to consider the implications of major theoretical shifts in both law and geography that repudiate a separation of “the human” from “the natural.” Notably, this neglect has continued despite significant grappling with defining “nature” and “natural” in other legal contexts such as patents, federal food and drug regulations, and public lands management or wilderness protection. Currently, the acts of God doctrine continues its traditional uses in tort, contract, and insurance law, while also being enshrined in new environmental statutes as a method of creating a limit on liability when the polluter might not reasonably have anticipated circumstances, albeit a strict construction of the doctrine. For example, the Comprehensive Environmental Response, Compensation, and Liability Act applies the acts of God doctrine, as does the Oil Pollution Act. Yet, it is precisely this context of environmental issues that places the most pressure on the theoretical validity of the defense. With increasing awareness of the human role in climatic and weather changes, dividing human from natural or divine action is far from uncomplicated. 
This article discusses the origins, applications, and utility of the acts of God defense, particularly with an eye towards establishing its theoretical foundations and the reliance on the classical human-nature divide. The article will demonstrate how the crumbling classical divide is already causing shifts in legal doctrines across areas as diverse as food and drug law, wilderness protection, and patents. Then through a deeper engagement with the geographical theory responsible for our renewed vision of the human-nature relationship, the argument establishes a critique of the act of God defense as it has been traditionally formulated. In the final analysis, the article suggests that the act of God defense must be shifted to remove any reliance on a strict divide between human and natural action.

Angels and the WA Medical Board

It is a rare case that pushes me to reread Rilke, specifically Elegy 1 of the Duino Elegies -
Who, if I cried out, would hear me among the Angelic Orders?
and even if one of them pressed me suddenly against his heart:
I would be consumed in that overwhelming existence.
For beauty is nothing but the beginning of terror, which we are still just able to endure,
and we are so awed because it serenely disdains to annihilate us.
Every angel is terrifying.
And so I hold myself back and swallow the call-note of my dark sobbing.
Ah, whom can we ever turn to in our need?
Not angels, not humans, and already the cunning animals are aware
that we are not really at home in our interpreted world.
Perhaps there remains for us some tree on a hillside,
which every day we can take into our vision;
there remains for us yesterday's street and the loyalty of a habit so much at ease
when it stayed with us that it moved in and never left.
Oh and night: there is night, when a wind full of infinite space gnaws at our faces.
Whom would it not remain for - that longed-after, mildly disillusioning presence,
which the solitary heart so painfully meets.
In A Practitioner v The Medical Board of Western Australia [2005] WASC 198 the WA Supreme Court, in considering the Medical Board's suspension of a practitioner, noted that Dr X, in dealing with a vulnerable female patient as that person's general practitioner, was responsible for her health care between January 2003 and July 2003.

The Board considered whether Dr X "may have been guilty of infamous or improper conduct in a professional respect or alternatively guilty of gross carelessness or incompetency" in the course of that therapeutic relationship. It apparently found that
Between in or about March 2003 and in or about May 2003 you formed and thereafter held beliefs of a religious or spiritual nature that included the following:
(i) God had a special purpose for the Patient and that she was the 'chosen one' and that you were 'to be her brother and she was to be your sister'; 
(ii) that God was guiding you in your dealings with the Patient and communicating messages to you through an angel, some of which messages related, amongst other things, to the Patient's health and her relationship with her husband; 
(iii) that you had been annointed [sic] by God to baptise the Patient and that this baptism should take place in secret.
The baptism took place.

The Board held that Dr X knew, or ought to have known, that the beliefs impaired, or were likely to impair, his clinical judgment and that he should have terminated the therapeutic relationship with the Patient as soon as he formed those beliefs.

Unsurprisingly, given those beliefs, he did not terminate the therapeutic relationship and the personal relationship - complete with the spiritual dimension - developed.

It appears to have ended in tears for everyone. Dr X, after a spell as an involuntary psychiatric patient, was reprimanded and fined $10,000 but apparently not permanently excluded from medical practice.

Whom can we ever turn to in our need? Not angels, not practitioners with serious problems of their own.

Connectivity

The latest Household Use of Information Technology, Australia report from the Australian Bureau of Statistics indicates that -
  • the number of households with internet access at home continues to increase, reaching 7.3 million households in 2012–13 (i.e. 83% of all households, up from 79% in 2010–11).
  • 77% of all households had access via a broadband connection.
  • almost every household with children under 15 years of age had access to the internet at home (96%), compared to 78% of households without children under 15 years of age in 2012–13. 
  • the greater the household income the more likely there is internet access at home. In 2012–13, 98% of households with household income of $120,000 or more had internet access, compared to 57% of households with household income of less than $40,000.
  • some 81% of the online households accessed the internet at home every day. A further 16% accessed the internet at home at least weekly.
  • 76% of Australia's 15.4 million internet users (i.e. people aged 15 and over who accessed the internet from any site within the previous 12 months) made a purchase or order over the internet.
  • the most popular type of purchase was travel, accommodation, memberships or tickets of any kind, for both male and female users. 
  • the second most popular online shopping items for females were clothes, cosmetics or jewellery (59%), compared to males' second most popular purchases of CDs, music, DVDs, videos, books or magazines (50%).
  • there is a slightly higher proportion of male than female internet users (84% compared to 83%).
  • over 76% of female internet users shopped online compared to 75% of male internet users.  
  • the two most popular activities performed on the internet at home were paying bills or banking online and social networking. 
  • social networking was more common for younger people: 90% of the 15 to 17 year old cohort and 92% of the 18 to 24 year old cohort performed this activity.

Nepotismo 2.0

From 'Nepotism, patronage and the public trust', a paper [PDF] delivered two days ago by Queensland Integrity Commissioner Dr David Solomon AM -
In September 2013, the Crime and Misconduct Commission (CMC) published a report of an investigation it had conducted into alleged misconduct at the University of Queensland. The misconduct concerned a decision in December 2010 that a school leaver who did not satisfy the university’s entrance requirements should receive an offer to enrol in Medicine which was not warranted according to the admission criteria at the time, there being 343 other applicants who were more qualified. The person who received the offer was the daughter of the then Vice-Chancellor. A formal complaint was made to the Chancellor of the university about nine months later. The following month the CMC began its investigation. The matter shortly afterwards became public knowledge through the media. Both the Vice-Chancellor and his deputy subsequently resigned their positions. 
The CMC’s report contains just one mention of a word which describes the particular form of misconduct that was involved in this case, nepotism. This was in the introduction to the report, where reference was made to how the public became aware of the matter through having ‘read media accounts of irregularities and nepotism at the University’. For the rest of the report, the allegations are referred to as official misconduct and as conflicts of interest. There was no analysis of what ‘nepotism’ means or involves. 
Nepotism is a form of patronage. The exercise of both nepotism and patronage may give rise to a conflict of interest. It is noteworthy that there is a special word for nepotist behaviour in most European languages. It is almost invariably used in a pejorative way. 
The Macquarie Dictionary defines nepotism as ‘patronage bestowed in consideration of family relationship and not of merit’ tracing it from the Latin for ‘descendant’. My old (4th edition) Concise Oxford gives a more commonly used definition and source, ‘Undue favour from holder of patronage to relatives (orig. from Pope to illegitimate sons called nephews)’ and says the word is derived from the Italian for nephew. 
According to an American book about nepotism:
The term nepotismo was coined sometime in the fourteenth or fifteenth century to describe the corrupt practice of appointing papal relatives to office – usually illegitimate sons described as ‘nephews’ – and for a long time this ecclesiastical origin continued to be reflected in dictionaries... The modern definition of nepotism is favouritism based on kinship, but over time the word’s dictionary meaning and its conventional applications have diverged. Most people today define the term very narrowly to mean not just hiring a relative, but hiring one who is grossly incompetent – though technically one would have to agree that hiring a relative is nepotism whether he or she is qualified or not. But nepotism has also proved to be a highly elastic concept, capable of being applied to a much broader range of relationships than simple consanguinity. Many practices that seem normal and acceptable to some look like nepotism to others.
It is necessary to take up two of the specific matters alluded to by the author, Adam Bellow, in that discussion of the definition of nepotism, as well as some of the other issues in his book, which is somewhat aggressively titled, ‘In praise of nepotism’. 
First, whether it is appropriate to apply the term nepotism only if it applies to the beneficiary being unqualified. 
Second, whether it is appropriate to apply the term outside ‘simple consanguinity’. 
The first raises what is one of the most important issues about nepotism, because it challenges the notion that nepotism is inherently improper or unethical. That issue is whether the appointment of a relative or other person, which could be described as nepotistic, ceases to be so because they hold qualifications appropriate to the position to which they are being appointed. In my view it is still appropriate to use the nepotism label, even if the person benefitting from it is at least as qualified as anyone else who might be appointed. However the beneficiary should not be precluded from the appointment because of his or her familial or other relevant relationship, though there may be other reasons why such an appointment should not be made – for example, it may be difficult to remove such a person from their position if they prove to be unsuccessful, or the requirements of the position may be changed in a way that makes it desirable that they be replaced. What is essential, however, is that an independent observer, fully informed of the facts, can conclude that the person deserved to be appointed for reasons other than the nepotistic relationship. This would normally mean that the position has been open to all, and the merits of those interested in taking it have been properly and independently assessed. 
This approach implies that the exercise of nepotism is not invariably or inevitably improper or unethical. It is necessary to determine the facts about its exercise in any particular case before reaching an objective conclusion about whether its exercise is wrong. The fact that there is a word for it does not mean that nepotism must always be condemned. 
The second issue raised by Bellow’s definition is how narrowly the term should be defined. Should it be linked, as he put it, to consanguinity? Clearly not. Consanguinity [related by birth] would not include one’s spouse or partner and some other close relatives. But what about close friends and associates (including political associates), mates, business partners and the like? Strictly speaking, such associations would be covered by the term ‘cronyism’ [crony: an intimate friend or companion] but I suspect popular usage now includes cronyism within nepotism. And what about the close relations (children in particular) of colleagues? In what follows I propose to use the word nepotism to describe the appointment by a person in authority of all such people, though later I will extend the discussion to cover the broader issue of patronage, which, in relation to this matter, is defined as ‘the control of appointments to the public service or of other political favours’.

Privacy Impact Assessments

The UK Information Commissioner (ICO) has released an updated version of its Privacy Impact Assessments (PIA) Code of Practice.

Coincidentally the Office of the Australian Information Commissioner (OAIC) has announced that a draft Guide to undertaking privacy impact assessments will soon be released for public consultation.

The UK Code is characterised as intended
to help organisations respect people’s privacy when changing the way they handle people’s information. The code explains the privacy issues that organisations should consider when planning projects that use personal information, including the need to consult with stakeholders, identify privacy risks and address these risks in the final project plan. 
The UK Commissioner states that
With a research study carried out by the ICO last year showing that only 40% of people believe that organisations handle their information in a fair and proper way, privacy impact assessments can be an important means of retaining consumer trust by showing that organisations are working to respect people’s privacy. 
ICO Head of Policy, Steve Wood, said: “The development of projects involving the processing of large amounts of personal information is no longer the preserve of the public sector and large businesses. Today even an app developer can be developing a product in their bedroom that involves using thousands of people’s information. 
“This is why we have published our updated privacy impact assessments code of practice to help organisations of all sizes ensure that the privacy risks associated with a project are identified and addressed at an early stage during a project’s development. 
“The updated code is designed to ensure that privacy impact assessments fit into the project development process, allowing organisations to follow a privacy by design approach to developing new ways of using people’s information. Successfully adopting this approach can only be good for consumers and for business and can enable organisations to demonstrate their compliance with the Data Protection Act.”
The revised UK Code reflects consultation last year that "highlighted the need for the updated code to be flexible enough to be applicable to organisations of all sizes and for privacy impact assessments to fit into the existing project development process".

The ICO has released a 267-page research project report on Privacy impact assessment and risk management [PDF].

Recommendations in that report were -
Recommendations for the ICO 
1. that the ICO develop measures aimed at promoting a closer fit between PIA and risk- and project-management methodologies through direct contact with leading industry, trade, and other organisations in both the public and private sectors. 
2. that, in revising its PIA Handbook, the ICO make the third edition much shorter, more streamlined, and more tailored to different organisational needs. It should be principles-based and focused on the PIA process. The ICO should undertake a consultation on a draft of a revised guidance document. 
3. that the ICO’s guidance on PIA emphasise the benefits to business and public-sector organisations in terms of public trust and confidence, and in terms of the improvement of internal privacy risk-management procedures and organisational structures. 
4. that ICO guidance help organisations to understand and evaluate privacy risk, whether or not they can integrate PIA into their risk-management routines and methodologies. 
5. that the ICO develop a set of benchmarks that organisations could use to test how well they are following the ICO PIA guidance and/or how well they integrate PIA with their project- and risk-management practices, especially where there are “touch points”. 
6. that the ICO strongly urge PIA-performing organisations to report on how their PIAs have been implemented in subsequent practice, and to review the situation periodically. 
7.  that the ICO promote to organisations the benefits of establishing repositories or registries of PIAs. We recommend that the ICO compile a registry of publicly available PIA reports, or at least a bibliography of such reports. 
8. that the ICO take advantage of the current work within ISO to develop a PIA standard, and the BSI’s technical panel’s contribution to it. 
9. that the ICO audit the PIA process and PIA reports in at least a sample of government departments and agencies. 
10. We recommend that privacy risk be taken into explicit account in the Combined Code for companies listed on the London Stock Exchange. 
11. that privacy risk be inserted into government guidance such as the Treasury Orange Book and the Green Book on appraisal and evaluation in central government. 
12. that, at senior ministerial and official levels in government departments, and among special advisers, the ICO engage in dialogue to underline the importance of privacy and PIA while developing new policy and regulations and in the communication plans accompanying new policies. 
13. that the ICO encourage the Treasury to adopt a rule that PIAs must accompany any budgetary submissions for new policies, programmes and projects. 
14. that the ICO encourage ENISA to support the ICO initiatives with regard to inserting provisions relating to PIA in risk management standards, as well as within ENISA's own approach to risk assessment. 
15. that the ICO accelerate the development of privacy awareness through direct outreach to organisations responsible for the training and certification of project managers and risk managers. 
Recommendations for companies and other organisations 
16. that, to help embed PIA and to integrate it better with project and risk management practices, a requirement to conduct a PIA be included in business cases, at the inception of projects, and in procurement procedures. Organisations should require project managers to answer a simple PIA questionnaire at the beginning of a project or initiative to determine the specific kind of PIA that should be undertaken. 
17. that senior management take privacy impacts into consideration as part of all decisions involving the collection, use and/or sharing of personal data. 
18. that companies and other organisations review annually their PIA documents and processes, and should consider the revision or updating of their processes as a normal part of corporate performance management. 
19. that companies and other organisations embed privacy awareness and develop a privacy culture, and should provide training to staff in order to develop such a culture. High priority should be given to developing ways of incorporating an enhanced PIA/risk assessment approach into training materials where information-processing activities pose risks to privacy and other values. 
20. that companies and other organisations include contact details on their PIA cover sheets identifying those who prepared the PIA and how they can be contacted. The PIA should promote the provision of a contact person as “best practice”. Such practice needs to be made mandatory certainly within any government organisation and any organisation doing business with the government. Such practice should also be promoted within standards organisations. 
21. that public-sector organisations insert strong requirements in their procurement processes so that those seeking contracts to supply new information systems with potential risk to privacy demonstrate their use of an integrative approach to PIA, risk management and project management. 
22. that companies and other organisations include privacy in their governance framework and processes in order to define clear responsibilities and a reporting structure for privacy risks. 
23. that companies and other organisations include a PIA task, similar to a work-package or a sub-work-package, in their project plan structures in order to embed PIA better within project management practices, and that project managers monitor and implement this new privacy task, based on the identified privacy requirements, as is done in the case of other project tasks. 
24. that, to foster internal buy-in for any newly adopted processes and procedures, companies and other organisations undertake extensive internal consultation with all parts of the organisation involved in risk management and project management, when thinking of integrating PIA into existing organisational processes. 
25. that companies and other organisations include identified privacy risks in their corporate risk register, and that they update their register when new or specific types of privacy risk are identified by implementation teams. 
26. that companies and other organisations develop practical and easy guidance on the techniques for assessing privacy risks and actions to mitigate them.
The recommendations reflect concerns such as -
While there are commonalities between the project and risk management processes and the PIA process, most of the methodologies do not mention privacy risks or even risks to the individual. Nevertheless, to the extent that privacy risks pose risks to the organisation, the organisation should take account of such risks in their project and risk management processes, including listing such risks in the organisation’s risk register. It should not be too difficult to convince organisations of the importance of taking privacy risks into account and regarding privacy risk as another type of risk (just like environmental risks or currency risks or competitive risks). Especially in industries that deal directly with the general public – for example, banking, entertainment, and retail – privacy breaches, not confined to “data breaches”, can be a significant threat to the company’s reputation. Based on examples of privacy breaches, it should not be too difficult to convince organisations about the need to guard against reputational risk 
Many of the risk management methodologies include provisions for taking into account information security (as distinct from privacy risks), and specifically with regard to confidentiality, integrity and availability of the information. Few go beyond this with the notable exception of ISO 29100, which specifically addresses privacy principles, IT Grundschutz and the CNIL methodology on privacy risk management. One can note that the privacy part of IT Grundschutz was written by the German DPA, and that the CNIL is the French DPA. Helpfully, both the privacy part of IT Grundschutz and the guides published by the CNIL include catalogues of privacy threat descriptions supplemented by the corresponding privacy controls.
Some of the project and risk management methodologies call for consulting or engaging stakeholders, especially internally, but some (e.g., ISO 31000, ISO 27005) externally as well. PIA does the same. Some of the project and risk management methodologies (e.g., ISO 31000, ISO 27005) call for reviewing or understanding or taking into account the internal and external contexts. This is true of PIA too.
Some of the project and risk management methodologies emphasise the importance of senior management support and commitment, which is also important for successful PIAs. Some of the risk management methodologies call for embedding risk awareness throughout the organisation. Some call for training staff and raising their awareness, which is also essential to PIAs. 
Almost all of the methodologies are silent on the issue of publishing the project or risk management report, although some do attach importance to documenting the process. Similarly, most are silent on the issue of independent, third-party review or audit to the project or risk management reports. There is, however, a requirement for companies listed on the London Stock Exchange to include information in their annual reports about the risks facing the company and how the company is addressing those risks.
In Australia the new Guide will replace the OAIC’s existing Privacy Impact Assessment Guide [here], reflecting changes to the Privacy Act 1988 and "taking into account key features of privacy impact assessment guides from other jurisdictions and research on good practice in undertaking privacy impact assessments".

IP Theory, Shame and Tomato Juice

'Theories of intellectual property: Is it worth the effort?' by Neil Wilkof in (2014) Journal of Intellectual Property Law & Practice asks "Should one care about theories of intellectual property?". At least one practitioner has advised 'no, we leave that to you'.

Wilkof goes on to state
A decade ago, Professor William Fisher, of Harvard University, made a challenging attempt to answer “yes”, in a book chapter entitled “Theories of Intellectual Property”. While never quite distinguishing between a philosophy, an approach, and a theory of intellectual property, Fisher identifies four analytical constructs, which we will call “theories”, namely—(i) utilitarian for maximizing net social value; (ii) Lockean (one has the right to the fruits of his intellectual labour); (iii) protection of personality in works; and (iv) fostering a just and attractive culture.
In his editorial for JIPLP Wilkof argues that
The utilitarian theory applies economic constructs to propose how intellectual property rights can achieve the Benthamite ideal of “the greatest good for the greatest number.” Cloaked in the more current notion of “wealth-maximization”, the focus is how to balance the social costs and benefits associated with giving legal effect to IP laws and rules. While the theory has produced various elegant propositions on how to conceive of this balance, it has proved to be devilishly difficult to create robust ways to measure inputs, outputs and process. 
The second, i.e. labour theory, reflects Locke on property rights -
Locke asserted that a person enjoys a natural right in the fruits of his labour in transforming raw materials (viewed as including, eg facts and concepts) that are “held in common” into a finished product of enhanced value, and the state has a duty to enforce the natural right that derives from the labour.
Wilkof's criticism is that
it does not self-explain why labour added to a resource “held in common” should entitle one to a property right in such resource; if “yes”, what is meant by “intellectual labour” and “held in common”; and how far should one's rights go in the fruits of his labour (as Robert Nozick observed, “if I pour my can of tomato juice into the ocean, do I own the ocean?”). As a result, seeking to apply the Lockean approach of property must inevitably end in potentially unmanageable analytical uncertainty. 
Gewirthian flourishing? The personality theory is characterised by Fisher
as justifying property rights “when and only when they would promote human flourishing by protecting or fostering fundamental human needs or interests.”
Wilkof asks how can we identify the needs or interests to be promoted, noting Fisher's identification of four needs or interests appropriate for intellectual property - benevolence, identity, self-realisation and privacy.

He comments that there is, however, no agreement on how to apply those interests, e.g. -
is protection of trade secrets “necessary” to protect interests of privacy? Some say “yes” (a right of privacy extends to the freedom to disclose to a limited circle of friends without the fear that it will be disclosed to the entire world), while others say “no” (since most trade secrets are owned by corporations, that do not have the “personal features” that privacy is intended to protect). 
He notes that the final theory (voiced by “an eclectic cluster of political and legal theorists”) has less of an established foundation -
Called “social planning theory”, it differs from utilitarian theory in that it seeks to go beyond the notion of “social welfare” to a much broader vision of society serviced by intellectual property. An example given is Neil Netanel's view of copyright as intending to serve “a robust, participatory, and pluralist civil society,” where “unions, churches, political and social movements, civic and neighborhood associations, schools of thought, and educational institutions” abound.
Wilkof concludes that it
does not, and cannot, achieve agreement on what are the goals that such “social planning” seeks to achieve. As such, it too is inadequate.
'Fear and Loathing: Shame, Shaming, and Intellectual Property' by Elizabeth Rosenblatt investigates
the relationship between intellectual property protection, shame, and shaming. Although some scholars have examined shame and shaming as they relate to criminal law and behavior, none have considered how shame and shaming govern intellectual property and copying behavior. This paper identifies and focuses on two significant intersections: First, shame shapes the behavior of would-be copiers, who abide by anti-copying norms even in the absence of formal intellectual property protection. Second, public shaming shapes the behavior of intellectual property owners, who refrain from aggressively enforcing their rights to avoid being identified as bullies or trolls. 
These two shame/shaming effects have opposing results — on one hand, restriction on copying, and on the other, the freedom to copy — but they unite to establish and enforce intellectual property “negative spaces” where innovation and creation thrive without significant formal intellectual property protection or enforcement. In areas beyond the reach of formal intellectual property protection, shame helps define the boundaries of informal or norms-based intellectual property practices. In areas governed by formal intellectual property protection, shaming helps define the boundaries of rights holders’ enforcement forbearance. The result of these effects is an overlay of shame- and shaming-driven behavior that sits atop, and informally adjusts, the boundaries of formal intellectual property protection. This, in turn, requires us to adjust our thinking about the ideal boundaries of formal protection. Shame and shaming are not suitable substitutes for formal law, nor are they miracle cures for law’s failings, but they may act as guideposts for determining where to draw the lines of formal legal protection.

Surveillance Jurisprudence

'Complementing the Surveillance Law Principles of the ECtHR with its Environmental Law Principles: An Integrated Technology Approach to a Human Rights Framework for Surveillance' by Antonella Galetta and Paul De Hert in (2014) 10(1) Utrecht Law Review 55-75 comments
Looking at the case law of the European Court of Human Rights on surveillance, one notices a well maturing set of principles, namely: legality, legitimacy, proportionality (the standard check) and, if the Court is ‘on it’, also necessity and subsidiarity (the closer scrutiny check). In this contribution, we go through the surveillance case law of the Court. We find that: 1) not all surveillance is considered relevant to the right to privacy (the threshold problem); 2) when surveillance is subjected to a privacy right analysis, concerns about rights contained in other provisions, such as Articles 6, 13 and 14 of the Convention, are added; 3) not all surveillance that interferes with privacy is considered as problematic, hence differences in the Court’s view with regard to the legality requirement and the intensity of the scrutiny arise. 
This contribution goes beyond a straightforward analysis of the Court’s surveillance case law. In our second part we turn to Murphy and Ó Cuinn’s research on a ‘new technology’ approach in the Court’s case law and on principles that apply to a wide range of technology-related issues (from surveillance, to biomedicine, to polluting technologies). We focus in particular on the case law of the Court on environmental matters. We find that greater coherence could be reached in the Court’s case law on surveillance by integrating the environmental law principles of participation, precaution, access to information and access to justice in surveillance matters. Nevertheless, such a move would be very desirable and give new momentum to the Court’s case law on surveillance-related interferences.

Trust and health big data

In the UK the Government is reported as planning to bar the National Health Service (NHS) and Health & Social Care Information Centre (HSCIC) from "selling personal medical records for insurance and commercial purposes in a new drive to protect patient privacy".

NHS records will supposedly only be released where there is a "clear health benefit", rather than for "purely commercial" use by insurers and other companies.

As testimony to a Parliamentary Committee last week indicated, there is disagreement about notions of "clear health benefit" and about the effectiveness of measures to ensure privacy through pseudonymisation of data released directly by the NHS or under the care.data initiative.
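
For readers unfamiliar with the mechanics, 'pseudonymisation' in this context typically means replacing a direct identifier such as the NHS number with a keyed token before release, leaving the clinical content intact. A minimal sketch of that step follows; it is illustrative only, not the HSCIC's actual method, and the key, field names and values are invented -

import hmac
import hashlib

# Hypothetical key held by the data controller only; never released with the data.
PSEUDONYMISATION_KEY = b"controller-held-secret"

def pseudonymise(nhs_number: str) -> str:
    # Replace a direct identifier with a stable keyed token (HMAC-SHA256).
    return hmac.new(PSEUDONYMISATION_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "postcode_sector": "LS1 4", "diagnosis_code": "E11"}
released = dict(record, nhs_number=pseudonymise(record["nhs_number"]))
print(released)

The token is stable across datasets (so records can still be linked) and cannot be reversed without the key, but the remaining quasi-identifiers - postcode sector, dates, rare diagnoses - may still permit re-identification, which is precisely the point in dispute.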

Importantly, the Government has indicated that it will "bolster criminal sanctions for organisations which breach data protection laws by disclosing people's personal data". That change will apparently centre on what is described as a "one strike and you're out" approach, with misbehaving organisations being permanently banned from accessing the data.

I have noted the growing controversy over reports that in 2012 the Institute and Faculty of Actuaries obtained hospital data about 47 million patients. The supposed price was a mere £2,220.

The Government will now formalise a ban on the HSCIC releasing GP and hospital information "for commercial purposes".  Presumably that restriction will not apply to pharmaceutical and medical device research entities, most of which operate on a commercial basis.

The legislation will also introduce new measures to “deter” misuse of NHS data, with companies that “recklessly disclose” personal information being liable to prosecution for a criminal offence that carries a maximum penalty of £500,000. That penalty as such is inadequate, although for large organisations the risk of no longer being able to access the data may be a more effective sanction.

In what is apparently regarded as a major step forward (rather than what should have been a given), NHS data will reportedly only be released to organisations that have abided by data protection rules.  Organisations wishing to use care.data information will have to prove that they are doing so on an “ethical basis” that will benefit patients and not breach privacy.

Growing criticism about inept handling of the opt-out arrangements for patients, highlighted by MedConfidential, is reportedly reflected in a plan that people will be able to opt out by phone and receive a legally-backed assurance that “no identifiable information” about them will enter the database.

Importantly, the Health and Social Care Information Centre and NHS will have to publish information about which organisations have received NHS data and the justification for the decision.

The Health Minister, noting that new measures are necessary to restore “public confidence”, is reported as stating
People want rights over how their health and care data, especially data that identify them, are being used. Safeguards will be put in place over and above what NHS England does to build public confidence. 
We know there is an enormous prize in our grasp, but we know we will win that prize only if we are very careful and thoughtful about how to proceed, taking the public with us.
Given the Caldicott Report we might wonder why the Government and senior officials hadn't had that recognition prior to now.

The UK Information Commissioner, who has traditionally adopted a more positive stance than the OAIC, has commented -
Last summer I issued a warning to organisations across the UK that the public are now waking up to the value of their personal information and the importance of treating it properly. Any organisation or business that failed to handle people’s information properly in 2013, I said, would quickly find themselves losing trust and losing customers. 
In the months that followed it was two big data developments in the public sector that provoked widespread public unease. First there was Edward Snowden with his revelations about the activities of the security services, in the United States and in Europe. Then the GP data extraction scheme, care.data, was put on hold because a significant number of patients were asserting their right to stay in control of their information. 
We should see these developments as a line in the sand. Members of the public know this country has a Data Protection Act, they understand it requires organisations and companies to look after their information properly. Citizens and consumers expect organisations to be open and upfront with how their information will be used. In a digital age, this knowledge is invaluable and shows why the Act is so important. We must all get it right, or suffer the consequence

28 February 2014

Lost guns and terrorism records

A perspective on data breaches - especially through the loss of laptops, portable hard drives, CDs and USB sticks - is provided in a nice little article by the Milwaukee-Wisconsin Journal Sentinel regarding carelessness by officers of the US Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF).

The Journal Sentinel on 25 February reported
ATF agents have lost track of dozens of government-issued guns, after stashing them under the front seats in their cars, in glove compartments or simply leaving them on top of their vehicles and driving away, according to internal reports from the past five years .... Agents left their guns behind in bathroom stalls, at a hospital, outside a movie theater and on a plane, according to the records, obtained Tuesday by the news organization under the federal Freedom of Information Act.  … It is clear that agency rules were not followed in many of the incidents, which show at least 49 guns were lost or stolen nationwide between 2009 and 2013.
Oops.

Examples provided by the Journal Sentinel  -
  • In December 2009 two 6-year-olds spotted an agent's loaded ATF Smith & Wesson .357 on a stormwater drain in Bettendorf, Iowa. The agent lived nearby and later said he couldn't find his gun for days but didn't bother reporting it until the discovery featured in the local newspaper. He told police that he had recently misplaced the gun, thought it would eventually turn up, and had no idea how it got into the drain or when it was lost.
  • In 2011 an agent in Los Angeles went out drinking with other agents and friends. The next morning he woke up and realized his ATF-issued Glock was gone. It was not found.
  • In September 2012 in Milwaukee an undercover agent had three of his guns (including an ATF machine gun) stolen from his government truck parked at a coffee shop. 
  • On 15 June 2012 an agent at Swedish Hospital in Bellevue, Washington, left his loaded ATF-issued Sig Sauer .40 caliber pistol in a bathroom stall. The gun was later found by someone in the bathroom. 
  • On 11 June 2012 an agent in Plainfield, Illinois was dropping off his children at a soccer game when he put his government-issued Smith & Wesson revolver on his car's roof, forgot about it and drove away. The gun was found on an off-ramp and turned in to police. 
  • On 20 July 2009 an agent in Fargo, North Dakota put his ATF gun on top of his car and went to water his lawn. He forgot it was there; his daughter took the car to a friend's house. The agent scoured the area but could not find the gun. 
  • In November 2008 an agent left his ATF gun in a fanny pack on a Southwest Airlines plane in Houston. He came back later to retrieve it. 
  • In 2008 another agent in Houston didn't realize he had lost his gun for a week. It was not found.  
More disturbingly, the Journal Sentinel indicates that
the ATF used mentally disabled people to promote operations and then arrested them on drug and gun charges; opened storefronts close to schools and churches, increasing arrest numbers and penalties; and attracted juveniles with free video games and alcohol. 
Agents paid inflated prices for guns, which led to people buying weapons at stores and selling them to undercover agents hours later, in some cases for nearly three times what they paid. 
In addition, agents allowed armed felons to leave their fake stores and openly bought stolen goods, spurring burglaries in surrounding neighbourhoods. 
Given my interest in benchmarking I note that
The ATF has weapons stolen or loses them more frequently than other federal law enforcement agencies, according to a 2008 report from the Office of the Inspector General with the U.S. Department of Justice. In a five-year span from 2002-'07, for example, 76 ATF weapons were reported stolen, lost or missing, according to the report. That's nearly double the rate of the FBI and the U.S. Drug Enforcement Administration, when considering rates per 1,000 agents. The inspector general's office found the majority of losses and thefts were a result of carelessness or failure to follow ATF policy. 
The 2008 report cited "agents leaving weapons in public bathrooms, atop their vehicles, on an airplane and one in a shopping cart".
"There's no doubt that people leave things around, but when you have an agency whose task it is is to focus on firearms, it would seem to me like an extra measure of care would be called for," said David Harris, a professor at the University of Pittsburgh School of Law and an expert on law enforcement tactics and regulation. 
"If they are doing this at a rate that is higher than others (in law enforcement), it is something to worry about." The sloppy attention to securing weapons could stem in part from poor communication about the importance of the issue by leadership or lack of adequate consequences for those who violate the rules, Harris said. 
"You have to make it real. … People have to see there are real consequences," he said. "If you don't do that, you might as well not have the rules. It's just window dressing." 
In the UK the Information Commissioner has imposed a civil monetary penalty of £185,000 on the Department of Justice Northern Ireland for what was described as a very serious data breach, i.e. selling a locked filing cabinet that contained personal information relating to victims of a terrorist incident.

The cabinet contained information about the injuries suffered, family details and the amount of compensation offered, as well as confidential ministerial advice. It was sold at auction when the Compensation Agency Northern Ireland, under the Department's control, moved offices in February 2012. The buyer forced the lock and found papers dating from the 1970s through to 2005.

The Commissioner comments "While failing to check the contents of a filing cabinet before selling it may seem careless, the nature of the information typically held by this organisation made the error all the more concerning".

Fordism

What is the likely impact on contemporary liberal democratic states of the demise of 47% of employment? That's one question that might occur to readers of 'The Future of Employment: How Susceptible Are Jobs To Computerisation?' [PDF] by Carl Benedikt Frey and Michael A. Osborne, released in September last year.

The authors indicate that
We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation’s probability of computerisation, wages and educational attainment. According to our estimates, about 47 percent of total US employment is at risk. We further provide evidence that wages and educational attainment exhibit a strong negative relationship with an occupation’s probability of computerisation. 
They comment that
In this paper, we address the question: how susceptible are jobs to computerisation? Doing so, we build on the existing literature in two ways. 
First, drawing upon recent advances in Machine Learning (ML) and Mobile Robotics (MR), we develop a novel methodology to categorise occupations according to their susceptibility to computerisation. 
Second, we implement this methodology to estimate the probability of computerisation for 702 detailed occupations, and examine expected impacts of future computerisation on US labour market outcomes.
Our paper is motivated by John Maynard Keynes’s frequently cited prediction of widespread technological unemployment “due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour” (Keynes, 1933, p. 3). Indeed, over the past decades, computers have substituted for a number of jobs, including the functions of bookkeepers, cashiers and telephone operators (Bresnahan, 1999; MGI, 2013). More recently, the poor performance of labour markets across advanced economies has intensified the debate about technological unemployment among economists. While there is ongoing disagreement about the driving forces behind the persistently high unemployment rates, a number of scholars have pointed at computer-controlled equipment as a possible explanation for recent jobless growth (see, for example, Brynjolfsson and McAfee, 2011).
The impact of computerisation on labour market outcomes is well-established in the literature, documenting the decline of employment in routine intensive occupations – i.e. occupations mainly consisting of tasks following well-defined procedures that can easily be performed by sophisticated algorithms. For example, studies by Charles, et al. (2013) and Jaimovich and Siu (2012) emphasise that the ongoing decline in manufacturing employment and the disappearance of other routine jobs is causing the current low rates of employment. In addition to the computerisation of routine manufacturing tasks, Autor and Dorn (2013) document a structural shift in the labour market, with workers reallocating their labour supply from middle-income manufacturing to low-income service occupations. Arguably, this is because the manual tasks of service occupations are less susceptible to computerisation, as they require a higher degree of flexibility and physical adaptability (Autor, et al., 2003; Goos and Manning, 2007; Autor and Dorn, 2013).
At the same time, with falling prices of computing, problem-solving skills are becoming relatively productive, explaining the substantial employment growth in occupations involving cognitive tasks where skilled labour has a comparative advantage, as well as the persistent increase in returns to education (Katz and Murphy, 1992; Acemoglu, 2002; Autor and Dorn, 2013). The title “Lousy and Lovely Jobs”, of recent work by Goos and Manning (2007), thus captures the essence of the current trend towards labour market polarization, with growing employment in high-income cognitive jobs and low-income manual occupations, accompanied by a hollowing-out of middle-income routine jobs.
According to Brynjolfsson and McAfee (2011), the pace of technological innovation is still increasing, with more sophisticated software technologies disrupting labour markets by making workers redundant. What is striking about the examples in their book is that computerisation is no longer confined to routine manufacturing tasks. The autonomous driverless cars, developed by Google, provide one example of how manual tasks in transport and logistics may soon be automated. In the section “In Domain After Domain, Computers Race Ahead”, they emphasise how fast moving these developments have been. Less than ten years ago, in the chapter “Why People Still Matter”, Levy and Murnane (2004) pointed at the difficulties of replicating human perception, asserting that driving in traffic is insusceptible to automation: “But executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behaviour [. . . ]”. Six years later, in October 2010, Google announced that it had modified several Toyota Priuses to be fully autonomous (Brynjolfsson and McAfee, 2011).
To our knowledge, no study has yet quantified what recent technological progress is likely to mean for the future of employment. The present study intends to bridge this gap in the literature. Although there are indeed existing useful frameworks for examining the impact of computers on the occupational employment composition, they seem inadequate in explaining the impact of technological trends going beyond the computerisation of routine tasks. Seminal work by Autor, et al. (2003), for example, distinguishes between cognitive and manual tasks on the one hand, and routine and non-routine tasks on the other. While the computer substitution for both cognitive and manual routine tasks is evident, non-routine tasks involve everything from legal writing, truck driving and medical diagnoses, to persuading and selling. In the present study, we will argue that legal writing and truck driving will soon be automated, while persuading, for instance, will not. Drawing upon recent developments in Engineering Sciences, and in particular advances in the fields of ML, including Data Mining, Machine Vision, Computational Statistics and other sub-fields of Artificial Intelligence, as well as MR, we derive additional dimensions required to understand the susceptibility of jobs to computerisation. Needless to say, a number of factors are driving decisions to automate and we cannot capture these in full. Rather we aim, from a technological capabilities point of view, to determine which problems engineers need to solve for specific occupations to be automated. By highlighting these problems, their difficulty and to which occupations they relate, we categorise jobs according to their susceptibility to computerisation. The characteristics of these problems were matched to different occupational characteristics, using O∗NET data, allowing us to examine the future direction of technological change in terms of its impact on the occupational composition of the labour market, but also the number of jobs at risk should these technologies materialise.
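The matching exercise described here lends itself to a simple illustration. The sketch below is not the authors' model: the bottleneck dimensions, occupation records, scores and threshold are all hypothetical, and the snippet serves only to show how O∗NET-style task scores could, in principle, be combined into a crude susceptibility index.

# Illustrative sketch only: a toy susceptibility index built from
# hypothetical "engineering bottleneck" scores (0-100, higher = harder
# to automate). None of these numbers or names come from the paper or
# from O*NET itself.
occupations = {
    "Telemarketer":     {"perception_manipulation": 20, "creative_intelligence": 15, "social_intelligence": 30},
    "Truck driver":     {"perception_manipulation": 55, "creative_intelligence": 20, "social_intelligence": 25},
    "Registered nurse": {"perception_manipulation": 70, "creative_intelligence": 55, "social_intelligence": 85},
}

def susceptibility(features):
    # Average the bottleneck scores and invert: the fewer bottlenecks an
    # occupation faces, the higher its susceptibility to computerisation.
    bottleneck = sum(features.values()) / (100 * len(features))
    return 1 - bottleneck

for name, feats in occupations.items():
    score = susceptibility(feats)
    label = "higher risk" if score > 0.6 else "lower risk"
    print(f"{name:17s} susceptibility = {score:.2f} ({label})")

On this toy scale an occupation dominated by routine, low-bottleneck tasks scores close to 1, while one relying heavily on perception, creativity or social interaction scores close to 0; the actual study's categorisation rests on far richer occupational data.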
The present study relates to two literatures. First, our analysis builds on the labour economics literature on the task content of employment (Autor, et al., 2003; Goos and Manning, 2007; Autor and Dorn, 2013). Based on defined premises about what computers do, this literature examines the historical impact of computerisation on the occupational composition of the labour market. However, the scope of what computers do has recently expanded, and will inevitably continue to do so (Brynjolfsson and McAfee, 2011; MGI, 2013). Drawing upon recent progress in ML, we expand the premises about the tasks computers are and will be suited to accomplish. Doing so, we build on the task content literature in a forward-looking manner. Furthermore, whereas this literature has largely focused on task measures from the Dictionary of Occupational Titles (DOT), last revised in 1991, we rely on the 2010 version of the DOT successor O∗NET – an online service developed for the US Department of Labor. Accordingly, O∗NET has the advantage of providing more recent information on occupational work activities.
Second, our study relates to the literature examining the offshoring of information-based tasks to foreign worksites (Jensen and Kletzer, 2005; Blinder, 2009; Jensen and Kletzer, 2010; Oldenski, 2012; Blinder and Krueger, 2013). This literature consists of different methodologies to rank and categorise occupations according to their susceptibility to offshoring. For example, using O∗NET data on the nature of work done in different occupations, Blinder (2009) estimates that 22 to 29 percent of US jobs are or will be offshorable in the next decade or two. These estimates are based on two defining characteristics of jobs that cannot be offshored: (a) the job must be performed at a specific work location; and (b) the job requires face-to-face personal communication. Naturally, the characteristics of occupations that can be offshored are different from the characteristics of occupations that can be automated. For example, the work of cashiers, which has largely been substituted by self-service technology, must be performed at a specific work location and requires face-to-face contact. The extent of computerisation is therefore likely to go beyond that of offshoring. Hence, while the implementation of our methodology is similar to that of Blinder (2009), we rely on different occupational characteristics.
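The contrast drawn here between offshorability and automatability can be made concrete with a toy rule. The snippet below merely encodes the two defining characteristics attributed to Blinder (2009) in the passage above; the field names and the example record are invented for illustration and are not drawn from O∗NET or from either study.

def offshorable(job):
    # Blinder's rule as summarised above: a job that must be performed
    # at a specific work location, or that requires face-to-face
    # personal communication, cannot be offshored.
    return not (job["location_bound"] or job["face_to_face"])

# A cashier is tied to a work location and to face-to-face contact, so
# the job is not offshorable, yet (as the text notes) it has largely
# been displaced by self-service technology.
cashier = {"location_bound": True, "face_to_face": True}
print("Cashier offshorable:", offshorable(cashier))  # False

The point of the contrast is simply that the set of jobs exposed to computerisation is not bounded by the set exposed to offshoring, which is why different occupational characteristics are needed.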
The remainder of this paper is structured as follows. In Section II, we review the literature on the historical relationship between technological progress and employment. Section III describes recent and expected future technological developments. In Section IV, we describe our methodology, and in Section V, we examine the expected impact of these technological developments on labour market outcomes. Finally, in Section VI, we derive some conclusions.

DNA Crime Databases

'Why So Contrived? The Fourth Amendment and DNA Databases After Maryland v. King' by David H. Kaye in (2014) 104 Journal of Criminal Law and Criminology comments that
In Maryland v. King, 133 S. Ct. 1958 (2013), the Supreme Court narrowly upheld the constitutionality of routine collection and storage of DNA samples and profiles from arrestees. Oddly, the majority confined its analysis to using DNA for certain pretrial decisions rather than directly endorsing DNA’s more obvious value as a tool for generating investigative leads in unsolved crimes. This article suggests that this contrived analysis may have resulted from both existing Fourth Amendment case law and the desire to avoid intimating that a more egalitarian and extensive DNA database system also would be constitutional. It criticizes the opinions in King for failing to clarify the conditions that prompt balancing tests as opposed to per se rules for ascertaining the required reasonableness of searches and seizures. It urges the adoption of a more coherent doctrinal framework for scrutinizing not just DNA profiling, but all forms of biometric data collection and analysis. Finally, it considers what King implies for more aggressive DNA database laws.
Kaye comments that
In Maryland v. King, the Supreme Court rejected a constitutional challenge to the practice of routinely collecting DNA from arrested individuals. A bare majority of five Justices effusively endorsed the acquisition of DNA samples for “identification” before conviction (DNA‐BC). In response, four dissenting justices called the opinion a precedent‐shattering and “scary” foundation for “the construction of . . . a genetic panopticon” that could gaze into the DNA of airline travelers, motorists, and public school students. 
The case began when police in Maryland arrested Alonzo King for menacing people with a shotgun. Following the arrest, they took his picture, recorded his fingerprints - and swabbed the inside of his cheeks. When checked against Maryland’s DNA database, his DNA profile led to the discovery that six years earlier, King had held a gun to the head of a 53‐year‐old woman and raped her. Before the DNA match, the police had no reason to suspect King of that crime. Lacking probable cause — or even reasonable suspicion — they did not rely on a judicial order to swab his cheek. They relied on a state law that mandated collection of DNA from all people charged with a crime of violence or burglary.  
King appealed the resulting rape conviction. He argued that the DNA collection deprived him of the right, guaranteed by the Fourth Amendment to the Constitution, to be free from unreasonable searches or seizures. Maryland’s highest court agreed. It held that except in the rarest of circumstances where a suspect’s true identity could not be established by conventional methods — the court gave the example of a face transplant — forcing an arrestee to submit to DNA sampling was unconstitutional.
The state petitioned the Supreme Court for review. Over and over, the Court had denied requests from convicted offenders and, more recently, from arrestees to address the legality of state and federal laws mandating routine collection of their DNA. But this case was different. Never before had a state supreme court or a federal appellate court deemed a DNA database law unconstitutional. Even before the Court met to consider whether it would review the case, Chief Justice Roberts stayed the Maryland judgment. His chambers opinion stated that “there is a fair prospect that this Court will reverse the decision below” and found that “the decision below subjects Maryland to ongoing irreparable harm.”
The Chief Justice’s prediction proved correct. But the margin of victory was as narrow as it could be, and the majority opinion leaves important questions unresolved. Moreover, the dissenting justices issued a biting opinion importuning the Court “some day” to repudiate its “incursion upon the Fourth Amendment.” Indeed, when Justice Kennedy announced the opinion of the Court, Justice Scalia invoked the rare practice of reading a dissent aloud. For eleven minutes, he mocked the majority’s defense of Maryland’s law as a means of identifying arrestees. “[I]f the Court's identification theory is not wrong, there is no such thing as error,” he railed. As he and the three Justices who joined his dissenting opinion (Justices Ginsburg, Sotomayor, and Kagan) saw it, the majority’s reasoning “taxes the credulity of the credulous.”
The popular press and bloggers seized on the dissent’s portrayal of the Court’s opinion. One trenchant journalist asked “Why did Kennedy write his opinion in a way that makes him sound like the last guy on Earth to discover Law & Order?” Why indeed? Justice Kennedy knew perfectly well that DNA‐BC was being used to solve crimes. That was why the Chief Justice had granted the stay. It was why Justice Alito had flagged the case during the oral argument as “perhaps the most important criminal procedure case that [the Supreme] Court has heard in decades.” It was why the first words from Maryland’s Deputy Attorney General at oral argument were “Mr. Chief Justice, and may it please the Court: Since 2009, when Maryland began to collect DNA samples from arrestees charged with violent crimes and burglary, there had been 225 matches, 75 prosecutions and 42 convictions, including that of Respondent King.” 
This Article explains why Justice Kennedy’s opinion seems so contrived, describes more convincing (and doctrinally adequate) ways to analyze the constitutionality of DNA‐BC, and probes the boundaries of the Court’s decision. I suggest that the King Court treated the primary value of DNA‐BC—as a crime‐solving tool—as merely incidental to other functions because of the Court’s ambivalent jurisprudence on the propriety of balancing state and individual interests to ascertain the reasonableness of searches under the Fourth Amendment. The majority was unwilling or unable to speak clearly about the category of cases in which balancing is permissible. It was unwilling or unable to consider creating an express exception to accommodate the traditional rule that searches that do not fall within defined exceptions necessarily require probable cause and a warrant. As a result, the Court opened itself to the dissent’s charge of blinking reality and of being less than “minimally competent [in] English.” 
But the dissenting opinion, I maintain, fares no better. For all its barbs and jibes, its turns of phrase, and its literary allusions, the opinion points to no fundamental individual interest or social value that could justify so bilious a condemnation of DNA‐BC. It presents an oversimplified description of Fourth Amendment jurisprudence and applies a one‐size‐fits‐all approach to all types of searches of the person even though these searches vary greatly in their impact on legitimate individual interests and in their value to law enforcement.
In short, the opinions represent a lost opportunity to clarify the law on balancing tests for Fourth Amendment rights and to scrutinize biometric data collection and analysis practices within a more coherent doctrinal framework. To explain and justify this assessment, Part I describes the reasoning of the Justices. It shows that the majority opinion expands an ill‐defined set of cases in which a direct balancing of interests determines the reasonableness of certain searches or seizures. It also maintains that the dissent simply drew an arbitrary line that was compelled neither by precedent nor by the interests that should determine the scope of Fourth Amendment protection. 
Part II looks more deeply into how the Court reasoned about reasonableness. It describes the existing version of the rule that searches without a warrant and probable cause are unreasonable without an applicable exception — what I call the PSUWE (per‐se‐unreasonable‐with‐exceptions) framework. It contrasts this framework to an earlier “warrant preference” rule, regime, model, or view that “the modern Court has increasingly abandoned.” After explicating the difference between those two methods for analyzing warrantless searches, it argues that King does not obliterate the PSUWE framework. In addition, it suggests that balancing within this framework to create either an exception under the special‐needs rubric or a categorical exception for certain types of biometric data would have been preferable to the majority’s direct resort to balancing.
Part III shows that the opinions in King, having been forged in the crucible of incremental, case‐by‐case adjudication, do not come to grips with obvious variations on Maryland’s version of DNA‐BC, let alone the most basic questions that society must confront about DNA databases for law enforcement. In this Part, I try to elucidate these questions and to enucleate the implications of the opinions for some variations in DNA‐BC statutes in the light of likely advances in DNA science and technology. This analysis requires us to attend to the nature of the DNA sequences that are and might be used in law enforcement databases, the analogy between anatomical biometrics and these DNA sequences, and the adequacy of statutory protections against the misuse of genetic information. I conclude with a brief discussion of the way in which legislatures should think about building DNA databases for law enforcement now that the Court has issued a construction permit.
'DNA and Law Enforcement in the European Union: Tools and Human Rights Protection' by Helena Soleto Muñoz and Anna Fiodorova in (2014) 10(1) Utrecht Law Review 149-162 comments
Since its first successful use in criminal investigations in the 1980s, DNA has become a widely used and valuable tool to identify offenders and to acquit innocent persons. For a more beneficial use of the DNA-related data possessed, the Council of the European Union adopted Council Decisions 2008/615 and 2008/616 establishing a mechanism for a direct automated search in national EU Member States’ DNA databases. The article reveals the complications associated with the regulation on the use of DNA for criminal investigations as it is regulated by both EU and national legislation which results in a great deal of variations. It also analyses possible violations of and limitations to human rights when collecting DNA samples, as well as their analysis, use and storage. 
Since its first successful use in criminal investigations in the 1980s, DNA has become an important tool to identify the guilty and to absolve the innocent. It provided the impetus to set up national DNA databases and legal provisions to use DNA-related data as forensic evidence. 
Today, given the prospect of an increasingly interconnected society, in which technical resources and technological advances allow a confusingly fast flow of people and instantaneous information on a worldwide level, the evolution of the phenomenon of crime is not lagging behind, but is keeping in step with the process of transnationalisation and is taking advantage thereof. 
In view of this situation, the investigation and prosecution of crime have to overcome borders by means of various judicial and police cooperation instruments, in particular those on mutual assistance and information exchange. 
One of the most important boosts provided over the last few years in combating transnational crime has been the development of the Convention between the Kingdom of Belgium, the Federal Republic of Germany, the Kingdom of Spain, the French Republic, the Grand Duchy of Luxembourg, the Kingdom of the Netherlands and the Republic of Austria on the stepping up of cross-border cooperation, particularly in combating terrorism, cross-border crime and illegal migration (the Prüm Treaty) and its partial transformation into an EU-wide cooperation tool under Council Decision 2008/615/JHA and Council Decision 2008/616/JHA.
Within this cooperation framework, the exchange of DNA profiles and related personal data acquires great relevance, given the usefulness and universality of the information it offers for investigations and prosecutions. However, there is also a grey area concerning this issue, as it poses a series of risks for fundamental rights and not all aspects of DNA collection, analysis and exchange are unified on the EU level while national provisions vary a great deal. 
The article aims to:
– present the importance of DNA-related data for criminal investigations;
– study the EU information exchange dimension in this area and to discover the possible reason for its success;
– analyse which DNA collection, analysis, use and storage aspects are regulated by EU and national law and how they vary; and
– examine possible violations of or limitations to fundamental rights while using DNA for criminal investigation purposes.

Tagging

In Victoria the Minister for Disability Services and Reform has announced an extension of the state's electronic tagging regime (i.e. more people to wear 'electronic handcuffs').

Under the heading 'Improved community safety following review of disability treatment centre' the Minister indicates that
Community safety will be enhanced following a review of the Disability Forensic Assessment and Treatment Services (DFATS) facility in Melbourne.
The review followed an incident last month where a client on escorted leave eluded his escort and allegedly assaulted a woman.
Minister for Disability Services and Reform Mary Wooldridge said that the government has accepted the report’s key recommendation – that electronic monitoring devices are used for clients who are on leave from the facility.
“I have ordered that effective immediately, all new clients accepted into the facility agree that electronic monitoring may be used as part of their treatment and monitoring,” Ms Wooldridge said.
“Current laws do not allow for existing clients to wear electronic monitoring devices without their consent. However, we will amend the legislation to remove this barrier.”
The review also made a number of recommendations that the government is already acting on, including minor capital improvements and staff training. ...
“Electronic monitoring of DFATS clients is another step to build a safer Victoria.”
DFATS is a treatment centre that provides court-ordered support for people with intellectual disability who have committed a crime but have good prospects for rehabilitation. Up to 14 clients are housed at the secure facility. These clients may, as part of their treatment, be permitted to have leave from the facility.

Negligence

In State of Queensland v Kelly [2014] QCA 27 the Queensland Court of Appeal (QCA) has upheld the decision of the Queensland Supreme Court that the State of Queensland breached its duty of care in failing to provide adequate warning of dangers inherent in a visit to Lake Wabby on Fraser Island.

The QCA found that the risk of serious injuries suffered by an Irish tourist caused by running down a dune into a lake was not an obvious risk to a reasonable person in the position of the tourist.

Kelly had exposure to waterways and the ocean when at home in Ireland. He had not been exposed to sand dunes until he came to Australia, where he visited Fraser Island in September 2007 after seeing advertisements at a hostel whose owners were licensed as commercial operators to bring tourists onto Fraser Island. A condition of that licence was that the operators ensured people coming to the island watched a video prepared by the Queensland National Parks and Wildlife Service (the Qld state government) that highlighted rules and some dangers (presumably including dingoes), with very brief warnings about entering shallow lakes and streams. There were no warnings about Lake Wabby, steep sand dunes or the dangers of running down steep dunes.

Kelly and his friends watched that video. Once on the island the visitors apparently encountered a warning sign, potentially misread, at the beginning of the 2.5 km hike to the lake, which "presented as an attractive enticement to hot walkers". It was "human nature to enjoy running down a dune and jump into cooling water on a hot day". Numerous people were in the water, diving, running or sunbathing. After running up and down the dunes and diving into the water several times Kelly became a partial tetraplegic through injuries suffered when he apparently lost his footing on the dune and as a result dived into shallow water. He sued Queensland for damages for negligence. The state conceded in Kelly v State of Queensland [2013] QSC 106 that it had the care, control and management of the public land on Fraser Island. It conceded that it owed a duty of care to lawful entrants on that land, including Kelly, but there were disputes about the content of the duty.

In the first instance the trial judge McMeekin J held that Kelly’s injuries were caused by the state's breach of its duty of care in failing to provide adequate warning of the dangers inherent in the visit to Lake Wabby in the video. The damages recoverable by Kelly for that negligence were reduced by 15 per cent because the tourist was guilty of contributory negligence in failing to closely read and obey signs that warned against running down dunes at the lake.

On appeal the Court stated, in effect, that the state should have tried harder -
There was a long history of serious injury to visitors at Lake Wabby. The incidents were summarised in exhibit 11, although it was not known whether that was a complete record and it was not certain that the cause of each injury had been recorded accurately. 
In the 17 year period before the respondent was injured 18 incidents were recorded, many of which involved serious spinal injuries. Thirteen involved the back, neck or spine and others included references to injuries to feet, leg and shoulders. Many of the incidents refer to diving into the water or the shallow water of the lake. Some of the entries refer to the incident occurring when the injured person ran or walked down the Lake Wabby sand dune. 
On 20 April 1993 the appellant’s “Manager (Great Sandy)” wrote a memorandum to the “Manager (Park Management)” expressing concern about a report of “yet another accident at Lake Wabby”, referring to advice by staff that at least two people had broken their necks at the lake in the previous two years and had become quadriplegic and that earlier in 1993 another person had seriously injured his spinal cord, and recorded that: “It appears that visitors injured generally read the warning signs at the lake but ignore the dangers. This area is clearly one of the most dangerous areas on park estate in Queensland by virtue of the number and seriousness of accidents there.” The manager thought that the lake “requires urgent evaluation and formulation of an action plan” and that the factors requiring consideration included adequacy of the existing signage and of other visitor information and of the desirability or need for fencing or other physical barriers to prevent visitors running down a dune.
Henry J on appeal concluded
It warrants emphasis that while the determination of whether the risk was obvious fell to be determined objectively, it did not fall to be determined in the abstract. It is obvious that running down a sand dune into a lake involves a risk of some injury. However sandy slopes and water present as apparently forgiving surfaces on or in which to fall. Whether running down a sand dune into a lake involves an obvious risk of serious injury will very much depend upon the individual circumstances of the case. It is a question of degree, turning upon an appreciation of the whole of the evidence, including evidence about warning signs. The learned trial judge’s approach to the determination of the question, as explained by Fraser JA, was careful and well-reasoned. It involved no error and his Honour’s conclusion was reasonably open on the whole of the evidence.
Fraser JA stated
The trial judge observed that the principal criticism which could be made of the respondent was that he failed to study the signs closely:
“… It was incumbent on him to read the signs. They plainly alerted him to a danger. They expressly warned against running down dunes. As I have said the problem is that the signs did not bring home the real risk in running down the dune – a reasonable reading of them could lead a visitor to think it was the act of running and diving that represented the risk of injury not running and jumping. Acting reasonably he may not have understood why the signs contained that message, but the message not to do so was nonetheless clear. The authorities advised against running down the dunes. 
Had he read the signs and obeyed their message the accident would have been averted. 
The difficulty is that all the other information that he received suggested there was no significant danger. Many others were doing precisely the same activity, without mishap. He had done so himself without mishap on numerous occasions as had his friends. 
… In my view even though the signs did not adequately convey why visitors should not run down the dunes visitors enjoying a novel experience ought in their own interests exercise the caution that the authorities advise.” … 
In holding that the risk which materialised was not an “obvious risk” the trial judge took into account his findings that: the risk of serious injury was not apparent to a significant percentage of the visitors to Lake Wabby; the respondent was relatively young, had no experience with sand dunes, and had not previously been to Lake Wabby; there was no apparent danger in jumping into the water, which was sufficiently deep for that activity; before the respondent was injured he saw numerous other people at the lake engaged in a similar activity without incident; the respondent himself engaged in that activity on about 10 occasions without incident; it was not suggested that the respondent had observed the sand to give way so as to cause him to lose his footing on any previous occasion or that it had such an effect on any other person; the video which the plaintiff had seen included warnings about dangers presented by the topography of and activities on the Island but it did not indicate any problem with running down the sand dunes and jumping into any lake or Lake Wabby in particular; and there was no warning or description in the video, signs, or any published brochures, of the number of serious injuries which had occurred over the years at Lake Wabby or any of those injuries being associated with running down the sand dunes. 
The trial judge also took into account his finding that “[t]here was no sign or other warning in the plaintiff’s immediate vicinity that running down the sand dune involved a risk of serious injury such as a broken neck”.

PC Access Regime Report

The Productivity Commission has released its final report [PDF] on the National Access Regime, considering the objectives for access regulation, the effectiveness of the Regime in meeting its objectives and the potential reform options.

Presumably the report will be borne in mind by policymakers involved in the 'root & branch' review of competition policy announced last year and noted here.

The report notes that
The Regime is a regulatory framework that provides an avenue for firms to access certain ‘essential’ infrastructure services owned and operated by others, when commercial negotiations on access are unsuccessful. The Regime is intended to promote the economically efficient operation of, use of and investment in the infrastructure by which services are provided, thereby promoting effective competition in upstream and downstream markets. 
The Regime was introduced in 1995 as a key part of the National Competition Policy (NCP), which brought in broad-ranging reforms to enhance productivity and growth in the Australian economy. The regulatory provisions of the Regime are contained in Part IIIA of the CCA and clause 6 of the CPA, which was signed by the Commonwealth and States and Territories in April 1995 to underpin the NCP. 
In 2001, the Commission conducted an inquiry into the operation of the Regime. The Commission supported continuation of the Regime and made a number of recommendations to improve its operation — including in relation to clarifying the Regime’s objectives and scope, encouraging efficient infrastructure investment, strengthening incentives for commercial negotiation, and improving the certainty and transparency of regulatory processes. The Australian Government supported most of the Commission’s proposed measures and a number of operational reforms to Part IIIA have since been introduced. 
The Council of Australian Governments (COAG) agreed on a new National Reform Agenda in February 2006. As part of that Agenda, COAG signed the CIRA to provide for a simpler and more consistent national system of economic regulation for nationally-significant infrastructure, including for ports, railways and other key infrastructure. The CIRA included some specific reforms to improve the operation of the Regime, building on the Commission’s 2001 recommendations. Clause 8.1 of the CIRA provides that once it has operated for five years, the Parties will review its operation and terms.
In reporting on the Regime and the CIRA the Commission was to -
1. examine the rationale, role and objectives of the Regime, and Australia’s overall framework of access regulation, and comment on: (a) the full range of economic costs and benefits of infrastructure regulation, including contributions to economic growth and productivity; (b) the operation of the Regime relative to other access regimes, including its consistency with those regimes and the effectiveness of the certification process; and (c) the roles of the National Competition Council, the Australian Competition and Consumer Commission and the Australian Competition Tribunal in the administration of the Regime, and the Minister as decision maker, and the relationship between the institutions; 
2. assess the performance of the Regime in meeting its rationale and objectives, including: (a) the effectiveness of enhancements made to the Regime and the regulatory reforms agreed under COAG’s National Reform Agenda; and (b) how the Regime has been variously applied by decision makers, but not so as to constitute a review or reconsideration of particular decisions; 
3. report on whether the implementation of the Regime adequately ensures that its economic efficiency objectives are met, including: (a) whether the criteria for declaration strike an appropriate balance between promoting efficient investment in infrastructure and ensuring its efficient operation and use; (b) whether the criteria for declaration are sufficiently well drafted in the legislation to ensure that its objectives will be met; 
4. provide advice on ways to improve processes and decisions for facilitating third party access to essential infrastructure, including in relation to: (a) promoting best-practice regulatory principles, such as those pertaining to regulatory certainty, transparency, accountability and effectiveness; (b) measures to improve flexibility and reduce complexity, costs and time for all parties; (c) options to ensure that, as far as possible, efficient investments in infrastructure are achieved; and (d) ‘greenfield’ infrastructure projects and private sector infrastructure provision; 
5. review the effectiveness of the reforms outlined in the CIRA, and the actions and reforms undertaken by governments in giving effect to the CIRA; and 
6. comment on other relevant policy measures, including any non-legislative approaches, which would help ensure effective and responsive delivery of infrastructure services over both the short and long term.
The Commission's key recommendations are -
Maintenance of the regime
The National Access Regime should be retained. 
  • Access regulation can address an enduring lack of effective competition, due to natural monopoly, in markets for infrastructure services where access is required for third parties to compete effectively in dependent markets. This is the only economic problem access regulation should address.
  • The scope of the Regime should be confined to ensure its use is limited to the exceptional cases where the benefits arising from increased competition in dependent markets are likely to outweigh the costs of regulated third party access to infrastructure services. Proposed changes to the declaration criteria seek to achieve this outcome.
  • Robust institutional arrangements, including an avenue to limited merits review, should ensure that access regulation is judiciously applied.
Market intervention 
When considering whether to regulate access to infrastructure services in the future, governments should seek to demonstrate that there is a lack of effective competition in the market for the service that is best addressed by access regulation. An assessment of the net benefits should determine whether access regulation is most appropriately applied at the facility or industry level.
  • Facility based arrangements impose net costs if they are incorrectly applied, and provide incentives for lobbying. Such arrangements should be limited to where there is a clear net benefit from tailoring access regimes for a specific facility.
  • Further industry specific regimes should apply only where there is sufficient similarity between infrastructure services within the industry and where the industry has features that justify different regulatory treatment from that offered by the generic National Access Regime.
  • Caution should be exercised before mandatory undertakings are implemented in the future. Where mandatory undertakings are used, they should be subject to upfront and ongoing assessment to ensure they are used to target the economic problem. Safeguards for the provider and other existing users of the service should be consistent with those for declared services.
ACCC Determinations 
There is an economic rationale for the Australian Competition and Consumer Commission's (ACCC's) power to direct infrastructure extensions in an access determination but, due to the practical difficulties of directing extensions, it is likely that the benefits of using the power would rarely outweigh the costs.
  • Part IIIA should be amended to confirm that the ACCC's legislative power to direct extensions also encompasses capacity expansions. This will ensure that the safeguards set out in the legislation will also apply to directed expansions. 
  • Following a public consultation process, the ACCC should develop guidelines outlining how it would exercise its legislative power to direct extensions such that it would be expected to generate net benefits to the community. The preparation of the guidelines should include an analysis of the workability and adequacy of the provision to direct extensions and its safeguards. 
  • The safeguards should not be construed such that a service provider could be required to pay the upfront costs of the directed extension or capacity expansion.