24 March 2012

Patent Imperium

'The US, China and the G-77 in the Era of Responsive Patentability' (Queen Mary School of Law Legal Studies Research Paper No. 105/2012) by Peter Drahos notes that
China is building capacity to grant, use and enforce patents. Its interests in the patent system are different to many G77 countries. The paper considers three questions. Can China make the patent system work for it? If so, how will the US respond? What should the weaker members of the G77 do in light of the fact that the leaders of the G77 are no longer interested in dealing with the structural disadvantages that the patent system perpetuates?
Drahos is always worth reading, irrespective of whether you agree with him. He comments that -
As an institution the patent system has spread its wings and flown from the European countries of its origin to the four quarters of the globe. Contrary to what genuine neo-liberals such as von Hayek might have hoped, a highly regulatory and interventionist system has flourished, even through the decades of deregulatory zeal inspired by Reagan and Thatcher. In fact the 1980s was one of the best decades for the patent system since it was the decade in which the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) was forged, an agreement made binding on all members of the World Trade Organization.

The patent system still has its critics, but they are largely confined to university corridors of the powerless. As this paper will show, the developing states such as Brazil and India that led the charge against the system have in practice converted to using the system. Today’s developing states still mount objections to the patent system, but these seem to operate more at the level of rhetorical negotiating strategies than at the level of regulatory praxis. The bold experimentalism of the nineteenth century with the patent system has gone, to be replaced by an acceptance of the system and narcoleptic discussions over matters such as the right level of inventive step, the merits of post-grant opposition and the scope of an experimental use defence. Whatever the fate of Pax Americana this century, the patent system appears to be a globally entrenched system of rules by which all those who would be kings in the global economy will have to play.

If the claim that the lead developing countries have largely abandoned any significant opposition to the patent system is right then it does raise other questions. How will developing country powers such as China and India make the patent system work for them? The patent system, it needs to be remembered, is the visible boot of monopoly in the competitive market. It is generally inadvisable for governments to sit back and let these patent boots march through their economies without some restrictions. The patent system like the tax system demands constant monitoring and adjustment. Another question is what might happen if China, in particular, is too successful in making the patent system work? What if China is able to obtain patent ownership of many more lucrative technologies than it currently has and is able to extract much more in the way of patent rents from the global economy?

A third issue faces developing countries that have very little prospect of being able to make gains from the system. The Group of 77 (G-77) countries, which was formed in 1964 and now has a membership of 131, has many of the poorest countries of the world as members. Fidel Castro in a speech at a G-77 Summit in Havana in 2000 claimed that developed countries “control 97% of the patents the world over and receive over 90% of the international licenses' rights”. He went on to observe that the “new medications, the best seeds and, in general, the best technologies have become commodities whose prices only the rich countries can afford.” Castro finished with a strong appeal for unity and cooperation amongst the G-77. A politician of his longevity and survival skills must have known that something like an inverse unity rule applies in political life - the stronger the appeal for unity by a politician the less actual unity is present. And so it is in the case of the G-77 and patents. Brazil and India have for all practical purposes abandoned their historical leadership of the G-77 in fighting the neo-regulatory expansion of monopoly privileges in the world trading order. Of course Brazil and India’s eloquent diplomats do not announce this in Geneva meetings. They keep up the rhetoric about the injustice of the system, the need for technology transfer, their exclusion from scientific knowledge etc., etc., etc. But these countries have, as the next section will show, joined the ranks of the patent faithful. What then should the weaker members of the G-77 do when it comes to fighting the kinds of price and access problems mentioned by Castro? In a world of isolated bilateral trade dealing the answer is far from obvious.

Summing up, the paper argues that the leaders of the G-77 have abandoned their attempts at deep reform of the patent system to meet the development objectives of all G-77 countries. The next section briefly describes the rise and rise of responsive patentability and the high water mark of opposition to it by developing countries and their subsequent surrender. This in turn raises three questions to each of which the paper sketches an answer. Can China, which probably has the best chance, make the system work for it? If China can do so, what is likely to be the response of the US? What should the weaker members of the G-77 do in light of the fact that the leaders of the G-77 are no longer interested in dealing with the structural disadvantages the system perpetuates?
After an incisive analysis of potential developments in Sino-US relations he concludes that -
The structural role of the patent system in making knowledge a scarce resource so that the rich can get richer will from time to time come in for some angry denunciation and some economists will from time to time repeat the not-so-startling conclusion that global monopoly privileges are globally inefficient. However, the patent system, like the poor, will stay with us. Political elites everywhere have become convinced that this winner-take-all system best serves their techno-nationalist and wealth-maximizing ambitions. That is true of elites in China as much as it is of elites in the US. China’s market socialism may yet evolve into a close variant of US knowledge monopoly capitalism. This ending to China’s development story would not surprise readers of Animal Farm.

In the face of this kind of consensus about the virtues of the patent system there is not much that poor countries can do. An individual country can perhaps pray that an alms-giving network coalesces around its problem rather than that of its neighbour.

The evidence suggests that China has well and truly embraced the patent system as part of its development journey. The same can be said of Brazil and India. These leaders of the G-77 can no longer be said to represent the interests of poorer members of the G-77 when it comes to dealing with the problem of global knowledge monopolies in the world trading order. Charity networks are evolving to ameliorate some of the effects of this trading order for some countries.

China is making the kind of investments in R&D funding and the development of scientific human capital that are needed to make the patent system perform its function of wealth maximization. China’s rapid build-up of a patent bureaucracy appears to be part of a strategy to fast track the experience of its enterprises in working in patent-intense business environments. Whether encouraging the saturation of its domestic market with patents will produce the desired selection effects is an open question. No other country has, to borrow Deng Xiaoping’s metaphor, crossed the stream in this way. If it works we can expect that some time this century Pax Sinica will chug past Pax Americana on the back of global technology monopolies. The US will, in forging a response, draw heavily on antitrust principles and remedies, much as it did in the last century when international cartels threatened its economic interests.

1911

'The First Global Copyright Act' by Uma Suthersanen in A Shifting Empire: 100 Years of the Copyright Act 1911 (Edward Elgar, 2012) edited by Suthersanen & Yvonne Gendreau covers the Imperial Copyright Act 1911.

Suthersanen comments that
The process of consolidation and reform of copyright law in Britain dragged on interminably from the first efforts in the 1830s to 1911. This essay explores the reasons for this delay through a contextual analysis of the period that preceded the adoption of the Imperial Copyright Act 1911. Copyright reform in the nineteenth century, leading up to the Imperial Copyright Act 1911, was not driven by single issues or discrete lobbying groups. First, there was authorial and publishing pressure for domestic and international copyright reform to combat the growing, global piracy of English language works. Secondly, the delay was partly due to the need for the British government to come to terms with its role as international legislator for a global British polity. Much of the debate concerning the reform of copyright law revolved, surprisingly, around imperial governance, international comity, and the protection of the colonial market trade. This is seen when we turn to survey some examples of imperial and trade concerns, including Anglo-Canadian copyright relations. The essay concludes by discussing the post-1911 copyright era in the UK, noting that in retrospect, the Imperial Copyright Act 1911 was the first global law written and administered by the United Kingdom. The statute was, conceptually, the first multilateral agreement, and if we push the analogy further, the forerunner of the TRIPS Agreement.

This essay is part of a multi-authored collection that specifically surveys the impact and evolution of the Imperial Copyright Act 1911 on countries that were part of the Empire.

23 March 2012

Hackfatigue

Yet another breathless data breach advertorial, this time from Verizon, which claims "2011 Was the Year of the 'Hacktivist'".

The 80-page Verizon 2012 Data Breach Investigations Report [PDF] - decorated with pretty graphs and impressive-looking statistics - emotes about "the dramatic rise of hacktivism - cyberhacking to advance political and social objectives" before reporting that the "majority of breaches are avoidable with sound security measures". Verizon offers security solutions along with connectivity.

The report indicates that
In 2011, 58% of data stolen was attributed to hacktivism, according to the annual report released today from Verizon. The new trend contrasts sharply with the data-breach pattern of the past several years, during which the majority of attacks were carried out by cybercriminals, whose primary motivation was financial gain.

Seventy-nine percent of attacks represented in the report were opportunistic. Of all attacks, 96% were not highly difficult, meaning they did not require advanced skills or extensive resources. Additionally, 97% of the attacks were avoidable, without the need for organizations to resort to difficult or expensive countermeasures. The report also contains recommendations that large and small organizations can implement to protect themselves.

Now in its fifth year of publication, the report spans 855 data breaches across 174 million stolen records - the second-highest data loss that the Verizon RISK (Research Investigations Solutions Knowledge) team has seen since it began collecting data in 2004. Verizon was joined by five partners that contributed data to this year's report: the United States Secret Service, the Dutch National High Tech Crime Unit, the Australian Federal Police, the Irish Reporting & Information Security Service and the Police Central e-Crime Unit of the London Metropolitan Police.

"With the participation of our law enforcement partners around the globe, the '2012 Data Breach Investigations Report' offers what we believe is the most comprehensive look ever into the state of cybersecurity," said Wade Baker, Verizon's director of risk intelligence.
Modest to a fault, those Verizon people, who announce that "Our goal is to increase the awareness of global cybercrime in an effort to improve the security industry's ability to fight it while helping government agencies and private sector organizations develop their own tailored security plans".

Supposedly
Breaches originated from 36 countries around the globe, an increase from 22 countries the year prior. Nearly 70% of breaches originated in Eastern Europe, with less than 25% originating in North America.

External attacks remain largely responsible for data breaches, with 98% of them attributable to outsiders. This group includes organized crime, activist groups, former employees, lone hackers and even organizations sponsored by foreign governments. With a rise in external attacks, the proportion of insider incidents declined again in this year's report, to 4%. Business partners were responsible for less than 1 percent of data breaches.

In terms of attack methods, hacking and malware have continued to increase. In fact, hacking was a factor in 81% of data breaches and in 99% of data lost. Malware also played a large part in data breaches; it appeared in 69% of breaches and 95% of compromised records. Hacking and malware are favored by external attackers, as these attack methods allow them to attack multiple victims at the same time from remote locations. Many hacking and malware tools are designed to be easy and simple for criminals to use.

Additionally, the compromise-to-discovery timeline continues to be measured in months and even years, as opposed to hours and days. Finally, third parties continue to detect the majority of breaches (92%).
Data in the 2012 report is claimed to demonstrate that:
Industrial espionage revealed criminal interest in stealing trade secrets and gaining access to intellectual property. This trend, while less frequent, has serious implications for the security of corporate data, especially if it accelerates.

External attacks increased. Since hacktivism is a factor in more than half of the breaches, attacks are predominantly led by outsiders. Only 4% of attacks implicate internal employees.

Hacking and malware dominate. The use of hacking and malware increased in conjunction with the rise in external attacks in 2011. Hacking appeared in 81% of breaches (compared with 50% in 2010), and malware appeared in 69% (compared with 49% in 2010). Hacking and malware offer outsiders an easy way to exploit security flaws and gain access to confidential data.

Personally identifiable information (PII) has become a jackpot for criminals. PII, which can include a person's name, contact information and social security number, is increasingly becoming a choice target. In 2011, 95% of records lost included personal information, compared with only 1% in 2010.

Compliance does not equal security. While compliance programs, such as the Payment Card Industry Data Security Standard, provide sound steps to increasing security, being PCI compliant does not make an organization immune from attacks.
Only 1% in 2010 and 95% in 2011? Looking beyond the triteness of "being PCI compliant does not make an organization immune from attacks" - a first-year undergrad conclusion - it's difficult to embrace a report with problematical figures that aren't sourced or readily verified.

Fraud

The Australian Institute of Criminology has released Fraud against the Commonwealth 2009-10, one of those unsatisfying documents that relies on problematical reporting by government agencies and discretion on the part of the AIC. Last year's report is noted here.

The report outlines fraud committed against the Commonwealth and includes information to assist agencies to improve fraud control measures. It indicates that -
• there was a 12% reduction in reported incidents of fraud and a 17% reduction in the amount lost across the Commonwealth;
• agencies recovered almost $200 million in funds lost to external fraud incidents, and almost $600,000 from internal fraud incidents;
• the AFP accepted 94 fraud referrals, 24 of which resulted in legal action;
• the CDPP secured almost $60 million from fraud cases by way of reparation under the Crimes Act and orders under the Proceeds of Crime Act – an increase of over $14 million from the previous year; and
• $498 million was lost to the Commonwealth in fraud, misuse or theft.
The AIC comments that -
Fraud against the Commonwealth may be committed by individuals outside agencies (external fraud) who seek to claim benefits or obtain some other financial advantage dishonestly, or by those employed by agencies (internal fraud), including staff and contractors. The incidence and financial impact of internal fraud is generally lower than that of external fraud, although both deplete government resources and have a negative impact on the administration of agencies.

Fraud in the public sector deprives governments of income for providing services to their communities while fraud in the private sector can seriously harm businesses and individuals alike. The 152 Australian Government agencies that responded to the present survey reported experiencing almost 706,000 incidents of fraud (internal and external), worth almost $498m during 2009–10. This was almost 17 percent less than the amount lost in 2008–09, and almost 12 percent fewer reported incidents than in 2008–09. Reported losses arising from internal fraud, however, increased by almost 10 percent between 2008–09 and 2009–10, with more than $2m lost in 2009–10.

These totals under-represent the true value of fraud losses, as only 43 percent of agencies that experienced fraud specified a loss in 2009–10 (26 out of the 61 agencies that experienced fraud). This was an improvement on the situation in 2008–09, when only 40 percent of agencies that experienced fraud specified a loss (23 out of 58 agencies that experienced fraud). The ability to quantify a loss depends on various factors, including the availability of evidence of what transpired, whether the investigation had been finalised and the nature of the dishonesty practised. Some instances where intangible losses are involved are difficult to quantify.

Responses vary when fraud is identified within agencies. Some responses are obligatory under official policies and laws, and others are optional depending on the scale and circumstances of the offence. Often, however, fraud is not reported officially and sometimes repeat victimisation occurs—occasionally by the same offender against the same agency. Both government and business have developed an extensive range of responses to this problem over the past decade, notably in response to changes in information and communications technology and the resulting increased vulnerability to computer-enabled crime.
It goes on to comment that -
Almost the same percentage of agencies reported fraud victimisation in 2009–10 as in 2008–09 (40% in 2009–10, 39% in 2008–09). Slightly more agencies reported external fraud (34%) than internal fraud (31%), while nearly one-quarter had experienced both types of fraud (24%). Seven percent of agencies reported incidents of collusion between individuals within agencies and those outside agencies in 2009–10, the same as in the preceding year. In total, 705,547 incidents of fraud (internal and external) were reported in 2009–10 by 61 agencies — a reduction of almost 12 percent in the number of incidents from the 800,698 reported in 2008–09.

There were considerably more reported incidents of fraud alleged against persons external to agencies (external fraud) than against employees and contractors (internal fraud). In 2009–10, 47 agencies reported 3,001 incidents of internal fraud. For the five specified categories of internal fraud, incidents relating to ‘financial benefits’ affected the largest proportion of agencies (20%, n=30). For the specific subcategories of internal fraud, ‘leave and related entitlements’ affected the highest number of agencies experiencing internal fraud (n=19, 40%), which differed from 2008–09, when misuse of government credit cards affected the largest number of agencies (38%).

Agencies reported 702,941 incidents of external fraud, some of which may have involved allegations of non-compliance with regulatory instruments rather than actual incidents of financial crime. Most incidents related to ‘entitlements’; however, this only affected a small number of the largest agencies. One agency reported 75,644 incidents related to entitlements, while another reported 613,996 incidents, which were comparable in scale to those reported by these agencies in 2008–09. For external fraud, the type of incident affecting the greatest number of agencies involved ‘financial benefits’ (21%). The specific category of fraud that affected the greatest number of agencies was ‘theft of telecommunications or computer equipment (including mobile devices)’ (n=18, 35%). It was found that smaller agencies, with 500 or fewer employees, were less likely to report fraud incidents than those with more than 500 employees. However, while the smaller agencies reported fraud at lower rates, they were not completely immune. Eighteen percent of smaller agencies reported experiencing at least one fraud incident, while 83 small agencies reportedly did not experience any fraud.

The total loss reported by agencies was $497,573,820, although only 42 percent of agencies that experienced fraud specified a loss.

Fifty-three percent of agencies that reported experiencing an internal fraud incident reported a financial loss in 2009–10 totalling $2,039,162, compared with 60 percent in 2008–09 totalling $1,856,707 — an increase of almost 10 percent.

Fraud related to ‘misuse of entitlements’ was the most costly internal fraud category, with agencies reporting more than $1.2m lost to this fraud type alone.

Fifty-one agencies experienced an incident of external fraud, worth $495,534,658 in 2009–10, although only 65 percent of agencies that experienced an incident of external fraud specified a loss. This was a 17 percent decrease in reported losses from external fraud from 2008–09. The largest external fraud losses arose from fraud relating to ‘entitlements’, with a total estimated loss of $487m in 2009–10 compared with $489m in 2008–09. For both internal and external fraud, there were several agencies that suffered losses they were unable to quantify.

In 2009–10, some 40 percent of total reported losses were recovered by agencies, with $196,735,497 recovered. This was a considerable increase over the proportion of losses recovered in 2008–09, when $139,312,337 was recovered. The vast majority of funds recovered related to external fraud. ...

In 2009–10, 5,010 defendants were referred to the CDPP for prosecution involving allegations of fraud. Of these, 4,913 were prosecuted, resulting in 4,180 convictions and 29 acquittals. It should be noted that prosecutions undertaken by the CDPP in 2009–10 may relate to cases that had been referred to the CDPP in previous years. Accordingly, some cases that agencies referred to the CDPP in 2009–10 may have been prosecuted in later years. Charges against those prosecuted for fraud in 2009–10 involved alleged financial losses of almost $100m. The CDPP secured more than $59m by way of reparation under the Crimes Act 1914 (Cth) and pecuniary orders under the Proceeds of Crime Act 1987 (Cth). These recoveries related only to monies recovered during 2009–10.
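As a quick sanity check, the headline ratios in the quoted passages do hold together. The following minimal sketch simply re-derives the percentages from the dollar and incident figures quoted above (values transcribed from the report as quoted, not from the underlying dataset):

```python
# Re-derive the AIC report's headline percentages from the figures quoted above.
# All values are transcribed from the quoted text; nothing comes from the raw data.

total_loss = 497_573_820       # total reported fraud loss, 2009-10 ($)
recovered = 196_735_497        # funds recovered, 2009-10 ($)
incidents_0910 = 705_547       # reported incidents, 2009-10
incidents_0809 = 800_698       # reported incidents, 2008-09
internal_0910 = 2_039_162      # internal fraud losses, 2009-10 ($)
internal_0809 = 1_856_707      # internal fraud losses, 2008-09 ($)

# "some 40 percent of total reported losses were recovered"
print(f"recovered share: {recovered / total_loss:.1%}")                   # 39.5%

# "a reduction of almost 12 percent in the number of incidents"
print(f"incident change: {incidents_0910 / incidents_0809 - 1:+.1%}")     # -11.9%

# internal fraud losses: "an increase of almost 10 percent"
print(f"internal loss change: {internal_0910 / internal_0809 - 1:+.1%}")  # +9.8%
```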

Data Protection or Protection Theatre?

'Global Data Privacy Laws: 89 Countries, and Accelerating' [PDF] by Graham Greenleaf in 115 Privacy Laws & Business International Report (Special Supplement, February 2012) comments that
It is almost forty years since Sweden’s Data Act 1973 was the first comprehensive national data privacy law, and was the first to implement what we can now recognize as a basic set of data protection principles. How many countries now have data protection laws? This article surveys the forty years since then of global development of data privacy laws to the start of 2012. It expands and updates ‘Global data privacy laws: Accelerating after 40 years’ ((2011) Privacy Laws & Business International Report, Issue 112, 11-17) which showed that at least 76 countries had enacted data privacy laws by mid-2011. Six months later, further investigation shows that there are at least 89 countries with such laws. The picture that emerges is that data privacy laws are spreading globally, with their number and geographical diversity accelerating since 2000.
In a useful tabulation Greenleaf argues that -
There are some surprising inclusions, and some illuminating trends in the expansion of these laws. The total number of new data privacy laws globally, viewed by decade, shows that their growth is accelerating, not merely expanding linearly: 8 (1970s), 13 (1980s), 21 (1990s), 35 (2000s) and 12 (2 years of the 2010s), giving the total of 89. In the first two years of this decade 12 new laws have been enacted (Faroe Islands, Malaysia, Mexico, India, Peru, Ukraine, Angola, Trinidad & Tobago, Vietnam, Costa Rica, Gabon and St Lucia) and the Russian law came into force, making this the most intensive period of data protection developments in the last 40 years.

Geographically, more than half (56%) of data privacy laws are still in European states (50/89), EU member states making up slightly less than one third (27/89), even with the expansion of the EU into eastern Europe. The geographical distribution of the 89 laws by region is therefore: EU (27); Other European (23); Asia (9); Latin America (8); Africa (8); North Africa/Middle East (5); Caribbean (4); North America (2); Australasia (2); Central Asia (1); Pacific Islands (0). So there are 39 data privacy laws outside Europe, 44% of the total. Because there is little room for expansion within Europe, the majority of the world’s data privacy laws will soon be from outside Europe, probably by the middle of this decade.

The article also shows that we can expect the pace of legislation to continue accelerating. There are Bills currently before legislatures in at least five countries, although some have been withdrawn for redrafting. Official draft Bills were known in another five countries during the past year.

Now that we have this more accurate picture of the global development of data privacy laws, further research becomes possible. It has already made possible an assessment of the influence of European privacy standards on legislative developments outside Europe. Further research is required on such questions as the implications of the increasingly interlocking data export restrictions in this legislation; on the effectiveness of the enforcement regimes in various countries; on the extent of judicial interpretation of these laws, and on other comparative aspects of data privacy laws. All of this requires an accurate account of the world’s data privacy laws.
We might, of course, question whether enactment of statutes in Gabon, Angola and similar jurisdictions is particularly meaningful. Does such a law reflect external pressures? Is it enforced? Is it understood? Is it a signifier of modernity, easily acquired and even more easily disregarded?
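For what it is worth, Greenleaf's tallies are internally consistent. A minimal sketch re-adding the decade and regional counts quoted above (figures as quoted; labels abbreviated):

```python
# Re-add Greenleaf's decade and regional tallies as quoted above.

by_decade = {"1970s": 8, "1980s": 13, "1990s": 21, "2000s": 35, "2010s (2 yrs)": 12}
by_region = {
    "EU": 27, "Other European": 23, "Asia": 9, "Latin America": 8, "Africa": 8,
    "North Africa/Middle East": 5, "Caribbean": 4, "North America": 2,
    "Australasia": 2, "Central Asia": 1, "Pacific Islands": 0,
}

total = sum(by_decade.values())
assert total == sum(by_region.values()) == 89   # both tallies reach 89

europe = by_region["EU"] + by_region["Other European"]
print(f"European share: {europe}/{total} = {europe / total:.0%}")   # 50/89 = 56%
print(f"EU share: {by_region['EU'] / total:.0%}")                   # 30%, just under a third
print(f"Outside Europe: {total - europe} laws ({(total - europe) / total:.0%})")  # 39 (44%)
```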

22 March 2012

Credibility

'Tweeting is Believing? Understanding Microblog Credibility Perceptions' [PDF] by Meredith Morris, Scott Counts, Asta Roseway, Aaron Hoff and Julia Schwarz reports on survey results regarding user perceptions of tweet credibility, concluding that there is a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines.

The authors comment [citations and figure references deleted] that
Our survey showed that users are concerned about the credibility of content when that content does not come from people the user follows. In contexts like search, users are thus forced to make credibility judgments based on available information, typically features of the immediate user interface. Our survey results indicated features currently underutilized, such as the author bio and number of mentions received, that could help users judge tweet credibility.

It is sensible that traditional microblog interfaces hide some of these interface features because they aren’t necessary when only consuming content from known authors. Without these established relationships, errors in determining credibility may be commonplace. Participants were poor at determining whether a tweet was true or false, regardless of experience with Twitter. In fact, those higher in previous Twitter usage rated both content and authors as more credible. This mirrors findings with internet use generally, and may be due to a difficulty in switching from the heavily practiced task of reading content from authors a person follows to the relatively novel task of reading content from unknown authors. Even topical expertise may not support reliable content validity assessments. We did find that for politics, those higher in self-reported expertise (by a median split) gave higher credibility ratings to the true political tweets and their authors, yet these effects disappear for the science topic and for entertainment where those low in expertise actually gave slightly (though non-significantly) higher ratings to the true content.

In the absence of the ability to distinguish truthfulness from the content alone, people must use other cues. Given that Twitter users only spend 3 seconds reading any given tweet, users may be more likely to make systematic errors in judgment due to minimal “processing” time. Indeed, participants rated tweets about science significantly more credible than tweets on politics or entertainment, presumably because science is a more serious topic area than entertainment. Other types of systematic errors, such as gender stereotyping based on user image, did not appear to play a role. Although our survey respondents reported finding non-photographic user images less credible, our experiment found that in practice image choice (other than the detrimental default image) had little effect on credibility judgments. It is possible that image types we did not study (such as culturally diverse photographs) might create a larger effect.

The user name of the author showed a large effect, biasing judgment of both content and authors. Cha et al. discuss the role of topically consistent content production in the accumulation of followers. We see a similar phenomenon reflected here in users incorporating the degree of topical similarity in an author’s user name and tweets as another heuristic for determining credibility.

What are the implications of these difficulties in judging credibility and how can they be mitigated? Our experimental findings suggest that for individual users, in order to increase credibility in the eyes of readers, they should start by avoiding use of the default twitter icon. For user names, those who plan to tweet exclusively on a specific topic (an advisable strategy for building a large follower base), should adopt a topically-aligned user name as those generated high levels of credibility. If the user does not want a topical username, she should choose a traditional user name rather than one that employs “internet” styled spelling.

Other advice for individual tweet authors stems from our survey findings. For instance, use of non-standard grammar damaged credibility more than any other factor in our survey. Thus, if credibility is a goal, users are encouraged to use standard grammar and spelling despite the space challenges of the short microblog format, though we note that in some user communities non-standard grammar may increase credibility. Maintaining a topical focus also increases credibility, as does geographic closeness between the author and tweet topic, so users tweeting on geographically-specific events should enable location-stamping on their mobile devices and/or update their bio to accurately identify location, which is often not done.

Tweet consumers should keep in mind that many of these metrics can be faked to varying extents. Selecting a topical username is trivial for a spam account. Manufacturing a high follower to following ratio or a high number of retweets is more difficult but not impossible. User interface changes that highlight harder to fake factors, such as showing any available relationship between a user’s network and the content in question, should help. The Twitter website, for instance, highlights those in a user’s network that have retweeted a selected item. Search interfaces could do something similar if the user were willing to provide her Twitter credentials. Generally speaking, consumers may also maintain awareness of subtle biases that affect judgment, such as science-oriented content being perceived as more credible.

In terms of interface design, we highlight the issue that users are dependent on what is prominent in the user interface when making credibility judgments. To promote easier credibility assessment, we recommend that search engines for microblog updates make several UI changes. Firstly, author credentials should be accessible at a glance, since these add value and users rarely take the time to click through to them. Ideally this will include metrics that convey consistency (number of tweets on topic) and legitimization by other users (number of mentions or retweets), as well as details from the author’s Twitter page (bio, location, follower/following counts). Second, for content assessment, metrics on number of retweets or number of times a link has been shared, along with who is retweeting and sharing, will provide consumers with context for assessing credibility. In our pilot and survey, seeing clusters of tweets that conveyed similar messages was reassuring to users; displaying such similar clusters runs counter to the current tendency for search engines to strive for high recall by showing a diverse array of retrieved items rather than many similar ones – exploring how to resolve this tension is an interesting area for future work.
Useful hints for verification experts and identity criminals alike.
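The cues catalogued in the paper lend themselves to mechanisation, which is precisely why they are gameable. Here is a purely illustrative sketch of how such surface signals might be combined into a naive score; the field names and weights are invented for demonstration and do not come from the paper:

```python
# Toy scorer combining the surface credibility cues discussed above.
# Field names and weights are invented for illustration; the paper reports
# which cues mattered, not a scoring formula.

def credibility_score(tweet: dict) -> float:
    score = 0.0
    if tweet.get("default_avatar"):        # the default icon was detrimental
        score -= 2.0
    if tweet.get("topical_username"):      # topically aligned names rated higher
        score += 1.5
    if tweet.get("nonstandard_grammar"):   # the strongest negative in the survey
        score -= 2.5
    # Follower/following ratio and retweets ("legitimisation by other users"),
    # capped because both can be manufactured, as the paper warns.
    ratio = tweet.get("followers", 0) / max(tweet.get("following", 1), 1)
    score += min(ratio, 5.0) * 0.5
    score += min(tweet.get("retweets", 0), 20) * 0.1
    return score

example = {"topical_username": True, "followers": 900, "following": 300, "retweets": 12}
print(credibility_score(example))  # 4.2
```

As the paper itself stresses, every one of these signals can be faked by a determined spammer, hence its recommendation that interfaces surface harder-to-fake, network-based cues instead.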

Cybercodes

Drowning in Code: An analysis of codes of conduct applying to online activity in Australia [PDF] by Chris Connolly & David Vaile of the Cyberspace Law & Policy Centre UNSW examines 16 codes of conduct relevant to Australian consumers online.

The 49-page report comments that -
Australians face a complex, confusing and often inconsistent environment when it comes to regulating how businesses and consumers should conduct themselves online. This Report examines 16 codes of conduct that are relevant to Australian consumers when they engage in online activity (13 active codes and 3 draft codes). It is the first report to analyse the numerous codes of conduct that have been developed in Australia to address online conduct.

These codes, individually and together, offer online users the prospect of assistance dealing with unsatisfactory conduct by businesses and others, but whether they meet expectations has been unclear.

The Report compares each code against best practice guidance on the development and implementation of codes of conduct issued by Australian regulators. The report also examines the coverage of codes, through an analysis of the code coverage amongst the top 50 websites visited by Australian consumers, and the top 19 ISPs by Australian market share.
The report identifies 13 codes that are currently in force and three significant draft codes. Those codes are -
In effect

1. Telecommunications Consumer Protection Code
2. ePayments Code
3. [Internet] Content Services Code
4. Interactive Gambling Industry Code
5. Internet Industry Spam Code of Practice
6. e-Marketing Code of Practice
7. Australian Best Practice Guidelines for Online Behavioural Advertising
8. IIA Family Friendly ISP Seal
9. Australian Association of National Advertisers Code of Ethics
10. iCode (E-Security Code for ISPs)
11. IIA Codes for Industry Co-Regulation in Areas of Internet and Mobile Content, including Content Code 1 (Hosting Content in Australia), Content Code 2 (Providing Access to Content Hosted Within Australia) and Content Code 3 (Providing Access to Content Hosted Outside Australia)
12. IIA Responsible Internet Business Program - 10 Point User Protection Code of Ethics
13. Australian Group Buying Code of Conduct

Draft

1. IIA Privacy Code
2. IIA Industry Copyright Code
3. Best Practices for Dating Websites
The authors identified several consumer issues -
• the very number of codes which could potentially be applicable to a given online transaction or issue;
• the complexity of their overlapping coverage;
• wide variations in language, procedure, remedies and robustness;
• uncertainty about coverage and ‘jurisdiction’ broadly considered, including an often limited or non-existent capacity to involve dominant online service providers operating offshore;
• patchy or very low sign-up by industry participants, and in some cases difficulty in ascertaining who is a ‘member’ of the code and what this means;
• inconsistent approaches to effective complaint handling;
• inconsistent or undeveloped approaches to cross-referral to other codes or code bodies where an inquiry may be outside scope of the first code considered (to prevent ‘falling through the cracks’); and
• a tendency to focus on industry rather than consumer convenience in regulatory scheme design.
They comment that -
the majority of codes require companies to subscribe to the code before coverage can be assured, and for most codes sign-up rates are very low.

In addition, many of the top 50 websites visited by Australian consumers are hosted outside Australia by organisations that appear unlikely to sign up to Australian codes of conduct. However, there are some very limited examples of global companies signing key Australian codes.

Overall the coverage of the 13 codes appears to be very poor. Simply having a large number of codes does not ensure consumer protection if most codes only have a few signatories.

Organisations are also faced with a difficult decision in deciding which codes to sign. ... The benefits of signing additional codes diminish rapidly once an organisation is already covered by one code.

There are significant overlaps in code content amongst the 13 codes in force and the three draft codes.

The main overlaps are in the areas of:
• privacy protection;
• truth in advertising;
• refunds and returns; and
• the prohibition against sending spam.
Some of these requirements appear in more than ten of the codes in the study. These overlaps have a range of impacts for potential signatories, including:
• uncertainty about which and how many to join, or whether they are eligible, or required, to join;
• whether their obligations would vary between the codes;
• the necessity to understand the details of overlap; and
• implications for compliance with overlapping and potentially inconsistent frameworks.
The overlaps may also cause concerns for consumers, including:
• uncertainty about which and how many codes might cover a particular situation or concern;
• whether codes covering similar concerns are consistent on specific points;
• the implications of any inconsistency;
• whether there is effective referral between codes where one has more direct relevance than another; and
• whether there are implications for successful resolution of concerns arising from deciding to start with one rather than another possibly relevant code.

19 March 2012

Cyberdanger

Protecting Children Online: Teachers' perspectives on eSafety [PDF] by Helen Aston & Bernadetta Brzyska offers an analysis of responses in a UK online teacher survey covering 'e-safety', cyberbullying, pupil use of mobile phones and social networking.

The authors of the 65-page report indicate that -
• 87% of teachers feel pupils are e-safe at school although only 58% think their pupils "have the knowledge and skills to stay e-safe at home"
• 74% of teachers think that the prevalence of smart phones among their pupils is making it easier for them to access inappropriate material at school, with "nine out of 10 secondary school teachers finding this difficult to manage"
• cyberbullying continues to be a problem, with 91% of secondary teachers and 52% of primary teachers saying pupils at their school have experienced cyberbullying, and that most of it is perpetrated via social networking sites.
The report is largely self-congratulatory and conflates the existence of policies with the minimisation of substantive harms, identifying teacher confidence rather than effectiveness. It claims that "the survey data shows that the majority of teachers can confidently deal with most e-safety issues and support their pupils to do so". Indeed. As with the rather vacuous ACER report noted earlier this year, I'd have liked less recitation of self-reporting and more analytical bite.

The UK report comments that -
Many teachers acknowledged that technology can be useful to their pupils; for example, they felt mobile phones are good for emergencies and social networking sites can facilitate pupils’ communication with their friends. However, our findings also show that technology is creating challenges for teachers. This is in relation to issues around e-safety and cyberbullying as well as managing pupils' usage of particular technologies, such as smartphones and social networking sites.

Given the pace at which new technologies are being developed, and pupils’ enthusiasm for using new technologies, having a regularly updated e-safety policy that provides a clear framework for guiding and managing pupils’ use of technology is important. Nearly nine in ten (87%) teachers said that their school has an e-safety policy, but only seven in ten (72%) indicated that it is reviewed regularly, suggesting that more work needs to be done with schools in this area. This is particularly the case in secondary schools, where the proportion of teachers responding that their school has an e-safety policy was lower.

Encouragingly, the vast majority of teachers felt that their pupils have the skills and knowledge to use the internet safely at school. However, only three-fifths (58%) of teachers felt that pupils had the skills and knowledge to use the internet safely at home.

This suggests that pupils need more education and support to ensure that they use the internet safely outside of school, where there is less supervision and potentially more online freedom. Communication with teachers and parents about how best to support this learning would be useful.

Over three-quarters (77%) of primary teachers and half (54%) of secondary teachers felt that staff had received adequate e-safety training. Indeed, most teachers felt confident about advising pupils on different aspects of e-safety. The safe use of social networking sites was the area of e-safety that proportionally fewest teachers were confident to advise pupils on. These findings imply that a significant minority of teachers, particularly within the secondary phase of education, want or need more training on e-safety. We would expect this to result in greater proportions of teachers feeling confident in giving advice to pupils on all facets of e-safety.

Given the growing ownership of smartphones, Vital were keen for the survey to investigate teachers’ views of mobile phones. 85% of secondary teachers said that many of their pupils carried mobile phones with internet access, compared with only 7% of primary teachers. Given this variation by phase, it is unsurprising that while secondary teachers were proportionally more likely than primary teachers to see the benefit of pupils having a mobile phone for emergencies, they were also much more inclined to agree that mobile phone use within school is problematic. More than nine in ten secondary teachers thought that controlling mobile phone use within school was difficult. This suggests that secondary teachers would particularly welcome advice on managing pupils’ use of mobile phones within school.

Many teachers (59%) have a social networking profile themselves, and less than 1% have experienced pupils leaving inappropriate comments on their profile. Teachers do not encourage pupils to contact them via social networking sites, with only 1% happy for their pupils to contact them in this way. A third (33%) of primary teachers and three quarters (78%) of secondary teachers felt that many of their pupils spend too much time on such sites. Across both phases of education, most teachers felt that access to these sites should be banned during the school day. Teachers are therefore likely to find advice on how to manage pupils’ attraction to social networking sites useful and relevant.

The survey findings on cyberbullying give a clear indication that communication with teachers should focus on both bullying of teachers and pupils, particularly in the secondary phase of education. While only 3% of teachers said that they had been cyberbullied by pupils, a third of secondary and 7% of primary respondents said that one of their colleagues had been. The picture amongst pupils was markedly worse, with 91% of secondary teachers and 52% of primary teachers reporting that pupils at their school have experienced cyberbullying. By far the most common form of cyberbullying was via social networking sites, irrespective of whether teachers or pupils were the intended victims, suggesting that cyberbullying advice should explicitly consider the use of this technology.

Anxious Bodies

'Civil Rights Reform and the Body' by Tobias Barrington Wolff in 6 Harvard Law & Policy Review (2012) 201-231 argues that -
Discrimination on the basis of gender identity or expression has emerged as a major focus of civil rights reform. Transgender people are among the most targeted populations in the United States, subject to widespread and sometimes violent mistreatment in places of public accommodation, the workplace, and the housing market. Gender-variant young people become homeless in vastly disproportionate numbers, facing hostility in foster care and sexual exploitation on the streets. Some states have enacted legislation prohibiting such discrimination and abuse, and the Obama Administration has taken steps to extend similar protections within the federal civilian workforce and in administrative programs like federally subsidized housing and veterans services. Nonetheless, a condition of threat and vulnerability remains the norm for many who depart from the gender expectations of those around them, and efforts to enact protections have become a priority in Lesbian, Gay, Bisexual, and Transgender (LGBT) advocacy.

Opponents of these civil rights reforms have structured their opposition around one dominant image: the bathroom. With striking consistency, opponents have invoked anxiety over the bathroom - who uses bathrooms, what happens in bathrooms, and what traumas one might experience while occupying a bathroom - as the reason to permit discrimination in the workplace, housing, and places of public accommodation. Antagonists brand proposed civil rights laws as “bathroom bills,” promoting fear of sexual assault and employing icons and imagery that raise the specter of invasive voyeurism, all in response to the suggestion that transgender people might be able to hold a job, rent an apartment, or use places of public accommodation without discrimination. The tactic has proven effective. Proponents of reform in the 111th Congress identified their colleagues’ anxiety over gender-identity protections, and in particular the rhetoric of the bathroom, as a major reason they could not successfully advance a federal Employment Non-Discrimination Act, which would prohibit employment discrimination on the basis of both sexual orientation and gender identity. Similar battles in state legislatures have either been lost or rendered more difficult by anxiety over the bathroom. And the effect is not felt in the vote count alone. These political struggles impose yet more indignity upon transgender people, who find themselves reduced to a caricature of a bodily function when seeking the assistance of their government in enjoying the basic features of a safe and dignified existence.

When a single image or rhetorical device becomes so dominant in a public policy debate, it requires an analysis on two levels. One must interrogate the device on its own terms, examining the factual predicates underlying its claims. But one must also lift the device out of this narrow and oversimplified pocket of debate — the power of the device being its very capacity to flatten discussion — and place it within a historical and analytical context where its origins and meaning can be scrutinized.

The rhetoric of the bathroom in the debate over gender-identity protections seeks to exploit an underlying anxiety that has played a role in many efforts at civil rights reform: anxiety over the body. The body can be a site of vulnerability and pain, shame and pleasure, excitement and embarrassment — human experiences that are often unmediated by rational thought and impervious to reasoned argument. When opponents of civil rights reform mobilize these primal forces in response to progressive efforts, they wield a potent tool for preserving existing arrangements of status and power.

This Article makes a first attempt to identify and analyze the role that anxiety over the body can play in civil rights reform, using as its primary point of reference the obsessive focus on bathrooms that antagonists exhibit in seeking to justify discrimination on the basis of gender identity and expression. It begins by exploring the mode of subordination that characterizes much anti-LGBT discrimination: the wholesale erasure of the target population. Debates over gender-identity discrimination are regularly framed around an implicit invitation to negate transgender people entirely — to behave as though they can be conjured out of existence by establishing public policy that refuses to take them into account. The maneuver is persistent in disputes over LGBT equality and has been a regular feature of anti-gay advocacy. When deployed against transgender people, this aggressive form of erasure takes shape around anxiety over the body, for it is the transgender body itself that the antagonist wishes to erase. The bathroom obsession offers a prime example of this dynamic of erasure.

The Article then identifies some of the core anxieties of the body — fear of sexual predation and invasion; fear of pain and injury; and fear that the body will be exposed, with the attendant feelings of shame and loss of control that many people experience when their unclothed bodies are scrutinized — and analyzes resistance to civil rights reform around these themes. Anti-transgender discrimination implicates each of these core anxieties, coupled with the pervading insecurity over sexual or gender identity that is brought to the fore in some people when confronted with the reality of a transgender body. Past civil rights efforts have likewise provoked sharp reactions around these anxieties. The fight over the segregation of municipal swimming pools took shape around widely expressed fears of sexual predation by Black men toward White women, as well as unspoken anxieties about exposure of the body by some White men. Anxiety around physical injury and mutilation threw additional fuel on the long simmering battle over the segregation of railroad cars when travel by rail was still a new and dangerous technology, subjecting passengers to pervading physical peril. And purported fears over exposure of the body in barracks and showers played an outsized role in the resistance to the repeal of the military’s Don’t Ask, Don’t Tell (DADT) policy, under which lesbian, gay, and bisexual servicemembers had to lie about their identities as the price of their service. The Article examines these core anxieties of the body and analyzes the role that they have played in resistance to civil rights reform.

18 March 2012

Gene Patents

'Exclusivity Without Patents: The New Frontier of FDA Regulation for Genetic Materials' by Gregory Dolin comments that -
Over the last twenty years, the legal and scientific academic communities have been embroiled in a debate about the patent eligibility of genetic materials. The stakes for both sides couldn’t be higher. On one hand are the potential multi-billion dollar profits on the fruits of research (from newly discovered genes) and on the other is the ability of scientists to continue and expand research into the human genome as well as patients’ access to affordable diagnostic and therapeutic modalities. This debate is currently pending before the Supreme Court, which has under consideration a petition for certiorari in Ass’n for Molecular Pathology v. USPTO.

This paper recognizes that both sides have legitimate concerns. Given the unique nature of DNA, patents that broadly cover genetic materials and prevent their use (except by the license of the patentee) create insurmountable roadblocks for future research. However, denying exclusive rights to the fruits of laborious and costly research will remove the necessary incentives for investment in these endeavors, thus delaying scientific and medical discoveries.

To remedy these problems, the paper proposes a non-patent exclusivity system administered by the Food & Drug Administration. Under such a system, the innovators who bring new therapeutic or diagnostic products to market will receive exclusive rights to market their products for a limited time. This will provide sufficient market-based incentives to continue with the research and investment in this area. At the same time, because genetic sequences will no longer be broadly protected by patents, the public will be able to access these basic research tools without fear of infringement litigation. This approach addresses the concerns of both sides to the debate, and leads to a cheaper, more predictable, and easier-to-administer system of exclusive rights.
Dolin concludes that -
The science of molecular genetics has challenged the long-accepted standards and rules of the patent law. The basic unit of molecular genetics – a molecule of DNA – is unlike any other chemical entity in that it has both chemical and informational properties. It is no surprise then that determining how the patent law should treat this molecule has been the subject of much debate.

Ultimately though, with the scientific advancement of the last few decades, the patent law question is resolving itself. Even if DNA is treated as a patent eligible subject matter, it is unlikely to find much protection in the bosom of patent law because sequencing of genes has become so routine as to no longer be inventive. In this sense, the patent system actually under-protects and therefore under-incentivizes investment and work in the field of molecular genetics. On the other hand, to the extent that some DNA sequencing may overcome the obviousness bar, the patent system over-protects and over-incentivizes the investment in this field as it allows the patentee to essentially limit access to that which is not truly his invention.

A new system based on the desire to properly incentivize the work of pioneers in molecular genetics, while maintaining due regard for the need to permit access to genetic materials for further research is needed. That system can be built by having the Food and Drug Administration regulate market entry for the makers of genetic diagnostic and therapeutic modalities. By allowing developers of new tests and treatments to enter the market on a preferential basis, as compared to later applicants, the system will permit innovators to enjoy monopoly rents much like they would under the patent system. On the other hand, by limiting the monopoly only to the market for diagnostics and therapeutics, the alternative FDA-based system would permit further research unfettered by the need to spend resources on licensing patents that encompass genetic materials. This new approach would finally resolve the debate on the patent eligibility of genetic materials and place all parties in a more advantageous position than the one they currently enjoy.

Exceptionalism

From the incisive 'Queer Legal History: A Field Grows Up and Comes Out' by Felicia Kornbluh in 36(2) Law & Social Inquiry (2011) 537–559 -
Kevin Cathcart, executive director of Lambda Legal, used to say that there was a “gay exception” to major doctrines of US law. In the 1990s, it was certainly easy to believe that Lesbian, Gay, Bisexual, and Transgender (LGBT) people were left out of every just and compassionate aspect of the law: We were still reeling from the Supreme Court's decision in Bowers v. Hardwick (1986), which authorized criminal prosecution of the consensual behavior of two adults in a private bedroom, in part on the grounds that same-sex intimacy had long been despised in Anglo-American law. In addition, the first president elected with substantial debts to the LGBT movement retreated almost immediately from his promise to allow lesbians and gay men to serve openly in the armed forces (Vaid 1995). Instead, President Bill Clinton signed legislation that created a new regime of mandatory closeting (Halley 1999, 1). During that same year, 1993, the Hawaii Supreme Court found that the state's constitutional provision for equal protection on the basis of sex barred it from denying marriage licenses to same-sex couples (Baehr v. Lewin). However, the decision was immediately appealed, and the conservative response to the ruling produced a state constitutional amendment permitting the legislature to deny marriage licenses by statute, which the legislature quickly did (Goldberg-Hiller 2002, 2003; Baehr v. Miike 1996). Despite massive grassroots protests and persistent efforts at reform, state and federal law seemed immune to change in favor of gay rights.

Cathcart's summary of LGBT legal history may no longer be relevant. The Supreme Court began to bring sexual minorities under the sheltering canopy of the federal equal protection clause in Romer v. Evans (1996). Justice Kennedy's majority decision described a Colorado law that preempted local civil rights ordinances as singling out LGBT people for unequal treatment without even the rational state interest that would constitute a justification at the lowest level of scrutiny. The Court extended its reasoning in Lawrence v. Texas (2003), which allowed the protection for intimate conduct it had found in the due process clause of the Fourteenth Amendment, among other places, to encompass the ancient Anglo-American crime of sodomy. In 2010, Congress and President Obama erased the product of President Clinton's compromise, “Don't Ask, Don't Tell”, from military procedures (Servicemembers Legal Defense Network 2010; Schwartz 2010; Hulse 2010). In California, a federal district court issued an opinion that echoes Romer v. Evans in striking down a popular referendum against same-sex marriage. Judge Vaughn Walker wrote that “moral animus” toward LGBT people or same-sex intimacy was the reason for California's marriage restriction and that such animus is not reasonably related to any legitimate state interest (Perry v. Schwarzenegger 2010).

The compelling questions for historians of law and society concern the nature of the “gay exception,” its origins, and the source of its erosion. In the depressing Clinton years, and even as the Lawrence v. Texas (2003) decision was being handed down, scholars did not know enough about the past to answer these questions. Now, thanks to a generation of scholarship that has itself been influenced by the movement for LGBT rights, we begin to. Much of the recently published work was begun when Cathcart's “gay exception” had its strongest hold; undertones of fury are audible despite the authors' scholarly rectitude and academic style. This body of work joins the insights of women's and gendered legal history with those of contemporary sexuality studies (as well as political history, constitutional history, historical political science in the American Political Development tradition, the history of social welfare, and immigration history) to generate what, in the words of historian Marc Stein, “might be called queer legal history” (Stein 2004, 111).

The new research is “queer” in that it addresses the meeting points between people who expressed their gender or sexuality in ways that were unfamiliar to the officials who interacted with them, on the one hand, and legal institutions, personnel, and ideas, on the other. This work also “queers” our reading of the legal past. It compels new understandings of material toward which scholars may have thought they had settled interpretations. The books discussed here direct attention to the so-called sexual revolution in the high courts of the 1960s, to immigration law, military practices, veterans' benefits, and the domestic welfare state (Stein 2010; Canaday 2009). This work “queers” the history of marriage law, of urban vice squads, prisons, the relationship between religion and law reform, and military courts-martial (Chauncey 2004; Hillman 2005; Self 2008; Kunzel 2008; Gordon 2010).

How does queer legal history answer the questions implied by Kevin Cathcart's “gay exception” thesis? While they do not address Cathcart directly, the two most comprehensive of the new works, The Straight State by Margot Canaday (2009) and Sexual Injustice by Marc Stein (2010), argue for gay exceptions in twentieth-century US law. The Straight State may be considered a history of the development and solidification of the gay exception, and Sexual Injustice argues that the exception continued and gained new force amidst the so-called sexual revolution in the high courts of the 1960s and early 1970s.

Canaday's The Straight State (2009) is a study of three major areas of legal and governmental practice across most of the twentieth century. She studies the immigration, military, and social-welfare bureaucracies before and after World War Two. Her overall argument is that the federal government both generated the category of “the homosexual”—that is, officials fashioned an increasingly solid idea of normal and abnormal sexual identity to make sense of a range of sexual practices and performances that they found objectionable—and persecuted people who were thought to belong in that category. Her sources are a wide array of public papers that have rarely, if ever, been utilized before. They include records of scores of administrative hearings, including cases in which people were discharged other-than-honorably from the military, denied permission to immigrate to the United States, or refused public benefits; midlevel and lower level bureaucratic records, which offer a social history of state activity in this one important dimension; and transcripts of congressional hearings and the guidelines issued by executive agencies for the implementation of new policies. These sources allow Canaday to offer a much more complete record of the law of gay-straight discrimination, and of its meaning in people's lives, than has previously appeared in print.

Marc Stein's Sexual Injustice (2010) is narrower in scope and argument than Canaday's, but it arrives at similar conclusions. Stein suggests that there was, indeed, a gay exception in US law, although, unlike Canaday, he is less interested in its emergence within the bureaucratic national state apparatus than in its presence in the appellate jurisprudence of the supposedly sexually liberatory 1960s and 1970s. His sources are the decisions in those cases themselves, plus the material about them in the Supreme Court archives and the records of the public-interest attorneys and organizations that brought them. Stein focuses on one appellate case from the later twentieth century, Boutilier v. Immigration and Naturalization Service (INS; 1967). The “sexual injustice” of the book's title is not merely the exclusion of Clive Michael Boutilier from the United States but what Stein believes was a whole body of “heteronormative” doctrine created by the Supreme Court under Chief Justices Warren and Burger. Stein implies that Boutilier v. INS solidified a gay exception that emerged at the end of World War Two and gained force in the 1960s, even as the strictures on heterosexual expression were loosening. Although Boutilier v. INS was an immigration case, Stein relates it to municipal regulations on the sale of pornography and showing of sexually explicit movies, state marriage statutes, and controls on access to birth control. He argues convincingly that the holding of that case was broadly significant in law and policy. Boutilier v. INS was the last case concerning gay rights that the Supreme Court heard before Bowers v. Hardwick (1986).

Queer legal history has begun to reshape legal and historical scholarship through its contributions to social theory and to at least four substantive areas of law. Among the four, I focus first on the law of marriage, an arena of state activity that neither Canaday (2009) nor Stein (2010) addresses at great length. I argue that a fully rounded understanding of the place of LGBT people in US law must engage the findings on sex, gender, and citizenship of recent scholarship on the history of marriage. I explore histories of heterosexual marriage by Hendrik Hartog (1999, 2004), Nancy Cott (2000), Linda Kerber (1998), and Peggy Pascoe (2009) and the scholarship about sexuality and marriage law by Sarah Barringer Gordon (2002, 2010) and George Chauncey (1990, 2004).

Second, I examine the contributions of queer legal history to the study of the domestic welfare state. Here, I consider Canaday's work in the context of the gendered political history of the past fifteen years. Canaday (2009) builds on the histories of policy and law offered by such historians as Linda Kerber (1998) and Linda Gordon (1994), and “queers” their depiction of the US state. Third, I discuss the military. Debates over “Don't Ask, Don't Tell” and its precursors have spurred extensive journalistic and scholarly writing on the subject of sexuality and the military. However, Canaday and the historian of military law Elizabeth Lutes Hillman (2005) have extended the record of antihomosexual scandals, exclusions, and expulsions to an earlier period than most writers. They grasp the multiple roles of the military branches as signifiers of social status, gateways to decent employment, and sources of robust but conditional welfare state benefits. Fourth, and last, I focus on immigration, an arena in which both Stein and Canaday are interested and in which both offer new perspectives on the role of sexuality in defining the literal and figurative boundaries of the nation.