17 January 2017

Accountability

The ANAO report on Offshore Processing Centres in Nauru and Papua New Guinea: Contract Management of Garrison Support and Welfare Services, released today, should be of concern to policy analysts and people interested in public sector accountability.

The report notes
The Department of Immigration and Border Protection’s management of the garrison support and welfare services contracts at the offshore processing centres in Nauru and Papua New Guinea (Manus Island) has fallen well short of effective contract management practice.
The garrison support and welfare contracts were established in circumstances of great haste to give effect to government policy decisions and the department did not have a detailed view of what it wanted to purchase or the standards to apply. These are key considerations in achieving value for money. While the department took between 20 to 43 weeks (depending on the contract) to enter into final 2013 contracts, there remained significant shortcomings in the contractual framework. Many of the shortcomings persisted in the 2014 contracts, indicating that the 2014 contract consolidation process was not informed by lessons learned from the department’s management and operation of the 2013 contracts.
The department did not put in place effective mechanisms to manage the contracts. Other than the contracts, there was no documentation of the means by which the contract objectives would be achieved. In the absence of a plan, assurance processes such as the inspection and audit of services delivered, has not occurred in a systematic way and risks were not effectively managed. In addition, the department has not maintained appropriate records of decisions and actions taken in the course of its contract management. As a consequence, the department has not been well placed to assess whether its service strategies were adequate or fully met government objectives.
Of some $2.3 billion in payments made between September 2012 and April 2016, only $80 million was authorised by an appropriate delegate: "$1.1 billion was approved by DIBP officers who did not have the required authorisation and for the remaining $1.1 billion there was no departmental record of who authorised the payments." Contract variations totalling more than $1 billion were made without a documented assessment of value for money.

The report states
In 2012 the Australian Government established offshore processing centres in the Republic of Nauru (Nauru) and Papua New Guinea (PNG) with the agreement of the Nauruan and PNG Governments.
Under the agreements, the Australian Government was to bear all costs associated with the construction and operation of the centres. Transfers of asylum seekers to Nauru commenced on 14 September 2012 and to PNG on Manus Island on 21 November 2012.
To underpin operations at the centres, the Department of Immigration and Border Protection (DIBP or the department) entered into contracts for the delivery of garrison support and/or welfare services with a number of providers. Garrison support includes security, cleaning and catering services. Welfare services include individualised care to maintain health and well-being such as recreational and educational activities. The total combined value of the contracts at 6 December 2016, as reported on AusTender, was $3,386 million.
For the purposes of this report the contracts are discussed in two groups: initial contracts signed in 2013 (referred to as the initial or 2013 contracts) with The Salvation Army, Save the Children, Transfield Services (Transfield) and G4S; and contracts signed in 2014 with Transfield and Save the Children to consolidate service provision (referred to as the consolidated or 2014 contracts).
In October 2015, Transfield became the sole provider of all garrison support and welfare services to asylum seekers at the offshore processing centres in Nauru and on Manus Island. In February 2016 these arrangements were extended through to 28 February 2017 and in August 2016 the contract with Transfield was further extended until 31 October 2017.
Audit objective, scope and criteria
The objective of the audit was to assess whether DIBP had appropriately established and managed the contracts for garrison support and welfare services at offshore processing centres in Nauru and Papua New Guinea (Manus Island); and whether the processes adopted met the requirements of the Commonwealth Procurement Rules (CPRs), including consideration and achievement of value for money.
The audit examined contracts entered into in 2012, when the arrangements were first put into place, through to the current contract which is due to expire in October 2017.
This is a companion audit to ANAO Performance Audit Report No. 16 2016–17 Offshore Processing Centres in Nauru and Papua New Guinea: Procurement of Garrison Support and Welfare Services. As in the earlier audit, the ANAO’s review of departmental records was, due to shortcomings in DIBP’s record keeping system, based on the available records. DIBP was not able to provide the ANAO with assurance that it provided all departmental records relevant to the audit.
DIBP's apparent inability to properly document its operations is also evident in the report noted here.

The ANAO goes on to conclude

The department developed a comprehensive and risk based performance framework for the contracts to help it assess provider performance. However, development of the framework was delayed and in applying the framework the department was not consistent in its treatment of different providers. Performance measurement under the framework relied heavily on self-assessments by providers and the department performed limited independent checks. Delays in the department’s review of self-assessments and the provision of feedback on contractor performance eroded the link between actual performance and contract payments. Risk assessment was a key component of the performance reporting processes and while risk assessments were conducted, DIBP did not review risk ratings or determine if controls and mitigations were in place and working. Risks materialised in both the 2013 and 2014 contracts.
An appropriate framework of controls was in place for payments under the contracts, including the authorisation of actual payments by a delegate. This control was intended to provide additional assurance over payments under the contracts but did not always operate as intended. In respect to $2.3 billion in payments made between September 2012 and April 2016, delegate authorisations were not always secured or recorded: an appropriate delegate provided an authorisation for payments totalling $80 million; $1.1 billion was approved by DIBP officers who did not have the required authorisation; and for the remaining $1.1 billion there was no departmental record of who authorised the payments.
In addition, this audit highlighted further weaknesses in the department’s management of procurement. Substantial contract variations totalling over $1 billion were made without a documented assessment of value for money.
Contract management is core business for Australian Government entities, and the department has managed detention contracts since 1997. Previous ANAO audits of the department’s contract management have found that: its contracting framework had not established clear expectations of the level and quality of services to be delivered; and its ability to monitor the performance of contractors was compromised by a lack of clarity in standards and performance measures and reliance on incident reporting to determine when standards were not being met. This audit has identified a recurrence of these (and other) deficiencies, which have resulted in higher than necessary expense for taxpayers and significant reputational risks for the Australian Government and the department. The audit recommendations are intended to address the significant weaknesses observed in DIBP’s contract management practices.

Trust, Privacy and Contract

'Privacy's Trust Gap' by Neil M. Richards and Woodrow Hartzog in Yale Law Journal (Forthcoming) comments
 It can be easy to get depressed about the state of privacy these days. In an age of networked digital information, many of us feel disempowered by the various governments, companies, and criminals trying to peer into our lives to collect our digital data trails. When so much is in flux, the way we think about an issue matters a great deal. Yet while new technologies abound, our ideas and thinking—as well as our laws—have lagged in grappling with the new problems raised by the digital revolution. In their important new book, Obfuscation: A User’s Guide for Privacy and Protest (2016), Finn Brunton and Helen Nissenbaum offer a manifesto for the digitally weak and powerless, whether ordinary consumers or traditionally marginalized groups. They call for increased use of obfuscation, the deliberate addition of bad information to interfere with surveillance; one that can be “good enough” to do a job for individuals much or even most of the time. Obfuscation is attractive because it offers to empower individuals against the shadowy government and corporate forces of surveillance in the new information society. While this concept represents an important contribution to the privacy debates, we argue in this essay that we should be hesitant to embrace obfuscation fully.
We argue instead that as a society we can and should do better than relying on individuals to protect themselves against powerful institutions. We must think about privacy instead as involving the increasing importance of information relationships in the digital age, and our need to rely on (and share information with) other people and institutions to live our lives. Good relationships rely upon trust, and the way we have traditionally thought about privacy in terms of individual protections creates a trust gap. If we were to double down on obfuscation, this would risk deepening that trust gap. On the contrary, we believe that the best solution for problems of privacy in the digital society is to use law to create incentives to build sustainable, trust-promoting information relationships.
We offer an alternative frame for thinking about privacy problems in the digital age, and propose that a conceptual revolution based upon trust is a better path forward than one based on obfuscation. Drawing upon our prior work, as well as the growing community of scholars working at the intersection of privacy and trust, we offer a blueprint for trust in our digital society. This consists of four foundations of trust—the commitment to be honest about data practices, the importance of discretion in data usage, the need for protection of personal data against outsiders, and the overriding principle of loyalty to the people whose data is being used, so that it is data and not humans that become exploited. We argue that we must recognize the importance of information relationships in our networked, data-driven society. There exist substantial incentives already for digital intermediaries to build trust. But when incentives and markets fail, the obligation for trust-promotion must fall to law and policy. The first-best privacy future will remain one in which privacy is safeguarded by law, in addition to private ordering and self-help.
'Contracting Over Privacy: Introduction' by Omri Ben-Shahar and Lior Strahilevitz in (2016) 43(2) Journal of Legal Studies introduces
papers presented at the symposium Contracting over Privacy, which took place at the Coase-Sandor Institute for Law and Economics at the University of Chicago in fall 2015. The essay highlights a quiet legal transformation whereby the entire area of data privacy law has been subsumed by consumer contract law. It offers a research agenda for privacy law based on the contracting-over-privacy paradigm.
The authors comment
What are the legal implications of the classification of privacy notices as enforceable consumer contracts? For firms, the contractual nature of privacy notices ensures two beneficial functions. First, privacy notices are deployed to shield firms against liability for data privacy practices that, absent consumer consent, would violate privacy laws. For example, absent consent, Gmail’s practice of scanning contents of users’ e-mail messages would be a violation of the Wiretap Act, and Facebook’s practice of identifying users in uploaded photos would be a violation of state privacy laws. The contractual status of privacy notices means that users grant consent to these practices and thus provide firms a critical safe harbor.
The second function that privacy notices perform is the assurance for consumers that some uses of the data, which are otherwise permissible even without consent, would not occur. For example, firms and websites may keep logs of customers’ activity, but they can promise in their privacy notices not to do so. If privacy notices are contracts, such promises are binding, and their breach would be actionable. Moreover, the FTC can (and does) treat breaches of these promises as deceptive trade practices. Avowing such potential liability is a credible way for firms to entice hesitant consumers to engage with them. Firms dealing with sensitive content, like adult websites, indeed make explicit and clear promises to limit data sharing with third parties, and cloud-computing sites make explicit promises to follow stringent data security standards (Marotta-Wurgler 2016).
The contractual nature of privacy notices has significant implications for lawmakers working to design statutory privacy protections. The first implication is for the design of default rules. If statutory privacy rights are merely default rules, lawmakers should anticipate wholesale opt outs. Firms that develop business models that are constrained by statutory privacy rules would post privacy notices that effectively override these rules.
The powerful incentives of firms to induce their customers to give up their privacy rights also suggests that the choice between opt-in and opt-out schemes is of less importance than people usually assume. Opt-in schemes are thought to be more protective, because they require firms to get consumers’ affirmative consent to override the pro-consumer status quo. Opt-out schemes, by contrast, put the burden on consumers to initiate the exit from the pro-business status quo. Recent FCC regulations, for example, present the shift to an opt-in regime as a meaningful step toward more privacy protection, as this regime requires consumers’ explicit consent before collecting sensitive data such as geographical location or financial information. But firms are very good at getting consumers to opt in when doing so furthers the businesses interest (Willis 2013), and businesses are able to ask consumers repeatedly to change their minds if they initially resist information sharing. If indeed firms elicit such consumer consent with great ease, the opt-in framework makes little difference.
Once again, consumers may so easily agree to opt in, or fail to opt out, because of lack of information. Informed consumers might refuse to opt in or might initiate their own opt outs. These consumers would walk away from firms that refuse to provide the statutory privacy protections that they demand. Uninformed consumers, by contrast, would stick with any default rule. In such an environment of imperfect information, designing optimal default rules has to account for two separate concerns. First, it has to recognize that there are consumers who do care and who would seek to opt out of an undesirable default rule. For some, the default rule could be insufficiently protective, and they would look for more protection. For others, it would be too protective, and they would prefer to waive the protection for a price discount. These opt outs create transactions costs (the cost of becoming informed about the default rule as well as the cost of contracting around it), and a well-designed default rule has to minimize such costs. But the design of the default rule has to recognize, in addition, that many consumers would remain uninformed about the default rule and refrain from opting out, regardless of its content. For this group the default rule is sticky, and it ought to be designed with an eye to maximizing the value of the transaction. This is a general insight into the optimal design of default rules in consumer contracts: it has to meet two criteria—minimizing the cost of opt outs and maximizing the value of transactions when opt outs do not occur (Bar-Gill and Ben-Shahar 2016).
An additional implication of the contractual nature of privacy notices is the role of disclosures. Contracts over privacy—like any other consumer standard-form contract—are often long and complex. Is there a way to make such contracts simpler? Can the law require firms to present consumers pared-down versions of these privacy notices that would effectively inform consumers of the privacy risks? These questions have risen to the fore of consumer protection law in many areas, as regulators and commentators spend much effort to design simpler, smarter, and user-friendlier disclosures. In the privacy area, the proposals to utilize best practices in the presentation of privacy notices have been widely embraced, and more radical suggestions to use “nutrition facts”–type warning boxes are also intuitively advocated. But would such efforts have the desired effect on informing consumers’ choices? There is some evidence that the answer is no (Ben-Shahar and Chilton 2016) and that the use of the privacy notice to engender trust may be limited (Martin 2016).
In the end, then, the law and economics of contracting over privacy differs only in detail, but not in principle, from the law and economics of consumer contracts. Courts overwhelmingly treat them in the same way, and for good reasons. Consumers’ consent may be ill-informed, but regulatory alternatives might be worse. Consumer contract law has tools to combat overreaching by firms, and these tools—rather than superfluous notions of heightened disclosure or informed consent—ought to guide privacy protection. Such tools allow courts to strike down intolerable provisions, and in a separate article we propose to deny firms the advantages that they bury in cryptic boilerplate (Ben-Shahar and Strahilevitz 2016).
Accordingly, the papers from the symposium Contracting over Privacy collected in this issue examine general questions of contract formation, design, interpretation, and extracontractual norms and trust—all in the context of privacy. Privacy is not sui generis; it is instead a valuable laboratory to examine the evolution of contract law in the digital era.

Internships

The UK All Party Parliamentary Group on Social Mobility's report The class ceiling: Increasing access to the leading professions calls for a ban on unpaid internships in the professions and for changes to recruitment practices and higher education admissions.

The Group comments
it is clear that there is more to be done to widen access to the top professions in our country. Research shows that the UK’s top professions remain disproportionally occupied by alumni of private schools and Oxbridge.
While some positive steps have been taken, the overarching evidence from the inquiry and available statistics still show that students from disadvantaged backgrounds are less successful than their more advantaged counterparts in getting into the top professions. In business, nearly a third of the FTSE 100 chief executives educated in the UK were independently educated, and in law, nearly three quarters of the top judiciary were educated at independent schools. Yet across the country, only 7% of students attend private schools.
This pattern is mirrored, to varying degrees, in a number of different professions such as medicine, journalism and politics and the civil service. One of the most striking findings from the evidence sessions held by the inquiry was that despite the vast range of professions we spoke to, the challenges they faced in widening access were extremely similar. Many spoke of needing to tackle unconscious bias, the lack of contextual recruitment practices, and the fact that for some employers, they just did not receive applications from highly able applicants from disadvantaged backgrounds.
The last point exemplifies how it is not only a formal education which makes a difference to those from disadvantaged backgrounds, but also an informal education such as the learning of soft skills, along with having aspirations and role models to admire and emulate. Employers look for confidence, resilience, social skills and self-motivation in their employees, but for those who have had little to no exposure to extracurricular activities, work experience or mentoring, these skills can be difficult to acquire. A clear message from our evidence sessions was that we need to become better at inspiring our youngsters to reach their full potential, especially for those who start out at a disadvantage. Our professions should reflect our communities and our country, and employers themselves would ultimately benefit from harnessing the broader experience and potential of the country as a whole and not just established groups.
This business case for diversity was put forward by many who responded to this inquiry. By widening access to the professions, organisations benefit from an increased pool of skills and experience. Having a diverse workforce which encompasses many different talents, backgrounds and experiences can help create a dynamic organisation ready to face the challenges of the 21st Century. Businesses need to be measuring and monitoring the social background of their employees in the same way in which they monitor protected characteristics, and held accountable for how well they are doing in widening access.
The report states
Leading People 2016 found that almost a third of MPs in the 2015 intake were independently educated, as are nearly a third of those FTSE 100 chief executives that were educated in the UK. Of all High Court and Appeals Court judges, nearly three quarters attended private schools, as did over half of the top 100 news journalists and over two-thirds of British Oscar winners. This pattern is repeated, to varying degrees, across a host of other professions.
It is not only the very top jobs where an advantage to the privately educated exists. Research by the Bridge Group recently noted that “73% of those who came from the most advantaged backgrounds before Higher Education were in the most advantaged occupation groups six months after graduating in 2012/13. 67% of those from less advantaged backgrounds were in the most advantaged occupation groups”, a gap of 6 percentage points.
This is reinforced by research the Sutton Trust published in partnership with upReach in 2015, which found that, three and a half years after graduation, private school graduates in top jobs earn £4,500 more than their state school counterparts. While half of this pay difference can be explained by the type of higher education institution attended or prior academic achievement, the other half cannot be explained by educational factors.
Over recent years, we have seen a greater focus on diversity in the professions, with an improvement in the number of women appointed to boards at FTSE 100 companies, for example. The Coalition Government set up a Social Mobility Business Compact to encourage employers to be more open to people from disadvantaged backgrounds. Recently the Civil Service announced it was reforming its recruitment process to encourage diversity, while many major companies have changed their admissions process and have set up programmes aiming to widen access.
The Group offers Recommendations to improve access to the professions
A strategic approach to social mobility should be developed
The issues preventing fair access to the leading professions require cross-sector leadership and real collaboration to solve. The government should develop a national social mobility strategy, linking the work of schools, universities and employers to build a real business case and practical plan for improving social mobility. In doing so, the government should identify champions and model initiatives in each of the most selective professions that can collaborate and share cross sector best practices, setting goals for each sector to meet.
Employers in ‘elite’ professions should take part in the Social Mobility Employer Index, being launched next year by the Social Mobility Foundation and the Social Mobility Commission.
Organisations should be required to report on all measures of the index to highlight how well they are doing in widening access. Once piloted, this should be rolled out to all organisations over a certain size and the index should be considered by companies as akin to diversity tracking and other protected characteristics. Employers should learn from what works in their own profession and from other sectors.
Financial barriers to accessing the professions should be minimised
There are significant barriers to accessing professions, particularly the most competitive and those that are mostly concentrated in London. The government should ban unpaid internships.
Employers need to review their work experience policies to ensure access is fair and transparent, ensuring that all posts are publicly advertised to allow a more diverse range of candidates to apply. After at most one month, interns should be paid the National (or London) Living Wage.
Employers should increase efforts to reduce the London-centric focus of recruitment, either by increasing regional recruitment or outreach, and at least fully cover travel reimbursement for any interviews or work experience placements. The Social Mobility Commission should continue to focus on social mobility by geography – to encourage the government and employers to create and support routes for social mobility in those areas that need it most.

Recruitment practices should be fair and transparent

Employers should ensure that they are doing more to encourage best practice with regards to widening access and are helping to break down the barriers graduates face when transitioning from higher education into employment.
Employers should adopt contextual recruitment practices that place attainment and successes achieved in the context of disadvantage, including underperforming schools and less advantaged neighbourhoods.
Employers should ensure that all internships are advertised publicly, and recruited based on merit and not on networks. They should also ensure that any work experience opportunities are advertised publicly, following best practice. Employers should be conscious of the impact of recruiting from a narrow pool of universities in the graduate ‘milk round’, and the social mix of institutions, building on the work already being done in some elite professions. Unconscious bias training for recruiters should also be considered.
UCAS and universities should consider how to modify the application system to allow for more post-qualification applications than are allowed by the current clearing system.
Careers advice for young people needs to be significantly improved
Good careers advice can be transformative for young people. It should be based on “what works”, so that young people know all the options available to them and what they would need to do to achieve them. Schools should learn from best practice on how to support pupils’ choices, and use their own destinations data to help inform their support. Employers should commit to offering careers support and partnerships that genuinely enhance social mobility. This could be by providing mentors and creating opportunities to raise awareness and aspirations of their professions.
Universities should ensure careers services are a core part of the university support system and, in particular, target proven interventions at disadvantaged students to improve their awareness of career opportunities.
The Government should do more to encourage education in later life and lifelong learning so that people of all ages have access to education throughout their lives. They can do this through encouraging more people to take up postgraduate/part-time study loans and by advocating the benefits of education in later life.
Aspirations, soft skills and extra-curricular activities
Schools should encourage pupils to develop skills beyond their core curriculum that are keenly sought after by employers, such as resilience, confidence, social skills and self-motivation. Employers should pro-actively work with schools and universities to help teach the skills that are most sought after in the workplace.
Schools should actively identify young people who could most benefit from mentoring support from charities and employers.
Schools should also raise aspirations by encouraging reading for pleasure, providing educational trips and ensuring that they are offering out-of-school studying opportunities, sport and arts provision for disadvantaged students at all stages of education.
Schools should also encourage pupils to take up volunteering or get involved in social action to help build the skills that universities and employers identify as attractive.
It goes on to make Sector-specific recommendations
Throughout this enquiry, evidence was received from several professional sectors. Some specific recommendations for these sectors are below but should be considered in all sectors, where applicable.
Politics and the civil service
Political parties should actively use contextual information when recruiting employees and always pay interns the living wage. This could set an example to other professions and encourage people from non-traditional backgrounds to get more involved in politics. The socio-economic background information of staff should be monitored and reviewed on an anonymous basis.
MPs and Lords should support the Speaker's Parliamentary Placement Scheme to expand wherever possible. MPs should look to draw up shortlists for applications where 50% of candidates are from the local area. This would help to combat issues around networking and would allow the makeup of MPs' staff to reflect that of the local population. The Civil Service should ensure that all departments collaborate so that the image of working in the Civil Service is more open and not intimidating. The Civil Service should look specifically at progression, performance, and pay, to lead by example for other professions. The same rigour on social mobility should be applied to the rest of civil service recruitment as is currently applied to the Fast Stream.
Medicine
Universities should contextualise admissions to study medicine, recognising that academic ability is just one crucial part of being a successful doctor. This should build on innovative schemes, such as the ‘foundation year’ schemes already underway at some medical schools.
Work experience opportunities for school students should be coordinated to ensure all students, regardless of where they live and their personal networks, can get that crucial experience. An effort should be made to encourage pupils to take an interest in medicine earlier in their academic lives. This could be done, in part, by schools and medical colleges working together to expose pupils to the possibilities of studying medicine.
Law, finance and professional services
Established professional bodies should drive the social mobility agenda in law, finance and accountancy. Where possible, initiatives to improve social mobility should be coordinated to ensure they can have significant impact, where it is most needed. Employers should ban all unpaid internships and need to review their work experience policies to ensure access is fair and transparent.
All firms should undertake awareness-raising activities to ensure that young people, particularly those from disadvantaged backgrounds, are aware of the opportunities to join their profession and the requirements.
Arts and media
Building on the success of the BRIT School in London, other schools and colleges should encourage young people to develop their skills in creative pursuits, regardless of background. The business case for having more diverse groups of people, in this case particularly those from different socio-economic backgrounds, needs to be developed in both the arts and the media.
The Government should ban all unpaid internships, as previously stated, and employers need to review their work experience policies to ensure access is fair and transparent, ensuring that all posts are publicly advertised to allow a more diverse range of candidates to apply.
The government should provide proper support and funding for local arts projects, some of which could be done as part of the pupil premium scheme, through which lower income families could purchase additional educational support for pupils, such as theatre visits and other cultural activities.
In discussing internships the Group comments
Recruiters often favour experience as much as aptitude, which the disadvantaged have least opportunity to gain
A Highfliers analysis identified that nearly 30% of accounting and professional services vacancies, over 30% of consulting vacancies, over 55% of law vacancies and over 50% of banking and finance vacancies are filled by graduates who have already worked for the employer. Highfliers found that nearly 80% of vacancies specifically in investment banking were filled by those who had already worked there, compared with less than 10% of roles in the public sector. This suggests both that work experience is crucial for entry into the most elite professions and that recruiters favour those who have already had experience with their organisation. The prevalence of unpaid internships has been a widely acknowledged social mobility issue. In 2014, the Sutton Trust found that 31% of university graduates working as interns were doing so for no pay. The Social Mobility Commission found that 63% of cultural and creative, 56% of media-related, and 42% of financial and professional services internships advertised on the Graduate Talent Pool website were unpaid. The Sutton Trust said that the cost of a six-month internship in Manchester could set back an intern a minimum of £4,728 (£788 a month), excluding transport costs which are usually paid by the employer.
This inquiry found this trend to be particularly acute in the media. The National Council for the Training of Journalists (NCTJ) said that the extensive use of internships, the majority unpaid, as a recruitment mechanism adds to the difficulty of entry into journalism for those who cannot rely on family support. In its written submission, the National Union of Journalists (NUJ) went further and said that ‘unpaid internships have become almost institutionalised in the media’ and inevitably disadvantage those who are unable to work for free.
In relation to  Qualification bias the Group notes
Top professions favour Russell Group degrees and/or post-graduate degrees and so are dominated by the most affluent groups.
One of the most common issues the inquiry heard about was the practice of leading professions recruiting from a narrow range of elite universities, mostly in the Russell Group, in which people from disadvantaged backgrounds are underrepresented (see section 2). The Social Mobility Commission has identified that top employers are far more likely to recruit by visiting universities with a low proportion of students from disadvantaged backgrounds. For instance, in 2015, the Law Society found that ‘the type of university attended is one of the most important elements to factor into a person’s chance to receive a job offer from top law firms’.
On top of this, prohibitively expensive post-graduate degrees or professional qualifications are also required to enter many leading professions. This is true in medicine, where costs can continue after graduation for further study. The British Medical Journal has estimated that in England, a doctor can graduate with between £64,000 and £82,000 debt. David Morley from Allen and Overy told the inquiry non-law graduates require two years of law school and his firm provides considerable financial assistance to trainee recruits (eg paying law school fees) including a relatively small number of bursaries to support some students from less advantaged backgrounds with the costs of going to university.
The Law Society estimates that it costs £25,000-£50,000 to qualify as a solicitor, while the President of the Bar Council said that qualifying as a Barrister may cost up to £127,000. In the media, the NCTJ said there is a requirement for many new journalists to have postgraduate degrees, which are often self-financed, meaning young people frequently need financial support to enter. This is supported by a report by the Reuters Institute for the Study of Journalism, which found that of those journalists who began their careers in 2013, 2014 and 2015, 98% had a bachelor’s degree and 36% a master’s.
Leadership and confidence traits
Employers want recruits to show leadership qualities, yet people from disadvantaged backgrounds lack leaders in their lives as examples to emulate.
There is an entire literature on ‘leadership in business’ and there is a widely held assumption that leading employers are looking for ‘natural leaders’ and their assumed associated attributes. The Social Mobility Commission has noted how many firms use ‘competency or strengths based frameworks to seek evidence for skills such as leadership and team work’, or to identify ‘aptitudes such as resilience, drive, enthusiasm and adaptability’. The inquiry heard how leadership characteristics are often associated with confidence. In its submission, Brightside said that the issue of access into leading professions is linked to confidence as well as the educational attainment gap. Dan Jarvis MP, champion of the Speaker’s Parliamentary Placement Scheme that offers paid internships in Parliament to people from disadvantaged backgrounds, said the scheme gives its beneficiaries the confidence to interact with senior parliamentarians and policymakers, which is important for their later careers. Archie Brixton said the support he received from upReach built his confidence to commence a career in finance. The link between confidence and career progress has been quantified by the Sutton Trust in a report that analysed the BBC’s ‘Big Personality Test’ to identify the links between personality traits and career earnings. The report found that highly extroverted people – those who were more confident, sociable or assertive – had a 25% higher chance of being in a high-earning job (over £40,000 per year), with the odds being higher for men than women. It also found that personality and aspirations were strongly affected by social background: people from more advantaged backgrounds (those whose parents had professional jobs) had significantly higher levels of extroversion and very substantially higher economic aspirations.

15 January 2017

Biometrics

'Biometric Cyberintelligence and the Posse Comitatus Act' (Washington and Lee Legal Studies Paper No. 2016-14) by Margaret Hu is described as addressing 
the rapid growth of what the military and intelligence community refer to as “biometric-enabled intelligence.” This newly emerging intelligence system is reliant upon biometric databases — for example, digitalized collections of scanned fingerprints and irises, digital photographs for facial recognition technology, and DNA. This Article introduces the term “biometric cyberintelligence” to describe more accurately the manner in which this new tool is dependent upon cybersurveillance and big data’s mass-integrative systems.
To better understand the legal implications of biometric cyberintelligence, this Article advances three primary claims. First, it argues that the technological and programmatic architecture of biometric cyberintelligence can be embedded within the data collection and data analysis protocols of civilian governance and domestic law enforcement activities. Next, to demonstrate the potential lethality of this emerging technological and policy development, this Article illustrates how biometric data may be increasingly integrated into drone weaponry, including targeted killing and drone strike technologies. Finally, this Article argues that the Posse Comitatus Act of 1878, designed to limit the deployment of federal military resources in the service of domestic policies, may be impotent in light of the growth of cybersurveillance.
Maintaining strict separation of data between military and intelligence operations on the one hand, and civilian, homeland security, and domestic law enforcement agencies on the other hand, is increasingly difficult as cooperative data sharing increases. The Posse Comitatus Act and constitutional protections such as the Fourth Amendment’s privacy jurisprudence, therefore, must be reinforced in the digital age in order to appropriately protect citizens from militarized cyberpolicing, i.e., the blending of military/foreign intelligence tools and operations and homeland security/domestic law enforcement tools and operations. The Article concludes that, as of yet, neither statutory nor constitutional protections have evolved sufficiently to cover the unprecedented surveillance harms posed by the migration of biometric cyberintelligence from foreign to domestic use.

14 January 2017

Robot Charter

The Artificial Intelligence report by the European Parliament's Committee on Legal Affairs, noted in the preceding post, features a draft framework
Definition and classification of 'smart robots'
A common European definition for 'smart' autonomous robots should be established, where appropriate including definitions of its subcategories, taking into consideration the following characteristics:
The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
The capacity to learn through experience and interaction
The form of the robot’s physical support
The capacity to adapt its behaviours and actions to its environment
Registration of 'smart robots'
For the purposes of traceability and in order to facilitate the implementation of further recommendations, a system of registration of advanced robots should be introduced, based on the criteria established for the classification of robots. The system of registration and the register should be Union-wide, covering the internal market, and should be managed by an EU Agency for Robotics and Artificial Intelligence.
Civil law liability
Any chosen legal solution applied to robots' liability in cases other than those of damage to property should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party on the sole grounds that damage is caused by a non-human agent. The future legislative instrument should provide for the application as a rule of strict liability to damage caused by 'smart robots', requiring only proof of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party. An obligatory insurance scheme, which could be based on the obligation of the producer to take out insurance for the autonomous robots it produces, should be established. The insurance system should be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists.
Interoperability, access to code and intellectual property rights
The interoperability of network-connected autonomous robots that interact with each other should be ensured. Access to the source code should be available when needed in order to investigate accidents and damage caused by 'smart robots'.  Criteria for ‘intellectual creation’ for copyrightable works produced by computers or robots should be drawn up.
Disclosure of use of robots and artificial intelligence by undertakings
Undertakings should be obliged to disclose:
– the number of 'smart robots' they use,
– the savings made in social security contributions through the use of robotics in place of human personnel,
– an evaluation of the amount and proportion of the revenue of the undertaking that results from the use of robotics and artificial intelligence.
The report also features a Charter of Robotics
The proposed code of ethical conduct in the field of robotics will lay the groundwork for the identification, oversight and compliance with fundamental ethical principles from the design and development phase. The framework must be designed in a reflective manner that allows individual adjustments to be made on a case-by-case basis in order to assess whether a given behaviour is right or wrong in a given situation and to take decisions in accordance with a pre-set hierarchy of values. The code should not replace the need to tackle all major legal challenges in this field, but should have a complementary function. It will, rather, facilitate the ethical categorisation of robotics, strengthen the responsible innovation efforts in this field and address public concerns. Special emphasis should be placed on the research and development phases of the relevant technological trajectory (design process, ethics review, audit controls, etc.). It should aim to address the need for compliance by researchers, practitioners, users and designers with ethical standards, but also introduce a procedure for devising a way to resolve the relevant ethical dilemmas and to allow these systems to function in an ethically responsible manner.
The Code of Ethical Conduct for Robotics Engineers has the following Preamble
• The Code of Conduct invites all researchers and designers to act responsibly and with absolute consideration for the need to respect the dignity, privacy and safety of humans.
• The Code asks for close cooperation among all disciplines in order to ensure that robotics research is undertaken in the European Union in a safe, ethical and effective manner.
• The Code of Conduct covers all research and development activities in the field of robotics.
• The Code of Conduct is voluntary and offers a set of general principles and guidelines for actions to be taken by all stakeholders.
• Robotics research funding bodies, research organisations, researchers and ethics committees are encouraged to consider, at the earliest stages, the future implications of the technologies or objects being researched and to develop a culture of responsibility with a view to the challenges and opportunities that may arise in the future.
• Public and private robotics research funding bodies should request that a risk assessment be performed and presented along with each submission of a proposal for funding for robotics research.
Such a code should consider humans, not robots, as the responsible agents. Researchers in the field of robotics should commit themselves to the highest ethical and professional conduct and abide by the following principles:
Beneficence – robots should act in the best interests of humans;
Non-maleficence – the doctrine of ‘first, do no harm’, whereby robots should not harm a human;
Autonomy – the capacity to make an informed, un-coerced decision about the terms of interaction with robots;
Justice – fair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in particular.
Fundamental Rights
Robotics research activities should respect fundamental rights and be conducted in the interests of the well-being of individuals and society in their design, implementation, dissemination and use.
Human dignity – both physical and psychological – is always to be respected.
Precaution
Robotics research activities should be conducted in accordance with the precautionary principle, anticipating potential safety impacts of outcomes and taking due precautions, proportional to the level of protection, while encouraging progress for the benefit of society and the environment.
Inclusiveness
Robotics engineers guarantee transparency and respect for the legitimate right of access to information by all stakeholders. Inclusiveness allows for participation in decision-making processes by all stakeholders involved in or concerned by robotics research activities.
Accountability
Robotics engineers should remain accountable for the social, environmental and human health impacts that robotics may impose on present and future generations.
Safety
Robot designers should consider and respect people’s physical wellbeing, safety, health and rights. A robotics engineer must preserve human wellbeing, while also respecting human rights, and disclose promptly factors that might endanger the public or the environment.
Reversibility
Reversibility, being a necessary condition of controllability, is a fundamental concept when programming robots to behave safely and reliably. A reversibility model tells the robot which actions are reversible and how to reverse them if they are. The ability to undo the last action or a sequence of actions allows users to undo undesired actions and get back to the ‘good’ stage of their work.
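As an illustrative aside (this is not part of the Parliament's text), a 'reversibility model' of the kind described above can be sketched in code as a log of actions paired with their inverses, so that the last action, or a sequence of actions, can be rolled back to a known good state. The short Python sketch below uses hypothetical class and action names purely to show the idea.

class ReversibleAction:
    def __init__(self, name, do, undo=None):
        self.name = name
        self.do = do      # callable that performs the action
        self.undo = undo  # callable that reverses it, or None if irreversible

    @property
    def reversible(self):
        return self.undo is not None


class ReversibilityModel:
    """Keeps a history of executed actions and can roll them back."""

    def __init__(self):
        self.history = []

    def execute(self, action):
        action.do()
        self.history.append(action)

    def undo_last(self):
        if not self.history:
            return
        action = self.history[-1]
        if not action.reversible:
            # An irreversible action cannot be undone; surface that to the operator.
            raise RuntimeError(f"Action '{action.name}' is not reversible")
        self.history.pop()
        action.undo()

    def undo_all(self):
        while self.history:
            self.undo_last()


if __name__ == "__main__":
    state = {"gripper": "open"}
    close_gripper = ReversibleAction(
        "close gripper",
        do=lambda: state.update(gripper="closed"),
        undo=lambda: state.update(gripper="open"),
    )
    model = ReversibilityModel()
    model.execute(close_gripper)
    print(state)   # {'gripper': 'closed'}
    model.undo_last()
    print(state)   # back to the 'good' stage: {'gripper': 'open'}

A real robotic system would also need to handle actions that are only partially reversible and to record why an irreversible action was permitted, but the basic bookkeeping is the same.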
Privacy
The right to privacy must always be respected. A robotics engineer should ensure that private information is kept secure and only used appropriately. Moreover, a robotics engineer should guarantee that individuals are not personally identifiable, aside from exceptional circumstances and then only with clear, unambiguous informed consent. Human informed consent should be pursued and obtained prior to any man-machine interaction. As such, robotics designers have a responsibility to develop and follow procedures for valid consent, confidentiality, anonymity, fair treatment and due process. Designers will comply with any requests that any related data be destroyed, and removed from any datasets.
Maximising benefit and minimising harm
Researchers should seek to maximise the benefits of their work at all stages, from inception through to dissemination. Harm to research participants or human subjects must be avoided. Where risks arise as an unavoidable and integral element of the research, robust risk assessment and management protocols should be developed and complied with. Normally, the risk of harm should be no greater than that encountered in ordinary life, i.e. people should not be exposed to risks greater than or additional to those to which they are exposed in their normal lifestyles. The operation of a robotics system should always be based on a thorough risk assessment process, which should be informed by the precautionary and proportionality principles.
The associated Code for Research Ethics Committees (RECs) is
Principles
Independence
The ethics review process should be independent of the research itself. This principle highlights the need to avoid conflicts of interest between researchers and those reviewing the ethics protocol, and between reviewers and organisational governance structures.
Competence
The ethics review process should be conducted by reviewers with appropriate expertise, taking into account the need for careful consideration of the range of membership and ethics-specific training of RECs.
Transparency and accountability
The review process should be accountable and open to scrutiny. RECs need to recognise their responsibilities and to be appropriately located within organisational structures that give transparency to the REC operation and procedures to maintain and review standards.
The role of a Research Ethics Committee
A REC is normally responsible for reviewing all research involving human participants conducted by individuals employed within or by the institution concerned; ensuring that ethics review is independent, competent and timely; protecting the dignity, rights and welfare of research participants; considering the safety of the researcher(s); considering the legitimate interests of other stakeholders; making informed judgements of the scientific merit of proposals; and making informed recommendations to the researcher if the proposal is found to be wanting in some respect.
The constitution of a Research Ethics Committee
A REC should normally: be multidisciplinary; include both men and women; be comprised of members with a broad experience of and expertise in the area of robotics research. The appointment mechanism should ensure that the committee members provide an appropriate balance of scientific expertise, philosophical, legal or ethical backgrounds, and lay views, and that they include at least one member with specialist knowledge in ethics, users of specialist health, education or social services where these are the focus of research activities, and individuals with specific methodological expertise relevant to the research they review; and they must be so constituted that conflicts of interest are avoided.
Monitoring
All research organisations should establish appropriate procedures to monitor the conduct of research which has received ethics approval until it is completed, and to ensure continuing review where the research design anticipates possible changes over time that might need to be addressed. Monitoring should be proportionate to the nature and degree of risk associated with the research. Where a REC considers that a monitoring report raises significant concerns about the ethical conduct of the study, it should request a full and detailed account of the research for full ethics review. Where it is judged that a study is being conducted in a way that is unethical, it should consider the withdrawal of its approval and require that the research should be suspended or discontinued.
In relation to Licensing of designers - 
• You should take into account the European values of dignity, freedom and justice before, during and after the process of design, development and delivery of such technologies, including the need not to harm, injure, deceive or exploit (vulnerable) users.
• You should introduce trustworthy system design principles across all aspects of a robot’s operation, for both hardware and software design, and for any data processing on or off the platform for security purposes.
• You should introduce privacy by design features so as to ensure that private information is kept secure and only used appropriately.
• You should integrate obvious opt-out mechanisms (kill switches) that should be consistent with reasonable design objectives.
• You should ensure that a robot operates in a way that is in accordance with local, national and international ethical and legal principles.
• You should ensure that the robot’s decision-making steps are amenable to reconstruction and traceability.
• You should ensure that maximal transparency is required in the programming of robotic systems, as well as predictability of robotic behaviour.
• You should analyse the predictability of a human-robot system by considering uncertainty in interpretation and action and possible robotic or human failures.
• You should develop tracing tools at the robot’s design stage. These tools will facilitate accounting and explanation of robotic behaviour, even if limited, at the various levels intended for experts, operators and users (a minimal illustrative sketch follows these licence lists).
• You should draw up design and evaluation protocols and join with potential users and stakeholders when evaluating the benefits and risks of robotics, including cognitive, psychological and environmental ones.
• You should ensure that robots are identifiable as robots when interacting with humans.
• You should safeguard the safety and health of those interacting and coming into contact with robots, given that robots as products should be designed using processes which ensure their safety and security. A robotics engineer must preserve human wellbeing while also respecting human rights, and may not deploy a robot without safeguarding the safety, efficacy and reversibility of the operation of the system.
• You should obtain a positive opinion from a Research Ethics Committee before testing a robot in a real environment or involving humans in its design and development procedures.
The Licence for Users is simpler -
• You are permitted to make use of a robot without risk or fear of physical or psychological harm.
• You should have the right to expect a robot to perform any task for which it has been explicitly designed.
• You should be aware that any robot may have perceptual, cognitive and actuation limitations.
• You should respect human frailty, both physical and psychological, and the emotional needs of humans.
• You should take the privacy rights of individuals into consideration, including the deactivation of video monitors during intimate procedures.
• You are not permitted to collect, use or disclose personal information without the explicit consent of the data subject.
• You are not permitted to use a robot in any way that contravenes ethical or legal principles and standards.
• You are not permitted to modify any robot to enable it to function as a weapon.
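By way of illustration only (nothing below appears in the draft report or its annex), the designer licence's demands for reconstruction and traceability of decision-making steps, tracing tools built in at the design stage, and an obvious opt-out mechanism could be read as something like the following minimal Python sketch. Every class, field name and threshold here is a hypothetical assumption.

```python
# Illustrative sketch only: one way a designer might approach the licence's
# traceability and opt-out ("kill switch") obligations. All names, fields
# and thresholds are hypothetical and are not taken from the draft report.
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List


@dataclass
class DecisionRecord:
    """A single decision step that can later be reconstructed by an auditor."""
    timestamp: float
    sensor_inputs: dict
    rule_applied: str
    action_taken: str


@dataclass
class TraceableController:
    """Wraps a robot's decision loop so that its steps remain traceable."""
    log: List[DecisionRecord] = field(default_factory=list)
    halted: bool = False

    def kill_switch(self) -> None:
        # Obvious opt-out mechanism: stop issuing actions immediately.
        self.halted = True

    def decide(self, sensor_inputs: dict) -> str:
        # A toy decision rule; a real controller would be far richer.
        if self.halted:
            rule, action = "kill_switch_engaged", "stop"
        elif sensor_inputs.get("obstacle_distance_m", 999.0) < 0.5:
            rule, action = "minimum_safe_distance", "stop"
        else:
            rule, action = "default_navigation", "proceed"
        self.log.append(DecisionRecord(time.time(), sensor_inputs, rule, action))
        return action

    def export_trace(self) -> str:
        # Machine-readable trace for experts, operators and users.
        return json.dumps([asdict(record) for record in self.log], indent=2)


if __name__ == "__main__":
    controller = TraceableController()
    controller.decide({"obstacle_distance_m": 2.0})
    controller.decide({"obstacle_distance_m": 0.3})
    controller.kill_switch()
    controller.decide({"obstacle_distance_m": 2.0})
    print(controller.export_trace())
```

Nothing in the sketch is mandated by the report; the point is simply that traceable decision-making and an opt-out mechanism are modest engineering requirements rather than exotic ones.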


EU rights for robots?

The European Parliament's Committee on Legal Affairs, in a draft report on Civil Law Rules on Robotics, has called for a new legal framework for artificial intelligence, with recommendations that are unlikely to be embraced by the full Parliament later in the year.
The draft report comments -  
Introduction
A. whereas from Mary Shelley's Frankenstein's Monster to the classical myth of Pygmalion, through the story of Prague's Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features;
B. whereas now that humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence ("AI") seem poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider all its implications;
C. whereas between 2010 and 2014 the average increase in sales of robots stood at 17% per year and in 2014 sales rose by 29%, the highest year-on-year increase ever, with automotive parts suppliers and the electrical/electronics industry being the main drivers of the growth; whereas annual patent filings for robotics technology have tripled over the last decade;
D. whereas in the short to medium term robotics and AI promise to bring benefits of efficiency and savings, not only in production and commerce, but also in areas such as transport, medical care, education and farming, while making it possible to avoid exposing humans to dangerous conditions, such as those faced when cleaning up toxically polluted sites; whereas in the longer term there is potential for virtually unbounded prosperity;
E. whereas at the same time the development of robotics and AI may result in a large part of the work now done by humans being taken over by robots, so raising concerns about the future of employment and the viability of social security systems if the current basis of taxation is maintained, creating the potential for increased inequality in the distribution of wealth and influence;
F. whereas the causes for concern also include physical safety, for example when a robot's code proves fallible, and the potential consequences of system failure or hacking of connected robots and robotic systems at a time when increasingly autonomous applications come into use or are impending whether it be in relation to cars and drones or to care robots and robots used for maintaining public order and policing;
G. whereas many basic questions of data protection have already become the subject of consideration in the general contexts of the internet and e-commerce, but whereas further aspects of data ownership and the protection of personal data and privacy might still need to be addressed, given that applications and appliances will communicate with each other and with databases without humans intervening or possibly without their even being aware of what is going on;
H. whereas the 'soft impacts' on human dignity may be difficult to estimate, but will still need to be considered if and when robots replace human care and companionship, and whereas questions of human dignity also can arise in the context of 'repairing' or enhancing human beings;
I. whereas ultimately there is a possibility that within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species;
J. whereas several foreign jurisdictions, such as the US, Japan, China and South Korea, are considering, and to a certain extent have already taken, regulatory action with respect to robotics and AI, and whereas some Member States have also started to reflect on possible legislative changes in order to take account of emerging applications of such technologies;
K. whereas European industry could benefit from a coherent approach to regulation at European level, providing predictable and sufficiently clear conditions under which enterprises could develop applications and plan their business models on a European scale while ensuring that the EU and its Member States maintain control over the regulatory standards to be set, so as not to be forced to adopt and live with standards set by others, that is to say the third states which are also at the forefront of the development of robotics and AI;
General principles
L. whereas, until such time, if ever, that robots become or are made self-aware, Asimov's Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;
M. whereas, nevertheless, a series of rules, governing in particular liability and ethics and  reflecting the intrinsically European and humanistic values that characterise Europe's contribution to society, are necessary;
N. whereas the European Union could play an essential role in establishing basic ethical principles to be respected in the development, programming and use of robots and AI and in the incorporation of such principles into European regulations and codes of conduct, with the aim of shaping the technological revolution so that it serves humanity and so that the benefits of advanced robotics and AI are broadly shared, while as far as possible avoiding potential pitfalls;
O. whereas a gradualist, pragmatic and cautious approach of the type advocated by Jean Monnet should be adopted for Europe;
P. whereas it is appropriate, in view of the stage reached in the development of robotics and AI, to start with civil liability issues and to consider whether a strict liability approach based on who is best placed to insure is not the best starting point;
Liability
Q. whereas, thanks to the impressive technological advances of the last decade, not only are today's robots able to perform activities which used to be typically and exclusively human, but the development of autonomous and cognitive features – e.g. the ability to learn from experience and take independent decisions – has made them more and more similar to agents that interact with their environment and are able to alter it significantly; whereas, in such a context, the legal responsibility arising from a robot’s harmful action becomes a crucial issue;
R. whereas a robot's autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence; whereas this autonomy is of a purely technological nature and its degree depends on how sophisticated a robot's interaction with its environment has been designed to be;
S. whereas the more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc.); whereas this, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held – partly or entirely – responsible for its acts or omissions; whereas, as a consequence, it becomes more and more urgent to address the fundamental question of whether robots should possess a legal status;
T. whereas, ultimately, robots' autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage;
U. whereas under the current legal framework robots cannot be held liable per se for acts or omissions that cause damage to third parties; whereas the existing rules on liability cover cases where the cause of the robot’s act or omission can be traced back to a specific human agent such as the manufacturer, the owner or the user and where that agent could have foreseen and avoided the robot’s harmful behaviour; whereas, in addition, manufacturers, owners or users could be held strictly liable for acts or omissions of a robot if, for example, the robot were categorised as a dangerous object or if it fell within product liability rules;
V. whereas in the scenario where a robot can take autonomous decisions, the traditional rules will not suffice to activate a robot's liability, since they would not make it possible to identify the party responsible for providing compensation and to require this party to make good the damage it has caused;
X. whereas the shortcomings of the current legal framework are apparent in the area of contractual liability insofar as machines designed to choose their counterparts, negotiate contractual terms, conclude contracts and decide whether and how to implement them make the traditional rules inapplicable, which highlights the need for new, more up-to-date ones;
Y. whereas, as regards non-contractual liability, Directive 85/374/EEC can only cover damage caused by a robot's manufacturing defects and on condition that the injured person is able to prove the actual damage, the defect in the product and the causal relationship between damage and defect (strict liability or liability without fault);
Z. whereas, notwithstanding the scope of the Directive 85/374/EEC, the current legal framework would not be sufficient to cover the damage caused by the new generation of robots, insofar as they can be equipped with adaptive and learning abilities entailing a certain degree of unpredictability in their behaviour, since these robots would autonomously learn from their own, variable experience and interact with their environment in a unique and unforeseeable manner;
The report, under 'General principles concerning the development of robotics and artificial intelligence for civil use', accordingly (points 1, 2 and 31 below are illustrated in a rough sketch after the resolution text) -
1. Calls on the Commission to propose a common European definition of smart autonomous robots and their subcategories by taking into consideration the following characteristics of a smart robot:
• acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data;
• is self-learning (optional criterion);
• has a physical support;
• adapts its behaviours and actions to its environment;
2. Considers that a system of registration of advanced robots should be introduced, and calls on the Commission to establish criteria for the classification of robots with a view to identifying the robots that would need to be registered;
3. Underlines that many robotic applications are still in an experimental phase; welcomes the fact that more and more research projects are being funded with national and European money; calls on the Commission and the Member States to strengthen financial instruments for research projects in robotics and ICT; emphasises that sufficient resources need to be devoted to the search for solutions to the social and ethical challenges that the technological development and its applications raise;
4. Asks the Commission to foster research programmes that include a mechanism for short-term verification of the outcomes in order to understand what real risks and opportunities are associated with the dissemination of these technologies; calls on the Commission to combine all its effort in order to guarantee a smoother transition for these technologies from research to commercialisation on the market;
Ethical principles
5. Notes that the potential for empowerment through the use of robotics is nuanced by a set of tensions or risks relating to human safety, privacy, integrity, dignity, autonomy and data ownership;
6. Considers that a guiding ethical framework for the design, production and use of robots is needed to complement the legal recommendations of the report and the existing national and Union acquis; proposes, in the annex to the resolution, a framework in the form of a charter consisting of a code of conduct for robotics engineers, of a code for research ethics committees when reviewing robotics protocols and of model licences for designers and users;
7. Points out that the guiding ethical framework should be based on the principles of beneficence, non-maleficence and autonomy, as well as on the principles enshrined in the EU Charter of Fundamental Rights, such as human dignity and human rights, equality, justice and equity, non-discrimination and non-stigmatisation, autonomy and individual responsibility, informed consent, privacy and social responsibility, and on existing ethical practices and codes;
A European Agency
8. Calls for the creation of a European Agency for robotics and artificial intelligence in order to provide the technical, ethical and regulatory expertise needed to support the relevant public actors, at both EU and Member State level, in their efforts to ensure a timely and well-informed response to the new opportunities and challenges arising from the technological development of robotics;
9. Considers that the potential of robotics use and the present investment dynamics justify the European Agency being equipped with a proper budget and being staffed with regulators and external technical and ethical experts dedicated to the cross-sectorial and multidisciplinary monitoring of robotics-based applications, identifying standards for best practice, and, where appropriate, recommending regulatory measures, defining new principles and addressing potential consumer protection issues and systematic challenges; asks the Commission and the European Agency to report to the European Parliament on the latest developments in robotics on an annual basis;
Intellectual property rights and the flow of data
10. Notes that there are no legal provisions that specifically apply to robotics, but that existing legal regimes and doctrines can be readily applied to robotics while some aspects appear to need specific consideration; calls on the Commission to come forward with a balanced approach to intellectual property rights when applied to hardware and software standards, and codes that protect innovation and at the same time foster innovation; calls on the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots;
11. Calls on the Commission and the Member States to ensure that, in the development of any EU policy on robotics, privacy and data protection guarantees are embedded in line with the principles of necessity and proportionality; calls, in this regard, on the Commission to foster the development of standards for the concepts of privacy by design and privacy by default, informed consent and encryption;
12. Points out that the use of personal data as a 'currency' with which services can be 'bought' raises new issues in need of clarification; stresses that the use of personal data as a 'currency' must not lead to a circumvention of the basic principles governing the right to privacy and data protection;
Standardisation, safety and security
13. Calls on the Commission to continue to work on the international harmonisation of technical standards, in particular together with the European Standardisation Organisations and the International Standardisation Organisation, in order to avoid fragmentation of the internal market and to meet consumers’ concerns; asks the Commission to analyse existing European legislation with a view to checking the need for adaption in light of the development of robotics and artificial intelligence;
14. Emphasises that testing robots in real-life scenarios is essential for the identification and assessment of the risks they might entail, as well as of their technological development beyond a pure experimental laboratory phase; underlines, in this regard, that testing of robots in real-life scenarios, in particular in cities and on roads, raises numerous problems and requires an effective monitoring mechanism; calls on the Commission to draw up uniform criteria across all Member States which individual Member States should use in order to identify areas where experiments with robots are permitted;
Autonomous vehicles
15. Considers that the automotive sector is in most urgent need of European and global rules to ensure the cross-border development of automated vehicles so as to fully exploit their economic potential and benefit from the positive effects of technological trends; emphasises that fragmented regulatory approaches would hinder implementation and   jeopardise European competitiveness; notes that although current private international law rules on traffic accidents applicable within the EU do not need urgent modification to accommodate the development of autonomous vehicles, simplifying the current dual system for defining applicable law (based on Regulation (EC) No 864/2007 of the European Parliament and of the Council and the 1971 Hague Convention on the law applicable to traffic accidents) would improve legal certainty and limit possibilities for forum shopping;
Care robots
16. Points out that human contact is one of the fundamental aspects of human care; believes that replacing the human factor with robots could dehumanise caring practices;
Medical robots
17. Underlines the importance of appropriate training and preparation for doctors and care assistants in order to secure the highest degree of professional competence possible, as well as to protect patients' health; underlines the need to define the minimum professional requirements that a surgeon must meet in order to be allowed to use surgical robots; emphasises the special importance of training for users to allow them to familiarise themselves with the technological requirements in this field; draws attention to the rising trend towards self-diagnosis using a mobile robot which makes diagnoses and might take over the role of a doctor;
Human repair and enhancement
18. Notes the great potential of robotics in the field of repairing and compensating for damaged organs and human functions, but also the complex questions raised in particular by the possibilities of human enhancement; asks for the establishment of committees on robot ethics in hospitals and other health care institutions tasked with considering and assisting in resolving unusual, complicated ethical problems involving issues that affect the care and treatment of patients; calls on the Commission and the Member States to develop guidelines to aid in the establishment and functioning of such committees;
Drones (RPAS)
19. Stresses the importance of a European framework for remotely piloted aircraft systems (RPAS) to protect the safety, security and privacy of EU citizens, and calls on the Commission for a follow-up to the recommendations of the European Parliament resolution of 29 October 2015 on safe use of remotely piloted aircraft systems (RPAS), known as unmanned aerial vehicles (UAVs), in the field of civil aviation;
20. Draws attention to the Commission's forecast that by 2020 Europe might be facing a shortage of up to 825000 ICT professionals and that 90% of jobs will require at least basic digital skills; welcomes the Commission’s initiative of proposing a roadmap for the possible use and revision of a Digital Competence framework and descriptors of Digital Competences for all levels of learners;
21. Considers that getting more young women interested in a digital career and placing more women in digital jobs would benefit the digital industry, women themselves and Europe's economy; calls on the Commission and the Member States to launch initiatives in order to support women in ICT and to boost their e-skills;
22. Calls on the Commission to start monitoring job trends more closely, with a special focus on the creation and loss of jobs in the different fields/areas of qualification in order to know in which fields jobs are being created and those in which jobs are being destroyed as a result of the increased use of robots;
23. Bearing in mind the effects that the development and deployment of robotics and AI might have on employment and, consequently, on the viability of the social security systems of the Member States, consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;
Liability
24. Considers that robots' civil liability is a crucial issue which needs to be addressed at EU level so as to ensure the same degree of transparency, consistency and legal certainty throughout the European Union for the benefit of consumers and businesses alike;
25. Asks the Commission to submit, on the basis of Article 114 of the Treaty on the Functioning of the European Union, a proposal for a legislative instrument on legal questions related to the development of robotics and artificial intelligence foreseeable in the next 10-15 years, following the detailed recommendations set out in the annex hereto; further calls on the Commission, once technological developments allow the possibility for robots whose degree of autonomy is higher than what is reasonably predictable at present to be developed, to propose an update of the relevant legislation in due time;
26. Considers that, whatever legal solution it applies to robots' liability in cases other than those of damage to property, the future legislative instrument should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party, on the sole grounds that damage is caused by a non-human agent;
27. Considers that the future legislative instrument should provide for the application of strict liability as a rule, thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party;
28. Considers that, in principle, once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be; notes, in particular, that skills resulting from 'education' given to a robot should not be confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot's harmful behaviour is actually due;
29. Points out that a possible solution to the complexity of allocating responsibility for damage caused by increasingly autonomous robots could be an obligatory insurance scheme, as is already the case, for instance, with cars; notes, nevertheless, that unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics could be based on the obligation of the producer to take out an insurance for the autonomous robots it produces;
30. Considers that, as is the case with the insurance of motor vehicles, such an insurance system could be supplemented by a fund in order to ensure that reparation can be made for damage in cases where no insurance cover exists; calls on the insurance industry to develop new products that are in line with the advances in robotics;
31. Calls on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore the implications of all possible legal solutions, such as:
a) establishing a compulsory insurance scheme whereby, similarly to what already happens with cars, producers or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots;
b) ensuring that a compensation fund would not only serve the purpose of guaranteeing compensation if the damage caused by a robot was not covered by an insurance – which would in any case remain its primary goal – but also that of allowing various financial operations in the interests of the robot, such as investments, donations or payments made to smart autonomous robots for their services, which could be transferred to the fund;
c) allowing the manufacturer, the programmer, the owner or the user to benefit from limited liability insofar as smart autonomous robots would be endowed with a compensation fund – to which all parties could contribute in varying proportions – and damage to property could only be claimed for within the limits of that fund, other types of damage not being subject to such limits;
d) deciding whether to create a general fund for all smart autonomous robots or to create an individual fund for each and every robot category, and whether a contribution should be paid as a one-off fee when placing the robot on the market or whether periodic contributions should be paid during the lifetime of the robot;
e) ensuring that the link between a robot and its fund would be made visible by an individual registration number appearing in a specific EU register, which would allow anyone interacting with the robot to be informed about the nature of the  fund, the limits of its liability in case of damage to property, the names and the functions of the contributors and all other relevant details;
f) creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;
International aspects
32. Notes the need also to consider amendments to international agreements such as the Vienna Convention on Road Traffic and the Hague Traffic Accident Convention;
33. Strongly encourages international cooperation in setting regulatory standards under the auspices of the United Nations;
34. Points out that the restrictions and conditions laid down in the Dual-Use Regulation on the trade in dual-use items – goods, software and technology that can be used for both civilian and military applications and/or can contribute to the proliferation of weapons of mass destruction – should apply to applications of robotics as well;
Final aspects
35. Requests the Commission to submit, on the basis of Article 225 of the Treaty on the Functioning of the European Union, a proposal for a directive on civil law rules on robotics, following the detailed recommendations set out in the annex hereto;
36. Confirms that the recommendations respect fundamental rights and the principle of subsidiarity;
37. Considers that the requested proposal will not have any financial implications;
38. Instructs its President to forward this resolution and the accompanying detailed recommendations to the Commission and the Council.
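As a purely illustrative aside, and again not part of the draft report, the definitional criteria in point 1, the registration scheme in point 2 and the fund-and-register linkage in point 31(e) lend themselves to being roughed out as simple data structures. In the Python sketch below every field name, identifier and figure is a hypothetical assumption.

```python
# Illustrative sketch only: a toy encoding of the draft report's proposed
# characteristics of a "smart autonomous robot" (point 1) and of a public
# register linking each robot to a compensation fund (points 2 and 31(e)).
# All names, identifiers and figures are hypothetical assumptions.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class SmartRobotProfile:
    acquires_autonomy_via_sensors_or_data_exchange: bool  # inter-connectivity
    is_self_learning: bool            # listed as an optional criterion
    has_physical_support: bool
    adapts_behaviour_to_environment: bool

    def meets_definition(self) -> bool:
        # Self-learning is optional in the draft report, so it is not required.
        return (self.acquires_autonomy_via_sensors_or_data_exchange
                and self.has_physical_support
                and self.adapts_behaviour_to_environment)


@dataclass
class RegisterEntry:
    registration_number: str          # the visible link between robot and fund
    fund_id: str                      # the compensation fund backing the robot
    liability_cap_eur: Optional[int]  # limit for damage-to-property claims
    contributors: Dict[str, str]      # names and functions of the contributors


if __name__ == "__main__":
    delivery_robot = SmartRobotProfile(True, False, True, True)
    if delivery_robot.meets_definition():
        entry = RegisterEntry(
            registration_number="EU-ROBOT-0001",
            fund_id="FUND-GENERAL-01",
            liability_cap_eur=100_000,
            contributors={"manufacturer": "ExampleCo", "owner": "Example Owner"},
        )
        print(entry)
```

The only point of the sketch is that the Committee's criteria are concrete enough to be encoded and checked, which is roughly what any registration and classification scheme would require.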

12 January 2017

Platforms

Last year's House of Lords report on Online Platforms and the Digital Single Market comments
Online platforms, which comprise a wide range of software-based technologies, from search engines and social networks to price comparison websites and collaborative economy platforms, are drivers of growth, innovation and competition. They enable businesses and consumers to make the most of the opportunities created by the digital economy. Supported by the emergence of mobile devices and pervasive wireless connectivity, online platforms have transformed how we live, interact and transact. In doing so they have disrupted existing sectors of the economy and challenged regulatory frameworks. As part of its Digital Single Market Strategy the EU Commission announced its plans to launch a consultation to investigate how the largest online platforms use their market power and whether the current regulatory environment remains ‘fit for purpose’. This report responds to that consultation.
Our assessment of the features of these markets suggests that online platforms that succeed in harnessing strong network effects can become the main provider in a sector, gateways through which markets and information are accessed, and an unavoidable trading partner for dependent businesses. Such platforms are likely to possess substantial market power. However, the possibility of disruptive innovation is higher in these markets than in other networked industries and this may create competitive pressures even where firms have high market shares. We conclude that determining whether a firm possesses substantial market power, or is abusing that power, requires meticulous case-by-case analysis. On this basis we advise against the creation of a platform-specific regulatory regime. Instead, to protect consumers and to ensure that market power is not abused, we recommend that existing regulators should be vigilant in these markets. We also considered three areas of existing regulation and suggested a number of adaptations to each.
Despite the challenges competition authorities face when dealing with online platforms, we find that the flexibility of competition law means that it should be well-suited to addressing the subtle and complex abuses of dominance that may arise. We suggest that the merger control regime should be modified, to prevent the acquisition of smaller digital tech firms by large online platforms from escaping scrutiny. The slowness of competition enforcement, as exemplified by the ponderous Google case, is cause for concern in such fast-moving markets: we recommend that the Commission make greater use of ‘interim measures’ and impose time limits on commitment negotiations, to make enforcement more responsive. There are also sector-specific issues. For example, some allege that online travel agents exploit their bargaining power relative to their trading partners by engaging in a variety of aggressive and misleading practices. To address these concerns, we urge the Competition and Markets Authority to investigate the sector. In markets where online platforms have been found to impose unfair terms and conditions on their trading partners, we suggest that competition authorities could usefully develop codes of practice.
The collection and use of consumer data are integral to the provision of online platforms’ services. We are therefore concerned to find that consumer trust in how online platforms manage personal data is worryingly low. Consumers seem to be unaware that they trade their personal data in exchange for access to many of the so-called ‘free services’ that online platforms provide and that their data are used to generate advertising revenues or are sometimes sold on and shared with third parties. The opaque and legalistic privacy notices used by online platforms are one reason for this lack of trust. We also identify a lack of competition between platforms on privacy standards, and suggest that online platforms could potentially abuse a dominant position by downgrading their privacy standards. To address this, we recommend that the Government work with the Commission to develop a privacy seal that incorporates a graded scale, and that platforms found to have repeatedly or egregiously breached data protection laws should be required to communicate this directly to their users. We also urge Government to press for the proper implementation of the recently agreed General Data Protection Regulation, and invite the Commission to clarify some of its more ambiguous provisions.
While some online platforms have gone beyond the requirements of existing consumer protection law, bad practices also persist. There is a widespread lack of transparency in how platforms rank and present information to their users. We recommend that existing regulation be altered to require online platforms clearly to communicate the basis on which they rank results, and also to inform consumers when ‘personalised pricing’ is taking place.
Underlying the Digital Single Market Strategy is Europe’s conspicuous failure to produce any truly global online platforms. Yet Europe is getting better at producing $1bn-valued tech firms (‘unicorns’), and within Europe the UK leads the field, having produced half of the unicorns in Europe. The UK thus stands to gain more from the creation of a Digital Single Market than any other EU Member State. We suggest that the fundamental aim of the Strategy—to create a scale market of 500 million consumers—is the right one: if it is achieved Europe has the potential to play a leading role in the next phase of the digital revolution. We urge a sharp focus on this fundamental aim.
We support the ambitions of the Digital Single Market Strategy, but we note that the sensitive concerns raised by online platforms have created pressure on regulators and legislators to act at Member State level. This has increased regulatory fragmentation and threatens to undermine the possibility of making the Digital Single Market a reality.
We believe that it is necessary to put in place an ongoing process that can act as an outlet for the concerns of regulators and legislators, as well as businesses, consumers and indeed citizens. To this end, we recommend the appointment of an independent expert panel that would seek to gather concerns, subject them to rigorous analysis, and make policy recommendations to enable the sustained growth of Europe’s digital economy. The nature and role of this panel are outlined in the concluding section of our report.
The report's Summary of Conclusions and Recommendations states
Chapter 2: The importance of online platforms
1. Online platforms are drivers of growth, innovation and competition, which enable businesses and consumers to make the most of the opportunities provided by the digital economy. 
2. E-commerce platforms allow SMEs to access global markets without having to invest in costly digital infrastructure, and provide consumers with increased choice. Search engines enable their users to navigate the web efficiently, and enable businesses to engage in more targeted advertising. Social media and communication platforms provide citizens with new opportunities for interaction, self-expression and activism.
3. Policymakers should take care when examining the challenges these rapidly developing markets present not to lose sight of the very considerable benefits that online platforms provide.
4. The Commission’s decision to conduct a comprehensive assessment of online platforms should not be seen as inherently protectionist. Given the impact these businesses have had on people’s lives and the economy, and concerns about whether existing regulatory regimes are still fit for purpose, a thorough analysis of online platforms is timely. If the growth of Europe’s digital economy is to be maximised, it is important that such concerns are investigated and, where appropriate, addressed.
Chapter 3: Defining ‘online platforms’
5. The Commission’s primarily economic definition of multi-sided online platforms offers insight into central aspects of these businesses including their intermediary role, the interdependencies that arise between their distinct user groups, and the role that data plays in intermediating between these groups. This provides a helpful way of thinking about online platforms that can usefully inform the work of policymakers and regulators.
6. The boundaries of the definition are, however, unclear. This is illustrated by the Commission’s own list, which excludes traditional platform businesses that now operate online, yet includes some digital platforms that are not multi-sided. Broadly interpreted, the proposed definition could encompass ‘all of the Internet’; strictly applied, it would only capture specific elements of the businesses with which it is concerned.
7. We recommend that further consideration of the need for regulation of online platforms should start by attempting to more precisely define the most pressing harms to businesses and consumers, and then consider the extent to which these concerns are common to all online platforms, sector-specific, or specific to individual firms.
Chapter 4: Market power and online platforms
8. The markets in which online platforms operate are characterised by accelerated network effects. These may fuel exponential growth, increase switching costs, increase entry barriers for potential competitors and lead to monopolistic outcomes. Firms that succeed in harnessing these network effects may become the main platform in a sector, gateways through which markets and information are accessed. This can reduce choice for users and mean that they become an almost unavoidable trading partner for businesses. Such platforms are likely to possess a significant degree of market power.
9. However, in contrast to some networked industries, the market power of the most successful online platforms is secured through innovation that has succeeded in harnessing network effects. The risk of disruptive innovation is also greater in these markets because the up-front investment in infrastructure required for market entry is often lower. Therefore, ‘competition for the market’ may create competitive pressure even when one firm has a high market share.
10. Furthermore, we note that competitive pressures vary in type and intensity from sector to sector, and many online platforms are unlikely to possess significant market power. Case by case analysis is therefore necessary.
11. On this basis, while competition authorities reserve the power to break up firms and limit their market shares, we do not believe that ex ante regulation of platforms that sought to substantially restrict their activities on the basis of their market share alone, is necessary. Nonetheless, the potential for dominant positions to emerge means that competition authorities must be vigilant in these markets, to ensure that market power is not abused. Protecting users in these markets also requires that consumer rights and data protection rights are effectively enforced.
Chapter 5: Competition law and online platforms
Restrictions on pricing
12. The increasing use of restrictive pricing practices by online platforms requires critical scrutiny by competition agencies. While some restraints may be justified to enable price comparison websites to operate, these clauses may also, especially when broadly designed, enable firms to exploit suppliers and exclude competitors. A case by case analysis by competition authorities is therefore necessary.
13. While we commend the commitments secured by National Competition Authorities from Booking.com and Expedia to drop the use of wide price parity clauses, we note that the asymmetries of bargaining power that characterise the online travel agent sector may mean that the effects of wide parity clauses persist in practice, even after the prohibition of these clauses.
14. We recommend that the Competition and Markets Authority urgently order a market investigation into the online travel agent sector. This investigation should consider the extent to which banning wide parity clauses has been effective, claims that online travel agents continue to prevent suppliers from offering other online travel agents a lower price, and other misleading practices alleged against online travel agents, including the creation of ‘shell websites’. As this is a Europe-wide issue, we recommend that the Commission support this investigation and co-ordinate any related activity by other National Competition Authorities.
15. We believe the findings of this investigation may be of wider application and could provide helpful insights about how to address similar practices in other sectors. While the evidence we received applied to travel accommodation, the  findings of this investigation may be useful in considering the relationship between Online Travel Agents and other supplier businesses, which also affects fares and travel costs for consumers.
16. We note the growing regulatory fragmentation in the online travel agent sector that has arisen as a result of unilateral action by Member States. This undermines ambitions to create a Digital Single Market. We urge DG Competition to publish guidance in due course clarifying the use of wide and narrow parity clauses by online travel agents.
Asymmetries in bargaining power in other industries
17. We support the Government’s view that developing codes of practice, most likely on a sectoral basis, could help to discourage unfair trading practices in these markets. Such codes of practice should be based on rigorous analysis. We therefore recommend that the Competition and Markets Authority use its market investigation tool to examine markets where concerns about unfair trading practices are most widespread, with a view to determining whether codes of practice are needed.
18. We note with concern that DG Competition’s ‘sector inquiry’ power does not enable it to impose legally binding sector-wide remedies. This limits the ability of the EU competition regime to address market-wide problems efficiently. We recommend that DG Competition be granted powers to impose legally binding sector-wide remedies as a result of a sector inquiry, subject to conditions to be agreed with National Competition Authorities.
19. Extending the EU’s online dispute resolution platform to cover business- to-business disputes could help to address concerns about unfair trading practices by online platforms. Such a mechanism could complement codes of practice described above. However, we note that the business-to-consumer online dispute resolution tool appears not to have been well-implemented. We recommend that the Commission’s first priority should be to ensure the effective implementation of the online dispute resolution mechanism in its current form.
20. Fear of commercial retaliation by the online platforms on which they depend may prevent complainants from approaching competition authorities. We recommend that the Competition and Markets Authority introduce new measures to protect complainants in these markets. These should include imposing substantial penalties upon online platforms that are found to have engaged in commercial retaliation.
Vertical integration and leveraging
21. Google’s search engine shows how the tendencies to concentration in these markets may result in a successful innovator becoming the main provider of a particular service. Google Search has become a gateway through which a large proportion of the world accesses information on the Internet, which many businesses consequently depend on in order to be visible and to compete online.
22. The Google case illustrates the way in which a platform may use a strong position in one sector (in this case, general search) to integrate a range of other services into its core offering, thereby entering into direct competition with trading partners on its platform. Such integration can offer consumers benefits, such as increased convenience; it can also exclude competitors and harm consumers, if they are not directed to the best service or if innovation is reduced.
23. The evidence we have received indicates that it is not possible to formulate useful general rules about vertical integration in relation to online platforms, because each case is substantially different. Whether individual examples should be deemed an abuse must be ascertained through rigorous case by case analysis. Competition enforcement is the most appropriate instrument to deal with such concerns where they arise.
Mergers and acquisitions
24. Large online platforms frequently acquire innovative firms, often at a significant premium, in order to gain a competitive advantage over rivals; it is important that competition authorities are vigilant to ensure that, in doing so, they are not also buying up the competition.
25. We are concerned that mergers and acquisitions between large online platforms and less established digital businesses may escape scrutiny by competition authorities, because the target company generates little or no revenue and so falls below the turnover threshold adopted by the European Commission’s Merger Regulation.
26. We recommend that the Commission amend the Merger Regulation to include additional thresholds that better reflect this dynamic, examples of which might include the price paid for the target or a version of the ‘share of supply’ test used in the UK.
Data and competition law
27. Data are integral to the operation of many online platforms and the benefits they provide. For this reason, exclusive access to multiple sources of user data may confer an unmatchable advantage on individual online platforms, making it difficult for rival platforms to compete.
28. As well as providing new benefits, rapid developments in data collection and data analytics have created the potential for new welfare-reducing and anti-competitive behaviours by online platforms, including subtle degradations of quality, acquiring datasets to exclude potential competitors, and new forms of collusion. While some of these abuses are hypothetical, they raise questions as to the adequacy of current approaches to competition enforcement.
29. We recommend that the European Commission co-ordinate further research regarding the effects that algorithms have on the accountability of online platforms and the implications of this for enforcement. We also recommend that the Commission co-ordinate further research to investigate the extent to which data markets can be defined and dominant positions identified in these markets.
30. It is clear that dominant online platforms could potentially abuse their market position by degrading privacy standards and increasing the volume of data collected from their users. We welcome ongoing research and competition investigations that seek to clarify the circumstances under which degradation of privacy standards could be deemed abuse under competition law. In the meantime, these concerns underline the clear need for the enforcement  of data protection law to be sufficiently robust to deter bad behaviour.
The adequacy of competition law
31. The sheer diversity of online platforms and the complexity of their business models raise obvious challenges for competition authorities. The lack of price signals on the consumer side, and the presence of multiple prices in multi-sided markets, create difficulties for standard antitrust analysis. Quality is a key parameter of competition in these markets, but is not easily measured.
32. While these challenges are significant, we note that the flexible, principle-based framework of competition law, which can be customised to individual cases, is uniquely well-suited to dealing with the subtlety, complexity and variety of possible abuses that may arise in these markets. We cannot see how a less flexible regulatory approach could be more effective.
33. Competition law is perceived as being too slow to react to rapidly evolving digital markets. While the length of time taken to arrive at a decision in the Google case reflects its importance, it also highlights a wider problem. In such fast-moving markets a competitor who falls foul of anti-competitive conduct may suffer irreversible harm long before a competition case concludes. This undermines public confidence in the ability of regulators to hold large online platforms to account and may create political pressure for legislators to regulate unnecessarily.
34. In order to speed up the enforcement of competition law, and in light of recent changes in UK legislation, we recommend that the Competition and Markets Authority make greater use of interim measures. DG Competition should also make greater use of interim measures by lowering the threshold for their use, bringing it into line with that of the UK Competition and Markets Authority.
35. We recommend that the Competition and Markets Authority and DG Competition consider introducing time limits for the process of negotiating commitments between competition authorities and dominant firms. Restricting the period for discussion of commitments should encourage parties to offer serious proposals at the outset and prevent them from delaying the process. 
36. We also note that our proposal to provide DG Competition with market investigation powers would enable the Commission to identify and address market-wide problems more efficiently and comprehensively than its current sector inquiry tool.  
Chapter 6: Data protection law and online platforms
Consumer concerns about personal data and online businesses
37. Consumers agree to share their personal data with online platforms in exchange for access to their services. However, the complex ways in which online platforms collect and use personal data mean that the full extent of this agreement is not sufficiently understood by consumers. As a result, trust in how online platforms collect and use consumers’ data is worryingly low and there is little incentive for online platforms to compete on privacy standards. We believe this presents a barrier to future growth of the digital economy. Online platforms must be more effective in explaining the terms of such agreements to consumers.
General Data Protection Regulation
38. We welcome the wide range of reforms contained within the General Data Protection Regulation which will strengthen and modernise the EU data protection regime. This Regulation will expand the definition of personal data to include data collected through the use of cookies, location tracking and other identifiers, and will mean that the data protection regime will apply directly to online platforms established outside the EU for the first time. 
39. Nonetheless, given the limitations of the consent-based model, and industry’s reluctance to make the mechanisms of consent more meaningful, we are concerned that the provisions that widen the definition of ‘personal data’ will be difficult to apply in practice. We recommend that the Commission investigate how the requirement for all businesses to seek consent for the collection of personal data through online identifiers, device identifiers, cookie IDs and IP addresses can be applied to online platforms in a practical and risk-based way.
40. The privacy notices used by online platforms are inaccessible to the average consumer. They are too long and expressed in complex language. While the General Data Protection Regulation will require more transparency in privacy notices, and introduce heftier fines for non-compliance, this alone may not be sufficient to make consumers understand the value of their data when transacting with online platforms.
41. We support provisions within the General Data Protection Regulation to allow organisations to use privacy seals, or kite-marks, to give consumers confidence that they comply with data protection rules.
42. In order to encourage competition on privacy standards, not just compliance with the law, we recommend that the Government and the Information Commissioner’s Office work with the European Commission to develop a kite-mark or privacy seal that incorporates a graded scale or traffic light system, similar to that used in food labelling, which can be used on all websites and applications that collect and process the personal data of EU citizens.
43. To discourage misuse of users’ personal data, we recommend that the European Commission reserve powers to require online platforms that are found to have breached EU data protection standards, or to have breached competition law by degrading privacy standards, to communicate this information clearly and directly to all of their users within the EU through notifications on their web-sites and mobile applications. We suggest that this power be used sparingly, for repeat offenders or particularly egregious breaches of the law.
44. Data portability could be one of the most significant changes brought in under the General Data Protection Regulation. It could promote quality-based competition and innovation by making it easier for consumers to switch platforms. This would facilitate the emergence of new market entrants.
45. However, we are concerned that the principle of data portability may unravel in practice. If applied too rigidly, it could place onerous obligations on emerging businesses; however, unless it is more clearly defined, it is unlikely that it will be implemented by many online platforms.
46. We recommend that the Commission publish guidelines explaining how data portability requirements apply to different types of online platform. These guidelines should match data portability requirements to different types of online platform, adopting a proportionate approach depending on the essentiality of the service in question.
47. The use of personal data as the basis of research, particularly on social media, goes beyond what most users would ordinarily expect or consider acceptable. We recommend that the Government and Information Commissioner’s Office publish guidelines in the next 12 months setting out best practice for research using personal data gathered through social media platforms.
48. In the past, online platforms established outside the EU were not subject to European data protection rules. This resulted in a weak data protection regime in which European citizens’ fundamental rights were breached, and reduced consumer trust in how online platforms collect and process personal data. We are therefore concerned that industry remains sceptical about the forthcoming General Data Protection Regulation. Online platforms must accept that the Regulation will apply to them and will be enforced, and prepare to make the necessary adaptations.
49. We urge the Commission, the Government, regulators and industry to use the time before the Regulation enters into force to ensure that its terms are well understood and effectively implemented.
Chapter 7: Consumer protection and online platforms
Consumer-to-consumer transactions
50. Some online platforms take consumer protection issues seriously and dedicate significant business resources to addressing problems as and when they arise.
51. Nonetheless, the growth of online platforms and the collaborative economy raise important questions about the definitions of ‘consumer’ and ‘trader’, which form the cornerstone of consumer protection law. This creates uncertainty about the liability of online platforms and their users in instances where consumer protection concerns may arise.
52. We recommend that the Commission and the Government review the use of these definitions within the consumer protection acquis in order to determine whether gaps in legislation exist and whether legislative change is needed. The Commission should also publish guidance about the liability of online platforms on consumer protection issues in relation to their users, including their trading partners.
53. We also recommend that online platforms clearly inform consumers that their protection under consumer protection law may be reduced when purchasing a good or service from an individual, as opposed to a registered trader.
Transparency in how online platforms present information
54. Concerns about the lack of transparency in how search and meta-search results are presented to consumers are well founded, especially in relation to price comparison websites, where the results of a search may be based on a commercial deal between the website and a business, rather than on the best possible price. However, we do not believe that this problem should be addressed by requiring online platforms to disclose their algorithms, which are their intellectual property. Instead, we believe that these concerns should be addressed through increased transparency.
55. We recommend that the Commission amend the Unfair Commercial Practices Directive so that online platforms that rank information and provide search and specialised results are required to clearly explain on their websites the basis upon which they rank search results. We also recommend that the Commission amend the Directive to require online platforms to provide a clear explanation of their business models and relationships with suppliers, which should also be prominently displayed on their websites.
56. We note concerns that online platforms can and do engage in personalised pricing, using personal data about consumers to determine an individual price for a particular good or service, without clearly communicating this to consumers. This is another worrying example of the lack of transparency with which some online platforms operate. We recommend that DG Competition build on the work of the Office of Fair Trading and investigate the prevalence and effects of personalised pricing in these markets. We also recommend that online platforms be required to inform consumers if they engage in personalised pricing.
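Purely for illustration, the following sketch shows a hypothetical personalised-pricing rule paired with the kind of point-of-sale disclosure recommended above; the adjustment logic and wording are invented for the example and do not describe the practice of any actual platform.

```python
# Illustrative only: a hypothetical personalised-pricing rule together with the
# kind of point-of-sale disclosure the Committee recommends. The adjustment
# logic and wording are invented for this sketch, not drawn from any platform.
def quote(list_price: float, loyalty_years: int, price_sensitive: bool) -> dict:
    """Return a personalised price and, where it differs from the list price, a disclosure."""
    price = list_price
    if price_sensitive:
        price *= 0.95            # hypothetical discount for price-sensitive shoppers
    if loyalty_years >= 3:
        price *= 1.02            # hypothetical premium for locked-in customers
    personalised = round(price, 2) != round(list_price, 2)
    return {
        "price": round(price, 2),
        "personalised": personalised,
        "disclosure": ("This price was set using information about you and may "
                       "differ from the price shown to other customers.")
                      if personalised else None,
    }

print(quote(list_price=100.00, loyalty_years=4, price_sensitive=False))
# -> price 102.0, flagged as personalised, with a disclosure string
```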
57. The rating and review systems used by online platforms are instrumental in creating the trust necessary for consumers to engage in online transactions. To ensure transparency, however, we believe that all online platforms should have publicly accessible policies for handling negative reviews, and clearly distinguish between user reviews and paid-for promotions. We recommend that the Commission publish guidance clarifying how the Unfair Commercial Practices Directive applies to the rating and review systems used by online platforms.
Chapter 8: How to grow European platforms
58. European policymakers should not allow concerns about online platforms to obscure the fact that they are key drivers of competitiveness, productivity and growth. It is important that Europe develop its ability to compete in these markets. We therefore urge the European Commission, as part of its current and future work on online platforms, to prioritise actions that promote the emergence and growth of online platforms in Europe.
The UK’s strengths
59. The UK has a population of early adopters, the highest levels of e-commerce in Europe, a thriving tech start-up scene, an exceptionally strong creative sector, and is a world leader in FinTech (financial technology) services. As a result, the UK stands to gain more than any other EU Member State from the creation of a digital single market.
Create a Digital Single Market of 500 million consumers
60. Market scale is paramount for online platforms, whose value resides in the size of the networks they can create. The fragmentation of the European market in digital goods and services, with 28 different rulebooks, substantially limits growth and acts as an incentive for businesses to shift the locus of their operations to the US, to maximise their growth potential. We therefore strongly endorse the central aim of the Digital Single Market Strategy, which is to reduce regulatory fragmentation and remove barriers to cross-border trade, and urge the Commission to retain a sharp focus on this overriding purpose.
61. Initiatives in the Digital Single Market Strategy, particularly the greater harmonisation of contract law and consumer protection, are critically important to enabling digital tech start-ups and platforms to operate without friction across borders and to fully exploit a potential market of over 500 million consumers. We recommend that the Commission and the Government pursue an ambitious degree of integration in these areas, and resist a lowest common denominator approach.
Facilitate increased investment
62. We note that the weakness of the European venture capital market compared to that of the US is a barrier to the growth of EU-based start-ups and scale-ups, and an incentive for emerging platforms to move to the US. This lack of investment is not unique to online platforms, and represents a major obstacle to generating economic growth across the Union. We therefore welcome the unprecedented large-scale action from the Commission to address this lack of investment through the Capital Markets Union, the €315 billion Investment Plan for Europe and its proposal to create a venture capital ‘fund of funds’.
63. We also note the difficulty of establishing small-scale investment funds in the UK, compared to the US. We recommend that the Government review the example provided by the US Jumpstart Our Business Startups (JOBS) Act, and consider whether comparable reforms could facilitate increased investment in UK-based start-ups and scale-ups.
Embrace the strategic role of innovation
64. If the European Union and its Member States wish to facilitate the growth of online platforms that can compete in these global markets, they must embed innovation at every level of policymaking. The need to update existing regulation in order to protect consumers and the competitive process should be carefully balanced with the need to promote innovation in these markets: we suggest that regulating after markets have matured may be preferable to adopting a more pre-emptive approach.
65. If the EU and its Member States can get this balance right, facilitate increased investment in digital tech firms, and—most importantly of all—create a scale market of 500 million consumers, Europe has the potential to play a leading role in the next stages of the digital revolution.
Chapter 9: Regulating online platforms
Disrupted regulation
66. The rapid growth of online platforms has disrupted many traditional markets. It has also resulted in uncertainty about how existing regulation, designed in a pre-digital age, applies to these new disruptive business models. As a consequence there is a perception that large online platforms are above the law.
Responding to regulatory disruption
67. We do not consider that highly restrictive regulation that seeks to contain disruption would be the right response. It would risk entrenching existing market structures and make it difficult for new platforms to emerge, thereby discouraging innovation. Nonetheless, we acknowledge the need to protect fundamental rights and to ensure that existing regulation is effective and up to date.
68. In addition to the adaptations proposed elsewhere in this report, we recommend that the Commission, in concert with regulators at Member State level, critically review and refit existing regulation to ensure that its application to online platforms is clear. We believe that in many cases specific guidance from the Commission could provide this clarification.
69. As many concerns relate to the enforcement of existing laws rather than the content of those laws, we invite both the Commission and the Member States to consider whether providing regulators with increased resources would be a more efficient way to address concerns about enforcement than introducing additional rules. 
70. We recommend that regulators take robust enforcement action against online platforms they believe to be in breach of the law. Enforcement authorities should sometimes proceed even where there is a risk of losing the case or having the outcome appealed, since such outcomes help to clarify how the law applies. For this reason we welcome Commissioner Vestager’s decision to proceed with the Google case, without prejudice to the outcome.
71. Online platforms present regulators and enforcement agencies with multiple challenges, outlined in detail in this report. In addition to a perceived gap in enforcement, popular concerns about their use of personal data, disruption of traditional industries and corporate tax contributions have put pressure on policymakers to act at Member State level, resulting in increased regulatory fragmentation. Unless these concerns are addressed in a concerted way at a European level this fragmentation will continue to increase, undermining the possibility of creating a single market in digital goods and services.
72. While the Digital Single Market Strategy identifies specific policy interventions designed to achieve this goal, we consider that the political sensitivity of questions relating to online platforms, as well as their sheer variety, makes reaching a consensus in this policy area difficult.
73. Although we welcome the Commission’s consultation as a valuable first step, we believe that it is too broadly designed to address these issues decisively. To support the growth of innovative online platforms across the EU in a sustainable way, we believe that the process of reviewing the effectiveness of existing laws in relation to online platforms must be continuous.
74. We therefore recommend that the European Commission appoint an independent panel of experts tasked with identifying priority areas for action in the digital economy and making specific policy recommendations.
75. The panel would consist of a representative group of independent experts with deep insight into the digital economy and the emerging challenges it presents, drawn from outside the Commission itself. It would be supported by staff who would enable it to pursue its objectives effectively, and would seek input from a wide range of specialists on specific issues. The panel would report annually to the European Commission, the European Council and the European Parliament.
76. The panel would act as a channel for public concerns, engaging with regulators, policymakers, businesses and citizens, but would then subject those concerns to rigorous and impartial analysis, before formulating its recommendations. In this way the panel would seek to build political consensus around its policy proposals, thus reducing the risk of regulatory fragmentation and removing obstacles to the creation of a Digital Single Market.
77. While the panel would set its own agenda, on the basis of this report we identify three subjects that require immediate consideration:
• The effectiveness of enforcement in these markets, including whether enforcement agencies have the necessary powers and resources to act against abuse by the largest online platforms, and whether enforcement could be better co-ordinated across different jurisdictions and regulatory regimes;
• The lack of competition between platforms on privacy standards, and how data portability requirements should apply to different types of online platform; and
• Ways to open up access for emerging and disruptive innovation into the digital economy, including in areas such as the Internet of Things and the expansion of the collaborative economy into new sectors.