20 April 2018

UK Artificial Intelligence

The House of Lords has released a report on artificial intelligence in the UK, titled AI in the UK: Ready, Willing and Able?

The report features a useful discussion of concerns regarding the provision by the Royal Free London NHS Foundation Trust of bulk patient data to Google DeepMind (a subsidiary of Alphabet, Google's parent company).

The summary states:
Our inquiry has concluded that the UK is in a strong position to be among the world leaders in the development of artificial intelligence during the twenty-first century. Britain contains leading AI companies, a dynamic academic research culture, a vigorous start-up ecosystem and a constellation of legal, ethical, financial and linguistic strengths located in close proximity to each other. Artificial intelligence, handled carefully, could be a great opportunity for the British economy. In addition, AI presents a significant opportunity to solve complex problems and potentially improve productivity, which the UK is right to embrace. Our recommendations are designed to support the Government and the UK in realising the potential of AI for our society and our economy, and to protect society from potential threats and risks.
Artificial intelligence has been developing for years, but it is entering a crucial stage in its development and adoption. The last decade has seen a confluence of factors—in particular, improved techniques such as deep learning, and the growth in available data and computer processing power—enable this technology to be deployed far more extensively. This brings with it a host of opportunities, but also risks and challenges, and how the UK chooses to respond to these will have widespread implications for many years to come. The Government has already made welcome advances in tackling these challenges, and our conclusions and recommendations are aimed at strengthening and extending this work.
AI is a tool which is already deeply embedded in our lives. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning. Access to large quantities of data is one of the factors fuelling the current AI boom. We have heard considerable evidence that the ways in which data is gathered and accessed need to change, so that innovative companies, big and small, as well as academia, have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency in this rapidly evolving world.
To do this means not only using established concepts, such as open data and data protection legislation, but also developing new frameworks and mechanisms, such as data portability and data trusts. Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK.
Companies and organisations need to improve the intelligibility of their AI systems. Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society. To ensure that our use of AI does not inadvertently prejudice the treatment of particular groups in society, we call for the Government to incentivise the development of new approaches to the auditing of datasets used in AI, and to encourage greater diversity in the training and recruitment of AI specialists.
The UK currently enjoys a position as one of the best countries in the world in which to develop artificial intelligence, but this should not be taken for granted. We recommend the creation of a growth fund for UK SMEs working with AI to help them scale their businesses; a PhD matching scheme with the costs shared with the private sector; and the standardisation of mechanisms for spinning out AI start-ups from the excellent research being done within UK universities. We also recognise the importance of overseas workers to the UK’s AI success, and recommend an increase in visas for those with valuable skills in AI-related areas. We are also clear that the UK needs to look beyond the current data-intensive focus on deep learning, and ensure that investment is made in less researched areas of AI in order to maintain innovation.
Many of the hopes and the fears presently associated with AI are out of kilter with reality. While we have discussed the possibilities of a world without work, and the prospects of superintelligent machines which far surpass our own cognitive abilities, we believe the real opportunities and risks of AI are of a far more mundane, yet still pressing, nature. The public and policymakers alike have a responsibility to understand the capabilities and limitations of this technology as it becomes an increasing part of our daily lives. This will require an awareness of when and where this technology is being deployed. We recommend that industry, via the AI Council, establish a voluntary mechanism to inform consumers when artificial intelligence is being used to make significant or sensitive decisions.
AI will have significant implications for the ways in which society lives and works. AI may accelerate the digital disruption in the jobs market. Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. A significant Government investment in skills and training is imperative if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth. This growth is not guaranteed: more work needs to be done to consider how AI can be used to raise productivity, and it should not be viewed as a general panacea for the UK’s wider economic issues.
As AI decreases demand for some jobs but creates demand for others, retraining will become a lifelong necessity and pilot initiatives, like the Government’s National Retraining Scheme, could become a vital part of our economy. This will need to be developed in partnership with industry, and lessons must be learned from the apprenticeships scheme. At earlier stages of education, children need to be adequately prepared for working with, and using, AI. For a proportion, this will mean a thorough education in AI-related subjects, requiring adequate resourcing of the computing curriculum and support for teachers. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. In particular, we recommend that the ethical design and use of technology becomes an integral part of the curriculum.
In order to encourage adoption across the UK, the public sector should use targeted procurement to provide a boost to AI development and deployment. In particular, given the impressive advances of AI in healthcare, and its potential, we considered the health sector as a case study. The NHS should look to capitalise on AI for the public good, and we outline steps to overcome the barriers and mitigate the risks around widespread use of this technology in medicine.
Within the optimism about the potential of AI to benefit the UK, we received evidence of some distinct areas of uncertainty. There is no consensus regarding the adequacy of existing legislation should AI systems malfunction, underperform or otherwise make erroneous decisions which cause harm. We ask the Law Commission to provide clarity. We also urge AI researchers and developers to be alive to the potential ethical implications of their work and the risk of their work being used for malicious purposes. We recommend that the bodies providing grants and funding to AI researchers insist that applications for such funding demonstrate an awareness of the implications of their research and how it might be misused. We also recommend that the Cabinet Office’s final Cyber Security & Technology Strategy consider the risks and opportunities of using AI in cybersecurity applications, and conduct further research as to how to protect datasets from any attempts at data sabotage.
The UK must seek to actively shape AI’s development and utilisation, or risk passively acquiescing to its many likely consequences. There is already a welcome and lively debate between the Government, industry and the research community about how best to achieve this. But for the time being, there is still a lack of clarity as to how AI can best be used to benefit individuals and society. We propose five principles that could become the basis for a shared ethical AI framework. While AI-specific regulation is not appropriate at this stage, such a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future. Existing regulators are best placed to regulate AI in their respective sectors. They must be provided with adequate resources and powers to do so.
By establishing these principles, the UK can lead by example in the international community. There is an opportunity for the UK to shape the development and use of AI worldwide, and we recommend that the Government work with Government-sponsored AI organisations in other leading AI countries to convene a global summit to establish international norms for the design, development, regulation and deployment of artificial intelligence.