
ETHICAL ISSUES ASSOCIATED WITH THE INCREASING USE OF AI AND AUTONOMY

25th November 2020, Virtual Discussion

“Ethics is a wide-ranging, culturally diverse subject which eventually becomes law that must be obeyed, but simultaneously controls and empowers us. The issue is how we control and empower autonomous systems and reach acceptance of a technology that will definitely arrive.” Tony Gillespie

Image courtesy of Starline / Freepik


“The emergence of autonomous and smart products, systems and services, and the vast potential of Autonomous Decision-Making and Algorithmic Learning Systems (ADM/ALS), comprise the most notable and impactful innovations on the horizon and are already influencing our lives to some degree. Technological evolution has given society systems that can interpret information, make decisions based on it and act on those decisions, possibly without human intervention. This requires a new paradigm that goes beyond concerns over commercial and functional performance. The providers of products, services and systems must be encouraged towards transparency, accountability, explainability and the observance of higher ethical values in their increasingly global businesses.


Besides the safety and security rules developed over centuries, new concerns have emerged beyond the protection of life and property, in both physical and cyber contexts. Well-being, happiness and overall quality of life form the next dimension of interest, largely driven by the emergence and advancement of the digital technologies that now pervade our lives. This is unprecedented in human history and requires urgent but careful review and evaluation of existing and emerging societal hazards, leading to the formulation of proactive strategies to preserve and protect rights and ethical values. We contend that this should be the era of ethics and moral considerations by all enterprises, public or private, both for-profit and not-for-profit.” Tony Gillespie and Ali Hessami

The Speakers

The discussion was introduced by Professor Tony Gillespie FREng and Professor Ali Hessami Eur Ing FIET.


Tony Gillespie started his career at Marconi Radar Systems in Leicester, followed by a switch to academic research in radio astronomy for the next 13 years in the UK and Germany. He returned to UK industry to develop radar and avionic mission systems for air platforms. This was followed by R&D on autonomous systems at the UK Ministry of Defence’s (MOD) Defence Science and Technology Laboratory (Dstl), with applications to weapon systems. A central problem was the ethical use and control of automated systems, an issue he has pursued since leaving MOD/Dstl through a visiting professorship at University College London (UCL). His book ‘Systems Engineering for Ethical Autonomous Systems’ and his published papers explore the issues in different applications.


Ali Hessami is Director of R&D and Innovation at Vega Systems. He joined the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) as a committee member for the IEEE 7000 standard, focused on “Ethical Considerations in System Design”, and was subsequently appointed Technical Editor and later Chair of the IEEE 7000 working group. In 2018 he was appointed Vice-Chair and Process Architect for the IEEE’s global Ethics Certification Programme for Autonomous and Intelligent Systems (ECPAIS). Ali is a Visiting Professor at London City University’s School of Engineering and at Beijing Jiaotong University’s School of Electronics.

Introductory Presentation

Tony Gillespie: Ethics is a wide-ranging, culturally diverse subject which eventually becomes law that must be obeyed, but simultaneously controls and empowers us. The issue is how we control and empower autonomous systems and reach acceptance of a technology that will definitely arrive. Looking back, autonomous systems have been around for some time, for example the speed governor used by James Watt on his steam engines. Analysis shows that such systems consist of two components, decision-making and action, the latter taken (in the human case) by an authorised person who is usually controlled by international law. The defence example of unmanned aircraft strikes suggests that the authority problem lies not in the technology but in the law. The United Nations has recently formed a group to debate how to regulate unmanned lethal weapon systems, without defining ‘autonomous’ but focusing on human aspects. It concluded that without human checking of decisions, any action taken was, by default, the responsibility of the design engineer who developed the decision-making algorithms, rendering them liable to legal action. This changes the parameters of design. It is not limited to military applications, though: autonomous vehicles are now at this same stage, and driver-support systems are under debate within the Department for Transport. But the military domain does have a head start.
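
Tony’s two-component analysis can be made concrete with a toy control loop. The sketch below is purely illustrative and was not presented at the discussion: the class names, gains and dynamics are all invented. It shows a Watt-style speed governor decomposed into a decision-making component and an action component, with no human in the loop.

```python
# Illustrative sketch only: a Watt-style speed governor decomposed into the
# two components identified above. All names and numbers are invented.

class Decider:
    """Decision-making component: compares measured speed with a set point."""
    def __init__(self, set_point: float):
        self.set_point = set_point

    def decide(self, measured_speed: float) -> float:
        # Positive output means "open the steam valve"; negative, "close it".
        return self.set_point - measured_speed


class Actuator:
    """Action component: applies the decision to the machinery."""
    def __init__(self, gain: float = 0.01):
        self.gain = gain
        self.valve = 0.5  # fraction open, clamped to 0..1

    def act(self, decision: float) -> float:
        self.valve = min(1.0, max(0.0, self.valve + self.gain * decision))
        return self.valve


# A closed loop with no human in it: the "authorised person" of the human
# case is replaced entirely by the Decider/Actuator pair.
decider, actuator = Decider(set_point=100.0), Actuator()
speed = 80.0
for _ in range(40):
    valve = actuator.act(decider.decide(speed))
    speed += 40.0 * valve - 0.3 * speed  # crude engine-plus-load dynamics
print(f"speed after 40 steps: {speed:.1f}")  # settles near the set point
```

The point of the decomposition is that the authorised person of the human case has been replaced entirely by the decision and action components, which is precisely where the questions of authority and responsibility arise.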


Tony posed three questions: is the responsibility of design engineers significantly changed from the present ‘duty of care’; how should we regulate autonomous systems; and how far should the debate extend beyond engineering?


Ali Hessami: In simple terms, there are two dimensions to AI ethics, autonomous decision-making and algorithmic learning systems, each of which poses challenges at both the personal and the societal level. At the personal level the challenges involve data agency and self-determination; at the societal level, the erosion of human rights and threats to livelihoods. After 2,000 years, society has achieved the protection of life and property, but not of values and rights. The IEEE has published reports on ethically aligned design and runs a certification programme for the ethicality of AI systems. These call for AI systems to be transparent and explainable, to be accountable in matters of responsibility, and to address the ethics of algorithmic and data bias. A recent initiative has looked at the ethical issues associated with contact tracing and produced guidelines defining the ethical parameters within which contact tracing can be used without harming societal trust.


Ali posed the questions: What are the key challenges to artificial intelligence and algorithmic learning systems? How do we approach embedding ethics and human values in such systems? How do we validate that such values are embedded, and can we achieve this entirely by design? He also asked the panel to consider what a framework for evaluating AI technologies before they are deployed might look like.

Discussion

The nature of the discussion reflected the relative infancy of the topic: rather than suggesting solutions to ethical issues, it largely raised further questions. The discussion focused on the following points, though some noted that these may differ on a case-by-case basis:


Trust, Responsibility and Human Aspects

Trust was considered a key issue, and it was noted that with trust comes acceptance. Recent events (such as the Volkswagen emissions ‘scandal’ and the Boeing 737 MAX grounding) have led to a lack of trust in software ‘experts’ and have demonstrated that ‘self-certification’ does not necessarily work. Perhaps software does not yet command the trust accorded to traditional engineering. It was felt that the ‘designer’ of AI algorithms was presently responsible for AI decision failures, but that the ‘designer’ would likely be a corporation rather than an individual, which may generate a need to collapse corporate hierarchies in relation to the design and production of such technologies. However, as AI grows, it may not be possible to trace blame back to a designer. In relation to ‘authority for action’ based upon AI input, who gives that authority, and on what basis could such decisions be made? The question was also raised of who checks compliance. It was felt that the essence of engineering is numeracy, but that training is required to broaden into areas normally considered the domain of other professions. A question was raised about whether sufficient effort is being put into publicising the achievements of AI systems.


Law, Regulation and Standards

It was uncertain whether any case law yet exists, but it was generally felt that a legal framework for artificial intelligence, against which the technology could be regulated, was required. It should be based on consideration of the ethics of those affected by the technology and would evolve as the technology matures. Law could lead the technology but may not be able to keep pace, and the legal profession may need to adapt to possibly rapid evolution. It was considered that a primary legal framework, together with a more agile secondary one, might be required (as evidenced during COVID-19), and that there was a need for a regulatory body. Standards may be a way of representing consensus, but attitudes to compliance with rules vary around the world. In litigation, it was felt that courts would seek to assign blame to a human rather than a machine. It was also felt that the ‘group behaviour’ of individually certified AI systems would be a challenging legal concept, and that literary writers (such as Asimov) might provide insight.


Measurement, Performance and Testing

There is a need for systems assurance and for measuring the performance of AI systems, supported by comprehensive scenario testing that starts in low-consequence environments before moving to progressively higher-consequence situations. This differs from the increasingly popular ‘minimum viable product’ style of innovation (‘fail fast, learn fast’). It was noted that the majority of industrial applications are low risk, but that some areas demand high reliability.
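
One way to read this is as a gated progression of test environments. The sketch below is a minimal illustration of that idea only; the stage names, pass thresholds and the stand-in system under test are all invented, and a real assurance regime would be far richer.

```python
# Illustrative only: a minimal gating harness for staged scenario testing.
# Stage names, thresholds and the stand-in system under test are invented.
import random
from typing import Callable

STAGES = [
    ("simulation",        0.95),    # low consequence: required pass rate
    ("closed test track", 0.99),
    ("supervised field",  0.999),   # highest consequence, strictest gate
]

def evaluate(system: Callable[[str], bool], stage: str, trials: int = 2000) -> float:
    """Run the system against one stage's scenarios; return its pass rate."""
    passes = sum(system(f"{stage}-scenario-{i}") for i in range(trials))
    return passes / trials

def certify(system: Callable[[str], bool]) -> bool:
    """Promote the system stage by stage; halt at the first failed gate."""
    for stage, required in STAGES:
        rate = evaluate(system, stage)
        print(f"{stage}: pass rate {rate:.4f} (required {required})")
        if rate < required:
            print(f"halted: not promoted beyond '{stage}'")
            return False
    return True

random.seed(1)
# Toy stand-in for the system under test: it passes about 99.5% of scenarios,
# so it will almost certainly clear the first two gates but fail the strictest.
certify(lambda scenario: random.random() < 0.995)
```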


Data, Algorithms, Transparency and Bias

Concerns were raised about the validity of data used for decision-making, and whether removing ‘incorrect’ data might introduce unconscious bias. Concerns were also raised about misuse, the use of inappropriate data, and extrapolation beyond the bounds of that data. There was concern that algorithms may uncover existing behaviour patterns and reinforce them, even when undesirable. There was a general view that algorithms used in AI should be transparent, and therefore open to wider scrutiny. Algorithmic behaviour becomes blurred in systems that learn, raising the question of how such learning is adopted. Overconfidence in ‘black box’ computing has occurred, leading to a lack of consideration of alternative numerical methods.
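
The reinforcement concern can be illustrated with a toy feedback loop. The example below is invented for this write-up, not drawn from the discussion: a ‘recommender’ is retrained each round on its own click data, and an assumed familiarity effect steadily amplifies an initially mild skew.

```python
# Illustrative only: a toy feedback loop in which retraining on the system's
# own outputs amplifies a mild initial skew. All numbers are invented.

exposure_a = 0.55                       # item A starts with a slight edge
for round_no in range(1, 11):
    # Assumed familiarity effect: items shown more often get clicked a
    # little more often, even though the items are equally good.
    ctr_a = 0.08 + 0.04 * exposure_a
    ctr_b = 0.08 + 0.04 * (1.0 - exposure_a)
    clicks_a = exposure_a * ctr_a
    clicks_b = (1.0 - exposure_a) * ctr_b
    # "Retraining": next round's exposure mirrors this round's click share,
    # so the existing pattern is uncovered and then reinforced.
    exposure_a = clicks_a / (clicks_a + clicks_b)
    print(f"round {round_no:2d}: exposure share of A = {exposure_a:.3f}")
```

Even though users here value both items equally apart from familiarity, the loop drifts towards showing only one of them; breaking such a loop requires deliberate intervention, such as exploration or bias auditing.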

Concluding Remarks

Tony Gillespie: Noting that engineers need to talk to other communities and to educators, how do we educate the outside world about engineering ethics, the engineering approach to solving problems, and the trade-offs that need to be made?


Recognising that gradual acceptance of AI will occur as laws and regulations evolve, there is a need to separate systems which act automatically (because there is no time for human intervention) from those that act as intelligent advisors.


Who guards the guards? It is important that the people at the coalface understand how to implement regulations and laws.


Ali Hessami: The issues we currently observe in autonomous systems and algorithmic learning systems will not necessarily be those we encounter in the future, as such systems proliferate into every walk of life: transport, medicine and space travel, for example. Such systems will not necessarily be of human origin (i.e. no programmer), and accountability may transform in the years to come.


There are fads and seasons in which certain philosophies become popular; currently the flavour is for ethics by design. In algorithmic learning systems, which adapt and modify themselves, what is achieved by design is a starting point but will not be sufficient for ethical assurance.


All of the ethical issues raised by these technologies concern human values and value judgements. Central to this will be benefits and disbenefits, personal characteristics, and the maxims we pursue as decision-makers in industry.
