On 22 November 2016 the Greens/EFA Digital Working Group voted for a position paper on robotics and artificial intelligence. This paper serves as an initial step towards forming an opinion to help shape the debate in our political group and party family, but also inside the European Parliament and in the public debate in general. Read the full document, Position on Robotics and Artificial Intelligence (PDF), by the Green Digital Working Group.
Key Recommendations of the Green Position on Robotics and Artificial Intelligence
Recommendation 1: An informed public debate. Society should be able to help shape technology as it is developing. Hence, public input and an informed debate are of the utmost importance. We call for a European debate with the aim of shaping the technological revolution so that it serves humanity, with a series of rules governing, in particular, liability and ethics, and reflecting the intrinsically European and humanistic values that characterise Europe’s contribution to society.
Recommendation 2: Precautionary principle. We demand that research and technology be applied to the maximum benefit of all and that potential unintended social impacts be avoided, especially in the case of emerging technologies. We propose that robots and artificial intelligence be developed and produced on the basis of an impact assessment, to the best available technical standards regarding security, and with the possibility of human intervention.
In accordance with responsible research and innovation, it is imperative to apply the precautionary principle and assess the long term ethical implications of new technologies in the early phase of their development.
Recommendation 3: Do-no-harm principle. Robots are multi-use tools. They should not be designed to kill or harm humans. Their use must respect guaranteed individual and fundamental rights, including privacy by design and, in particular, human integrity, human dignity and identity. We underline the primacy of the human being over the sole interest of science or society. The decision to harm or kill a human being should only be made by a well-trained human operator. Thus, the use of robots in the military should not remove responsibility and accountability from a human. The deployment of robots and artificial intelligence should be in accordance with international humanitarian law and the laws of armed conflict.
Recommendation 4: Ecological footprint. We acknowledge robotics and artificial intelligence can help shape processes in a more environmentally friendly way while at the same time emphasising the need to minimise their ecological footprint. We emphasise the need to apply the principles of regenerative design, increase energy efficiency by promoting the use of renewable technologies for robotics, the use and reuse of secondary raw materials, and the reduction of waste.
Recommendation 5: Enhancements. We believe that the provision of social or health services should not depend on the acceptance of robotics and artificial intelligence as implants or extensions to the human body. Inclusion and diversity must be the highest priority of our societies. The dignity of persons with or without disabilities is inviolable. Persons carrying devices as implants or extensions can only lead self-determined lives if they are the full owners of the respective device and all its components, including the possibility to reshape its inner workings.
Recommendation 6: Autonomy of persons. We believe a person’s autonomy can only be fully respected when their rights to information and consent are protected, including the protection of persons who are not able to consent. We reject the notion of “data ownership”, which would run counter to data protection as a fundamental right and treat data as a tradable commodity.
Recommendation 7: Clear liabilities. Legal responsibility should be attributed to a person. Regarding safety and security, producers shall be held responsible despite any existing non-liability clauses in user agreements. The unintended nature of possible damages should not automatically exonerate manufacturers, programmers or operators from their liability and responsibility. In order to reduce possible repercussions of failure and malfunctioning of sufficiently complex systems, we think that strict liability concepts should be evaluated, including compulsory insurance policies.
Recommendation 8: Open environment. We promote an open environment, from open standards and innovative licensing models, to open platforms and transparency, in order to avoid vendor lock-in that restrains interoperability.
Recommendation 9: Product safety. Robotics and artificial intelligence as products should be designed to be safe, secure and fit for purpose, as with other products. Robots and AI should not exploit vulnerable users.
Recommendation 10: Funding. The European Union and its Member States should fund research to that end, in particular with regard to the ethical and legal effects of artificial intelligence.