Policy Recommendations

Shaping a meaningful future

Realistic recommendations for the robotic future

The power of imaginaries: Robots, with or without AI, have the potential to create better, richer societies: to help combat climate problems and to provide solutions in healthcare, education, agriculture, construction, and beyond. Robot makers and popular media continually present the public with visions of a future in which robots walk among us as independent agents working for the betterment of humankind. Such imaginaries are often appealing and serve as creative catalysts for technological development.

Keeping grounded

However, when we discuss robots, we have to anchor those discussions in reality. Robots today are nothing like the entities often portrayed in media. While we try to realize the vision of a robotic future worth wanting, we cannot lose sight of the realities on the ground. Real robots are already changing the way we understand work and social life, yet the people affected by them are often left without a voice in discussions of how robots are currently reshaping society.

Here, REELER presents the five main problems identified through our research, which spans 11 ethnographic case studies across different sectors. We also provide recommendations for a two-pronged strategy addressing these problems.


Five identified problems  

Robots are constructed by a small circle of people, collectively called robot makers, who share an understanding of the proper frame for discussing robots. This shared frame treats the robot primarily as a technical solution to an identified problem.

The actual end-users of the robot are left out of this circle. Yet they hold important information about the specific context in which the robot will eventually be deployed. Losing out on their expertise is a loss to the design process and, in the worst cases, leads to failed implementation. When end-users are included, they are often only involved indirectly, as when a spokesperson speaks for a group of end-users (e.g., a hospital manager speaking for the nurses), or instrumentally, as test subjects.

Engineers are trained to identify and solve problems in a very specific way. The problems they identify are shaped by their classical training: optimization and standardization tasks, to which the solution is technology. Robot makers in general share this mindset. By focusing too narrowly on the technical problems they identify, they can fail to address the complexity of actual practices, and end up importing their own normative understandings of, e.g., what work is meaningful, or of end-users' skills or body features. Where these understandings do not align with those of end-users or stakeholders, friction occurs, and end-users may refuse to use, or even actively sabotage, the robot.

When robot makers fall short of capturing the diversity and complexity of the lifeworlds of end-users and stakeholders, they risk miscalculating the consequences of introducing robots into particular contexts. A robot introduced in a hospital, for example, will not only affect the cleaning staff using it. Nurses and patients meeting the robot in the hallways may begin to compete with it over hallway space. Unforeseen consequences include a changed environment that makes nurses take detours, causes people to work against the robot, or, more costly still, requires new, special elevators for the robots. When consequences are overlooked in the design phase, robots may be mothballed, met with resistance, seen as failures, or create problems instead of solving them.

REELER has identified a wider group of people than the imagined end-users, and we argue it is relevant to consider this overarching group of affected stakeholders when funding and designing robots. Affected stakeholders comprise end-users, directly affected stakeholders, and distantly affected stakeholders. This group of people is affected in various ways: by facing replacement or changes in job functions, needing reskilling, or risking a life on universal basic income. They often lack the educational skills, vocabulary, and power to voice how they experience the impact of robots. While most of the robot-related consequences for directly affected stakeholders could be addressed in robot design, it is the responsibility of politicians to address the consequences of robots for distantly affected stakeholders.

Commercial marketing of robots, and popular media portrayals, influence how people perceive robots. This in turn shapes public imaginaries, which risk losing their connection to reality if they do not include the lived experience of the end-users and stakeholders already affected by robots. In public, robots are often portrayed as intelligent and autonomous; however, this obscures a basic truth: robots are not independent agents. They need constant human support, either from humans adapting the robot to suit particular contexts, or from people adapting their routines to accommodate the functioning of the robot.

REELER’s two main policy recommendations:


#1 Awareness-Raising Tools

Develop and disseminate tools that enhance robot developers' (mostly engineers') awareness of what is to be gained from collaborating with end-users and affected stakeholders, and from taking their perspectives into account early in the development phase.
REELER recommends increasing awareness of affected stakeholders within the inner circle of robotics through awareness-raising tools to be used by robot developers, facilitators, and application experts. Such tools must raise awareness of developers' own normativity in design work, and of how insufficient collaboration with actual users in the development phase can lead to robots that, when ready for market, turn out not to fit the body size of the end-users (e.g., patients) or are uncomfortable for staff (e.g., nurses) to work with. REELER also recommends raising awareness of how technology considered 'intuitive' by an engineer tends to be incomprehensible to the actual end-user.

REELER has developed five awareness-raising tools to help robot makers expand collaborations beyond the inner circle: 

  • The Toolbox provides interactive exploration of specific problems in robot development from a stakeholder-informed perspective.
  • BuildBot is a board game that allows players to reflect on responsible design choices that fulfill needs expressed by different stakeholders.
  • Mini-Publics provide a forum for knowledge transfer and debate among experts and the general public.
  • Action Methods contain both established and new explorations into drama as a method for perspective taking.
  • The Human Proximity Model is an analytical tool for understanding roles and relations in robot development.

These tools may benefit both affected stakeholders, whose perspectives will be better recognized, and robot makers, who can save time and money by making robots that are actually appreciated (rather than mothballed or sabotaged).

Awareness-raising tools cannot stand alone for four reasons:

  1. Despite significant efforts towards ethics in engineering education, robot developers find it difficult to integrate ethical awareness into practice.
  2. Directly affected stakeholders are, like distantly affected stakeholders and sometimes even end-users, consistently overlooked by robot makers.
  3. Certain ethical issues in robotics are beyond the scope of robot developers' responsibility and professional competences.
  4. Most citizens lack the agency, vocabulary, and access to engage with robot makers directly.

Thus, REELER has a second recommendation for closing the gap between robot makers and affected stakeholders.

#2 Alignment Experts

Establish alignment experts as a new profession: people educated in methods for aligning the views and visions of robot makers and affected stakeholders. Alignment experts can also give voice to distantly affected stakeholders, when relevant.
To bring the voices of affected stakeholders into play in the inner circle of robotics, REELER recommends introducing alignment experts as a new profession in robot and AI development. This profession would be placed at the crossroads between Responsible Research and Innovation (RRI) and the Social Sciences and Humanities (SSH). Its competences should emphasize skills in ethnography, economics, and technology, with a core expertise in aligning different groups of people in order to create ethical and responsible robots and AI.

REELER sees alignment experts as one of the new professions that economists foresee arising in an increasingly roboticized society. They would be trained to identify diverging motives between robot makers and affected stakeholders and to find solutions before it is too late in the development process. In this way, alignment experts can help avoid disappointments, create better foundations for legislation, open robot developers' eyes to directly affected stakeholders, and adjust developers' imaginaries of affected stakeholders and end-users in general.

To truly give voice, alignment experts must be able to speak on behalf of affected stakeholders, independent of, for instance, the monetary interests of the involved companies, and thereby provide perspectives that supplement those of existing spokespersons. To do so, alignment experts must work directly with potential affected stakeholders (end-users, directly and distantly affected stakeholders, and consumers). This will allow them to identify possibilities for collaboration and to bring stakeholders' needs and expectations back to the inner circle.

Furthermore, alignment experts must identify further needs for awareness-raising educational tools; be capable of arranging mini-publics and expanded council systems that identify realistic needs for robots and AI; calculate the economic consequences of ethical robots and AI; and suggest new ways of using existing technology while helping to develop new ideas based on insights from affected stakeholders. Finally, alignment experts will take on the important role of providing reality checks on robot imaginaries.
