
Building Ethical Guardrails into AI-Driven Robotic Assistants

Robotics is one of the proving grounds for artificial intelligence (AI) in our lives.

Ethics is a minefield for AI developers, most notably the challenge of ensuring that autonomous robots embedded with this technology don't "go rogue" and thereby endanger the lives, property, and reputations of their users and of society at large.

As more AI and machine learning developers work on robotics projects -- and even in the software-focused hyperautomation domain known as robotic process automation -- they will need to build safeguards into the algorithmic hearts of their creations to ensure that bots operate within ethical boundaries. At the very least, organizations should incorporate ethical AI principles into the data science DevOps process.
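In practice, that can take the form of a release gate in the deployment pipeline that refuses to promote a model unless its ethics checks pass. The sketch below is illustrative only: the `EthicsCheck` structure and the individual check names are assumptions standing in for a team's own test suites, not an established API.

```python
# Illustrative only: a pre-deployment "ethics gate" for a model pipeline.
# The individual checks are hypothetical placeholders; a real gate would
# invoke the team's own test suites for each ethical principle.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicsCheck:
    name: str
    passed_fn: Callable[[str], bool]  # takes a model version, returns pass/fail

def run_ethics_gate(model_version: str, checks: list[EthicsCheck]) -> bool:
    """Return True only if every registered ethics check passes."""
    failures = [c.name for c in checks if not c.passed_fn(model_version)]
    for name in failures:
        print(f"ETHICS GATE FAILED: {name} (model {model_version})")
    return not failures

# Example: wire the gate into CI with placeholder checks.
checks = [
    EthicsCheck("alignment_regression_suite", lambda v: True),  # placeholder
    EthicsCheck("audit_trail_enabled",        lambda v: True),  # placeholder
    EthicsCheck("stereotype_persona_screen",  lambda v: True),  # placeholder
]

if not run_ethics_gate("assistant-v1.2", checks):
    raise SystemExit(1)  # fail the build so the model is never deployed
```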

AI robotics developers would also be wise to heed the concerns expressed by technology ethicist Kate Darling in a recent article in The Guardian. She calls for the human race to regard robots as partners -- much the way our species has engaged symbiotically with dogs, horses, and other animals -- rather than succumb to the dystopian sci-fi notion that robots are rivals bent on destroying, enslaving, or otherwise dominating us.

There is plenty of practical advice for AI robotics developers in Darling's discussion. If you're wrangling with the ethical issues surrounding human-machine interfaces for your next AI digital-assistant project, consider these guidelines:

Maintain alignment between bot behaviors and human interests. In building the AI that animates hardware and software robots, ethics requires that one or more humans always take ultimate responsibility. Unless society is prepared to vest moral accountability in inanimate objects, it can't be otherwise.

Consequently, developers must build and train their creations to operate in alignment with the intentions and interests of the responsible parties -- the users themselves, the enterprises that build and maintain the digital assistants, or both. Where the end user's interests and intentions are paramount in the application design, it may be useful to incorporate graph technology to infer the interests and intentions expressed through people's direct engagements with digital assistants.
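To make the graph idea concrete, here is a minimal sketch of an interest graph that accumulates weighted user-topic edges from engagement events and reads off a user's strongest interests. The `InterestGraph` class and its scoring are illustrative assumptions, not an established design; a production system would extract topics with NLP and decay stale signals over time.

```python
# A minimal sketch of the interest-graph idea: accumulate weighted edges
# between a user and the topics of their interactions, then read off the
# strongest interests. Topic labels are supplied directly here; a real
# system would derive them from the engagement history.

from collections import defaultdict

class InterestGraph:
    def __init__(self):
        # edge weights: user -> topic -> accumulated engagement score
        self.edges = defaultdict(lambda: defaultdict(float))

    def record_engagement(self, user: str, topic: str, weight: float = 1.0) -> None:
        """Strengthen the user-topic edge each time an engagement occurs."""
        self.edges[user][topic] += weight

    def top_interests(self, user: str, n: int = 3) -> list[tuple[str, float]]:
        """Return the user's n strongest inferred interests."""
        ranked = sorted(self.edges[user].items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:n]

g = InterestGraph()
g.record_engagement("alice", "morning_news", 2.0)  # asked twice for a briefing
g.record_engagement("alice", "calendar", 1.0)
g.record_engagement("alice", "morning_news", 1.0)
print(g.top_interests("alice"))  # [('morning_news', 3.0), ('calendar', 1.0)]
```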

Ensure transparency into bot inferencing lineage. To ensure that there is always an audit trail to verify human accountability, ethics requires transparency into the data, features, models, context, and other variables that contributed to any AI-based inference a digital assistant makes. This is especially important given the near-infinite range of unforeseeable actions that probabilistic AI models can cause a bot to take. To the extent that its probabilistic logic causes a bot to deviate from strict alignment with the intentions of a responsible human, this fact should be evident in the inferencing audit trail.
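What such an audit trail might look like in code: the sketch below records, for each inference, the model version, input features, context, output, confidence, and the accountable party, appending one JSON line per decision so the trail can be replayed later. The `InferenceRecord` schema and its field names are assumptions for illustration, not a standard.

```python
# Illustrative inference-lineage record: capture everything that shaped a
# decision so a human can reconstruct it later. Field names are assumptions,
# not a standard schema.

import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class InferenceRecord:
    model_version: str
    input_features: dict      # the features the model actually saw
    context: dict             # device state, session, locale, etc.
    output: str               # the action or response the bot produced
    confidence: float         # the model's own probability for the output
    responsible_party: str    # the accountable human or organization
    timestamp: float = field(default_factory=time.time)

def log_inference(record: InferenceRecord, path: str = "inference_audit.jsonl") -> None:
    """Append one JSON line per inference so the trail is replayable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_inference(InferenceRecord(
    model_version="assistant-v1.2",
    input_features={"utterance": "book my usual table"},
    context={"session": "s-123", "locale": "en-US"},
    output="reserve_table(time='19:00')",
    confidence=0.62,
    responsible_party="assistant-operations-team",
))
```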

Build apps that lessen the possibility of bots exploiting human vulnerabilities. To keep human users always conscious of the fact that robotic assistants serve them (and not the other way around), ethics requires that AI developers forbear from throwing every trick in the "empathetic computing" playbook into their robotics projects. Exploitation might happen if an AI-powered robot uses emotion-tinged verbal, visual, and/or tactile guidance to nudge the user in directions that he or she didn't intend.

It might also happen if the embedded AI inaccurately infers the user's objectives in situations where there is no prior interaction history or where the user has failed to clarify their intentions. It might even happen if the AI fails to consider significant aspects of the user's profile, history, and sensitivities, a deficit that could inadvertently cause a robot to come across as pushy and manipulative.
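One defensive pattern against these failure modes is to have the assistant ask rather than act whenever its intent inference is weak -- for instance, when there is no interaction history to draw on. Below is a minimal sketch; `infer_intent` is a hypothetical stand-in for the assistant's real intent model, and the 0.80 threshold is an arbitrary illustrative value.

```python
# Minimal guardrail sketch: when intent confidence is low (e.g., a new user
# with no history), ask a clarifying question instead of nudging the user
# toward an action they never requested. `infer_intent` is a hypothetical
# stand-in for the assistant's real intent model.

CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff, tuned per application

def infer_intent(utterance: str, history: list[str]) -> tuple[str, float]:
    # Placeholder: a real implementation calls the intent model. With no
    # history, report low confidence so the guardrail forces clarification.
    return ("order_food", 0.55 if not history else 0.92)

def respond(utterance: str, history: list[str]) -> str:
    intent, confidence = infer_intent(utterance, history)
    if confidence < CONFIDENCE_THRESHOLD:
        # Don't act on a weak guess; ask, and keep the tone neutral rather
        # than emotionally persuasive.
        return f"Just to check: did you want me to {intent.replace('_', ' ')}?"
    return f"Okay, proceeding with: {intent}"

print(respond("I'm hungry", history=[]))        # asks for confirmation
print(respond("I'm hungry", history=["..."]))   # acts, given prior context
```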

Steer clear of anthropomorphic bot interfaces that reinforce offensive cultural stereotypes. To ensure that robotic assistants don't reinforce unfortunate or offensive cultural stereotypes, ethics requires that AI robotics developers think long and hard before building gender, racial, national origin, and other human personas into their interface designs.

For example, AI developers probably want to avoid digital-assistant designs such as the one in the movie "Her," in which the bot voiced by Scarlett Johansson strongly resembles a female escort targeting lonely men. Likewise, AI developers should avoid referencing people from historically disadvantaged, underprivileged, or marginalized groups in their bot interfaces, for the simple reason that such a persona, embedded in a digital assistant, reinforces servile stereotypes of these groups.

Projects to Avoid

Increasingly, digital assistants are being built in zoomorphic form -- that is, as robotic equivalents of service animals, pets, and other creatures. Ethical guardrails belong in the development of these products as well. Forward-thinking developers should refrain from building bots that can be weaponized -- for example, "scary animal" robots in the form of snakes, bats, and the like, which could be misused as decoys.

Another type of bot project that should raise red flags is the development of "pet animal" robots whose interfaces could be construed as modeling cruelty to the corresponding real-life species -- or as encouraging people, especially children, to inflict it.

These same ethical concerns should apply regardless of whether the bot in question is a hardware device or simply an online avatar. Given the spread of digital twin technology and concomitant blurring of boundaries between physical and virtual entities, it's only a matter of time before unethical behaviors in gaming and other cyber environments bleed over into actual pain inflicted on real human beings.

About the Author

James Kobielus is senior director of research for data management at TDWI. He is a veteran industry analyst, consultant, author, speaker, and blogger in analytics and data management. At TDWI he focuses on data management, artificial intelligence, and cloud computing. Previously, Kobielus held positions at Futurum Research, SiliconANGLE Wikibon, Forrester Research, Current Analysis, and the Burton Group. He has also served as senior program director, product marketing for big data analytics for IBM, where he was both a subject matter expert and a strategist on thought leadership and content marketing programs targeted at the data science community. You can reach him by email (jkobielus@tdwi.org), on Twitter (@jameskobielus), and on LinkedIn (https://www.linkedin.com/in/jameskobielus/).

