We’re very interested in fostering a more robust discussion on the ethical dimensions of scaling digital health technologies, especially in contexts where human rights are tenuously protected.
I had the chance to speak with ethicist and engineer Dr. Jason Millar, who works on the practical integration of ethics into engineering workflows at the University of Ottawa. His research and work have supported companies like Apple, Intel, and Ford in elucidating the key ethical considerations their technologies pose and in designing processes to address those issues more explicitly throughout the innovation lifecycle. He also works with governmental organizations like Transport Canada to help integrate ethical considerations into the development of technology regulations. Below is a synthesis of our discussion, edited for length and focus. Thanks to Jason for generously sharing his expertise!
MHS: I’ve heard you describe yourself as working at the nexus of ethics and engineering, focused on empowering engineers to integrate ethical thinking into their daily design workflow. What are some of the core ethical questions you’re working to integrate into design discussions?
JM: We help organizations uncover what they believe are the most critical ethical questions related to their technologies. Instead of a top-down approach, where outsiders (like ethicists) raise a set of considerations for developers or design teams, I find it more effective to work directly with development teams to help them articulate the questions that are most relevant to their stakeholders, customers, boards, engineering teams, leadership, and more. Once you’ve aligned internally on some of the core ethical considerations, the pieces are in place for meaningful solutions to start taking shape.
Thus, the core questions we’re asking are: “How can you uncover the ethical issues that are most relevant to the success of your technology, given your immediate concerns, your stakeholders, and so on? How might you resolve some of the ethical tensions that are inherent to your technology?”
MHS: You’ve talked about the need to build a common language to have an effective cross-disciplinary conversation between engineers and ethicists. What is your process?
JM: With support from company leadership, we assemble an interdisciplinary group of employees to uncover some of the core ethical questions that might impact society and therefore the success of their technology. We have a set of methods and tools we use to help develop a shared understanding of:
- the broad set of stakeholders who are impacted by the technology, which can include both users and society as a whole
- the values held by the core stakeholders
- the key ethical challenges a company might encounter while developing and rolling out a technology or service
Mapping these elements allows you to elucidate where key ethical tensions or concerns may arise. The process we use sparks discussions on “What values do we want to build into this technology?” “Who are the relevant stakeholders for this particular value?” “Where are there tensions between stakeholder values?” “What are the impacts of deploying this technology for five years on certain groups in society?” “Where are the core risks?” and so on.
Once you’ve created what we call a “value map” of a technology, you’re able to make more explicit decisions about values and stakeholders. It doesn’t take much to start uncovering the core ethical issues and to build a joint understanding of where your technology could have great impact and where it could flounder. Building that process into a product review cycle or design cycle is the ultimate goal.
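For teams who want to prototype this kind of mapping, the elements Jason describes — stakeholders, the values each holds, and the tensions between values — can be sketched as a simple data structure. This is a hypothetical illustration only; the stakeholder names, values, and tensions below are invented for the example and are not taken from the interview or from any real value map.

```python
# Illustrative sketch of a "value map": stakeholders, the values they
# hold, and known tensions between values. All entries are hypothetical
# examples for a digital health product, not from the interview.
from itertools import combinations

value_map = {
    "patients": {"privacy", "autonomy", "access to care"},
    "clinicians": {"accuracy", "workload reduction"},
    "regulators": {"safety", "privacy", "accountability"},
    "company": {"growth", "data-driven personalization"},
}

# Value pairs the team has flagged as being in tension.
tensions = {
    ("privacy", "data-driven personalization"),
    ("growth", "safety"),
}

def find_tensions(value_map, tensions):
    """List (stakeholder, value, stakeholder, value) tuples whose
    values appear together in the tensions set."""
    holdings = [(s, v) for s, vals in value_map.items() for v in vals]
    found = []
    for (s1, v1), (s2, v2) in combinations(holdings, 2):
        if (v1, v2) in tensions or (v2, v1) in tensions:
            found.append((s1, v1, s2, v2))
    return found

for s1, v1, s2, v2 in find_tensions(value_map, tensions):
    print(f"Tension: {s1} value '{v1}' vs. {s2} value '{v2}'")
```

Even this toy version surfaces the kind of discussion Jason describes: each flagged pair is a prompt for the team to decide which stakeholder values take priority and why.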
MHS: Who do you believe should be at the table in these discussions?
JM: It has to start with the company. We have found that the people doing the direct work (engineers and computer scientists) quickly see the value in thinking more directly about ethics once the work begins. In order to embed this into company processes, you need someone at the executive level who is ready to act on the results. I’ve seen successes at large American tech companies that have run through the value-mapping process and are integrating it into their product review cycles.
One of the core questions we get is “What is the return on investment?” This can be a tough question to answer because this area of research is relatively new and evolving quickly. It’s easiest to engage with companies when they’re getting pushback from regulators and are being forced to confront certain ethical considerations. However, we are working to understand the value generated by integrating ethical design discussions upfront.
MHS: What advice would you offer social impact investors interested in health technology to bolster their consideration of ethics?
JM: My advice would be to engage in practices that uncover ethical roadblocks or concerns, especially those that could derail a project. As a starting point, build ethical review into decisions about investing or product design. From an investor’s perspective, ideally all VCs would have this process built in as part of due diligence. If that isn’t feasible, you could develop a core set of 5–10 questions you ask companies to answer in their application process. Or you can start small, simply asking teams to demonstrate what ethical considerations they’ve built into the projects they’re proposing and what processes they have in place to do ethical reviews going forward.
This is very new work. In industry there are very few examples to point to of this being done well for current issues related to AI, robotics, and so on. But it can be done. For example, in the last 20 years we’ve had a radical shift in the way we understand privacy and digital technology: scholarship has increased, regulations have been defined, and policies have been integrated into companies’ processes. These activities have changed the way companies negotiate relationships with users, which is (of course) an ongoing process. We want to expand this kind of engagement to a broader set of ethical issues, because we’re seeing how those issues impact both 1) society, for better and worse, and 2) the success of the technology or service. I hope that social investors working in contexts where regulations aren’t strong are asking “Who should step in to decide what standards should be used?” “How do you protect people?” “How do we ensure we are adopting the highest standards?”