CCRi helps our customers make big decisions, often within a large and complex ethical context. We have recently published our Statement of Ethical Principles, which provides a framework for thinking through ethical problems as part of our design and development. We consider it to be version 1.0 of a living document that will change over time.
While CCRi has always followed ethical principles in our work, we’ve recently codified them into a formal document. Now it’s official. We are acknowledging to ourselves and to others that the work we do is consequential, and that we recognize our responsibility for those consequences.
CCRi deals in powerful technology. We build systems that automatically fuse and correlate large amounts of data, and we help people make decisions with the resulting information. As a company, we are motivated to make things efficient and reliable and to put power at the fingertips of paying customers. As the software engineers, data scientists, user interface engineers, DevOps engineers, and administrators who make up the company, we also want to succeed and to make the world, on the whole, better.
That sounds simple, but every large-scale decision we help to make is in part a complex ethical decision. Every day the news is full of examples of unintended racial bias in machine-learning-driven software, deadly failures of AI-based systems, or chaos resulting from poorly implemented mapping software. CCRi is a proactive company. Part of this initiative is to take our existing ethos and add extra rigor so that we can design safeguards against causing harm. We think through potential problems early and often so that we can stay in the news only for our great software, our company culture, and our focus on supporting employees. The ethical principles we have laid out provide a framework for thinking through these problems carefully as part of our design and development.
Breaking Them Down
The first principle is a response to a commonsense concern many people have about digital technology: machines cannot emulate human judgment. Whether they ever will be able to is currently the domain of science fiction. Ironically, Nick Bostrom, one of the researchers most concerned about the future of AI, also believes that AI may come up with broad and correct ethical answers for the world. We take the more pragmatic view that people must make the important choices.
The second principle concerns equity and the equal treatment of users and stakeholders. In general, this is an issue that must be addressed whenever we handle data. We discuss this further below.
The third principle concerns privacy. CCRi serves government customers as well as private sector customers. Each sector has different needs and each brings with it different ethical demands. One of the biggest differences between government contracts and commercial contracts is that government contracts come with certain built-in restrictions on how data can be used. The use of commercial data in the United States brings fewer explicit regulations, although it is generally best to adhere to open and transparent policies for both ethical and business reasons.
The fourth principle reiterates and strengthens the first one, ensuring not only that people are in a position to make important decisions using software, but that they can do so in an informed way. CCRi develops open-source software, which we give back to the world; as a result, that software reaches a much larger set of potential users than any of our other work. This means we must also document the safeguards that have been built into the software, and explain why those safeguards are there.
Finally, the fifth principle acknowledges that the United States already regulates much of this space. Existing law includes specific rules we follow, but it does not always change as rapidly as technology advances. We can be proactive, staying ahead of legislation and respecting the spirit of prior laws. And if we are ever asked to do something that violates our ethics, we can say no.
Why An Ethical Code and Why Now?
Given all of the rapid changes in society around technology, it is easy to be fearful and even cynical. Some people argue that wrestling with ethics is not the job of engineers, since ethical principles are not programmable. It is true that an ethical code and software code are two very different things. But we trust that we can tell the difference, and we understand the important moments when the software crosses into the personal and ethical sphere. We also believe that software, from the visible interfaces down through the algorithms and data models and even to the hardware underneath, must be built with respect for the individual rights of users and other stakeholders.
We believe ethical codes must permeate all of society in order to be taken seriously. They must be legislated in the public sector. They must also be written voluntarily in the private sector. If we, as engineers, shrug our shoulders and declare that ethical diligence is “someone else’s problem” then all of the legislation in the world will not make a difference. We care about doing things right and do not need to wait to hear what that means from regulators.
A promising sign that this is a trend in the industry is that the Department of Defense has recently adopted ethical principles around AI. In doing so, the DoD leaned heavily on the expertise of veterans from the software industry such as Eric Schmidt, former Google CEO.
We at CCRi have been formulating our own ethical principles for several months. It is interesting to compare ours to the Department of Defense's principles, officially adopted in February of 2020. The DoD version commits to five principles in its handling and development of AI: that it must be (A) responsible, (B) equitable, (C) traceable, (D) reliable, and (E) governable. The first of the CCRi principles mentioned above incorporates the notions of responsibility, reliability, and governability, keeping our focus on due diligence and safety. CCRi's fourth principle addresses traceability, which essentially means that AI systems will only do things that humans are capable of understanding. Machines may come to optimal decisions more quickly than people, but systems will be built such that they only choose things a human would, for reasons a human being can understand. In general, if a machine presents an option that a user disagrees with, that human user can quickly turn elsewhere to make a more sensible or fair choice. Traceability also means that malfunctioning systems can be inspected and debugged by users and engineers.
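One common way to make such a system traceable is to record each machine recommendation together with the evidence behind it, and to leave explicit room for a human to override it. The sketch below illustrates that pattern in Python; the names (`Decision`, `route_via_north`, and so on) are hypothetical examples, not part of any actual CCRi system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Decision:
    """A hypothetical traceable decision record: the machine's
    recommendation, the evidence that drove it, and room for a
    human override. The human's choice always wins."""
    recommendation: str
    evidence: dict  # the inputs that led to the recommendation
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_override: Optional[str] = None  # set when a person chooses differently

    def final(self) -> str:
        # Prefer the human override whenever one has been recorded.
        return self.human_override or self.recommendation


# The machine recommends a route and records why; a person disagrees
# and overrides it. Both the recommendation and the override survive
# in the record, so the decision can be audited later.
d = Decision("route_via_north", {"traffic": "heavy_south", "weather": "clear"})
assert d.final() == "route_via_north"
d.human_override = "route_via_south"
assert d.final() == "route_via_south"
```

Because the evidence and timestamp travel with every record, engineers can later reconstruct why a malfunctioning system recommended what it did, which is the debugging half of traceability described above.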
News headlines often blame faulty algorithms for unfair behavior from large software systems, but an algorithm tends to be a neutral step-by-step process for solving a problem. In general, if a system is biased or harms certain groups of people, the fault lies not with the algorithm but with the data model that has been constructed by it and for it. Just as people operate on mental models, AI systems operate on models that they build from large sets of data. This presents a difficult challenge, because it implies that fair systems can only be built from equitable datasets. Unfortunately, it takes a person to spot bias in input data, and people with their own biases may miss big problems when training an AI model on problematic data. CCRi is committed to equality, meaning that we do not discriminate based on immutable characteristics such as race, gender, sexual orientation, or disability when it comes to hiring individuals. (See our careers page for more information on this.)
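A deliberately tiny sketch can make the point concrete. The "algorithm" below is a neutral frequency count; the groups and numbers are invented for illustration. Any skew in its output comes entirely from the skew in the historical data it is given.

```python
from collections import defaultdict


def train_rate_model(records):
    """Learn per-group approval rates from historical decisions.

    The algorithm itself is a neutral frequency count; any bias in
    the resulting model is inherited from the training data.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}


# Hypothetical historical data in which group "B" was approved less
# often for reasons unrelated to merit.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

model = train_rate_model(history)
# The skew in the data becomes the model: A -> 0.8, B -> 0.4.
```

The same neutral procedure applied to an equitable dataset would produce equal rates, which is why equitable datasets, and people attentive enough to notice when a dataset is not equitable, matter more than the algorithm itself.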
No Perfect Answers, Just People Paying Attention
Participation from people of diverse ideological and cultural backgrounds can inform a more robust set of ethical principles. This helps to address the elephant in the room when it comes to ethics: that there is no one perfect ethical code. Reasonable people can disagree about how to tackle complex problems while minimizing their harmful side effects. Indeed, as we were writing these principles, we had many discussions about how they would be interpreted and applied. We worked hard to write the principles in a way that wouldn’t be misleading or open to misunderstanding. A wider and more diverse pool of perspectives can help us keep a broad-minded view.
CCRi’s ethical principles are not instructions on how to solve particular problems, but they are intended to help keep the larger questions in mind as we go about our work. As engineers, we often find ourselves tempted to focus on technical solutions. The principles we have listed here help remind us that a system can still be flawed even if it meets all of its technical specifications in an optimal way. We have to solve problems thoughtfully, while making sure that we are focusing on the right ones.
This is also a version 1.0. We consider this version of the ethical principles to have been “shipped,” now that it has been agreed upon by the employees of the company. We fully expect to refine it as new technology, data, and experiences inform our views as the individuals that make up CCRi.