Our Principles

Posted January 2020

Vicarious aims to bring about a robotic golden age by using AI to automate more and more general tasks until we reach artificial general intelligence. Robotic automation is already changing the world economy profoundly, and this effect will only increase as we automate more.

In the long run, artificial general intelligence (AGI) will have massive and long-lasting impacts on society as a whole. While developing these technologies can help us solve many of the world’s largest problems, it also gives us a responsibility to make sure they are used well, which is why we believe it is necessary for us to have clear and explicit principles that guide our research and products.

Why are we sharing these now? We will make mistakes and want to invite feedback on our principles and how we are living up to them. We also welcome partnerships with organizations that share our values. And if these principles speak to you, we are hiring.


Help Humanity Thrive

Intelligent robots have the potential to greatly improve humanity’s future.

Technology has historically led to some of the greatest advances for humanity. Affordable, flexible and intelligent robots would allow for unprecedented improvements in quality of life. While robotic hardware is rapidly decreasing in cost and increasing in capability, a step change in software is required to make robots truly cost-effective and general purpose. We are building that software.

Automation should enable people to live more fulfilling lives.

We want to create a society where robots are widely used to accelerate creative and economic productivity by carrying out the mundane, repetitive and dangerous tasks that people do today. Automation of a diverse set of tasks is key to achieving that vision. However, while we expect automation to benefit society in the long run, we realize that the technology we develop may cause economic harm to those whose livelihood depends on the tasks we automate.

In order to better understand this, we are collaborating with researchers at MIT on a study of the employment effects, positive or negative, of introducing intelligent automation in the areas in which we work. We do not yet know what the results of this study will be, but, if they show that jobs are lost and those affected do not transition smoothly to better opportunities, we commit to evaluating and spearheading interventions to address harms related to deploying this technology.

Vicarious is incorporated as a social purpose corporation.

The creation of generally intelligent robots is difficult and will require the dedicated efforts of many people over a long period of time. As a result, Vicarious must strike a balance between working on longer-term research directly focused on artificial general intelligence (AGI), and near-term commercial efforts that provide immediate impact and generate the necessary resources to build a company. Vicarious has been incorporated as a social purpose corporation with the mission of helping humanity thrive. As a social purpose corporation, Vicarious’ directors are legally required to consider our mission as well as profits in decision making. We believe this structure will best enable us to attain the scale necessary to accomplish our mission without compromising our principles.


Develop general intelligence that is safe and aligned with human values.

There are large long-term risks associated with the development of general intelligence.

We are developing AGI because of its potential to greatly improve humanity’s future, but as with any technology, it may have negative consequences if misused, whether maliciously or accidentally.¹ These consequences range widely, from minor equipment damage to human extinction. One core accident risk is a failure of value alignment: we may create an AGI whose goals are not aligned with the values of humanity as a whole. This is an extreme case of the more general problem of ensuring that AI systems at any level of intelligence are aligned with human values.
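
To make the value-alignment failure mode concrete, here is a minimal toy sketch (our illustration, not Vicarious code; the warehouse framing and all names are hypothetical). It shows the reward-misspecification problem that footnote ¹ points to: an agent that maximizes a proxy reward can score highly while the intended objective goes completely unmet.

```python
# Toy illustration of value misalignment via a misspecified reward.
# Intended objective: boxes delivered to the dock.
# Proxy reward: pays 1 for *any* completed movement, so pointless
# shuffling is rewarded just like a real delivery.

def intended_return(actions):
    """True objective: number of boxes actually delivered."""
    return sum(1 for a in actions if a == "deliver")

def proxy_reward(actions):
    """Misspecified reward: pays for any movement, useful or not."""
    return sum(1 for a in actions if a in ("deliver", "shuffle"))

# Suppose a delivery takes 3 timesteps of work and a shuffle takes 1.
# With a 12-step budget, the proxy-optimal policy never delivers.
BUDGET = 12
honest_policy = ["deliver"] * (BUDGET // 3)  # 4 real deliveries
gamed_policy = ["shuffle"] * BUDGET          # 12 pointless moves

for name, policy in [("honest", honest_policy), ("gamed", gamed_policy)]:
    print(f"{name}: proxy reward = {proxy_reward(policy)}, "
          f"deliveries = {intended_return(policy)}")
# Output:
#   honest: proxy reward = 4, deliveries = 4
#   gamed: proxy reward = 12, deliveries = 0
```

An optimizer that sees only the proxy prefers the gamed policy (12 > 4) even though it accomplishes nothing on the true objective, and a more capable optimizer finds such loopholes more reliably. This is a small-scale version of the alignment risk described above.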

Our research agenda focuses on creating AGI that has common sense and causal understanding, and that is inspired by the human brain. Although challenges remain, we believe that building programs which think in ways similar to our own will make them easier to interpret and align.

There is a growing body of research on potential solutions to this problem. We believe these efforts are necessary, and we will adopt best practices from this work in our research and products, as well as publish our own insights so that the rest of the community can benefit.

All AI systems that we deploy on the way to AGI must interact safely with people.

It is of paramount importance that the robotic systems we deploy are safe. Therefore, we will only deploy systems that have strict checks and controls, are interpretable, and maintain high standards of industrial safety. This will require us to address many safety issues as we develop our products, and we expect that this real-world experience will help us safely develop AGI.


Work with the global community in the service of these principles.

We will engage with the AI ethics and safety communities.

We believe that we should not, and cannot, act alone in our efforts to help humanity thrive through the development of intelligent robots. We commit to playing an active role in anticipating and mitigating risks associated with AI through leadership and partnership with related organizations.

Vicarious will not engage in a competitive race in the final stages of developing AGI.

As we, or any other organization, approach the last phases of building general intelligence, we want to avoid a dynamic where multiple organizations race each other to complete it first, cutting corners on safety.² If we or any other organization are on track to create AGI that could plausibly pose a substantial risk to humanity within a few years, we commit to cease work on the direct creation of AGI and instead focus those resources on making sure AGI is deployed safely, either by us or others.

We strongly believe this should be a norm within the AI community and encourage other organizations focusing on AGI to join us in this commitment.

¹ See Nick Bostrom’s Superintelligence, Stuart Russell’s Human Compatible and Amodei et al.’s “Concrete Problems in AI Safety” for examples of how current reinforcement learning methods could lead to bad outcomes.

² OpenAI called attention to this in their charter, committing to stop competing with, and to support, any project that comes close to building AGI. We made a similar pact with DeepMind when both of our companies were founded.
