Google-backed Anthropic, which runs AI assistant Claude, entered the debate on the ethical use of the emerging technology, setting out the principles used to shape responses delivered by its platform.

In a blog post, it explained Claude’s Constitution contains elements drawn from the UN Universal Declaration of Human Rights, published in 1948, alongside other documents addressing issues specific to the digital age.

More modern sources include Apple’s terms of service and principles published by Google-owned AI research lab DeepMind.

The principles range from traditional moral codes, such as supporting freedom and not aiding crime, to guidance on avoiding spreading misinformation, causing offence or sounding condescending.

The document also includes several points encouraging responses that take into account “non-Western perspectives”.

Explaining the rationale behind parts of the code, Anthropic noted: “There have been critiques from many people that AI models are being trained to reflect a specific viewpoint or political ideology, usually one the critic disagrees with. From our perspective, our long-term goal isn’t trying to get our systems to represent a specific ideology, but rather to be able to follow a given set of principles.”

It added: “We expect that over time there will be larger societal processes developed for the creation of AI constitutions.”

The company indicated its rules are very much a working document, noting it is also exploring ways to “more democratically produce a guide for Claude” and investigating customisable constitutions for specific use cases.

Publication of the code comes as generative AI platforms continue to garner attention from regulators and high-profile figures, many of whom have voiced concerns about the technology’s potential to cause harm if mishandled, given its rapid pace of development.