Salesforce Releases AI Acceptable Use Policy

Salesforce’s EVP and GM of Industry Clouds, Jujhar Singh, discusses the trustworthy foundation behind the new policy and the future of generative AI.

Image: sdecoret/Adobe Stock

Salesforce released a policy governing the use of its AI products, including generative AI and machine learning, on Wednesday. The policy sets an interesting precedent: Salesforce is a major player in sales platforms and customer relationship management, and it lays out clear guidelines at a time when government AI regulations are still being discussed and drafted.

We talked to Jujhar Singh, executive vice president and general manager of Salesforce Industry Clouds, about the policy and why it’s important for companies to draw ethical lines around their generative AI.


What is Salesforce’s Artificial Intelligence Acceptable Use Policy?

Salesforce’s Artificial Intelligence Acceptable Use Policy, published on August 23, outlines the ways in which its AI products may not be used. The policy restricts what Salesforce customers can use its generative AI products for, banning their use for weapons development, adult content, profiling based on protected characteristics (such as race), biometric identification, medical or legal advice, decisions that may have legal consequences and more.

The policy was written under the supervision of Paula Goldman, chief ethical and humane use officer at Salesforce, who sits on the U.S. National Artificial Intelligence Advisory Committee.

“It’s not enough to deliver the technological capabilities of generative AI, we must prioritize responsible innovation to help guide how this transformative technology can and should be used,” the Salesforce team wrote in the blog post announcing the policy.

The policy joins Salesforce’s internal generative AI guidelines

Salesforce has published a list of internal guidelines for its development of generative AI, which could serve as a model for other companies drafting policies around their own generative AI work. Salesforce’s guidelines are:

  1. Deliver accurate, verifiable and traceable answers.
  2. Avoid bias and privacy breaches.
  3. Support data provenance and include a disclaimer on AI-generated content.
  4. Identify the appropriate balance between human and AI involvement.
  5. Reduce carbon footprint by right-sizing models.

“Transparency is a big part of how we deal with gen AI because everything goes back to the (concept of) trusted AI,” said Singh in an interview with TechRepublic.

Singh emphasized the importance of a zero-retention policy so that personally identifiable information is not used to train an AI model that might regurgitate it elsewhere. Salesforce is building filters to remove toxic content and constantly tweaking them, Singh said.
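To make Singh’s point concrete, here is a minimal sketch of masking personally identifiable information before a prompt reaches a model. It is illustrative only, not Salesforce’s implementation: the `redact_pii` helper and the regex patterns are hypothetical, and a production system would use a dedicated entity-recognition service rather than simple regexes.

```python
import re

# Hypothetical patterns for a few common PII types, for illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Mask detected PII with placeholder tokens before the text is
    sent to, or retained for training by, a language model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-123-4567 about the renewal."
print(redact_pii(prompt))
# Email [EMAIL] or call [PHONE] about the renewal.
```

Masking before the prompt leaves the client is what makes a zero-retention promise meaningful: the model never sees the raw values, so it cannot memorize or regurgitate them.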

Which products does Salesforce’s AI policy apply to?

The policy applies to all services offered by Salesforce or its affiliates.

Under that umbrella are Salesforce’s flagship generative AI products, including everything hosted on the Einstein GPT platform, a library of public and private foundation models (ChatGPT among them) for customer service, CRM and other tasks across industries. Its competitors include HubSpot’s ChatSpot.ai and Microsoft Copilot.

How can businesses prepare for the future of generative AI?

“Tech leaders have to really think through that they need to upskill their people on gen (generative) AI dramatically,” said Singh. “In fact, 60% of the people that were asked this question felt they didn’t have the skills, but they also thought that their employers actually need to deliver those skills.”

In particular, skills like prompt engineering will be critical to thriving in a changing business world, he said.

SEE: Hire the right prompt engineer with these guidelines. (TechRepublic Premium)
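To give a sense of what prompt engineering involves in practice, below is a minimal sketch of a structured prompt for a CRM-style summarization task. The `build_prompt` helper and its sections are hypothetical and not drawn from any Salesforce product; the point is that separating role, task, constraints and input makes each part easy to tune and test independently.

```python
def build_prompt(account_name: str, notes: str) -> str:
    """Assemble a structured prompt: role, task, constraints and input
    live in separate sections so each can be tuned on its own."""
    return (
        "You are a sales assistant summarizing CRM notes.\n"
        f"Task: Summarize the notes for account '{account_name}' "
        "in three bullet points.\n"
        "Constraints: Use only facts present in the notes and flag "
        "anything uncertain.\n"
        f"Notes:\n{notes}"
    )

print(build_prompt("Acme Corp", "Met with the CFO; budget approved for a Q3 pilot."))
```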

In terms of infrastructure, he emphasized that companies need a strong trust foundation to keep AI on task and producing accurate content.

What will the generative AI landscape look like in six months?

Singh said there might be a divide between industries with tighter or looser AI regulatory oversight.

“I think the industries that are more regulated are going to have a more human-in-the-loop approach in the next six months,” he said. “As we go into other industries, they are going to be much more aggressive in adopting AI. Assistants working on your behalf are going to be much more prevalent in those industries.”

Plus, industry-specific LLMs will “start becoming very, very relevant very soon,” he said.
