
AI Safety Policy

Best Practices

  1. Human Oversight: Maintain human oversight whenever you use an AI tool or process. AI tools should augment, not replace, human judgment and discretion.
  2. Input Data Quality: Ensure that data ingested by AI tools is accurate, complete, and representative to avoid faulty or inaccurate outputs, bias, and discrimination. 
  3. Input Data Minimization: Ensure data you input into AI tools is limited to only what is necessary and relevant to achieve the stated business purpose. 
  4. De-identification: Unless identifiable data is strictly necessary, anonymize or pseudonymize personal data before processing it with AI tools. 
  5. Output Review: Outputs from AI systems should be reviewed for accuracy, representativeness, and truthfulness before being used to make decisions, used in work products, or shared with others.
  6. Report Concerns: If you have concerns about specific uses of AI at the Company, or observe any potential misuse of AI or breach of data privacy, report it immediately to your supervisor and the privacy team by submitting a “Security or Privacy Event” ticket.
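The de-identification and data minimization practices above can be sketched as a simple pre-processing step that strips common identifiers before text ever reaches an AI tool. This is an illustrative example only: the patterns and function name below are hypothetical, not a Company-provided utility, and any real pseudonymization should be done with an approved tool using far more robust detection than a few sample regexes.

```python
import re

# Illustrative patterns only -- real de-identification requires an approved
# tool and much more robust detection than these sample regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholders before sharing text with an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call 555-123-4567 or email jane.doe@example.com, SSN 123-45-6789."
print(redact(note))
# -> Call [PHONE] or email [EMAIL], SSN [SSN].
```

A step like this also supports data minimization: only the redacted text, limited to what the business purpose requires, is ever shared with the tool.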

Examples of unacceptable uses of AI 

To mitigate risks and ensure ethical AI implementation, the following uses are strictly prohibited:

  • Making Decisions that Significantly Impact Individuals Solely Based on AI: AI may be used to assist in the decision-making process, but human oversight is crucial to ensure fairness and avoid unintended bias. Meaningful human oversight should be applied to all AI systems and AI processes that can significantly impact an individual.
  • Using AI to Process Restricted Data: AI tools should not be used to process restricted information, e.g., personal information including protected class information, religious beliefs, political opinions or affiliations, personal health information, and sensitive personal information such as Social Security numbers and credit card numbers. 
  • Use of Unapproved AI Tools: All AI tools must be used in accordance with the Company's Acceptable Use Policy (AUP), specifically the section which prohibits the use of non-approved vendors to process Sensitive, Confidential or Restricted Information. 
  • Neglecting to Review AI Outputs: Even Company-approved AI tools can behave in ways we wouldn't expect or anticipate. LLMs in particular can hallucinate, generating outputs that are inaccurate, misleading, or untrue. A human must always review AI outputs for accuracy before using them in their work.
  • Utilizing AI for Expert Advice or Analysis (Without Competent Oversight): While LLMs and other AI tools offer valuable assistance with many tasks, they are not a replacement for human expertise and discretion. LLMs lack the expertise, contextual understanding, common sense, reasoning, and critical thinking that humans bring to fields requiring specialized knowledge. For example, tasks in Law, Finance, Compliance, Privacy, Information Security, Data Science, and Research should be left to the human experts in those fields, and decisions should not be made without their input.

Examples of Improper AI use and unique AI risks

  • A recruiter screens candidate resumes using a free AI resume screening tool found online.
    • AUP Violation: The tool is offered by a third party that has not been approved through Company's vendor onboarding process. 
    • Privacy Risk: There is no contract protecting the personal information the recruiter uploads or shares with the tool. The vendor may disclose the information or use it for any purpose they choose, which would put the Company in violation of applicable privacy laws and damage talent trust. 
  • An employee uses a free LLM they found online to summarize a client job intake call. The employee doesn't review the AI-generated summary prior to using it for business purposes.
    • AUP Violation: The LLM tool is offered by a third party that has not been approved through the Company's vendor onboarding process.
    • Confidentiality Risk: Client details disclosed on intake calls are client confidential information. Sharing client confidential data with an unapproved vendor could violate client confidentiality requirements.
    • LLM Hallucination Risk: The LLM may have hallucinated while generating its output, so the summary may contain false or misleading information.
  • An employee uses a company-approved AI tool to summarize job descriptions but doesn't review the outputs for accuracy before using them in their work. The tool has been approved through the Company's vendor onboarding process, so the necessary contracts are in place with the vendor, but risks still remain.
    • LLM Hallucination Risk: The LLM may have hallucinated while generating its output, so the summaries may contain false or misleading information.
  • A recruiter records a business interview with a talent and uploads the transcript to our talent management system for AI summarization and future use with our proprietary scoring model. On the call, the talent mentioned a private health issue. The recruiter did not remove the health information from the transcript before uploading it, so the health information is ingested by our proprietary scoring model and unexpectedly impacts the talent's score for the order.
    • Discrimination Risk: It is illegal under both federal and state laws to discriminate against individuals based on their medical conditions. The individual could sue the Company or lodge a formal discrimination complaint with a regulator.
  • An employee is working with a prospective client who asks them to sign an NDA before their next meeting. The employee is in a time crunch, and rather than send the NDA to legal for review, they use a company-approved AI tool to compare the terms of the Client's NDA with the terms of our standard NDA. The tool indicates that the terms of each agreement are quite similar, and the employee signs the client's NDA without involving legal.
    • Legal Risk: The agreement may contain risks that would have otherwise been mitigated had Legal reviewed the terms prior to signature. The AI tool lacks the ability to adequately evaluate the terms of the agreement and assess risks. The employee also lacks the necessary expertise to evaluate the agreement or confirm the validity of the guidance they received from the AI tool. 

Important things to remember:

  • Treat all personal and confidential information with the utmost care.
  • Be aware of the potential risks associated with AI and LLMs.
  • Prioritize human oversight and ethical considerations in all AI-driven activities.