Can Copilot leak data into the model or generate something sensitive by accident?
No. Microsoft 365 Copilot does not use your organization's data to train its underlying Large Language Models (LLMs), so prompts, responses, and content retrieved through Microsoft Graph cannot leak into the model and resurface for another customer. Copilot combines three components: the LLM, the semantic index, and Microsoft Graph. Your data stays within your tenant's boundary in the Microsoft 365 service, where Customer A's content and semantic index are isolated from Customer B's; the LLM itself is stateless with respect to that data, neither storing nor learning from your prompts. As for accidentally generating something sensitive: Copilot honors existing Microsoft 365 permissions and can only surface content the requesting user is already authorized to access, so the practical risk comes from overly broad sharing or misconfigured permissions rather than from the model leaking data.