Trustworthy AI: How Confidential GPT Protects Sensitive Data
For all their advantages, LLM applications carry risks when handling sensitive prompts: in certain use cases these may contain confidential data that can be unintentionally disclosed through errors, attacks, or public interfaces. Moreover, providers usually have technical access to prompts and models, which reduces acceptance in certain domains such as the public sector and e-health. Confidential computing offers a solution: the sensitive system components run in encrypted execution environments, denying operators access to the prompt contents. This in turn enables LLM use cases that were previously ruled out for data protection and compliance reasons.
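The core idea of confidential computing described above can be sketched from the client's perspective: before any plaintext prompt leaves the user's machine, the client checks that the remote environment attests to running exactly the audited code, and only then encrypts the prompt for that environment. The sketch below is a deliberately simplified stand-in; `EXPECTED_MEASUREMENT`, `verify_attestation`, and `send_prompt` are hypothetical names, the XOR "encryption" is a placeholder for a real authenticated cipher, and a real deployment would use a hardware attestation protocol (e.g. quotes from a TEE) rather than a bare hash comparison.

```python
import hashlib
import hmac
import os

# Hypothetical reference measurement of the audited LLM enclave image.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-llm-enclave-v1").hexdigest()


def verify_attestation(reported_measurement: str) -> bool:
    """Accept the remote environment only if its reported code measurement
    matches the audited reference (a simplified stand-in for a real
    hardware attestation check)."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)


def send_prompt(prompt: str, reported_measurement: str) -> bytes:
    """Encrypt the prompt only after attestation succeeds, so the
    operator never receives plaintext from an unverified environment."""
    if not verify_attestation(reported_measurement):
        raise PermissionError("attestation failed; prompt not sent")
    # Placeholder for a session key agreed over the attested channel.
    key = os.urandom(32)
    # Toy XOR cipher, purely to illustrate that only ciphertext leaves
    # the client; a real system would use an AEAD cipher.
    return bytes(b ^ key[i % 32] for i, b in enumerate(prompt.encode()))
```

The point of the design is the ordering: attestation gates encryption, so a misconfigured or malicious host fails the check and the confidential prompt is simply never transmitted.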