The open-source toolkit provides evaluators for inputs and outputs of LLMs, offering features such as sanitization, detection of harmful language, data leakage prevention, and protection against prompt injection and jailbreak attacks.
The open-source toolkit provides evaluators for inputs and outputs of LLMs, offering features such as sanitization, detection of harmful language, data leakage prevention, and protection against prompt injection and jailbreak attacks.