
Security

Security of LLMs is multifold: security of the data, security of the models, and security of the prompts. One concern is the improper use of a model while it is under your control; another is the theft of model information, or of the model itself.

Security for LLMs involves the protection of proprietary information, or personally identifiable information (PII), used in the creation or deployment of a model.

Demonstrations

Text Embeddings Reveal (Almost) As Much As Text uses a multistep method to recover a large amount of the original text used to create an embedding.

In the paper, the authors introduce Vec2Text, a method that can accurately recover (short) texts given access to an embedding model. This means that high-dimensional embedding vectors can be used to reconstruct the text that produced them, including sensitive personal information (as in a dataset of clinical notes).
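
As a rough illustration, the sketch below inverts GTR embeddings back to text with the project's vec2text package. The function names (load_pretrained_corrector, invert_embeddings) and the "gtr-base" corrector follow the repository's README and may differ across versions; treat this as a minimal sketch under those assumptions, not a definitive recipe.

```python
# pip install vec2text torch transformers
# Minimal sketch of embedding inversion with vec2text.
# API names (load_pretrained_corrector, invert_embeddings) follow the
# project's README and may change between versions.
import torch
import vec2text
from transformers import AutoModel, AutoTokenizer

# Load the GTR encoder whose embeddings the pretrained corrector targets.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/gtr-t5-base")
encoder = AutoModel.from_pretrained("sentence-transformers/gtr-t5-base").encoder

def embed(texts):
    """Embed texts the way the corrector expects (mean-pooled GTR)."""
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

corrector = vec2text.load_pretrained_corrector("gtr-base")

# A hypothetical sensitive string, standing in for a clinical note.
secret = ["Patient presents with chest pain and a history of hypertension."]
embeddings = embed(secret)  # what a vector database would store

# Multistep inversion: an initial hypothesis refined over correction steps.
recovered = vec2text.invert_embeddings(
    embeddings=embeddings,
    corrector=corrector,
    num_steps=20,
)
print(recovered)  # often close to, or exactly, the original text
```

The practical takeaway: embeddings handed to a third-party vector store should be treated with roughly the same care as the plaintext they were derived from.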

To Integrate

- Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models