How does your authoring AI work?
We use an Anthropic model through AWS Bedrock as a base to transform complex knowledge into engaging pieces of courseware.
We've implemented several validations: exfiltration protection in front of the request (a DLP filter scanning for things like email addresses and SSNs), which stops a leak in its tracks; a validator for the response type; and a secondary filter on the output (the old FIEO principle: filter input, escape output!).
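As a rough illustration, here is a minimal sketch of what an input-side DLP scan like this could look like. The pattern set and function names are illustrative assumptions, not our production ruleset:

```python
import re

# Hypothetical patterns for a minimal DLP-style scan; a real filter would
# carry a much broader ruleset (credit cards, API keys, account numbers...).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the input."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def guard_request(prompt: str) -> str:
    """Reject the request outright if the input scan finds anything."""
    hits = scan_for_pii(prompt)
    if hits:
        raise ValueError(f"Request blocked by DLP filter: {', '.join(hits)}")
    return prompt
```

The key design choice is that the scan runs before the request ever leaves for the model, so flagged input is never transmitted at all.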
Lastly, we put a human in control.
We believe that humans are still an important component of the AI ecosystem.
In our case, a person has final say on what the AI gets to publish to end users, and that's vital to both context (is the learner getting the right material?) and bias (ethnic bias in LLMs is an ongoing challenge, as is cultural context).
Is my material used to re-train the AI model?
No, your material is never stored, reused, or even memorized.
What parts of the request are encrypted?
All of them!
The request between your browser and the WAF/Balancer/Application is encrypted end to end (HTTPS).
The request between the Application servers and AWS Bedrock is encrypted as well.
The source input is encrypted at rest while the Bedrock model spins up.
The response between the Application server and your browser is encrypted via HTTPS.
Lastly, when the human decides to save the input (which is only in memory up until this point), it is also encrypted via HTTPS.
Then, as with all LLXP content, it is encrypted at rest within RDS.
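For illustration, here is a minimal sketch of the application-to-Bedrock hop using the standard boto3 client, which talks to an HTTPS (TLS) endpoint by default. The region, model ID, and prompt are placeholders, not our production configuration:

```python
import json
import boto3

# boto3 uses an HTTPS endpoint by default, so this hop between the
# application server and AWS Bedrock is encrypted in transit.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Turn these notes into a quiz."}],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```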
Will the AI automatically publish anything?
No. It is a generative assistant that understands LLXP's special formatting requirements. It is not an autonomous agent.
Are AI-model input logs disabled?
Yes, we have disabled model logging to ensure that source input is never stored anywhere in the request chain.
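As a sketch of what this looks like operationally, the standard boto3 control-plane calls below check for, and remove, a Bedrock model invocation logging configuration. This illustrates the mechanism, not our exact tooling:

```python
import boto3

bedrock = boto3.client("bedrock")  # control-plane client, not "bedrock-runtime"

# Read back the current invocation-logging configuration; an empty result
# means prompts and responses are not being written to S3 or CloudWatch.
config = bedrock.get_model_invocation_logging_configuration()
if config.get("loggingConfig"):
    # Remove any lingering configuration so model I/O is never persisted.
    bedrock.delete_model_invocation_logging_configuration()
```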
Do you perform defensive prompt penetration tests?
Yes, we have attacked our prompts exhaustively. If you believe you have found something, please report it through our responsible disclosure program at https://www.lemonadelxp.com/responsible-disclosure.
What part should I play in deploying generative AI to my organization?
It's very close to what you would do for any web safety program. You'll want to ensure you have processes that teach people how to handle and protect sensitive data in their day-to-day work.
In general, things such as PII or highly classified information should never find their way into third-party systems (whether AI or static web forms). The risk of an employee inadvertently pasting PII into a generative AI assistant is not much different from the risk of pasting the same input into Google, for example.
Proper prevention starts with internal training programs and protection at egress. You are likely already guarding against this today within your DLP implementations; LLXP's web interface inputs are no different than any other web input.