TL;DR: The key questions for any AI platform: Is your data encrypted? Is it used to train shared models? Who can access it? What happens when you delete it? Alysium answers all four: documents are encrypted at rest and in transit, not used for model training, accessible only to you and the users you authorize, and deleted when you remove them. Read the privacy policy before uploading anything sensitive.
The concern is legitimate. Uploading your business documents to a platform means that platform has access to the content. Whether that access is a risk depends entirely on what the platform does with it — and "what the platform does with it" is where the answers vary dramatically.
Four questions tell you whether a platform handles your documents safely: Is it encrypted at rest and in transit? Is your content used to train shared AI models? Who can access it? What happens when you delete it? Alysium answers all four explicitly.
This isn't a post to tell you AI platforms are categorically safe. It's a post to help you ask the right questions so you can make an informed decision about what to upload to any AI platform, including Alysium.
The Questions That Actually Matter
Four questions determine whether a platform handles your business documents appropriately:
Is the data encrypted? Reputable platforms encrypt data at rest (stored on their servers) and in transit (moving between your browser and their servers). This is table-stakes security — if a platform can't confirm both, don't upload anything. Alysium encrypts data at rest and in transit.
Is the data used to train AI models? This is the question most business owners don't think to ask. Some platforms use uploaded content to improve their AI models, which means your documents could influence what the AI knows generally; your content ends up serving a purpose beyond the one you intended. Alysium does not use uploaded documents to train AI models.
Who can access the content? Your documents should be accessible only to you (the account owner) and the users you explicitly authorize. Under Alysium's access model, the account owner configures which users can access which agents, and the documents underlying those agents are not exposed to third parties.
What happens when you delete content? When you delete a document from your knowledge base, is it actually deleted? Or does it persist in backups and training datasets? Alysium deletes uploaded content when the account owner removes it.
One follow-up question worth asking on the model training point: can the policy change? Some platforms commit not to train on your data today but reserve the right to change that policy. The full picture is two commitments together: "we will not use your content for training" and "we will notify you of any policy changes affecting data use." A commitment not to train today that reserves the right to change without notice is weaker protection than one that explicitly addresses policy changes.
What's Safe to Upload and What Isn't
Understanding the platform's data handling tells you whether you're comfortable with the risk. But even with strong data handling commitments, some content categories warrant extra thought:
Generally fine to upload: Business descriptions, service menus, pricing guides, FAQ documents, policies (cancellation, returns, booking), publicly available information about your business, training materials that contain no personal data.
Worth considering carefully: Documents containing customer names or contact information (upload a version with personal data removed), internal financial projections or pricing-strategy documents you consider competitively sensitive, and legal agreements (consider whether the substance could cause issues if accessed unexpectedly).
Probably don't upload: Documents with social security numbers, credit card information, or medical records, regardless of platform commitments. For highly regulated industries (healthcare, legal, financial services), consult with your compliance team before uploading any client-related documents.
The practical test that resolves most upload decisions: if the document were accidentally sent in an email to the wrong person, would it cause a problem? A service menu accidentally emailed to a competitor is embarrassing but not damaging. A client list with contact information accidentally emailed to a competitor is a different category of problem. Apply that same threshold to AI platform uploads — documents that would cause no harm if exposed are safe to upload; documents that would cause harm if exposed warrant more caution regardless of platform commitments.
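The "would exposure cause harm" test above can be partly automated before you upload anything. The sketch below is a hypothetical pre-upload scan (not an Alysium feature) that flags the patterns the "probably don't upload" category calls out, such as social security numbers and card numbers. It is a rough first pass, not a substitute for a real data loss prevention tool or a compliance review.

```python
import re

# Hypothetical pre-upload check: flag text patterns that commonly indicate
# sensitive personal or financial data. The pattern names and thresholds
# here are illustrative assumptions, not a complete or authoritative list.
SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# A service menu with no personal data passes; a document containing an
# SSN or contact details gets flagged for a closer look before upload.
print(flag_sensitive("Haircut: $40. Color: $120. Cancellations need 24h notice."))
print(flag_sensitive("Client: Jane Doe, jane@example.com, SSN 123-45-6789."))
```

An empty result doesn't make a document safe to upload; it only means none of these rough patterns matched, so the judgment call in the paragraph above still applies.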
Reading the Privacy Policy Before You Upload
Every AI platform has a privacy policy. Before uploading business documents to any platform, find the section on "user content" or "customer data" and read it. Look for: whether the platform claims a license to use your content, whether it explicitly states your content isn't used for model training, and what its data retention policy is after you delete content.
Alysium's privacy policy addresses these questions directly. But the habit — reading the data section of any new platform's privacy policy before uploading — is worth building regardless of which platform you use.
One specific section to look for in any AI platform's privacy policy: the "license to use content" section. Many platforms include broad licensing language, such as "you grant us a non-exclusive license to use your content to provide and improve the service," that technically covers model training and feature development using your documents. Alysium's privacy documentation addresses this point specifically. Broad content licensing language without an explicit carve-out for training use is a yellow flag worth investigating before uploading anything you'd consider proprietary.
What Alysium's Six Data Commitments Mean
Alysium's privacy documentation outlines commitments around: encryption, no model training use, access controls, deletion, transparency, and regulatory compliance. These commitments are the floor, not the ceiling: they define the minimum Alysium has committed to do. The practical implication is that you can upload your service menu, FAQ, policies, and expertise documents with reasonable confidence that they'll be handled appropriately.
The summary: upload what you'd be comfortable with a trusted, privacy-respecting business service having access to. Don't upload what you'd consider genuinely sensitive personal or financial information. That threshold covers the vast majority of small business knowledge base content.
Ready to build your agent responsibly? Start on Alysium — review the privacy policy, upload what makes sense, and build with confidence.
The practical implication of 'no model training' deserves concrete explanation. When you upload a document describing your service pricing, that information doesn't become part of what Alysium's AI model knows generally — it only becomes part of what your specific agent can retrieve. A competitor who builds an agent on Alysium won't find your pricing information in the AI's responses because the AI's general knowledge doesn't include it. This is the architectural distinction that matters for business document privacy: document-specific retrieval versus global model training are fundamentally different data uses.