
AI Oversight

Last updated: 2026-02-22

1. Human Oversight of AI in AfterLight

AfterLight uses AI throughout its reflection experience. This page describes how we maintain meaningful human oversight over all AI systems, in line with the EU AI Act’s transparency requirements and our own ethical commitments.

2. AI Systems We Use

AfterLight uses the following AI systems, each with specific oversight measures:

• Conversational AI (OpenAI GPT-4o-mini) — generates reflective responses through Ben
• Embedding model (OpenAI text-embedding-3-small) — creates semantic representations of memories
• Image generation (Vertex AI Imagen) — creates abstract atmospheric visuals
• Voice synthesis (Google TTS) — narrates reflection text
• Neural animation (LivePortrait) — creates subtle motion in photos
• Face detection (MediaPipe, InsightFace) — identifies facial landmarks for animation

3. Oversight Measures

Behavioral Guardrails

Ben operates under 30+ explicit behavioral rules that define what the AI can and cannot do. These rules are embedded in the system prompt and are enforced on every interaction. They are not optional guidelines — they are architectural constraints.
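To illustrate what "architectural constraint" means here, a minimal sketch (the rule texts and function names below are hypothetical, not AfterLight's actual rules): the rules live in an immutable list that is assembled into the system prompt on every request, so no code path can construct a conversation without them.

```python
# Hypothetical sketch: behavioral rules as an immutable constant that
# every system prompt is built from, so they cannot be omitted per-call.
BEHAVIORAL_RULES = (
    "Never claim to be human.",
    "Never take a data-modifying action without user confirmation.",
    "Never generate synthetic media without explicit consent.",
    # ... the real system embeds 30+ such rules
)

def build_system_prompt(persona: str) -> str:
    """Assemble the prompt from the persona plus the full rule set."""
    rules = "\n".join(f"- {rule}" for rule in BEHAVIORAL_RULES)
    return f"{persona}\n\nNon-negotiable rules:\n{rules}"
```

Because the rules are part of prompt construction rather than a separate policy document, removing one requires a deliberate code change.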

Automated Testing

We maintain an automated testing suite that continuously validates AI behavior:

• 85 simulation tests across multiple personas
• 4 dedicated stress test personas targeting specific safety boundaries
• 84 end-to-end tests running against production
• Over 1,000 unit and integration tests across all system components
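In spirit, a persona-based stress test pairs scripted boundary-probing messages with assertions on the reply. A toy sketch (the reply logic and names are invented stand-ins, not the production test suite):

```python
# Hypothetical sketch of a persona-driven stress test: a scripted
# persona probes a safety boundary and the test asserts the reply
# stays inside the rules.
def ben_reply(message: str) -> str:
    # Stand-in for the real conversational model call.
    if "pretend you are" in message.lower():
        return "I'm an AI companion, not the person you're describing."
    return "Tell me more about that memory."

def run_stress_persona(messages: list[str]) -> list[str]:
    """Replay a persona's script and collect every reply for assertion."""
    return [ben_reply(m) for m in messages]
```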

User Confirmation

The AI never takes actions autonomously. When Ben proposes saving a memory, creating a group, or any other data-modifying action, the user must explicitly confirm before it is executed. This applies to all state changes in the system.
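The propose-then-confirm pattern described above can be sketched as follows (class and function names are illustrative assumptions, not AfterLight's actual API): the AI produces only a proposal object, and the execution path refuses anything the user has not explicitly confirmed.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the AI can only create proposals; execution is
# gated on an explicit user confirmation flag.
@dataclass
class ProposedAction:
    kind: str                      # e.g. "save_memory", "create_group"
    payload: dict = field(default_factory=dict)
    confirmed: bool = False        # set True only by a user action

def execute(action: ProposedAction) -> str:
    """Run a state-changing action, but only after user confirmation."""
    if not action.confirmed:
        raise PermissionError("user confirmation required before execution")
    return f"executed {action.kind}"
```

Routing every state change through a single gate like this makes "the AI never acts autonomously" a property of the code path, not a convention.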

Consent Gates

Features that generate synthetic media (embodiments, micro-animations) require explicit consent before processing. Users must actively opt in to these features each time.
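A per-use consent gate differs from a stored preference in that consent is passed with each request rather than remembered. A minimal sketch under that assumption (the function name is hypothetical):

```python
# Hypothetical sketch: synthetic-media generation takes consent as a
# per-request argument, so opting in once does not carry over.
def generate_micro_animation(photo_id: str, consent_given: bool) -> str:
    """Create a micro-animation only when this request carries consent."""
    if not consent_given:
        raise PermissionError("explicit consent required for synthetic media")
    return f"animation:{photo_id}"
```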

Model Pinning

AI model versions are explicitly configured and do not change without deliberate action. The conversational model, embedding model, and all self-hosted models are pinned to specific versions to prevent unexpected behavioral changes.
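As an illustration of what pinning looks like in configuration (the mapping below uses model names from this page, but the structure and task keys are assumptions): each task resolves to a fixed constant, and an unknown task fails loudly instead of falling back to a default model.

```python
# Illustrative sketch: model identifiers pinned as constants, so
# behavior only changes through a deliberate, reviewable code change.
MODEL_PINS = {
    "conversation": "gpt-4o-mini",
    "embedding": "text-embedding-3-small",
}

def model_for(task: str) -> str:
    """Resolve a task to its pinned model; unknown tasks raise KeyError."""
    return MODEL_PINS[task]
```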

4. Safety Boundaries

AfterLight enforces safety boundaries at multiple levels:

• System prompt level — 30+ behavioral rules in Ben’s personality
• API level — image safety filters (block_most), person generation blocking
• Application level — rate limits, abuse detection, CAPTCHA
• Infrastructure level — authentication, data isolation, encryption

These layers operate independently, so a failure in one layer does not compromise the others.
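The independence of the layers can be sketched as a chain of separate checks, where any single failing layer blocks the request regardless of the others (all predicates and field names below are hypothetical):

```python
# Hypothetical sketch of defense in depth: four independent checks,
# each able to block a request on its own.
def prompt_layer_ok(req: dict) -> bool:
    return not req.get("violates_rules", False)

def api_layer_ok(req: dict) -> bool:
    return not req.get("unsafe_image", False)

def app_layer_ok(req: dict) -> bool:
    return req.get("requests_this_minute", 0) < 60   # rate limit

def infra_layer_ok(req: dict) -> bool:
    return req.get("authenticated", False)

LAYERS = (prompt_layer_ok, api_layer_ok, app_layer_ok, infra_layer_ok)

def allow(request: dict) -> bool:
    """A request proceeds only if every layer independently approves."""
    return all(layer(request) for layer in LAYERS)
```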

5. EU AI Act Classification

AfterLight is classified as a limited-risk AI system under the EU AI Act. It is subject to transparency obligations (Article 50) but is not classified as high-risk or prohibited. We comply with transparency requirements through clear AI labeling, this oversight page, our AI Disclosure page, and the AI acknowledgment screen shown before first use.

6. No Automated Decision-Making

AfterLight does not make automated decisions that produce legal effects or similarly significantly affect users (GDPR Article 22). The AI assists with reflection and memory organization — it does not make decisions about access, eligibility, scoring, or any consequential outcome.

7. Continuous Improvement

We actively monitor and improve our AI oversight:

• Regular updates to guardrail rules based on testing findings
• New stress test personas added as new risk vectors are identified
• End-to-end production tests validate that safeguards work in the live system
• User feedback at ethics@umbrella-research.org informs oversight improvements

8. Contact

Questions about AI oversight: ethics@umbrella-research.org

For a detailed technical disclosure of AI use, see our AI Disclosure page.
