MOIA

Volkswagen's autonomous mobility platform (moia.io)

There's no driver to rate. Autonomous feedback needs to capture something the system was never designed for.
Why Autonomous Feedback is Fundamentally Different
I use Uber and Bolt regularly. I almost never rate my rides. The 5-star system was built to evaluate a person: was the driver polite, was the car clean, did they drive safely, did they follow the GPS? When the ride is fine, which most rides are, the rating feels pointless.
In an autonomous MOIA, there is no person to rate, and that changes everything about how feedback needs to work. The driver mediated between rider and vehicle: explaining route decisions, adjusting pickups, providing human reassurance. Remove the driver and that entire interface disappears.
| Legacy (with driver) | Autonomous |
|---|---|
| Driver explains unexpected route decisions | No one explains — vehicle just acts |
| "Drop me on the other side" — flexible | Pick-up/drop-off locations are system-determined |
| Driver presence = implicit safety assurance | Riders must trust the vehicle itself |
| Feedback evaluates human performance | Feedback must capture system behavior + emotional state |
Left: the legacy 5-star model — designed to evaluate a person. Right: what in-ride micro-feedback could look like for an autonomous vehicle.
Prioritization Summary
| Proposal | Impact | Complexity | Timeline | Priority |
|---|---|---|---|---|
| Context-Aware Micro-Feedback | Very High | Medium | 8–12 weeks | Start now |
| Feedback-to-Action Loop | Very High | Medium-High | 12–16 weeks | Start now (parallel) |
These two proposals are complementary. Micro-feedback generates high-quality, context-rich data. The action loop ensures that data reaches the teams who can act.
Context-Aware In-Ride Micro-Feedback
Post-ride surveys have three structural weaknesses in the autonomous context: recall bias (specific moments forgotten before the app opens), rating fatigue (MOIA already asks testers to “provide feedback after each ride”), and missing context (a 3/5 for comfort tells us nothing about which moment caused it).
The proposal: lightweight, event-triggered prompts during the ride, tied to vehicle behavior data. After a complex lane change — “How did that feel? 👍 / 👎.” At pickup — “Was this stop easy to find?” During an unexpected stop — pair the explanation (“Yielding to pedestrians”) with optional sentiment capture. One tap, then it disappears.
Waymo's UX research identified “look-up moments” — when a rider glances up from their phone because something unexpected happened. These are when rider opinions are actually formed. Capture them then.
Success Metrics
- Validates prompts are low-friction enough to actually use
- The north star: are riders feeling safer over time?
- Directly actionable by ops for stop location optimization
- Ride → categorized insight must be fast to be useful
Key Risk
Prompt fatigue: Too many prompts per ride recreates the exact survey exhaustion we're trying to solve. Rate-limit to 2 prompts per trip; prioritize by event significance score.
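The rate limit described above reduces to a simple selection over the ride's event stream. A minimal sketch, assuming events arrive as `(kind, timestamp_s, significance)` tuples with a significance score from an upstream ranker:

```python
def select_prompts(events, max_prompts=2):
    """Enforce the per-trip prompt budget: keep only the `max_prompts`
    most significant events, then return them in chronological order so
    prompts appear in ride order. The significance score is assumed to
    come from an upstream event-ranking step."""
    top = sorted(events, key=lambda e: e[2], reverse=True)[:max_prompts]
    return sorted(top, key=lambda e: e[1])
```

With three candidate events in a trip, only the two most significant survive, which is exactly the fatigue guard: the budget holds no matter how eventful the ride is.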
Event-to-Insight Pipeline
KPI Driver Tree
Rider satisfaction in autonomous rides decomposes differently than in traditional ride-hailing. Perceived safety, and especially the gap between actual and perceived safety, is the dimension that no existing feedback model captures.
Feedback-to-Action Loop
The Rider Feedback team has a structurally unique vantage point: it's the only team that sees the autonomous experience from the rider's perspective across every dimension — driving behavior, cabin, pickup, app. But feedback trapped in dashboards doesn't improve anything.
The proposal: a structured insight delivery model that routes categorized feedback to the right team with context and priority. The Self-Driving System team gets maneuver-level sentiment correlated with telemetry. Ops gets stop-location heatmaps. Third-party operators (Ruter, BVG, Uber LA) get periodic insight reports, turning Rider Feedback's output into a platform product.
“Riders in Hamburg's Reeperbahn area consistently report discomfort during multi-lane merges between 5–7 PM. Telemetry shows the vehicle is operating within safety parameters but perceived safety is low.”
Success Metrics
- The loop only closes if teams actually act
- Proves the loop is working: issues identified → fixed → not recurring
Key Risk
Insight overload: Routing everything to everyone creates noise. The governance model must include prioritization logic: severity thresholds, team-specific filters, and a clear owner for each insight category.
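The governance rules named above (severity thresholds, team filters, one owner per category) can be expressed as a small routing table. The categories, owning teams, and threshold values below are illustrative assumptions:

```python
# One owning team and a severity floor per insight category.
# Categories, owners, and thresholds are illustrative assumptions.
ROUTING = {
    "maneuver_sentiment": ("Self-Driving System", 0.5),
    "stop_location": ("Ops", 0.3),
    "app_experience": ("App", 0.2),
}

def route_insight(category, severity):
    """Return the owning team for an insight, or None when the insight
    falls below its category's severity threshold and is filtered out
    as noise rather than routed."""
    if category not in ROUTING:
        return None
    owner, min_severity = ROUTING[category]
    return owner if severity >= min_severity else None
```

The design choice is that filtering happens at routing time, not at the consuming team: no team ever sees insights below its threshold, which is the guard against insight overload.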
Insight Routing Model
Hypotheses to Test
| Hypothesis | Test Method | Expected Outcome |
|---|---|---|
| Micro-feedback captures what post-ride surveys miss | Compare micro-feedback sentiment signals with post-ride ratings for the same rides — do they diverge? | In-ride signals correlate with specific telemetry events that don't appear in post-ride data |
| Perceived safety diverges from actual safety | Cross-reference rider 👎 signals with the vehicle's safety confidence scores on the same maneuver | Identify maneuvers the AD system considers routine but riders flag as uncomfortable |
| Trust builds with exposure | Cohort analysis of micro-feedback by rider ride count — do comfort scores on same maneuver types improve? | Repeat riders show measurably higher comfort scores vs. first-time riders |
| Pooled rides amplify anxiety | A/B comparison of perceived safety in pooled vs. solo rides, controlling for route and maneuver types | Pooled rides show lower perceived safety even when vehicle behavior is identical |
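The trust-builds-with-exposure hypothesis comes down to a cohort split. A minimal sketch, assuming comfort is encoded 1 for a thumbs-up and 0 for a thumbs-down, and that "first-time" means ride count of 1 (both encodings are assumptions for illustration):

```python
from statistics import mean

def comfort_by_cohort(signals, first_time_max=1):
    """Split maneuver-level comfort signals into first-time vs repeat
    rider cohorts and compare mean comfort. `signals` is a list of
    (rider_ride_count, comfort) pairs; comfort is 1 for 👍 and 0 for 👎.
    Returns (first_time_mean, repeat_mean)."""
    first = [c for n, c in signals if n <= first_time_max]
    repeat = [c for n, c in signals if n > first_time_max]
    return mean(first), mean(repeat)
```

If the hypothesis holds, the repeat-rider mean should sit measurably above the first-time mean for the same maneuver types.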
North Star Metric
Perceived Safety Score — trending upward
Derived from maneuver-level micro-feedback, tracked quarter-over-quarter across rider cohorts. This metric captures the gap between actual safety (sensor data) and perceived safety (rider sentiment), which I think is the most important dimension that no existing ride-hailing feedback system measures.
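The actual-vs-perceived gap can be sketched as a single aggregate over maneuvers. This assumes both signals have already been normalized to a 0..1 scale (the normalization step itself is not specified in the source):

```python
def safety_gap(maneuvers):
    """Mean gap between the vehicle's safety confidence and rider
    sentiment on the same maneuvers, both assumed normalized to 0..1.
    `maneuvers` is a list of (confidence, sentiment) pairs. A large
    positive value means the system rates itself safer than riders
    perceive it to be, which is exactly the gap this metric tracks."""
    return sum(conf - sent for conf, sent in maneuvers) / len(maneuvers)
```

Tracked quarter-over-quarter, a shrinking gap alongside stable sensor-side confidence would indicate rising perceived safety rather than changed vehicle behavior.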
Methodology — Analysis based on public data (MOIA press releases, ALIKE project documentation, Volkswagen Group announcements, KIT research plans), competitive research (Waymo rider feedback systems and UX design principles), industry analysis (autonomous mobility adoption research), and personal experience as a regular ride-hailing user. No internal MOIA data was used. All opinions are my own.