Anirudha
Product Brief · Autonomous Mobility · Rider Experience · Feedback Systems · ~12 min read

MOIA


Volkswagen's autonomous mobility platform

moia.io ↗

There's no driver to rate. Autonomous feedback needs to capture something the system was never designed for.


- 11M+ passengers served
- 75% of riders neutral or positive on AVs
- €26M in federal ALIKE funding
- 10,000 vehicles/year production target from 2026

01

Why Autonomous Feedback is Fundamentally Different

I use Uber and Bolt regularly, and I almost never rate my rides. The 5-star system was built to evaluate a person: was the driver polite, was the car clean, did they drive safely, did they follow the GPS? When the ride is fine, which most rides are, the rating feels pointless to me.

In an autonomous MOIA there is no person to rate, and that changes everything about how feedback needs to work. The driver was the mediator between rider and vehicle: explaining unexpected route decisions, adjusting the pickup, providing human reassurance. Remove the driver and that entire interface disappears.

| With a driver | Autonomous |
| --- | --- |
| Driver explains unexpected route decisions | No one explains — vehicle just acts |
| "Drop me on the other side" — flexible | Pick-up/drop-off locations are system-determined |
| Driver presence = implicit safety assurance | Riders must trust the vehicle itself |
| Feedback evaluates human performance | Feedback must capture system behavior + emotional state |

[Figure: side-by-side mockup. Legacy: Uber's post-ride "How was your trip?" 5-star screen ("No driver to rate. The 5-star system breaks."). Proposed: MOIA in-cabin screen showing "Lane change completed, Reeperbahn junction. How did that feel? 👍 / 👎" plus earlier-in-ride micro-prompts ("Pick-up easy to find?", "Unexpected stop (pedestrians)"). Rate the moment, not the person.]

Left: the legacy 5-star model — designed to evaluate a person. Right: what in-ride micro-feedback could look like for an autonomous vehicle.


Prioritization Summary

| Proposal | Impact | Complexity | Timeline | Priority |
| --- | --- | --- | --- | --- |
| Context-Aware Micro-Feedback | Very High | Medium | 8–12 weeks | Start now |
| Feedback-to-Action Loop | Very High | Medium-High | 12–16 weeks | Start now (parallel) |

These two proposals are complementary. Micro-feedback generates high-quality, context-rich data. The action loop ensures that data reaches the teams who can act.


Opportunity 1 · Start now · 8–12 weeks to instrumented pilot

Context-Aware In-Ride Micro-Feedback

Post-ride surveys have three structural weaknesses in the autonomous context: recall bias (specific moments forgotten before the app opens), rating fatigue (MOIA already asks testers to “provide feedback after each ride”), and missing context (a 3/5 for comfort tells us nothing about which moment caused it).

The proposal: lightweight, event-triggered prompts during the ride, tied to vehicle behavior data. After a complex lane change — “How did that feel? 👍 / 👎.” At pickup — “Was this stop easy to find?” During an unexpected stop — pair the explanation (“Yielding to pedestrians”) with optional sentiment capture. One tap, then it disappears.
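As a minimal sketch, the event-triggered prompts described above could be a lookup from detected vehicle events to one-tap prompt templates. The event names, schema, and wording here are hypothetical, not MOIA's actual system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event-to-prompt mapping; names and wording are illustrative.
PROMPT_TEMPLATES = {
    "lane_change_complex": "How did that feel? 👍 / 👎",
    "pickup_completed": "Was this stop easy to find?",
    "unexpected_stop": "Yielding to pedestrians: how did that feel?",
}

@dataclass
class VehicleEvent:
    kind: str         # e.g. "lane_change_complex"
    timestamp: float  # seconds since trip start
    location: str     # e.g. "Reeperbahn junction"

def prompt_for(event: VehicleEvent) -> Optional[str]:
    """Return the one-tap prompt for an event, or None if it isn't promptable."""
    return PROMPT_TEMPLATES.get(event.kind)
```

Events with no template simply never surface a prompt, which keeps routine driving silent by default.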

Waymo's UX research identified “look-up moments” — when a rider glances up from their phone because something unexpected happened. These are when rider opinions are actually formed. Capture them then.

Success Metrics

| Metric | Target | Why it matters |
| --- | --- | --- |
| Micro-feedback response rate (prompted moments) | 30%+ | Validates prompts are low-friction enough to actually use |
| Perceived safety score trend | ↑ QoQ | The north star: are riders feeling safer over time? |
| Pick-up/drop-off satisfaction | 85%+ positive within 6 months | Directly actionable by ops for stop location optimization |
| Feedback-to-insight latency | <48h | Ride → categorized insight must be fast to be useful |

Key Risk

Prompt fatigue: Too many prompts per ride recreates the exact survey exhaustion we're trying to solve. Rate-limit to 2 prompts per trip; prioritize by event significance score.
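The rate limit above could be enforced by ranking candidate moments by an event significance score and capping the count per trip. The scores and the cap of 2 are assumptions from the risk note, not a spec:

```python
MAX_PROMPTS_PER_TRIP = 2  # cap from the rate-limit mitigation above

def select_prompts(candidate_events, max_prompts=MAX_PROMPTS_PER_TRIP):
    """Pick the most significant promptable moments, capped per trip.

    candidate_events: list of (event_name, significance_score) tuples,
    where the score is a hypothetical 0..1 value derived from telemetry.
    Returns event names ordered by descending significance.
    """
    ranked = sorted(candidate_events, key=lambda e: e[1], reverse=True)
    return [name for name, _ in ranked[:max_prompts]]
```

So a trip with three candidate moments only surfaces the two most significant, and the junction wait stays silent.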

Event-to-Insight Pipeline

1. 🚗 Vehicle event: lane change, hard brake, junction wait
2. Event detected: sensor + telemetry data logged
3. 📲 Prompt triggered: 1-tap UI shown on the in-cabin screen
4. 👍 Rider responds: thumbs up/down or skip
5. 🔗 Correlated: sentiment and vehicle behavior linked
6. 💡 Insight: routed to the right team

Target latency: <48h from feedback to insight.
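The correlation step of the pipeline amounts to a time-windowed join between a rider response and the most recent telemetry event. The 30-second window and both schemas are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TelemetryEvent:
    event_id: str
    kind: str   # e.g. "lane_change", "hard_brake"
    t: float    # seconds since trip start

@dataclass
class RiderResponse:
    t: float
    sentiment: int  # +1 thumbs up, -1 thumbs down

def correlate(response: RiderResponse,
              events: List[TelemetryEvent],
              window: float = 30.0) -> Optional[TelemetryEvent]:
    """Link a response to the most recent event within `window` seconds."""
    prior = [e for e in events if 0 <= response.t - e.t <= window]
    return max(prior, key=lambda e: e.t, default=None)
```

Responses with no event inside the window stay uncorrelated rather than being force-matched to stale telemetry.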

KPI Driver Tree

Rider satisfaction in autonomous rides decomposes differently than in traditional ride-hailing. Perceived safety, and especially the gap between actual and perceived safety, is the dimension that no existing feedback model captures.

- Rider Satisfaction (NPS)
  - Perceived Safety: maneuver comfort · speed perception · trust calibration
  - Pick-Up / Drop-Off: stop accuracy · wait time delta · boarding clarity
  - Trip Quality: route directness · ride smoothness · arrival accuracy
  - Cabin Experience: temp & noise · screen content

Opportunity 2 · Start now (parallel) · 12–16 weeks for v1

Feedback-to-Action Loop

The Rider Feedback team has a structurally unique vantage point: it's the only team that sees the autonomous experience from the rider's perspective across every dimension — driving behavior, cabin, pickup, app. But feedback trapped in dashboards doesn't improve anything.

The proposal: a structured insight delivery model that routes categorized feedback to the right team with context and priority. The Self-Driving System team gets maneuver-level sentiment correlated with telemetry. Ops gets stop-location heatmaps. Third-party operators (Ruter, BVG, Uber LA) get periodic insight reports, which turns Rider Feedback's output into a platform product.

“Riders in Hamburg's Reeperbahn area consistently report discomfort during multi-lane merges between 5–7 PM. Telemetry shows the vehicle is operating within safety parameters, but perceived safety is low.”

Success Metrics

| Metric | Target | Why it matters |
| --- | --- | --- |
| Insight-to-action rate (logged team action within 30 days) | 60%+ of routed insights | The loop only closes if teams actually act |
| Time from feedback to categorized insight | <48h automated · <1 week human review | |
| Repeat issue rate | Declining QoQ | Proves the loop is working: issues identified → fixed → not recurring |

Key Risk

Insight overload: Routing everything to everyone creates noise. The governance model must include prioritization logic: severity thresholds, team-specific filters, and a clear owner for each insight category.
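That governance model could start as a small routing table: one owning team per insight category plus a severity floor below which nothing is routed. All team names, categories, and thresholds here are illustrative:

```python
from typing import Optional

# Hypothetical governance table: one owner and a severity floor per category.
ROUTING = {
    "maneuver_comfort": {"owner": "Self-Driving System", "min_severity": 0.5},
    "stop_accuracy": {"owner": "Operations & Fleet", "min_severity": 0.3},
    "cabin_ux": {"owner": "Product & Engineering", "min_severity": 0.4},
}

def route_insight(category: str, severity: float) -> Optional[str]:
    """Return the owning team for an insight, or None if filtered out.

    Filtering low-severity or unknown-category insights is what keeps
    the loop from degenerating into routing everything to everyone.
    """
    rule = ROUTING.get(category)
    if rule is None or severity < rule["min_severity"]:
        return None
    return rule["owner"]
```

The single-owner constraint is the important design choice: every insight category has exactly one accountable team, so nothing is everyone's problem.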

Insight Routing Model

The MOIA Rider Feedback intelligence layer routes insights to:

- Self-Driving System team: maneuver comfort vs. telemetry (maneuver-level sentiment)
- Operations & Fleet Management: stop location optimization (stop accuracy heatmaps)
- Third-party operators (Ruter · BVG · Uber LA): operator insight reports
- Product & Engineering: cabin UX and app flows (cabin UX findings)
- ALIKE research (Hamburg's 37 km² federal pilot): research dataset

Hypotheses to Test

| Hypothesis | Test Method | Expected Outcome |
| --- | --- | --- |
| Micro-feedback captures what post-ride surveys miss | Compare micro-feedback sentiment signals with post-ride ratings for the same rides — do they diverge? | In-ride signals correlate with specific telemetry events that don't appear in post-ride data |
| Perceived safety diverges from actual safety | Cross-reference rider 👎 signals with the vehicle's safety confidence scores on the same maneuver | Identify maneuvers the AD system considers routine but riders flag as uncomfortable |
| Trust builds with exposure | Cohort analysis of micro-feedback by rider ride count — do comfort scores on same maneuver types improve? | Repeat riders show measurably higher comfort scores vs. first-time riders |
| Pooled rides amplify anxiety | A/B comparison of perceived safety in pooled vs. solo rides, controlling for route and maneuver types | Pooled rides show lower perceived safety even when vehicle behavior is identical |
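Testing whether perceived safety diverges from actual safety reduces to a per-maneuver screen: high system confidence paired with low rider sentiment. A minimal sketch, with hypothetical field names and thresholds:

```python
def divergent_maneuvers(records, confidence_floor=0.9, sentiment_ceiling=-0.3):
    """Flag maneuver types the system rates as routine but riders dislike.

    records: dicts with 'maneuver', 'safety_confidence' (0..1, from the
    AD system) and 'sentiment' (mean of +1/-1 rider votes per maneuver
    type). Fields and thresholds are illustrative assumptions.
    """
    return [
        r["maneuver"]
        for r in records
        if r["safety_confidence"] >= confidence_floor
        and r["sentiment"] <= sentiment_ceiling
    ]
```

Anything this screen surfaces is, by construction, a trust-calibration problem rather than a safety problem, which is exactly the gap the brief argues no current feedback system measures.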


North Star Metric

Primary Metric

Perceived Safety Score — Trending Upward

Derived from maneuver-level micro-feedback and tracked quarter-over-quarter across rider cohorts. This metric captures the gap between actual safety (sensor data) and perceived safety (rider sentiment), which I think is the most important dimension that no existing ride-hailing feedback system measures.
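One deliberately simple way to operationalize the score: the share of positive maneuver-level votes. This naive aggregate assumes +1/−1 votes; a production metric would weight by maneuver type and rider cohort:

```python
def perceived_safety_score(votes):
    """Share of positive maneuver-level votes, on a 0..100 scale.

    votes: iterable of +1 (👍) / -1 (👎) micro-feedback signals.
    Returns None when there is no data rather than reporting 0.
    """
    votes = list(votes)
    if not votes:
        return None
    positive = sum(1 for v in votes if v > 0)
    return round(100 * positive / len(votes), 1)
```

Tracked per cohort and per quarter, the trend of this number (not its absolute level) is the north-star signal.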

Methodology — Analysis based on public data (MOIA press releases, ALIKE project documentation, Volkswagen Group announcements, KIT research plans), competitive research (Waymo rider feedback systems and UX design principles), industry analysis (autonomous mobility adoption research), and personal experience as a regular ride-hailing user. No internal MOIA data was used. All opinions are my own.