Overview
ScrollyPOI transforms the cold-start problem from a data deficit into a design opportunity — asking users to remember rather than to rate.
Every user arrives with something the system rarely tries to access: a history of places they have loved, a sense of what a good afternoon in a city feels like. Yet most POI recommenders wait for interaction data that doesn't exist for new users, falling back to the same 10 popular places for everyone. ScrollyPOI addresses this through narrative design, applying Data Humanism principles to make both preference elicitation and recommendation explanation feel like an experience worth having.
"What if the cold-start problem isn't a data deficit to overcome — but a design opportunity to invite reflection?"
Research premise of ScrollyPOI, Chapter 11, Doctoral Dissertation
The Problem
Generating personalized recommendations requires prior interaction data. For new users, this data does not exist. Classical solutions rely on demographic proxies — collapsing individual differences into group averages — or on active feedback requests that ask users to rate items they may never have encountered. Neither approach captures what matters most: the user's personal history of places they've loved and why.
To our knowledge, only one prior system had studied output explanations in the POI domain. Input explanations — which make the system's interpretation of user preferences visible before recommendations are generated — had never been studied in POI recommendation at all. As a result, users were left unable to understand or question the places they were recommended, with no path from output back to rationale.
Research Foundation
ScrollyPOI translates Data Humanism's principles about narrative, context, and personal history into a new form of POI explanation — one that speaks in the language of place rather than the language of algorithmic distance.
Contextual Data
The principle that data cannot be fully understood outside of its context — spatial, temporal, and relational. Context is not metadata; it is the data's meaning.
Explanation Aim
The explanation aim shifts from "this place is similar" to "this place connects to where you have been, when you visited, and why it matters in that context." Place is the explanation — not algorithmic distance.
Narrative Data
Giorgia Lupi's commitment to data storytelling — the idea that data is not a neutral record but a story waiting to be told, and that narrative is a valid analytical form.
Output Format
Doc2Vec similarity scores are rendered as navigable narrative rather than distance values. The output format is scrollytelling — the explanation unfolds as a story, matching the way humans understand and remember places.
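The similarity-to-narrative step can be sketched in a few lines. This is a minimal illustration, not the system's actual pipeline: the toy 3-d vectors stand in for real Doc2Vec embeddings, and names like `rank_candidates` and the sample POIs are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_candidates(user_vecs, candidates):
    """Score each candidate POI by its best similarity to any liked POI,
    keeping track of which liked POI anchors it, so the explanation can
    name a place ("connects to Old Town Cafe") rather than a distance."""
    ranked = []
    for name, vec in candidates.items():
        best_score, best_anchor = max(
            (cosine_similarity(vec, uv), anchor) for anchor, uv in user_vecs.items()
        )
        ranked.append((name, best_score, best_anchor))
    return sorted(ranked, key=lambda r: r[1], reverse=True)

# Toy 3-d "embeddings" standing in for Doc2Vec output.
liked = {"Old Town Cafe": [0.9, 0.1, 0.2]}
pool = {"Riverside Bistro": [0.8, 0.2, 0.1], "Tech Museum": [0.1, 0.9, 0.3]}
for name, score, anchor in rank_candidates(liked, pool):
    print(f"{name}: {score:.2f} (connects to {anchor})")
```

The design point is that the anchor POI travels with the score, so the scrollytelling layer can phrase each recommendation in terms of a place the user already knows.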
Personal Data
The emphasis on individual-level data as irreducible — a person's history is not a sample from a population, but a unique record worthy of explanation on its own terms.
Personalization
Recommendations are grounded in this user's own movement history — not population-level patterns. Personalization is structural: the explanation refers to where this person has specifically been, not what similar users have done.
Spend Time with Data
The practice of slow, exploratory engagement with data — resisting the impulse to extract a single insight and instead inviting users to dwell within the data's complexity.
Cognitive Load
Three explanation layers — surface similarity, contextual narrative, categorical grounding — distribute cognitive engagement progressively. Users choose how deep to go, preventing overload while preserving access to full explanatory depth.
System Design
The workflow is organized into two phases — input-focused (Steps 1–2) and output-focused (Steps 3–4) — each designed to fulfill a distinct experiential requirement rooted in Data Humanism principles.
Input Phase
Preference Elicitation via Scrollytelling
Users scroll through nine POI categories with color-coded map markers appearing dynamically. Rather than rating abstract items, they remember: which places did they actually enjoy? Recollection replaces cold rating.
Input Phase
Input Interpretation Explanation
A stacked bar chart and word cloud show how the system interpreted the user's selections before any recommendation is generated — making assumptions visible and supporting self-reflection.
Output Phase
Recommendation Exploration
Both selected and recommended POIs appear simultaneously on the city map. Bar charts enable category comparison; hovering highlights corresponding markers. Serendipitous discovery is built into the layout.
Output Phase
Multi-Layer Output Explanation
Three togglable layers — model confidence (circle size), similarity graph (edge thickness), and dual-user comparison — give users progressive access to explanation depth on demand.
Live Visualization
Selected places (hearts) connected to recommended POIs (circles) through similarity edges. Circle size encodes model confidence; edge thickness encodes cosine similarity strength.
Explanation Architecture
Perceived transparency rose from 3.0 with no layers active to 4.25 after all three layers were engaged — each addition deepening understanding without overwhelming users who prefer lighter engagement.
Circle marker size encodes the estimated likelihood of enjoyment for each recommended POI. Hovering reveals the exact score as a percentage. This operationalizes the Data Humanism principle of imperfect data: the system is not certain, and making that uncertainty visible supports better-calibrated user decisions.
A graph overlay connects each recommended POI to the input POIs that influenced it. Edge thickness encodes cosine similarity strength. Clicking a recommendation highlights only its relevant connections — making the model's reasoning traceable from output back to preference without algorithmic expertise.
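The two visual encodings reduce to simple linear scale mappings, plus a filter for the click interaction. The sketch below is illustrative only: the pixel ranges, function names, and sample edges are assumptions, not values from the system.

```python
def circle_radius(confidence, r_min=6.0, r_max=24.0):
    """Map a 0-1 enjoyment-likelihood score to a marker radius in pixels."""
    return r_min + (r_max - r_min) * max(0.0, min(1.0, confidence))

def edge_width(similarity, w_min=1.0, w_max=8.0):
    """Map a 0-1 cosine similarity to an edge stroke width in pixels."""
    return w_min + (w_max - w_min) * max(0.0, min(1.0, similarity))

def edges_for_selection(edges, selected_poi):
    """On click, keep only edges touching the selected recommendation,
    making its reasoning traceable from output back to input POIs."""
    return [e for e in edges if e["rec"] == selected_poi]

# Hypothetical similarity edges between input and recommended POIs.
edges = [
    {"rec": "Riverside Bistro", "input": "Old Town Cafe", "sim": 0.91},
    {"rec": "Tech Museum", "input": "Science Library", "sim": 0.44},
]
print(circle_radius(0.75))  # marker radius for a 75% likelihood
print(edge_width(0.91))     # stroke width for a strong similarity edge
print([e["input"] for e in edges_for_selection(edges, "Riverside Bistro")])
```

Clamping both inputs to [0, 1] keeps out-of-range model scores from producing unreadable markers, which matters when uncertainty is part of what the visualization communicates.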
Both users' selected POIs and their respective recommendation sets appear on a shared city map. Four stacked bar charts support direct comparison of taste profiles and received recommendations — enabling collaborative trip planning between two users with different preferences.
Evaluation · N = 5 participants
Participants spent up to 11 minutes in the preference elicitation narrative — not a cost, but evidence that the task felt like something other than a task. When asked to remember places they loved rather than rate items, input provision became something closer to reflection. Elicitation was rated M = 4.2; scrollytelling, M = 4.6.
Each additional explanation layer increased perceived transparency: from 3.0 with none active to 4.25 with all three engaged. The layered architecture lets users who prefer lighter engagement stop early while offering deeper understanding to those who want it.
High ratings for both elicitation (M=4.2) and scrollytelling (M=4.6) confirm that treating personal memory as a legitimate recommender input is not merely philosophically defensible — users experience recollection as an invitation, not a burden.
A follow-up study (N=24) comparing ScrollyPOI 2.0 against Google Maps found significant improvements in perceived transparency and scrutability while maintaining comparable usability scores. Version 2.0 introduces cross-city preference transfer — addressing cold-start in a new dimension.
ScrollyPOI: A Narrative-Driven Interactive Recommender System for Points-of-Interest Exploration and Explainability
ACM UMAP Workshop
ScrollyPOI demonstrates that the new-user cold-start problem is not only an algorithmic challenge but a design one. By replacing absent interaction history with an engaging narrative elicitation experience, the system transforms cold-start from a limitation into an opportunity for self-reflection — one that users, empirically, find engaging rather than burdensome.