Platform for Unified Learning, Supervision and Education.
Clinical education runs on data. Right now, that data lives in Excel files, email threads, and disconnected forms. PULSE puts it where it belongs — in one place, captured once, trusted everywhere.
Every year, hundreds of medical, nursing, and allied health students rotate through NUH's clinical departments. Each interaction — every teaching session, every supervised procedure, every feedback moment — generates data that feeds into billing, appraisal, accreditation, and programme improvement.
That data currently lives in Excel trackers, FormSG exports, email chains, and three disconnected systems that don't speak to each other. Administrators don't trust the data because they didn't see it enter cleanly. Tutors don't log accurately because the systems punish them for trying. Students stop completing surveys because they receive one per interaction, no matter how many interactions happen in a day.
Clinical education volume at NUH is growing. Partner institutions are multiplying. Regulatory reporting requirements are tightening. The manual coordination approach that worked at smaller scale is now a documented source of billing errors, appraisal inaccuracies, and missed accreditation submissions. Previous attempts — QR codes, Power Apps, BLUE system upgrade — failed because they digitised the surface without addressing the structural problem beneath it. The infrastructure to fix it properly — event-driven systems, mobile-first logging, AI-assisted intelligence — now exists and is production-ready.
No integrated system for end-to-end pre-boarding, onboarding, and offboarding. Duplicated effort, administrative errors, no single source of truth.
Hours manually logged retrospectively, often late or inconsistently. Low compliance. Event-by-event tagging is time-consuming and error-prone.
Manual report generation taking days instead of minutes. Limited customisation. Decisions made on data that's already outdated by the time it's compiled.
Students receive one survey per tutor interaction regardless of how many occur in a day. Result: low participation, degraded response quality, meaningless data.
No automation, no real-time visibility. Administrators manually track completion and send reminders. Missed submissions discovered only at billing time.
Every design decision in PULSE is traceable to a specific, identified need of a specific, identified person. These are not user types — they are the people this system will succeed or fail with.
Programme Administrator — Allied Health Education
Fear: Something slipping through — a missed billing entry, a student with no tutor, a feedback form never triggered. The consequences land on her.
Today: Receives student lists via email. Manually creates records across multiple Excel files.
With PULSE: Uploads a roster in any format. PULSE maps columns, flags duplicates, and shows a dry-run preview before commit.
Today: Chases tutors for hours via email. Downloads FormSG exports and manually reconciles them.
With PULSE: The Ops Feed shows only exceptions. Automated escalation handles reminders. Zero manual chasing.
Today: Spends days building billing and appraisal reports from multiple sources.
With PULSE: One-click report generation. Natural language queries. Export in a finance-ready format.
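The any-format roster upload described above can be sketched as a header-synonym mapping plus a dry-run pass that reports what would happen without committing anything. This is a minimal illustration only — `COLUMN_SYNONYMS`, `map_columns`, and `dry_run` are hypothetical names, not part of the submission:

```python
import csv
import io

# Hypothetical synonym map: header variants seen in ad-hoc rosters,
# mapped onto assumed canonical field names.
COLUMN_SYNONYMS = {
    "student_id": {"student id", "matric no", "id"},
    "name": {"name", "student name", "full name"},
    "discipline": {"discipline", "programme", "course"},
}

def map_columns(headers):
    """Best-effort mapping of roster headers to canonical fields."""
    mapping = {}
    for h in headers:
        key = h.strip().lower()
        for field, variants in COLUMN_SYNONYMS.items():
            if key in variants:
                mapping[h] = field
    return mapping

def dry_run(csv_text):
    """Parse a roster, flag duplicate IDs, and report -- without committing."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    mapping = map_columns(rows[0].keys()) if rows else {}
    sid_col = next((h for h, f in mapping.items() if f == "student_id"), None)
    seen, duplicates = set(), []
    for row in rows:
        sid = row[sid_col] if sid_col else None
        if sid in seen:
            duplicates.append(sid)
        seen.add(sid)
    return {"mapped": mapping, "rows": len(rows), "duplicates": duplicates}

roster = "Matric No,Student Name,Programme\nA001,Tan Wei,Medicine\nA001,Tan Wei,Medicine\n"
report = dry_run(roster)  # preview shown to the administrator before commit
```

The point of the dry-run step is that the administrator confirms the inferred mapping and sees the flagged duplicates before any record is created.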
Clinical Tutor — Medical Rotations
Fear: Hours not counted. Teaching effort not recognised. She discovers a discrepancy at billing time — too late to correct.
Today: Teaches all day. Must recall and reconstruct sessions later in a cumbersome portal.
With PULSE: PULSE pre-populates sessions from her schedule. She confirms in two taps. 20 seconds total.
Today: Submits hours, hears nothing. No visibility into approval status until billing runs.
With PULSE: Real-time state machine: Submitted → Under Review → Approved. Every loop is closed visibly.
Today: Reviews feedback across multiple disconnected screens. No consolidated view.
With PULSE: Teaching portfolio auto-compiled: hours by discipline, feedback trend, recognition history.
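The Submitted → Under Review → Approved state machine above can be expressed as an explicit transition table, so an illegal jump is refused rather than silently accepted. A sketch under stated assumptions — the `Rejected` state and the function names are illustrative additions, not confirmed by the submission:

```python
from enum import Enum

class SessionState(Enum):
    SUBMITTED = "Submitted"
    UNDER_REVIEW = "Under Review"
    APPROVED = "Approved"
    REJECTED = "Rejected"  # assumed terminal state for completeness

# Legal transitions; anything else raises instead of being accepted silently.
TRANSITIONS = {
    SessionState.SUBMITTED: {SessionState.UNDER_REVIEW},
    SessionState.UNDER_REVIEW: {SessionState.APPROVED, SessionState.REJECTED},
}

def advance(current, target):
    """Move a submission to `target`, enforcing the transition table."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target
```

Because every state change goes through `advance`, the tutor-facing status display can only ever show a state the submission legitimately reached.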
Medical Student — Clinical Posting
Fear: Missing a required submission and having it affect her academic standing — while being overwhelmed by the volume of forms already.
Today: Receives onboarding instructions across multiple emails. Unclear tutor assignments.
With PULSE: A single onboarding link. Schedule, tutors, and requirements in one view.
Today: Receives a separate survey for every tutor interaction. Up to six forms in one day.
With PULSE: One end-of-day consolidated prompt covering all tutors. Guided reflection framing. 3 minutes total.
Today: No visibility into posting progress. Discovers missed submissions late.
With PULSE: Progress arc visible. Private reflection summary at posting end. Developmental, not just administrative.
Head of Medical Affairs — Strategic Oversight
Fear: Being asked a strategic question in a meeting and having the wrong answer — because the data was compiled manually and was already outdated.
Today: Reviews manually compiled reports that took days to produce and may already be stale.
With PULSE: Opens PULSE on Monday to a five-sentence narrative brief: what changed, what's at risk, what deserves recognition.
Today: Extracts positive feedback manually for award nominations, negative feedback for programme review.
With PULSE: Recognition queue auto-populated. Feedback clusters surfaced by theme. Zero manual extraction.
Senior Clinician — Approval Layer
Fear: Approving something incorrect and being held accountable for it. The system must give them enough context to be confident — or catch what's wrong before they sign off.
Today: Receives a generic notification. Must log in, navigate to the submission, and find context manually.
With PULSE: The notification contains full context: student, session, duration, historical baseline. Approve in 60 seconds.
Today: No signal for anomalous hours. Approves without knowing a submission is 40% above baseline.
With PULSE: Anomaly flag shown inline with context. Not an accusation — a prompt for informed review before approval.
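The inline anomaly flag can be sketched as a simple deviation check against the tutor's historical baseline. A minimal illustration, assuming a 40% threshold as in the example above; `anomaly_flag` and its return shape are hypothetical:

```python
def anomaly_flag(submitted_hours, baseline_hours, threshold=0.40):
    """Flag a submission deviating from the tutor's historical baseline
    by more than `threshold`, with the context the approver needs."""
    if baseline_hours <= 0:
        return None  # no baseline yet; nothing to compare against
    deviation = (submitted_hours - baseline_hours) / baseline_hours
    if abs(deviation) > threshold:
        return {
            "deviation_pct": round(deviation * 100, 1),
            "baseline": baseline_hours,
            "submitted": submitted_hours,
            "note": "Prompt for informed review, not an accusation.",
        }
    return None

flag = anomaly_flag(submitted_hours=7.5, baseline_hours=5.0)  # 50% above baseline
```

A `None` result means the submission sails through; a flag is rendered inline next to the approval button, never as a blocking error.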
Every downstream process — billing, appraisal, feedback, accreditation — is a consequence of one teaching event being captured accurately. Not six separate workflows. One.
Core: Centralised student and tutor records across all disciplines. Intelligent bulk upload, semester memory, bi-temporal data modelling.
High Frequency: Mobile-first session logging with proactive pre-population. Visible state machine. Anomaly detection on every submission.
Data Quality: Configurable templates. Session-batched student prompts. Intelligent frequency capping. Guided reflection framing.
AI-Powered: Natural language queries. Narrative intelligence briefs. CQRS-separated read model for sub-minute report generation.
Compliance Driver: Behaviour-adaptive escalation. Context-aware content. Urgency calibrated to operational consequence, not fixed schedules.
Hospital Systems: Anti-Corruption Layer connecting BLUE, TAS, and Jobplan Portal. Core immune to legacy system changes.

Logging screen mockup:
Ward Round — General Medicine · 45 min
✓ Pre-populated from schedule
Bedside Teaching — Cardiology · 30 min
✓ Pre-populated from schedule
Student · Duration · Type — 20 seconds
| Tutor | Department | Hours | Avg Feedback | Status |
|---|---|---|---|---|
| Sr. Mdm Lim Bee | Cardiology | 34.5h | 4.8/5 | Approved |
| Dr. Rajesh Kumar | Gen Medicine | 28.0h | 4.6/5 | Approved |
| Sr. Wong Ai Ling | Paediatrics | 22.5h | 4.4/5 | Pending |
Every layer of the architecture was chosen to ensure that data entered at one end emerges correctly at the other — for billing, appraisal, and accreditation.
Trace how a single teaching session captured by Dr. Aisha flows through PULSE and emerges as billing data, appraisal record, feedback trigger, and audit entry — with zero manual intervention.
Every choice below was argued against its alternatives. Nothing was chosen for fashion. Everything was chosen because a specific requirement couldn't be met any other way.
Every competing submission will digitise NUH's existing workflows. PULSE is the only submission that identified why those workflows fail at a structural level — and designed a system that fixes the structure, not the symptoms.
Every platform in this space treats consolidation as a downstream step. PULSE makes consolidation impossible to need — data enters once, at source, and every downstream process is a consequence. There is no reconciliation gap to close because there is no gap.
The highest-stakes UX moment in any clinical education platform is the first semester upload. Every competing platform demands template conformance. PULSE reads what you have, infers what it means, and asks you to confirm — not reformat. Trust is built in the first five minutes, not demanded.
Teaching hour compliance is low not because tutors are unwilling but because logging systems require reconstruction after the fact. PULSE pre-populates sessions from known schedules. Tutors confirm rather than enter. The structural reason for non-compliance is removed.
The industry response to survey fatigue is fewer surveys — which trades compliance for data completeness. PULSE consolidates without reducing. One end-of-day prompt covers all tutors in a single submission flow. Data completeness maintained. Perceived burden reduced by 4×.
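The consolidation described above amounts to batching a day's interactions by student before any prompt is sent. A minimal sketch — `consolidate_prompts` and the event fields are assumed names for illustration:

```python
from collections import defaultdict

def consolidate_prompts(interactions):
    """Batch one day's tutor interactions into a single end-of-day prompt
    per student, instead of one survey per interaction."""
    by_student = defaultdict(list)
    for event in interactions:
        by_student[(event["student"], event["date"])].append(event["tutor"])
    return [
        {"student": s, "date": d, "tutors": tutors}
        for (s, d), tutors in by_student.items()
    ]

day = [
    {"student": "A001", "date": "2024-03-04", "tutor": "Dr. Kumar"},
    {"student": "A001", "date": "2024-03-04", "tutor": "Sr. Lim"},
    {"student": "A001", "date": "2024-03-04", "tutor": "Dr. Tan"},
]
prompts = consolidate_prompts(day)  # one prompt covering all three tutors
```

Note that no interaction is dropped: every tutor still receives feedback, but the student answers once.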
Static rules accumulate false positives as workloads change. PULSE's per-tutor model updates from every supervisor decision — approved anomalies update the baseline, sustained flags reinforce detection thresholds. The system becomes more accurately calibrated the longer it runs.
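One simple way to realise "approved anomalies update the baseline" is an exponentially weighted moving average, where each supervisor-approved submission nudges the per-tutor baseline toward observed behaviour. The EWMA form and the `alpha` smoothing factor are illustrative assumptions, not the submission's specified model:

```python
def update_baseline(baseline, approved_hours, alpha=0.2):
    """Nudge the per-tutor baseline toward each approved submission.

    `alpha` controls how fast the baseline adapts: higher values weight
    recent approvals more heavily.
    """
    if baseline is None:
        return approved_hours  # first observation seeds the baseline
    return (1 - alpha) * baseline + alpha * approved_hours
```

Under this scheme, a workload that genuinely grows stops triggering false positives after a few approvals, because the baseline has followed it.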
Most NL reporting implementations pass queries straight to LLM-generated SQL. PULSE shows the filter specification to the user before execution. Every report carries a visible filter summary. LLM errors surface before execution, not as corrupted billing data discovered at month-end.
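The guardrail above — show the filter specification before anything executes — can be sketched as an allow-listed structured spec that the LLM proposes and the user confirms. `ALLOWED_FIELDS` and `validate_filter_spec` are hypothetical names for illustration:

```python
# Assumed allow-list of queryable report fields; anything outside it
# is rejected before the query layer is ever reached.
ALLOWED_FIELDS = {"department", "tutor", "date_from", "date_to", "status"}

def validate_filter_spec(spec):
    """Reject unknown filter fields before anything touches the database,
    and return the human-readable summary shown on the report."""
    unknown = set(spec) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"Unrecognised filter fields: {sorted(unknown)}")
    return "; ".join(f"{k} = {v}" for k, v in sorted(spec.items()))

# The LLM proposes a structured spec, never raw SQL:
spec = {"department": "Cardiology", "status": "Approved"}
summary = validate_filter_spec(spec)  # displayed to the user before execution
```

Because the model can only emit fields from the allow-list, a hallucinated filter fails loudly at validation time instead of surfacing as corrupted billing data at month-end.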
Generic reminder systems treat all users and all deadlines identically. PULSE's engine maintains a communication profile per user — tone, channel, and urgency adapt to response history and operational consequence. Compliance improvement is a communication design problem, not just a UX one.
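A per-user communication profile can be reduced, in sketch form, to a small decision function over response history and deadline proximity. The thresholds, channels, and the `email_response_rate` field are assumptions for illustration only:

```python
def next_reminder(profile, days_to_deadline):
    """Pick channel and tone from a per-user response history and how
    close (and consequential) the deadline is."""
    # Users who historically ignore email get escalated to SMS sooner.
    slow_responder = profile.get("email_response_rate", 1.0) < 0.5
    if days_to_deadline <= 1:
        return {"channel": "sms", "tone": "urgent"}
    if slow_responder and days_to_deadline <= 3:
        return {"channel": "sms", "tone": "direct"}
    return {"channel": "email", "tone": "neutral"}
```

A responsive user five days out gets a neutral email; a historically slow responder two days out is escalated to SMS — the same deadline, communicated differently.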
Tight coupling to legacy hospital systems is why every previous integration attempt has broken when systems change. PULSE's ACL means external system format changes touch only the connector — the core is architecturally immune. Integration failures queue and retry. The system never stops because an external system did.
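The connector-only isolation can be sketched as a translate-then-deliver wrapper with a retry queue, so a legacy outage never reaches the core. `LegacyConnector` and its callbacks are illustrative names, not the submission's actual interface:

```python
from collections import deque

class LegacyConnector:
    """Anti-Corruption Layer sketch: translate a legacy payload into the
    core's model, queueing failures for retry so the core never blocks."""

    def __init__(self, translate, send):
        self.translate = translate   # legacy format -> core model
        self.send = send             # delivery to the core (may fail)
        self.retry_queue = deque()

    def deliver(self, legacy_payload):
        event = self.translate(legacy_payload)  # only this line's behaviour
        try:                                    # changes when the external
            self.send(event)                    # format changes
        except ConnectionError:
            self.retry_queue.append(event)      # queue, never drop

    def flush(self):
        """Retry queued events in order; stop at the first failure."""
        while self.retry_queue:
            try:
                self.send(self.retry_queue[0])
                self.retry_queue.popleft()
            except ConnectionError:
                break  # still down; try again later
```

When BLUE, TAS, or the Jobplan Portal changes its format, only the `translate` callable for that connector is rewritten; the queue-and-retry behaviour and the core model are untouched.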
A submission that owns its risks is more credible than one that pretends they don't exist. These are the three that matter most — and the specific responses to each.
Previous solutions failed here. A better interface alone doesn't fix behaviour established over years. Dr. Aisha has better things to do between patients than open a portal.
Proactive session detection reduces logging to confirmation. Professional portfolio framing creates intrinsic motivation. Supervisor-visible compliance creates social accountability. Plus: structured onboarding with department champions — clinicians who used the system in UAT advocating peer-to-peer. Technology gets you 70%. Change management gets the rest.
Bi-temporal tables are the right answer for audit integrity. They are also genuinely difficult to implement correctly. A wrong query returns incorrect billing data silently — no error, wrong output.
A query abstraction layer encapsulates all temporal logic; no service writes raw temporal SQL. The abstraction is exhaustively tested, including explicit failure-mode tests: cycle boundary queries, mid-posting edits, retrospective corrections. If timelines come under pressure, the fallback is append-only versioned records — simpler, and still audit-compliant.
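The core bi-temporal question — "what did we believe at time X about what was true at time Y?" — can be sketched as a single helper over records carrying both valid-time and system-time intervals. A minimal in-memory illustration with assumed field names; the real abstraction would sit over SQL:

```python
INF = float("inf")

def as_of(records, valid_at, known_at):
    """Bi-temporal lookup: rows valid at `valid_at` AND already recorded
    (and not yet superseded) in the system by `known_at`. Services call
    this helper; none write raw temporal SQL."""
    return [
        r for r in records
        if r["valid_from"] <= valid_at < r["valid_to"]
        and r["system_from"] <= known_at < r["system_to"]
    ]

rows = [
    # original 4.0h entry, superseded at system time 20 by a correction
    {"hours": 4.0, "valid_from": 1, "valid_to": INF, "system_from": 10, "system_to": 20},
    {"hours": 5.0, "valid_from": 1, "valid_to": INF, "system_from": 20, "system_to": INF},
]

# What did we believe at time 15? The uncorrected 4.0-hour record.
believed_then = as_of(rows, valid_at=5, known_at=15)
```

The silent-wrong-answer risk named above lives entirely in these interval comparisons — which is exactly why they are written once, in one tested helper, rather than scattered across services.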
Hospital legacy systems are undocumented, have unofficial APIs that change without notice, and are operated by IT teams with independent change management processes.
Anti-Corruption Layer means only the connector changes when external systems do — core is untouched. Graceful degradation: connectors queue and retry if unavailable. Prototype demonstrates the architecture with simulated connectors — pattern proven, specific connector implementation is a parallel workstream, not a prototype dependency.
The architecture is defined. The technology is chosen. The personas are mapped. The risks are owned. The prototype development begins the moment this is selected — and the system that NUH gets is one that compounds in value the longer it runs.
"Not selecting PULSE means selecting a system that digitises the surface of NUH's problem. PULSE is the only submission that has looked at what causes clinical education data to fail downstream — and designed the architecture to fix it at the root. That difference isn't in the feature list. It's in every decision that produced the feature list."