Hackathon Date: January 29, 2026 • 5:00 PM - 9:00 PM IST
January 2026: Call Recording Review Portal
The Problem
Our language consultants review thousands of voice agent recordings daily across Hindi, Marathi, Tamil, Kannada, Spanish, and other languages. Today’s workflow is painful:

1. Log into AWS Console: navigate through IAM authentication.
2. Find the S3 bucket: locate ad2-production among dozens of buckets.
3. Construct the path: manually build the path using a long UUID: media/{tenant}/freeswitch/{YYYY}/{MM}/{DD}/{uuid}/
4. Download & play: download the file locally and open it in an audio player.
5. Track in a spreadsheet: record review notes in a separate Excel file.
Your Mission
Build a web-based Call Recording Review Portal that reduces review friction from minutes to seconds.
Technical Context
Recording Storage (S3)
- Bucket: ad2-production
- Path format: media/{tenant}/freeswitch/{YYYY}/{MM}/{DD}/{uuid}/
Metadata (PostgreSQL)
Call metadata is stored with fields:
- call_uuid: unique identifier
- timestamp: call start time
- duration: call length in seconds
- language: hi, mr, ta, kn, es, etc.
- agent_id: voice agent identifier
- phone_number: customer phone
- tenant_id: client identifier
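Given these metadata fields, the S3 object prefix for a recording can be derived directly. A minimal sketch (the tenant name and UUID below are made-up examples; the bucket and path format are from the brief):

```python
from datetime import datetime, timezone

BUCKET = "ad2-production"

def recording_key_prefix(tenant_id: str, call_uuid: str, ts: datetime) -> str:
    """Build the S3 key prefix for a recording from its call metadata.

    The path format from the brief ends in a trailing slash, so the
    audio file(s) live under this prefix.
    """
    return f"media/{tenant_id}/freeswitch/{ts:%Y}/{ts:%m}/{ts:%d}/{call_uuid}/"

prefix = recording_key_prefix(
    "acme",  # hypothetical tenant
    "3f2b8c9e-0000-0000-0000-000000000000",  # hypothetical call UUID
    datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc),
)
print(prefix)
# media/acme/freeswitch/2026/01/15/3f2b8c9e-0000-0000-0000-000000000000/
```

For in-app playback without downloads (P0 requirement), one option is to pass such a key to boto3's `generate_presigned_url("get_object", ...)` and feed the resulting time-limited URL to the frontend audio player.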
Scale
| Metric | Value |
|---|---|
| Recordings per month | ~10 million |
| Average duration | 2 minutes |
| Concurrent users | 100 |
| Global access | Required |
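These numbers imply meaningful storage and bandwidth. A back-of-envelope sketch, assuming recordings are compressed at 64 kbps (the bitrate is an assumption, not stated in the brief):

```python
RECORDINGS_PER_MONTH = 10_000_000
AVG_DURATION_S = 120             # 2-minute average, per the table above
ASSUMED_BITRATE_BPS = 64_000     # assumption: 64 kbps compressed audio

bytes_per_recording = ASSUMED_BITRATE_BPS / 8 * AVG_DURATION_S
monthly_bytes = bytes_per_recording * RECORDINGS_PER_MONTH

print(f"{bytes_per_recording / 1e6:.2f} MB per recording")  # 0.96 MB
print(f"{monthly_bytes / 1e12:.1f} TB per month")           # 9.6 TB
```

Roughly 1 MB per file and ~10 TB/month of new audio: enough that streaming ranges from S3 (rather than proxying whole files through the app server) is worth considering.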
Deployment
- EC2 instance
- Must be accessible via internet
- HTTPS required
Requirements
P0 — Must Have
1. Authentication: enable secure username/password login for all users.
2. Dashboard: provide a central dashboard to review progress and quality trends.
3. Language-based access control: ensure consultants can access only recordings for their assigned languages.
4. Search & Filter: allow filtering by date range, language, UUID, and phone number.
5. Audio Playback: stream call recordings directly in the app without downloads.
6. Rating System: enable rating of recordings across defined quality dimensions.
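The search & filter requirement maps naturally onto a parameterized SQL query over the metadata table. A minimal sketch, assuming a table named `calls` (the table name is an assumption; the column names come from the metadata fields above), using psycopg2-style `%s` placeholders:

```python
def build_search_query(filters: dict):
    """Build a parameterized SQL query from optional search filters.

    Values never get interpolated into the SQL string, so user input
    cannot inject SQL; they are returned separately as params.
    """
    clauses, params = [], []
    if "language" in filters:
        clauses.append("language = %s")
        params.append(filters["language"])
    if "call_uuid" in filters:
        clauses.append("call_uuid = %s")
        params.append(filters["call_uuid"])
    if "phone_number" in filters:
        clauses.append("phone_number = %s")
        params.append(filters["phone_number"])
    if "date_from" in filters:
        clauses.append("timestamp >= %s")
        params.append(filters["date_from"])
    if "date_to" in filters:
        clauses.append("timestamp < %s")
        params.append(filters["date_to"])
    where = " AND ".join(clauses) if clauses else "TRUE"
    sql = (
        "SELECT call_uuid, timestamp, duration, language "
        f"FROM calls WHERE {where} ORDER BY timestamp DESC"
    )
    return sql, params

sql, params = build_search_query({"language": "hi", "date_from": "2026-01-28"})
```

The `(sql, params)` pair would then be passed to `cursor.execute(sql, params)`.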
P1 — Should Have
1. Notes & Tags: allow users to add comments and categorize recordings using tags.
2. Enhanced Ratings: support more detailed or multi-dimensional quality ratings.
3. Advanced Player: add playback speed control, waveform visualization, and keyboard shortcuts.
4. Role-based Permissions: support Consultant, Lead, and Admin roles with different access levels.
P2 — Bonus
1. AI Evaluation: automatically rate recordings using an LLM-as-judge.
2. Human vs AI Comparison: display side-by-side comparisons of human and AI-generated ratings.
3. Export: allow exporting reviews and ratings as CSV or Excel files.
4. Sharing: generate shareable links for team collaboration and reviews.
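The CSV export bonus needs little beyond the standard library. A minimal sketch (the column set is illustrative, not a fixed schema):

```python
import csv
import io

def export_reviews_csv(reviews: list[dict]) -> str:
    """Serialize review rows to CSV text suitable for a file download."""
    fieldnames = ["call_uuid", "language", "rating", "notes"]  # illustrative columns
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(reviews)
    return buf.getvalue()

csv_text = export_reviews_csv(
    [{"call_uuid": "abc", "language": "hi", "rating": 4, "notes": "ok"}]
)
```

In a web app, the returned string would be sent with a `Content-Disposition: attachment` header so the browser saves it as a file.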
User Roles
| Role | Permissions |
|---|---|
| Consultant | View & rate recordings for assigned languages only |
| Lead | All consultant permissions + view team’s work + manage tags |
| Admin | Full access + user management + all languages |
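The permission matrix above reduces to a small check on each recording request. A minimal sketch, assuming users are represented as dicts with `role` and `languages` keys (the representation is an assumption; the rules come from the table):

```python
def can_access(user: dict, recording_language: str) -> bool:
    """Return True if the user may view a recording in the given language.

    Per the roles table: Admins see all languages; Consultants and Leads
    are restricted to their assigned languages.
    """
    if user["role"] == "Admin":
        return True
    return recording_language in user["languages"]

consultant = {"role": "Consultant", "languages": {"hi", "mr"}}
admin = {"role": "Admin", "languages": set()}
```

Enforcing this server-side on every recording and listing endpoint (not just in the UI) is what makes the P0 access-control requirement hold up.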
Suggested Tech Stack
| Layer | Technology |
|---|---|
| Backend | Python (Django or FastAPI) |
| Frontend | React/Angular |
| Database | PostgreSQL (existing) |
| Auth | Django Auth or JWT |
| Audio Player | wavesurfer.js |
Sample User Stories
Hindi Consultant
“I want to see all unrated Hindi calls from yesterday so I can complete my daily review quota.”
Team Lead
“I want to filter calls tagged as ‘Training Example’ to compile onboarding materials.”
Admin
“I want to see which consultants have lowest agreement with AI ratings for calibration training.”
Any User
“I want to share a particularly good call with my manager via a simple link.”
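The admin story above needs some notion of human/AI agreement. A minimal sketch using mean absolute difference between paired ratings (the metric choice and the data shapes are assumptions for illustration):

```python
def mean_abs_disagreement(pairs: list[tuple[float, float]]) -> float:
    """Mean absolute difference between (human, ai) rating pairs.

    Ratings are assumed to share a scale; lower means closer agreement.
    """
    if not pairs:
        return 0.0
    return sum(abs(h, ) if False else abs(h - a) for h, a in pairs) / len(pairs)

def rank_consultants(ratings_by_consultant: dict) -> list[str]:
    """Order consultants from lowest to highest AI agreement (biggest gap first)."""
    return sorted(
        ratings_by_consultant,
        key=lambda name: mean_abs_disagreement(ratings_by_consultant[name]),
        reverse=True,
    )
```

Surfacing the top of this ranking on the admin dashboard would directly answer the calibration-training question.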
Schedule
| Time | Activity |
|---|---|
| 4:00 PM | Kickoff, problem walkthrough, Q&A |
| 4:15 PM | Hacking begins 🚀 |
| 7:45 PM | Code freeze, prep demos |
| 8:00 PM | Team demos (10 min each) |
| 8:30 PM | Voting & judging |
| 9:00 PM | Winners announced |
Teams
Team 1
Members:
- Kapil
- Sandeep
- Natansh
Team 2
Members:
- Anurag
- Harsh
- Kashvi
Team 3
Members:
- Nikunj
- Manasvi
- Anusha
Evaluation Criteria
| Criteria | Weight | What We’re Looking For |
|---|---|---|
| Core Functionality | 40% | Search, play, rate — working end-to-end |
| User Experience | 20% | Intuitive, fast, minimal friction |
| Code Quality | 15% | Clean, maintainable, documented |
| Security | 10% | Auth works, no obvious vulnerabilities |
| Performance | 10% | Handles expected load without lag |
| Bonus Features | 5% | AI evaluation, dashboard, extras |
Resources
Tools Available
- Full access to coding tools including Claude Code
- Any open-source libraries
- AI assistants for code generation
FAQ
What if we can't finish everything?
Focus on P0 requirements first. A working MVP beats an incomplete feature-rich app. Judges value “it works” over “it almost does everything.”
Can we use AI tools like Claude Code?
Yes! All coding tools are allowed. However, the problem is designed with constraints that require real engineering judgment. AI helps but doesn’t solve everything.
What happens to winning solutions?
Winning solutions (or the best parts from multiple teams) may be developed further and deployed to production. You might be building something consultants use daily!
Remember: The best solutions come from understanding the user. Our consultants spend hours every day reviewing calls — build something that makes their work delightful.