
DataTalks.Club

Season 18, Episode 7

Building a Domestic Risk Assessment Tool: Data Cleaning, Risk Scoring Models and Privacy Compliance | Sabina Firtala

Show Notes

How do you build an accurate, privacy-compliant domestic risk assessment tool that frontline teams can actually use? In this episode, Sabina Firtala, who leads Frontline's AI product development and brings experience in data wrangling, model validation, and applied analytics from finance, SaaS, and mission-driven projects, walks through a practical roadmap.

We cover problem framing and project scope; data sources such as case management records, public records, and surveys; and hands-on data work: cleaning, record linkage, and feature engineering. Sabina explains risk scoring approaches and model architecture, evaluation metrics and bias assessment, as well as privacy, ethical considerations, and legal data governance. You’ll also hear about deployment into frontline workflows, user interface and decision-support design, training and stakeholder trust, ongoing monitoring and drift detection, and examples of impact on triage and resource allocation. The conversation closes with collaboration strategies, funding and scaling, open documentation for reproducibility, and concrete lessons learned.
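
To make the risk scoring discussion concrete, here is a minimal illustrative sketch, not taken from the episode: a logistic regression baseline scored on synthetic data and evaluated with AUC, one of the standard metrics for ranking-based triage. The feature names and data are hypothetical placeholders for the kind of engineered features the episode describes.

```python
# Illustrative sketch only: a simple risk scoring baseline with
# synthetic data. Features and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical engineered features, e.g. prior incident count,
# days since last contact, survey-derived severity score.
X = rng.normal(size=(1000, 3))
# Synthetic labels loosely correlated with the first feature.
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression()
model.fit(X_train, y_train)

# Produce probabilities rather than hard labels, so cases can be
# ranked for triage and thresholds tuned per resource constraints.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, risk_scores):.3f}")
```

In practice, a bias assessment of the kind the episode covers would evaluate metrics like this separately across demographic subgroups, not just in aggregate.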

Listen for actionable guidance on data cleaning, building and validating risk scoring models, and ensuring privacy compliance so you can design responsible, usable domestic risk assessment tools.
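
On the privacy side, one common pattern for compliant record linkage is pseudonymizing identifiers with a keyed hash before data ever leaves the source system. The sketch below is illustrative only, not the approach confirmed in the episode; the key handling is deliberately simplified, and a real deployment would use a managed secret and a documented retention policy.

```python
# Illustrative sketch only: keyed hashing (HMAC-SHA256) of normalized
# identifiers, so records can be linked without sharing raw PII.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key


def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for an identifier."""
    normalized = value.strip().lower()  # basic cleaning before hashing
    return hmac.new(SECRET_KEY, normalized.encode("utf-8"),
                    hashlib.sha256).hexdigest()


# Records from two hypothetical sources link on the token, and light
# normalization absorbs formatting noise between systems.
token_a = pseudonymize("Jane Doe")
token_b = pseudonymize("  jane doe")
assert token_a == token_b
```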

Timestamps

Timestamps coming soon...

