📊 Building a Cross-Customer Analytics Workflow for Better Product Decisions
June 15, 2024
By Ted Steinmann

One of the recurring challenges in SaaS product management is that the questions leadership asks are often simple, but the data required to answer them is not.
Questions like:
- Which customer segments are most active?
- Where are users adopting key features?
- Which markets show the strongest product engagement?
- How should we prioritize migrations, support effort, or expansion?
In practice, answering those questions meant pulling data across many separate customer databases—often a slow, manual process.
To solve this, I designed and coded a reusable analytics workflow using Python and Jupyter notebooks. I wrote the utilities for data gathering and wrangling, making cross-customer analysis faster, more repeatable, and more useful for business decision-making.
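The core pattern can be sketched in a few lines. The function name below (`run_across_customers`) is illustrative rather than the actual utility, and sqlite3 stands in for the SQL Server connections the real workflow used, so the sketch stays self-contained:

```python
import sqlite3

def run_across_customers(databases, sql):
    """Run one SQL statement against every customer database and
    combine the rows, tagging each with its source customer."""
    combined = []
    for customer_id, path in databases.items():
        conn = sqlite3.connect(path)
        conn.row_factory = sqlite3.Row  # rows become name-addressable
        try:
            for row in conn.execute(sql):
                record = dict(row)
                record["customer_id"] = customer_id  # track provenance
                combined.append(record)
        finally:
            conn.close()
    return combined
```

The key design choice is tagging every row with its source database, so the consolidated dataset can still be sliced back down by customer.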
The Business Problem
The product served many customers, each in its own isolated SQL Server database. A single business question could require querying dozens of databases, combining results, and organizing everything into a usable report. This led to slow turnaround, inconsistent methods, and limited visibility across segments and geographies.
The Approach
I built a lightweight analytics platform to run standardized queries across multiple customer databases, filter by segment or geography, reuse proven SQL, and consolidate results into a single dataset. Outputs were saved for repeatability and auditability, turning ad hoc work into reusable reporting assets.
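Scoping by segment or geography can be handled before any query runs, by filtering a small registry of customer metadata. This is a minimal sketch, assuming a registry shaped as a dict of customer IDs to metadata; the function name and field names are hypothetical:

```python
def select_customers(registry, segment=None, region=None):
    """Filter the customer registry by segment and/or region so the
    same query set can be scoped consistently across analyses."""
    return {
        cid: meta for cid, meta in registry.items()
        if (segment is None or meta.get("segment") == segment)
        and (region is None or meta.get("region") == region)
    }
```

Because the filter runs against metadata rather than inside each query, the same proven SQL can be reused unchanged for any slice of the customer base.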
This dramatically reduced the effort required to move from “we have a question” to “we have a usable answer.”
Why This Mattered for Product Decisions
The real value was supporting better decisions with less friction. The workflow made it easier to answer questions about customer activity, feature adoption, usage by segment, and prioritization for outreach or support. This visibility helped shape product strategy by combining multiple signals across the customer base.
Reusability and Repeatability
One of the biggest improvements was turning one-off reporting into a repeatable process. The workflow used reusable queries, shared setup, consistent filters, and saved outputs. The same analysis could be rerun as data changed, logic was easier to verify, and stakeholders had more confidence in the results.
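Saved outputs were what made reruns comparable over time. One simple way to get that, sketched here with a hypothetical `save_results` helper, is writing each consolidated result set to a date-stamped CSV:

```python
import csv
from datetime import date
from pathlib import Path

def save_results(rows, query_name, out_dir="outputs"):
    """Write consolidated rows to a date-stamped CSV so the same
    analysis can be rerun later and compared against prior pulls."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"{query_name}_{date.today().isoformat()}.csv"
    if not rows:
        return path  # nothing to write; still return the target path
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return path
```

Naming files by query and date gives an audit trail for free: any stakeholder question about a past number can be traced to the exact pull that produced it.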
Repeatable data pulls, such as recurring analyses of the EMS workforce, made it easier to support roadmap, pricing, segmentation, and operational decisions.
Human-Readable and AI-Useful
Jupyter notebooks provided a strong middle ground between raw scripts and finished reports. They were readable for humans, structured for AI-assisted work, and made the workflow easier to review, hand off, and extend.
What This Reflects About My Approach
This project reflects my approach at the intersection of product, data, and operations:
- Reduce repetitive manual effort
- Improve speed and quality of decision support
- Create reusable systems
- Make analytics easier to understand and trust
- Connect technical tools to business value
Closing Thought
For me, this project was about building decision infrastructure. The outcome was a practical way to answer cross-customer business questions with more speed, consistency, and confidence—transforming analytics from a manual burden into a scalable capability.
That is the kind of work I enjoy most: building systems that make better decisions easier.
Categories: projects
Tags: product-management, analytics, data, systems-thinking, reporting