How AI Helped Us Listen Better, Stay Present, and Discover Deeper Insights — Dogfooding HearBack through 30 executive interviews

Over the past few weeks, we had the opportunity to put HearBack — our AI-powered feedback and engagement platform — into real-world use by applying it across 30 executive interview sessions. These weren’t short, pre-scripted tests. They were full-length conversations, each lasting between 60 and 90 minutes, involving a range of formats and settings — one-on-one dialogues in quiet meeting rooms, team discussions around conference tables, and even group interviews in large spaces with ongoing presentations.

The idea was simple: we wanted to see, without filters or shortcuts, whether HearBack could take in the raw material of natural conversations and return structured insights that were useful, trustworthy, and ready to act on.

And as it turned out, it could. But what we also got — which was equally valuable — was a deeper understanding of what it takes to run meaningful interviews at scale, and what kinds of decisions, habits, and tools help bring the best out of both the people in the room and the AI doing the post-processing.

What Is HearBack?

HearBack is a platform built around a core conviction: that conversations contain value that’s too often lost in the noise — or the backlog. Whether it’s a voice note, a casual debrief, or a strategic interview, there’s always something being said that matters.

At its heart, HearBack operates on a simple yet powerful pipeline: Collect → Analyze → Visualize.

  • It collects feedback in different forms — spoken audio, text inputs, recorded video, written notes, uploaded photos or PDFs.
  • It then analyzes by running that data through customized AI prompts that help extract meaning — summarizing, analyzing sentiment, highlighting risks, tracking action points, and more.
  • Finally, it visualizes — turning the AI’s output into formats that are actually usable: executive summaries, comparison tables, word clouds, sentiment overviews, and even audio briefings.
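
To make that pipeline concrete, here is a minimal Python sketch of the flow. Every name in it (Session, collect, analyze, run_llm) is an illustrative stand-in, not HearBack's actual API:

  # A minimal sketch of the Collect -> Analyze -> Visualize flow.
  # Every name here is an illustrative stand-in, not HearBack's API.
  from dataclasses import dataclass, field

  @dataclass
  class Session:
      name: str
      prompt: str                                  # guides the analysis
      inputs: list = field(default_factory=list)   # transcripts, notes, docs

  def collect(session, item):
      session.inputs.append(item)                  # raw feedback, in any form

  def analyze(session, run_llm):
      # One prompt, applied to everything the session collected.
      return run_llm(session.prompt, "\n\n".join(session.inputs))

  def visualize(session, insights):
      # Render the analysis as a simple, shareable summary.
      return f"{session.name}\n\n{insights}"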

What makes it versatile is its configurability. You define how each session should be processed — and you get consistent, structured outputs across the board. And with everything running securely on the Microsoft Azure Cloud, it’s designed to meet enterprise-level expectations around data privacy and stability.

Setting Up a Session

Creating a new session in HearBack is deliberately kept lightweight. You name the session, choose the input types you want to allow (voice, text, photo uploads, documents, etc.), and define the prompt that should guide how feedback is processed for that session.

This prompt becomes the anchor for the AI’s understanding. It tells the system what to look for, what matters, and how to frame its output.

That’s especially important when you’re planning to process dozens of interviews in a batch. The consistency of prompt-driven analysis makes it possible to later compare, extract themes, and surface collective insights without needing to rework every transcript by hand.
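
As a rough illustration, a session definition might look something like this; the field names are ours, not HearBack's actual schema:

  # Hypothetical session definition; field names are illustrative only.
  session_config = {
      "name": "Executive Interviews - Batch 1",
      "inputs": ["voice", "text", "photo", "document"],
      "prompt": (
          "Summarize the interview. Extract decisions, risks, "
          "opportunities, and proposed actions. Note overall sentiment "
          "and include direct quotes wherever possible."
      ),
  }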

Not Everyone Is Comfortable Being Recorded

A valuable lesson surfaced early in the process — and it had nothing to do with tech.

While setting up interviews, we found that not everyone was immediately at ease with the idea of being recorded. Even in internal contexts, where discussions were professional and meant to be shared within the organization, the mention of recording sometimes triggered a pause. Some were surprised. Others hesitated briefly before agreeing.

What made the difference, in most cases, was a simple, honest explanation.

We took the time to share what the recording was for — not for surveillance, not for evaluation, but to make it possible for the interviewer to stay engaged and for the AI to later generate clear, structured reporting. We emphasized that recordings would be handled privately and securely, and would not be repurposed outside the scope of the session.

In a few cases, participants asked for a pause in recording. That’s where a small design feature made a big impact: our USB omnidirectional mic had an LED light that glowed blue while recording, and red when muted. When the light turned red, it gave instant and visible assurance that their request was respected.

In the end, every participant agreed to the recording. But this interaction reminded us of something important: trust isn’t assumed — it’s earned, in small ways, through clarity and control.

Why Context Matters for Better AI Output

One of the most consistent findings from this whole process was this: context matters — a lot.

It’s tempting to assume that if you have a good transcription, the AI will figure out what the session was about and what needs to be highlighted. But AI doesn’t understand implicit intent unless you spell it out.

What made a tangible difference in the quality of HearBack’s output was taking a minute to either write a short description or speak an intro at the start of the session.

That gave the AI what it needed to frame its output properly. Without it, the summaries would still be accurate, but they might miss the point — burying the signal under the noise.

With context, the AI could tell the difference between a passing comment and a key insight. It could prioritize what mattered to the audience and deliver outputs that felt relevant — not generic.
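
A context cue does not need to be elaborate. A hypothetical example of the kind of short description or spoken intro we mean:

  "This is a one-on-one interview with a department head about an
  upcoming platform rollout. We care most about risks, open decisions,
  and who owns the next steps."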

Our Recording Setup

We kept our setup intentionally simple. A Lenovo 13″ Android tablet served as the base. Its screen was large enough to run split-view — showing the recorder on one side and the interview script or notes on the other.

We paired it with a USB omnidirectional mic, connected using a USB-C adapter. It was plug-and-play, and more importantly, it just worked.

The live waveform display on the recorder app gave constant reassurance that audio was being captured clearly. That was helpful not just for us, but for the participants — it subtly communicated that this was a proper setup, not a casual afterthought.

And of course, the LED light on the mic — glowing blue when live, and red when muted — helped reinforce that the tech was transparent, visible, and easy to trust.

Audio Format: Keep It Clean

We recorded in mono-channel WAV format. This was deliberate.

WAV is uncompressed, so it preserves every detail — overlapping speech, subtle tone shifts, trailing sentences. For AI transcription, that extra clarity translates into better accuracy, especially in environments where multiple people are speaking.

MP3 might be easier to store, but for this workflow, WAV paid off — in every transcript.
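
If you want to verify a file before uploading, a quick check with Python's standard wave module is enough; the file name here is just an example:

  # Sanity-check that a recording is mono WAV before upload.
  import wave

  with wave.open("interview_01.wav", "rb") as f:
      assert f.getnchannels() == 1, "expected mono audio"
      minutes = f.getnframes() / f.getframerate() / 60
      print(f"{f.getframerate()} Hz, {minutes:.1f} minutes")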

The Room Matters Too

Where you record affects what you get.

In smaller rooms, the audio was clean. But when we recorded in bigger rooms or spaces with glass walls, echo kicked in. Side conversations — even low-volume ones — blurred the main dialogue.

And whenever speakers relied heavily on visuals (“as you can see on this slide”), meaning was lost unless we had a way to anchor the context.

So we adapted:

  • We chose smaller rooms where possible.
  • We encouraged participants to verbalize what was being shown.
  • We uploaded supporting documents — PDFs, slide decks — to the same session, giving the AI something to refer to.

Use a Script — But Loosely

While the conversations were natural and open-ended, having a light structure helped us stay focused and ensure key areas were covered.

Our guide included:

  • What decisions were made
  • What risks were raised
  • What opportunities emerged
  • What actions were proposed
  • What stood out in terms of sentiment or tone

We also started each session with a short spoken introduction to provide the context HearBack needed to tag and organize the data correctly, which made the reports cleaner downstream. The AI's flexibility allowed us to use the script in a natural, lively manner rather than following it robotically, helping create a more cordial mood for a productive interview.

We’ve also found it useful to generate a preliminary report even with just a few interviews at hand. This early analysis helps surface what information is naturally emerging — and more importantly, what’s missing. By reviewing these initial outputs, you can quickly identify gaps, adjust your line of questioning, and fine-tune your script to cover those areas. HearBack’s automation makes this discovery process seamless, and that feedback loop significantly improved the quality and focus of our subsequent interviews.

Paraphrasing in Real Time

A small but effective technique: paraphrasing what participants said during the session.

Repeating back what was shared — in our own words — did three things:

  • It showed we were listening.
  • It gave the speaker a chance to refine or correct.
  • It added clarity to the transcript, reinforcing key ideas and names.

The AI benefited from this too. Transcripts that included these paraphrased checkpoints were easier for HearBack to summarize accurately and coherently.

When Things Go Wrong

In one session, the tablet recorder crashed midway.

We didn’t stop the conversation. We switched to a phone app and kept going.

Afterward, we used Audacity, a free desktop audio editor, to stitch the two WAV files together. The full recording was then uploaded to HearBack for processing — and it handled it just fine.
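
Audacity worked well for us, but the same stitch can also be scripted. Here is a minimal sketch using Python's standard wave module, assuming both parts share the same sample rate and channel count; the file names are illustrative:

  # Concatenate two WAV recordings into a single file.
  import wave

  parts = ["part1_tablet.wav", "part2_phone.wav"]

  with wave.open(parts[0], "rb") as first:
      params = first.getparams()       # reuse the first part's format

  with wave.open("full_session.wav", "wb") as out:
      out.setparams(params)            # frame count is fixed up on close
      for name in parts:
          with wave.open(name, "rb") as part:
              out.writeframes(part.readframes(part.getnframes()))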

Lesson: always have a backup. But more importantly, don’t break flow when things break — tech can be fixed later.

One Transcript, Multiple Outputs

This is where HearBack proved its value.

From a single transcript, we generated:

  • Executive summaries
  • Action item trackers
  • Risk and issue registers
  • Opportunity briefs
  • Sentiment overviews

We tested the AI prompt on one session, refined it, and then ran that same prompt across the rest — all thirty sessions processed with consistent logic and structure.
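
The batch pattern itself is simple. Here is a sketch, with load_transcript and run_llm as stand-ins for the transcript lookup and the LLM call; neither is a HearBack function:

  # Refine the prompt on one session, then apply it unchanged to the rest.
  PROMPT = (
      "Produce an executive summary, action items, risks and issues, "
      "opportunity briefs, and a sentiment overview. Anchor each point "
      "to a direct quote where possible."
  )

  def process_all(session_ids, load_transcript, run_llm):
      reports = {}
      for sid in session_ids:
          reports[sid] = run_llm(PROMPT, load_transcript(sid))
      return reports   # consistent logic and structure across sessions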

With HearBack’s Storify feature, we then stepped back and looked across all thirty sessions at once. Storify is designed to provide an aggregate analysis, and it pulled together:

  • Common themes
  • Differences across departments
  • Comparative tables
  • Word clouds
  • And even an audio summary of how the interviews went collectively

Quotes Add a Human Layer

We asked HearBack to include direct quotes in the output wherever possible. This wasn’t just for style — it was a way to keep the reporting grounded.

Quotes did two things:

  • They acted as fact checks — if the AI couldn’t anchor a point to a real quote, it didn’t include it.
  • They made the reports feel more alive — reminding the reader that behind the bullet points were real people with real voices.

The combination of analysis and voice made the insights easier to trust — and more engaging to read.

What Would Have Happened Without HearBack

Without HearBack, here’s what we would’ve had to do:

  • Manually replay and review over 40 hours of audio
  • Take and organize handwritten notes
  • Extract action items one by one
  • Compare sessions by eye
  • Summarize inconsistently across analysts

Even with a tool like ChatGPT Plus, the sheer manual work of copy-pasting long transcripts and managing context would have made the whole process too heavy.

With HearBack, everything was collected under a shared event structure — making batch processing, cross-analysis, and reporting repeatable and efficient.

The HearBack Difference

The difference wasn’t just in the outputs — it was in the process itself.

  • We got to focus on the conversation, not on note-taking.
  • Transcripts were created automatically.
  • Reports were generated instantly, using prompts we controlled.
  • We could look at each session individually — or zoom out and compare them all.
  • Quotes kept us grounded in the voices of the people we spoke to.

What used to take days of analyst effort now took minutes — without losing depth or accuracy.

Final Thought

Interviews should feel like conversations, not clerical tasks. HearBack helped us preserve that — letting us stay in the moment, while AI took care of the mechanics.

We turned over 40 hours of recordings into actionable, trusted insights — faster, more consistently, and with far less friction than traditional methods.

And in doing so, we didn’t just save time. We gained the ability to listen better, reflect deeper, and respond with greater clarity. Most of all, we got our attention back — and that changed the quality of everything that followed.
