When Zombies Deliver Performance Reviews: A First Encounter with the AI-Powered 360 Review Framework

Nov 6, 2025

ChatGPT-generated illustration symbolizing how AI supports human connection and insight — a tool that amplifies, not replaces, our understanding.

What happens when you hand AI fifteen people's worth of honest feedback and ask it to find the signal in the noise? We just found out.

Last week, our first AI-powered 360 review was delivered by a mummy and a zombie to someone dressed as the walking dead. Halloween timing aside, something interesting happened: we saw that AI can synthesize complex human feedback into something genuinely useful for growth conversations.

🎯 The Setup: 15 People, 7 Values, 1 Framework

The inputs:

  • 15 stakeholders (direct reports, peers, cross-functional partners, management)

  • Survey responses grounded in 7 company values

  • Quantitative ratings + qualitative narratives

  • ~20 minutes per person of thoughtful feedback

The AI's job:

  • Find patterns across diverse perspectives

  • Map feedback to behavioral indicators

  • Surface strengths with evidence

  • Identify 2-3 highest-impact development priorities

  • Make it actionable, not just descriptive

The human facilitation:

  • Document shared 48 hours ahead

  • 40-minute developmental conversation

  • Focus on "what's next" not "what was"

  • Leave with concrete experiments to try
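For readers who want to try something similar, here is a minimal sketch of how the inputs above could be structured and assembled into a single synthesis prompt. The class, field names, and prompt wording are illustrative assumptions, not our actual schema or prompt; the value list stands in for the seven company values.

```python
from dataclasses import dataclass

@dataclass
class FeedbackResponse:
    # Field names are illustrative, not the framework's real schema.
    stakeholder_role: str       # e.g. "peer", "direct report", "manager"
    ratings: dict[str, int]     # company value -> 1-5 rating
    narrative: str              # free-text qualitative feedback

def build_synthesis_prompt(responses: list[FeedbackResponse],
                           values: list[str]) -> str:
    """Assemble one prompt covering the AI's job above: find patterns,
    map feedback to values, surface strengths with evidence, and
    propose 2-3 development priorities."""
    blocks = "\n---\n".join(
        f"[{r.stakeholder_role}] ratings={r.ratings}\n{r.narrative}"
        for r in responses
    )
    return (
        "You are synthesizing 360-degree feedback for a development review.\n"
        f"Company values: {', '.join(values)}\n"
        "Tasks: find patterns across perspectives, map feedback to the "
        "values, surface strengths with evidence, and identify the 2-3 "
        "highest-impact development priorities. Be actionable, not just "
        "descriptive.\n\n" + blocks
    )
```

The point of one consolidated prompt is that the model sees all fifteen perspectives at once, which is what makes cross-stakeholder pattern-finding possible.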

💡 What We Learned

1. AI Excels at Pattern Recognition

When fifteen people give feedback, patterns emerge—some obvious, some subtle. AI synthesized all 15 responses into coherent themes without losing nuance.

Example: Multiple stakeholders mentioned "calm during crises" + "honest communication" + "team feels safe to experiment." AI recognized these as facets of Authenticity (one of our values) and grouped them into a strength pattern.
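To make that grouping step concrete, here is a deterministic sketch of it. The lookup table below is hypothetical and hand-written purely for illustration; in the actual workflow the model infers these value mappings from raw narratives rather than from a fixed dictionary.

```python
from collections import Counter

# Hypothetical lookup: which company value each behavioral
# indicator expresses. Hand-written here for illustration only.
INDICATOR_TO_VALUE = {
    "calm during crises": "Authenticity",
    "honest communication": "Authenticity",
    "team feels safe to experiment": "Authenticity",
}

def strength_patterns(mentions: list[str],
                      min_mentions: int = 3) -> dict[str, int]:
    """Count how often each value is evidenced across stakeholder
    mentions; values cited at least `min_mentions` times form a
    strength pattern."""
    counts = Counter(
        INDICATOR_TO_VALUE[m] for m in mentions if m in INDICATOR_TO_VALUE
    )
    return {value: n for value, n in counts.items() if n >= min_mentions}
```

With the three example mentions above, this would report Authenticity as a strength pattern backed by three independent pieces of evidence.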

2. Three AIs Walked into a Bar… 🤖🤖🤖

We ran the same feedback through three AI tools:

  • Claude (Anthropic): Most complete, actionable, grounded in examples

  • ChatGPT (OpenAI): Concise and scannable, but surface-level

  • NotebookLM (Google): Decent structure, middle ground

Winner: Claude. It produced a 6-page review that felt thoughtful and thorough.

Learning: We're asking Claude to be 20% more concise for the next five reviews. Depth is valuable, but so is scannability.

(Next week: The technical deep-dive with prompt design, side-by-side comparisons, and how to replicate this.)

3. The Human Element is Non-Negotiable 🧠

Here's what AI can't do:

  • Create psychological safety for the conversation

  • Read the room when someone is overwhelmed

  • Push for concrete actions when priorities stay abstract

  • Connect feedback to organizational context

Key insight: "A facilitation plan is the skeleton that holds the conversation together. Warmth and curiosity are the muscle; muscle without a skeleton is shapeless."

Translation: You need structure (time markers, phases, facilitation moves) AND human intuition (reading energy, knowing when to dig deeper).

✨ What Made the Meeting Work

Five things we'd replicate:

1. Document shared 48 hours ahead ✅
No surprises. Strategic conversation instead of reactive processing.

2. Developmental framing from minute one
"Fifteen people invested time in your growth" is different from "here's your evaluation."

3. Questions, not telling 💬
"What resonated? What surprised you?" The reviewee drove the conversation.

4. Focus on 1-2 priorities, make them concrete 🎯
"I need to delegate more" became "What's one thing you could hand off this week?"

5. Leave with a plan
Specific next actions. Growth happens through experiments, not intentions.

🚀 Why This Matters

Most performance reviews suck. They're:

  • Too vague ("needs to improve communication")

  • Too late (annual cycle, anyone?)

  • Too scary (political theater)

  • Too time-consuming (so they get rushed)

AI changes the equation. It synthesizes 15 perspectives in minutes, spots patterns humans miss, and grounds feedback in examples. This frees human time for what matters: the developmental conversation.

This isn't about replacing human judgment. It's about augmenting it.

Bottom line:

🎃 Performance reviews delivered by zombies? Optional.


🤖 Performance reviews that synthesize fifteen perspectives into actionable growth priorities using AI? Worth experimenting with.

—Smara

P.S. Yes, the costumes stayed on for the entire meeting. I highly recommend it.

💭 Have you tried AI-assisted performance reviews? What worked? What broke? Let me know—I'm genuinely curious what others are learning.

👋 Smaranda Onuțu is a Fractional CTO and Founder of Big Light Studio. She helps tech companies bridge business and technology—translating technical chaos into clear strategy and building systems that scale for both the engineering and the people behind it.

A note on this content

This post was created with AI assistance and human direction. All ideas, insights, and experiences come from real project work. AI helped with drafting and structure, while I provided the strategic thinking, design decisions, and final curation.

At Big Light Studio, we see AI not as a replacement for human judgment, but as an amplifier of it — a tool that surfaces patterns and possibilities, leaving the interpretation and meaning to people. Just like the 360 review framework itself: AI extends our reach, but humans make the calls.
