Building an AI-Powered 360 Review Framework: The Design

Oct 26, 2025

ChatGPT-generated illustration symbolizing how AI supports human connection and insight — a tool that amplifies, not replaces, our understanding.

💡 How we’re combining company values and AI to make performance reviews more meaningful, actionable, and human.

Performance reviews often feel like a chore — a box to check rather than a moment for growth.

But done right, they can be among the most valuable conversations we have at work: opportunities to see ourselves clearly, align on values, and grow as leaders.

I'm currently partnering with a tech company to design and pilot a 360 performance review framework for their engineering leaders.

🎯 The goal? Build a repeatable system that makes engineering leaders more focused and effective—something scalable, impactful, and grounded in their company values while using AI to handle synthesis work so managers can focus on what matters: meaningful conversations, not process.

I'm sharing the journey here—successes, challenges, and lessons learned—as we build this in real time.

⚙️ The Problem with Traditional Performance Reviews

Most performance reviews fall into one of two traps:

🌀 Too generic. Vague feedback like "great team player" or "needs to improve communication" doesn't give people actionable paths forward.

⏳ Too time-consuming. Managers spend hours writing reviews, synthesizing feedback, and trying to identify patterns across dozens of comments—often missing important themes in the process.

We wanted to solve both problems: make reviews more meaningful while making them easier to execute.

🌱 The Framework: Values-Driven 360 Feedback

We started with the company's core values and built the entire framework around them.

What made this approach different

Behavioral indicators for each value. We defined exactly what each value looks like in practice for engineers at different levels. Instead of asking "Is this person collaborative?" we got specific:

  • Curiosity: "Questions assumptions when reviewing technical decisions" vs. generic "open to new ideas"

  • Quality: "Balances delivery speed with technical debt management" vs. vague "produces quality work"

  • Authenticity: "Follows through on commitments and escalates early when needed" vs. "reliable team member"

Each value became a lens for specific, observable behaviors.
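One way to make a rubric like this machine-readable is a simple mapping from each value to its observable behavior. This is a minimal sketch: the value names and indicator phrasing come from the examples above, but the data structure and helper function are illustrative, not the framework's actual implementation.

```python
# Hypothetical encoding of the values rubric. The value names and
# behavioral indicators mirror the examples in the post.
BEHAVIORAL_INDICATORS = {
    "Curiosity": "Questions assumptions when reviewing technical decisions",
    "Quality": "Balances delivery speed with technical debt management",
    "Authenticity": "Follows through on commitments and escalates early when needed",
}

def indicator_for(value: str) -> str:
    """Return the observable behavior that defines a value; raises KeyError if unknown."""
    return BEHAVIORAL_INDICATORS[value]
```

Keeping the rubric as data rather than prose means the same definitions can drive survey questions, AI prompts, and report templates without drifting apart.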

👥 Tailored questions by stakeholder group. Direct reports answered questions about management effectiveness. Peers focused on cross-team collaboration. Cross-functional partners evaluated partnership quality. Each group got questions matched to their unique perspective.
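In code, tailored question banks could look something like the sketch below. The group names follow the post; the example questions themselves are hypothetical placeholders, not the actual survey content.

```python
# Illustrative question banks keyed by stakeholder group.
# Group names mirror the post; questions are hypothetical examples.
QUESTIONS_BY_GROUP = {
    "direct_report": [
        "Describe a time this manager helped you grow. What did they do?",
        "How clearly does this manager communicate priorities and context?",
    ],
    "peer_manager": [
        "Share a specific example of cross-team collaboration with this manager.",
    ],
    "cross_functional_partner": [
        "How effectively does this manager partner with your function? Give an example.",
    ],
}

def survey_for(group: str) -> list[str]:
    """Look up the question set tailored to a stakeholder group."""
    return QUESTIONS_BY_GROUP[group]
```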

🤖 AI-powered synthesis. Instead of a manager reading through 15-20 individual responses and manually identifying patterns, we used AI to synthesize feedback, map it to company values, and generate actionable development priorities—all while maintaining confidentiality and avoiding direct quotes.
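The synthesis step might be driven by a prompt along these lines. This is a sketch under stated assumptions: the prompt wording and the builder function are hypothetical, not the framework's actual prompt, but they show how the confidentiality rules (no verbatim quotes, no attribution) can be baked in.

```python
# Hypothetical prompt template for the synthesis step. The task list
# mirrors what the post says the AI does; the wording is illustrative.
SYNTHESIS_PROMPT = """\
You are summarizing anonymous 360 feedback for one engineering leader.
Company values: {values}

Feedback responses (one per line):
{responses}

Tasks:
1. Identify recurring themes in strengths and development areas.
2. Map each theme to the most relevant company value.
3. Propose two or three actionable development priorities.
Rules: never quote a response verbatim; never speculate about who wrote what.
"""

def build_synthesis_prompt(values: list[str], responses: list[str]) -> str:
    """Assemble the prompt that turns raw responses into a values-mapped summary."""
    return SYNTHESIS_PROMPT.format(
        values=", ".join(values),
        responses="\n".join(responses),
    )
```

The resulting string would be sent to whatever LLM backend is in use; the point is that the confidentiality rules live in the prompt itself rather than depending on post-hoc cleanup.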

🚀 The Pilot Launch: Week One

We launched with an Engineering Manager leading the company's largest team. The 360 included a self-assessment plus perspectives from direct reports, peer engineering managers, cross-functional partners, and upper management.

📈 We achieved a 70% participation rate.

Within the first week, something interesting happened.

💬 Early Lessons: Quality Over Quantity

Some cross-functional partners reached out proactively to say they didn't work closely enough with the manager to provide meaningful feedback. Rather than complete the survey with generic responses, they opted out.

This was gold.

It highlighted something we'll change for future reviews: pre-screen stakeholders upfront. Ask: "Do you work with this person regularly enough to provide specific behavioral examples?"

✅ Quality feedback from 15 invested stakeholders beats hollow responses from 25.
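That pre-screening step could be as simple as filtering invitees on a self-reported screening answer before sending surveys. A minimal sketch, with hypothetical field names:

```python
# Keep only stakeholders who confirm they work with the person regularly
# and can cite specific behavioral examples. Field names are hypothetical.
def prescreen(candidates: list[dict]) -> list[dict]:
    """Filter candidate stakeholders by their answers to the screening question."""
    return [
        c for c in candidates
        if c.get("works_regularly") and c.get("has_examples")
    ]
```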

🤝 The AI Component: Tool, Not Replacement

Here's what the AI does:

  • Synthesizes patterns across all responses

  • Maps feedback to company values

  • Identifies recurring themes in strengths and development areas

  • Generates draft performance examples and development priorities

  • Creates sentiment analysis showing engagement levels

What it doesn't do:

  • Make judgment calls

  • Write the final review without human validation

  • Replace the manager's voice and perspective

AI makes the process faster and more consistent, but human judgment remains essential.

🌟 What's Working So Far

Anonymity encourages candor. We didn't collect email addresses with survey responses; people responded anonymously within their stakeholder group. Early feedback suggests this led to honest, constructive input—especially from direct reports providing upward feedback.

Values-based frameworks create clarity. By grounding everything in company values with clear behavioral indicators, feedback is becoming more specific and actionable. Instead of "needs to communicate better," we're getting "opportunity to increase visibility into delivery timelines and help stakeholders understand project complexity."

People are invested. Response quality has been impressive. Most participants are providing detailed examples and thoughtful suggestions, not just checking boxes.

💭 Join the Conversation

What's your experience with performance reviews? What's worked? What's been a disaster? Drop me an email with your thoughts—your insights might shape where we take this next.

🧭 This is the first in a three-part series exploring how we’re rethinking performance reviews with AI and values-based design. Next up: what we learned from the first pilot.

👋 Smaranda Onuțu is a Fractional CTO and Founder of Big Light Studio. She helps tech companies bridge business and technology—translating technical chaos into clear strategy and building systems that scale for both engineering and people.

A note on this content

This post was created with AI assistance and human direction. All ideas, insights, and experiences come from real project work. AI helped with drafting and structure, while I provided the strategic thinking, design decisions, and final curation.

At Big Light Studio, we see AI not as a replacement for human judgment, but as an amplifier of it — a tool that surfaces patterns and possibilities, leaving the interpretation and meaning to people. Just like the 360 review framework itself: AI extends our reach, but humans make the calls.
