The Upleashed®
capability framework (0–5)

A research-backed scale for consistent capability
assessments, training plans, and workforce readiness in PulseAI.

Built in real operational environments, informed by Lean Service principles derived from the Toyota Production System, and grounded in doctoral research into skills matrices and AI-enabled workforce capability.


Why this framework exists

Capability scoring breaks when people interpret levels differently. We designed the Upleashed 0–5 framework to remove ambiguity, so teams can assess consistently, plan training with intent, and track progress over time. 

This approach aligns with our applied research into skills matrices and AI-enabled workforce capability, including a mixed-methods doctoral study exploring AI integration, ethics, data privacy, and bias in modern skills systems.

Research-backed, and informed by Toyota Production System principles

The Toyota Production System (TPS) is widely referenced in discussions about effective organisational resource use and is often associated with skills matrices. Our view is practical: TPS provides a strong baseline, while modern skills matrices have evolved well beyond any single framework to address digital, global, and workforce dynamics.

Our capability framework has been refined over more than 20 years of operational delivery and continuous improvement. Two design choices make it especially effective in practice: Level 0, which prevents forced scoring when a skill is genuinely not required, and Level 5, which separates strategic ownership from expert delivery.

The Upleashed 0–5 capability scale

Use this scale to rate a person against a single skill, task, process, or competency. The intent is consistency, evidence, and development, not judgement. Guiding principle: Capabilities and responsibilities develop over time. New processes, changes in existing procedures, fresh rules, or lack of practice can temporarily lower an individual’s performance ability.

Level 0 (No skill required or desired): No expectation that the individual or role requires the specific skill(s) within the next year. Take a longer-term view when assessing skills requirements, and do not use this measure for short-term assessments.

Level 1 (In training / Trainee): Expected to be proficient within a year. Has completed up to 75% of training. Does not yet fully understand the quality requirements.

Level 2 (Developing capabilities): Has completed more than 75% of training. Is likely able to perform the task alone, although consistent quality and productivity have not yet been evidenced. Complex output requires checking / verification.

Level 3 (Capable): Has completed 100% of the training and demonstrated consistent quality and productivity standards. Where not mandated by regulation, checks can now be omitted, releasing capacity back into the business.

Level 4 (Subject Matter Expert / Trainer): Has prolonged experience at a consistent quality and productivity level. Is motivated, works autonomously, and is ready to accept responsibility for skill ownership and training. Is likely able to train others to a high standard. If the specific skill has not been performed in the last 3 months, the skill level should drop back to Level 3 until the individual reconfirms competence standards.

Level 5 (Strategic ownership / Leadership): Can define and develop new processes and skills requirements. Can demonstrate cross-function subject matter expertise. Can demonstrate leadership capabilities.
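As a minimal sketch, the scale and its Level 4 recency rule could be represented in code. The dictionary, function name, and 90-day approximation of "3 months" are our own illustrative assumptions, not part of the framework or PulseAI:

```python
from datetime import date, timedelta

# Short labels for the Upleashed 0-5 capability scale.
LEVELS = {
    0: "No skill required or desired",
    1: "In training / Trainee",
    2: "Developing capabilities",
    3: "Capable",
    4: "Subject Matter Expert / Trainer",
    5: "Strategic ownership / Leadership",
}

# Approximation of the "last 3 months" recency window for Level 4.
RECENCY_WINDOW = timedelta(days=90)

def effective_level(level: int, last_performed: date, today: date) -> int:
    """Apply the Level 4 recency rule: if the skill has not been
    performed within the recency window, drop back to Level 3 until
    competence standards are reconfirmed."""
    if level == 4 and today - last_performed > RECENCY_WINDOW:
        return 3
    return level
```

For example, a Level 4 rating with no recorded practice since January would be reported as Level 3 in June, while Levels 0 to 3 and Level 5 pass through unchanged.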

How PulseAI uses the framework

PulseAI applies this scale to create a consistent capability picture across teams, locations, and roles. It supports standardised assessment, skill-gap visibility, training prioritisation, workforce and succession planning, and AI-enabled insights with human oversight.

Governance, trust, and ethical implementation

As skills systems become more data-driven and AI-assisted, organisations must actively manage privacy, fairness, and bias. Our doctoral research highlights the importance of ethical culture, clear governance, and bias safeguards when AI is used in workforce contexts.

Our intent is simple: use capability data to help people grow, support better decisions, and stay aligned with modern expectations around data protection and responsible technology use. 

1) What is the Upleashed capability framework (0-5)?

The Upleashed capability framework is a consistent rating scale for assessing capability against a specific skill, task, process, or competency. It makes scoring repeatable across teams and time, so you can see skill gaps clearly, plan training, and track improvement with confidence.

It works in spreadsheets, in multi-team programmes, and inside AI-enabled skills matrix software such as PulseAI. The value comes from shared definitions, consistent evidence, and regular calibration.

2) Where is this 0-5 framework used?

You will see the 0-5 scale used in full, or in part, across our wider skills matrix ecosystem, including our Advanced Skills Matrix (Excel), our free templates, Ability6, SkillsMatrixTemplate.com, ExcelSkillsMatrix.com, and PulseAI.

This consistency means your organisation can start with an Excel skills matrix template, then move to PulseAI later without changing the underlying scoring approach.

3) Why does your scale include Level 0?

Level 0 prevents forced scoring when a skill is genuinely not required, or not desired, within the time horizon for a role. This improves reporting accuracy, reduces noise, and helps leaders focus training investment on skills that will move outcomes.

4) Why did you add Level 5?

Level 5 separates expert delivery from strategic ownership. It identifies people who can define standards, improve processes, shape requirements, and lead capability across functions, not just perform the task well.

5) What is the difference between Level 4 and Level 5?

Level 4 reflects prolonged, consistent, high-quality performance, often with the ability to coach and assure standards. Level 5 is strategic ownership and leadership: defining requirements, improving processes, setting standards across functions, and leading capability at scale.

6) Is this a capability framework, or a proficiency framework?

It is designed for operational capability, meaning what someone can do reliably, to the expected quality standard, in the real environment. Proficiency can be a useful input, but capability also considers consistency, quality, productivity, and context. Remember, new processes or lack of practice can temporarily reduce performance.

7) What evidence do we need to justify a score?

Match evidence strength to risk. For low-risk skills, evidence can be light-touch (observed delivery, quality of outputs, peer review). For regulated or safety-critical skills, require stronger evidence (certifications, logged practice, audited checks, formal sign-off).

Consistency matters most: same skill, same evidence standard, across the organisation.

8) How do we stop score inflation, or inconsistent scoring across managers?

Use calibration and evidence standards. Agree what “good evidence” looks like per skill, then run short calibration sessions to align what each level means in your context.

Keep a simple rule: when unsure, score lower, agree the evidence needed to move up, and review.

9) Can someone be Level 4 in one skill and Level 1 in another?

Yes. Capability is skill-specific, not a personal label. A skills matrix is most useful when it captures a true pattern of strengths and gaps, rather than averaging people into a single score.

10) How often should we reassess capability scores?

Reassess after training, when processes change, when standards change, when risk changes, and at a cadence that matches the role. For fast-changing work, capability can drift without practice, so reassessment protects quality and fairness.

11) What if someone has not used a skill for a while?

Apply a recency mindset for critical skills. If someone has not performed the skill recently, it is sensible to reconfirm competence before relying on the previous score, especially where the environment, tooling, rules, or standards have changed.

12) What does “Level 3 means checks can be omitted” mean in practice?

It means the person has demonstrated consistent quality and productivity for that skill, so routine verification may be reduced where controls are not mandated. In regulated environments, required controls still apply regardless of score.

Treat the framework as decision support, not a replacement for governance.

13) How do we handle new joiners fairly?

Start with what you can evidence, not assumptions. Use a shorter onboarding skills set, score conservatively at first, and reassess early once you have observed performance. This protects delivery, and gives people a clear path to progress.

14) How do we score contractors, agency staff, or temporary cover?

Use the same definitions, but keep the scope tight. Score only role-critical skills for immediate delivery, then expand if the engagement continues. Capture evidence quickly, and reassess early.

15) How do we apply the framework to soft skills, not just technical skills?

Define behaviours and observable indicators. For example, stakeholder management can include meeting outcomes, clarity of written communication, conflict handling, and follow-through. Then score against those indicators with examples.

16) How do we decide which skills to include in scoring, and reporting?

Start with outcomes. Pick the smallest skills set that drives quality, safety, customer outcomes, speed, and resilience. Expand in layers once scoring is consistent. This avoids creating a massive matrix that nobody maintains.

17) What is the best way to set target capability levels for each role?

Set targets that reflect real need, not perfection. Identify which skills matter most, define the minimum safe and effective level per skill, and review quarterly based on what the data shows. If everything is “must be Level 4”, the plan will not be achievable.

18) How do we turn scores into a training plan that actually gets done?

Prioritise gaps where moving from Level 1 or 2 to Level 3 reduces dependency, rework, and risk quickly. Then build depth by developing Level 4 trainers, and Level 5 owners for long-term resilience.
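As a rough sketch, that prioritisation could be expressed as follows. The function, tuple layout, and sort heuristic are illustrative assumptions, not part of PulseAI:

```python
def prioritise_gaps(assessments):
    """Rank capability gaps for a training plan.

    assessments: list of (person, skill, current_level, target_level)
    tuples on the 0-5 scale. Moves from Level 1 or 2 up to Level 3
    (quick reductions in dependency, rework, and risk) come first;
    depth building towards Levels 4 and 5 follows.
    """
    gaps = [a for a in assessments if a[2] < a[3]]

    def priority(a):
        _, _, current, target = a
        quick_win = current in (1, 2) and target >= 3
        # Quick wins first; within each group, biggest gap first.
        return (0 if quick_win else 1, current - target)

    return sorted(gaps, key=priority)
```

For example, a person at Level 1 against a target of 3 would be scheduled ahead of someone moving from Level 4 to Level 5, matching the intent of building baseline capability before depth.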

19) Can the framework support workforce planning and succession planning?

Yes. A well-maintained skills matrix provides a practical view of coverage, single points of failure, bench strength, and future readiness. It supports decisions about hiring, redeployment, training focus, and succession planning.

20) Can we start in Excel, and move to PulseAI later?

Yes. Many organisations start with an Excel skills matrix template to agree skills, definitions, and evidence standards. Once the model is stable, moving to PulseAI helps scale assessment, reporting, and team-level capability management without changing the framework.

21) How does PulseAI use the 0-5 framework?

PulseAI uses the 0-5 scale to standardise assessment, surface skill gaps, and support AI-enabled insights and prompts while keeping human oversight. Clear definitions reduce guesswork, and improve consistency across teams.

22) How do we approach privacy, fairness, and bias when capturing capability data?

Treat capability data as personal data. Set a clear purpose, limit access, define retention, and ensure transparency for team members. If you use AI features, keep humans accountable for decisions, monitor for bias, and make scoring criteria explicit.

23) Should employees be able to see their own scores?

In most organisations, yes. When people can see their scores, definitions, and the evidence expected to progress, the framework becomes developmental rather than judgemental. It supports better 1 to 1 conversations, clearer training plans, and higher trust.

24) What support is available to implement the framework properly?

We can help you define the skills list, agree evidence standards, run calibration, set targets, and implement reporting in Excel or PulseAI. The aim is a simple, consistent model that becomes a habit.

Have more questions, or want help tailoring the framework to your roles, evidence standards, reporting, or PulseAI setup? Ask our team via upleashed.com/support/
