
EU AI Act for HR and Ops: a practical readiness playbook built on a skills matrix

AI governance is no longer abstract. The EU AI Act entered into force on 01 August 2024. Most obligations apply from 02 August 2026, with some exceptions running to 02 August 2027. Some milestones are already live, including the AI literacy obligation (from 02 February 2025). This playbook shows how to get ready without boiling the ocean.

Last updated: 27 February 2026.

The dates that matter

  • 02 February 2025: prohibited practices apply, and AI literacy obligations apply.
  • 02 August 2025: obligations for general-purpose AI models apply, and governance and penalty provisions take effect.
  • 02 August 2026: most rules apply, including Annex III high-risk systems, and enforcement starts.
  • 02 August 2027: extended transition for certain high-risk AI embedded in regulated products.

Watch-out: the Commission has signalled implementation simplification work, including proposals that could affect how some timelines are applied in practice. Plan to the published baseline, then keep a short monthly review cadence to track updates.

Step 1: inventory every place AI touches people decisions

If you only do one thing this week, do this. Most organisations already use AI through HR systems, recruitment tools, scheduling, performance tooling, learning platforms, customer support, and internal assistants. Create a single list of systems, use-cases, and decision points. Keep it boring and complete.

AI inventory template (copy into a sheet)

  • System name
  • Vendor, owner, and business process (recruitment, performance, learning, scheduling, etc.)
  • What the AI does (rank, filter, recommend, generate, predict)
  • Who is affected (candidates, employees, contractors)
  • Decision impact (advice-only, human decision, automated decision)
  • Data types used (CV data, performance data, behavioural data, biometrics, etc.)
  • Geography and audience (EU, UK, global)
  • Existing controls (policy, DPIA, documentation, audit logs, human review)
  • Risk flags (employment decisions, monitoring, traits, vulnerable groups)
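If your register lives in a spreadsheet, the template above can also be captured as a structured record so it stays machine-checkable. This is a minimal sketch, assuming Python; the field names mirror the bullets above and every value shown is illustrative, not a real system.

```python
import csv
from dataclasses import dataclass, fields, asdict

@dataclass
class AISystemRecord:
    # Fields mirror the inventory template above; all values are illustrative.
    system_name: str
    vendor_owner_process: str
    ai_function: str          # rank, filter, recommend, generate, predict
    affected_people: str      # candidates, employees, contractors
    decision_impact: str      # advice-only, human decision, automated decision
    data_types: str
    geography: str
    existing_controls: str
    risk_flags: str

# Hypothetical example row for a CV-screening tool.
record = AISystemRecord(
    system_name="ATS Screener",
    vendor_owner_process="VendorX / Head of Talent / recruitment",
    ai_function="rank, filter",
    affected_people="candidates",
    decision_impact="human decision",
    data_types="CV data",
    geography="EU",
    existing_controls="policy, human review",
    risk_flags="employment decisions",
)

# Write the register as a one-sheet CSV so it stays boring and complete.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AISystemRecord)])
    writer.writeheader()
    writer.writerow(asdict(record))
```

A flat, one-row-per-system file like this is deliberately unglamorous: it is easy to diff, easy to review weekly, and easy to hand over.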

Step 2: classify “high risk” fast, and do not guess

Under Annex III, “Employment, workers’ management and access to self-employment” includes AI used for recruitment or selection (including analysing and filtering applications and evaluating candidates), and AI used for decisions affecting terms of work, promotion, termination, task allocation based on behaviour or traits, and monitoring or evaluating performance and behaviour.

Likely high-risk in HR (treat as high priority for controls)

  • Screening, ranking, or filtering job applications.
  • Candidate evaluation scoring, video interviewing analysis, or psychometric inference.
  • Promotion, performance, or termination recommendations that materially shape outcomes.
  • Productivity or behaviour monitoring that feeds into action.
  • Task allocation based on behaviour, traits, or inferred characteristics.

Usually lower risk, but still needs care

  • Drafting job ads, templates, policies, and learning content, where humans remain accountable and review happens.
  • Internal copilots for admin and summarisation, where outputs are reviewed and not used as the sole basis for decisions.
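A first pass over the inventory can be automated as a simple triage rule. The sketch below assumes Python and illustrative keyword flags; it is a prioritisation aid to route systems into the two buckets above, not a legal classification, and the final call on "high risk" still needs a human with the Annex III text in hand.

```python
# Keywords that signal Annex III employment use-cases; illustrative, not exhaustive.
HIGH_RISK_SIGNALS = {
    "screening", "ranking", "filtering", "candidate evaluation",
    "promotion", "termination", "performance", "monitoring",
    "task allocation", "traits",
}

def triage(use_case_description: str, decision_impact: str) -> str:
    """Return a triage label for one inventory row.

    decision_impact is one of: "advice-only", "human decision",
    "automated decision". This is a prioritisation heuristic only.
    """
    text = use_case_description.lower()
    touches_employment = any(signal in text for signal in HIGH_RISK_SIGNALS)
    if touches_employment and decision_impact != "advice-only":
        return "likely high-risk: apply full control set"
    if touches_employment:
        return "review: employment-adjacent, confirm oversight"
    return "lower risk: standard care and human review"

# Example rows from the inventory.
print(triage("CV screening and ranking of applications", "human decision"))
print(triage("Drafting job ads and templates", "advice-only"))
```

The point of the heuristic is speed: it surfaces the rows that need expert review first, so the weekly governance check-in spends its time on the right systems.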

Step 3: turn obligations into controls you can evidence

A good compliance posture means evidence you can hand to a competent authority quickly. Focus on controls that prevent harm and prove oversight.

Minimum control set to implement now

  • Ownership and accountability: name a business owner, a technical owner, and a risk owner per system.
  • Human oversight: define exactly where humans must review, approve, and override.
  • Documentation: keep vendor docs, model information where available, and a plain-language “how we use it” note.
  • Logging and audit: retain decision inputs, outputs, and reviewer actions where feasible.
  • Change control: document material changes to configuration, prompts, thresholds, and workflows.
  • Incident path: define what counts as an incident, how to pause, and how to investigate.

If you deploy systems that interact directly with people, or you publish AI-generated content in ways that trigger transparency duties, note that transparency rules are scheduled to start applying from 02 August 2026.

Step 4: meet the AI literacy obligation with role-based training

Article 4 requires providers and deployers to ensure a sufficient level of AI literacy for staff and others using AI systems on their behalf, taking account of context and the people affected. Do not run one generic session and call it done. Build a small, role-based programme and keep evidence.

Role-based training outline

  • All staff: what AI is used, when to escalate, what not to do with personal data, and how to spot unsafe outputs.
  • People managers and HR: how AI can bias decisions, what “human oversight” means in practice, and how to document review.
  • Recruitment and talent: high-risk use-cases, defensible decisions, candidate communications, and audit trails.
  • System owners: configuration hygiene, testing, monitoring, and incident response.

Keep evidence: attendance, a short assessment, and a quarterly refresher for high-risk areas.

Step 5: build an AI-aware skills matrix to prove competence and improve it

Most organisations fail on the same point: they cannot prove who is competent to oversee AI in people processes. A skills matrix fixes that, if you define skills clearly and score consistently.

Start with a capability framework and evidence standards. For reference, see the Upleashed capability framework approach here: https://upleashed.com/capability-framework/

AI oversight matrix: suggested skill rows

  • AI literacy and limitations
  • Decision accountability and human oversight
  • Bias and fairness checks
  • Data protection basics in AI workflows
  • Documentation discipline (what to keep, where, and why)
  • Incident handling and escalation
  • Vendor and tool evaluation basics
  • Prompting and output verification for HR use-cases

Evidence rubric example (simple, defensible)

  • Level 1 (In training / Trainee): Can explain the process, but cannot evidence correct use yet. Needs close guidance and review.
  • Level 2 (Developing capabilities): Can complete most steps with guidance. Evidence is emerging but inconsistent. Requires documented checks, and sign-off before outputs are used.
  • Level 3 (Capable): Can run the process end-to-end with documented review and correct escalation. Produces consistent outcomes under normal conditions.
  • Level 4 (Subject Matter Expert / Trainer): Can run the process autonomously at a consistently high standard. Can train others on standard scenarios, strengthens documentation, and proactively identifies control gaps.
  • Level 5 (Strategic ownership / Leadership): Can coach others, improve controls, and handle edge cases and incidents. Defines new processes, and sets the standard across teams.
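Once the matrix is scored, the question that matters for compliance is coverage: for each skill row, is at least one person at Capable (level 3) or above? A minimal sketch, assuming Python; the names and scores below are invented for illustration.

```python
# Minimal skills-matrix gap check against the five-level rubric above.
REQUIRED_LEVEL = 3  # "Capable" on the rubric

# Illustrative matrix: skill row -> {person: rubric level}.
matrix = {
    "AI literacy and limitations":      {"Asha": 4, "Ben": 2, "Caro": 3},
    "Bias and fairness checks":         {"Asha": 2, "Ben": 2, "Caro": 1},
    "Incident handling and escalation": {"Asha": 3, "Ben": 1, "Caro": 2},
}

def coverage_gaps(matrix: dict, required: int = REQUIRED_LEVEL) -> list:
    """Return skill rows where nobody meets the required level."""
    return [
        skill for skill, scores in matrix.items()
        if max(scores.values()) < required
    ]

print(coverage_gaps(matrix))  # skill rows with no-one at Capable or above
```

A gap list like this turns the matrix from a static record into a training plan: each returned row is a candidate for the role-based training in Step 4, and re-running the check after training gives you the evidence trail.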

If you want this to run with less admin, and with better visibility across teams, PulseAI is the platform route: https://upleashed.com/pulse-ai-skills-matrix/. For the underlying framework, see the capability framework: https://upleashed.com/capability-framework/

A 60-day plan that works in the real world

Days 1 to 10

  • Build the AI inventory.
  • Identify employment-related use-cases, and triage likely high-risk.
  • Assign owners, and set a weekly 30-minute governance check-in.

Days 11 to 30

  • Write the minimum control set into one page per system.
  • Define human review points, and logging expectations.
  • Build the AI oversight skills matrix, and baseline current capability.

Days 31 to 60

  • Deliver role-based AI literacy training, then keep the record.
  • Run a calibration session so scoring is consistent.
  • Pilot in one high-risk process, then expand.

What to publish internally so this sticks

  • “Where AI is used in people processes” register (living doc).
  • “How we use AI in recruitment and people decisions” policy note.
  • “Human oversight checklist” for managers and HR.
  • “AI literacy training record” with refresh cadence.

Next step

If you want a clean, practical path to run this as an operating rhythm, publish the playbook as a Learning Lab post, then link to your capability framework and PulseAI pages. Start here: https://upleashed.com/learninglab/


Sources

  1. European Commission, “AI Act: Application timeline”. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  2. AI Act Service Desk, “Timeline for the Implementation of the EU AI Act”. https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
  3. AI Act Service Desk, “Annex III: High-risk AI systems (Employment, workers’ management and access to self-employment)”. https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3
  4. European Commission, “AI talent, skills and literacy (AI literacy in the AI Act)”. https://digital-strategy.ec.europa.eu/en/policies/ai-talent-skills-and-literacy

Note: this article is informational and not legal advice. If you operate across multiple jurisdictions, validate your approach with appropriate legal and data protection counsel.
