Skills matrix and AI: why capability visibility is now one of the most important advantages an organisation can build

In one sentence: the organisations that will use AI best are unlikely to be the ones that buy the most tools; they will be the ones that can see workforce capability most clearly, redeploy it most intelligently, and develop it most consistently.

That is why a skills matrix matters so much now. A good skills matrix does not just support learning and development. It gives leaders a structured view of who can do what, to what standard, with what evidence, where the risks sit, which work can be automated, which work should be augmented, and which capabilities need to be built next.

What this article covers

  • What a skills matrix is, and why it is fundamental to workforce capability management.
  • How AI is already changing employment, with concrete examples of workforce replacement and task compression.
  • How forward-looking workforces are adapting to AI integration.
  • How documented skills have been used directly to automate or improve workforce processes.
  • Why the most resilient organisations treat capability visibility as infrastructure, not admin.
  • How Upleashed’s capability framework, free skills matrix, advanced Excel matrix, and PulseAI fit into a practical operating model.

Why this topic matters now

Artificial intelligence is changing work faster than many organisations can update job descriptions, training plans, or operating controls. Tasks that once required significant human effort can now be drafted, summarised, classified, analysed, or routed by software. At the same time, the human side of work is not disappearing. In many cases, it is becoming more valuable. Judgement, oversight, verification, escalation, context, stakeholder communication, and process discipline all become more important once AI starts touching the workflow.

This creates a simple but serious management problem. Leaders can no longer rely on role titles, tenure, or training attendance as a proxy for readiness. They need a clearer view of capability at task level. They need to know who can review AI outputs safely, who understands the process well enough to spot bad recommendations, who can handle exceptions, who can redesign work, and where the organisation is exposed because too much critical knowledge is still sitting in too few people.

That is where a skills matrix becomes strategic. Not fashionable, strategic. The basic logic of a skills matrix has always been useful. In the age of AI, it becomes one of the clearest mechanisms for understanding workforce capability, deployment risk, reskilling opportunity, and operational resilience.

What a skills matrix actually is, and what mature organisations use it for

A skills matrix is a structured view of who can do what, to what standard, with what level of evidence, and with what degree of coverage across a team, function, or organisation. At its simplest, it can look like a table of people, skills, and scores. At its most mature, it becomes a decision system that links skills to operational outcomes, workforce planning, learning priorities, succession, recruitment, role design, risk management, and continuous improvement.
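For readers who want to see the mechanics, the simplest form of a skills matrix really is just people, skills, and scores, and even that minimal structure already answers risk questions. The sketch below is illustrative only: the names, skills, and threshold are assumptions, not data from any real team or from an Upleashed product.

```python
# Minimal sketch of a skills matrix: people x skills, each scored 0-5.
# Names, skills, and scores are illustrative assumptions.

MATRIX = {
    "Asha":  {"process mapping": 4, "output verification": 2, "data handling": 3},
    "Ben":   {"process mapping": 1, "output verification": 4, "data handling": 2},
    "Chloe": {"process mapping": 3, "output verification": 1, "data handling": 5},
}

def coverage(matrix, skill, threshold=3):
    """Count how many people meet the threshold for a given skill."""
    return sum(1 for scores in matrix.values() if scores.get(skill, 0) >= threshold)

def single_points_of_failure(matrix, threshold=3):
    """Skills where only one person meets the threshold: a coverage risk."""
    skills = {s for scores in matrix.values() for s in scores}
    return sorted(s for s in skills if coverage(matrix, s, threshold) == 1)

print(single_points_of_failure(MATRIX))  # prints ['output verification']
```

Even this toy version surfaces the question leaders actually need answered: which critical skills depend on a single person.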

This distinction matters because many organisations still mistake fragments of data for capability visibility. A job description tells you what a role is called. A CV tells you what a person says they have done. A learning management system tells you what training they attended. A manager’s view tells you what someone seems good at. None of those on its own gives a dependable, current, evidence-based view of capability in context. A skills matrix can, provided it uses clear definitions, a shared rating model, and disciplined review.

The shift towards a skills-based view of work is not a fringe idea. It is increasingly mainstream in workforce strategy. The World Economic Forum’s Future of Jobs Report 2025 says employers expect 39% of key skills to change by 2030. PwC’s 2025 AI Jobs Barometer says skills are changing faster in AI-exposed jobs and that workers with AI skills command a substantial wage premium. The strategic implication is obvious. If skills are changing faster than role structures, organisations need better skills visibility.

In practical terms, a mature skills matrix does at least seven important jobs. It creates a shared language for capability. It exposes gaps. It reveals operational risk. It supports targeted development. It improves workforce planning. It strengthens internal mobility. And it provides a more defensible basis for governance when work begins to move between human-led, AI-assisted, and partially automated states.

That is why the Upleashed capability framework matters. It gives organisations a consistent way to define what different levels of capability actually mean, from early-stage awareness through to strategic ownership. A clear framework prevents one of the most common failure points in workforce assessment, namely that different managers read the same score differently.

Why AI makes a skills matrix more important, not less

One of the laziest mistakes in the AI debate is to assume that if software can now complete part of a workflow, workforce capability becomes less important. In reality, the opposite is often true. AI does not remove the need for skill. It changes which skills matter most, and where the control points now sit.

Consider customer support, HR operations, finance admin, marketing production, software testing, or reporting. In each of these areas, AI can reduce time spent on drafting, data extraction, summarisation, classification, and routine analysis. But that only increases the need for stronger judgement in other parts of the process. Somebody still needs to define prompts well, verify outputs, handle exceptions, protect personal data, interpret nuance, challenge weak recommendations, and take responsibility for the final decision.

That is why a strong skills matrix helps organisations move from a job-based lens to a work-based lens. It reveals which tasks are suitable for automation, which should be AI-assisted, and which must remain human-led. It also shows where the organisation has enough review capability to adopt AI safely, and where it does not.

This matters for governance as well as performance. Your EU AI Act playbook for HR and Ops already makes the case that organisations need evidence, role-based training, clear human oversight, and a practical inventory of where AI touches people decisions. A skills matrix gives that guidance somewhere operational to land. It helps leaders prove not just that they have a policy, but that the people using or overseeing AI systems have the right level of capability.

What the evidence says about AI and employment

The employment impact of AI is real, but it needs to be described carefully. The strongest evidence does not support a cartoon view where whole professions disappear overnight. What it does show is that AI can replace, compress, or reduce demand for specific categories of work, especially repetitive, rules-based, information-processing tasks.

There are clear corporate examples. Reuters reported in 2023 that IBM expected to pause or slow hiring for roles that could be replaced by AI and automation, with around 7,800 jobs potentially affected over five years. Reuters reported in 2024 that Klarna said its AI assistant was doing the work of 700 customer-service employees while headcount reduced largely through attrition. Reuters has also reported that WiseTech planned significant cuts as part of a two-year AI overhaul. These are not abstract scenarios. They are examples of leadership teams using AI capability as part of workforce redesign.

There is also labour-market evidence outside payroll announcements. A Brookings summary of freelance market evidence found that workers in more exposed occupations saw contract and earnings declines after new generative AI tools entered the market. That matters because it shows displacement can occur without formal redundancy programmes. Demand can simply weaken for certain categories of human work.

Even so, the most mature conclusion is not that AI destroys work in a simple one-directional way. The more accurate conclusion is that AI redistributes work. Some tasks disappear. Some accelerate. Some move from production to review. Some create fresh demand for new skills. Some roles shrink, while other roles become more valuable because they combine domain knowledge, communication, governance, and technical fluency. That is precisely why capability visibility matters. If leaders wait until a role appears obviously obsolete, they are already late. The useful intervention point is earlier, when the task mix inside the role starts to change.

Your own related article, AI is already replacing some tasks and jobs, sits neatly alongside this piece because it makes the core distinction clearly. The strategic question is not simply whether AI replaces work. It is how leaders decide what to automate, what to augment, and what human capability must be strengthened next.

Why capability visibility changes the quality of the response

When organisations do not have a credible skills matrix, their response to AI is often blunt. They freeze hiring. They buy generic training. They rely on vocal enthusiasts to lead adoption. They over-trust vendors. They fail to spot where one individual has become a dangerous single point of failure. And they confuse activity with readiness.

By contrast, organisations with strong capability visibility can ask much better questions. Which processes are truly rules-based? Which steps still need human review? Which teams already have enough AI oversight capability to pilot safely? Which staff have adjacent skills that make redeployment realistic? Which gaps are small and trainable, and which are structural? Which parts of the business are at risk because their knowledge is still tacit rather than documented?

This is why capability visibility is more than HR admin. It is operational intelligence. The same matrix that helps a manager plan development can also help a leader redesign a process, a risk owner validate controls, a project lead allocate work more sensibly, and a growth team decide whether to hire, redeploy, or automate.

Examples of workforces that have adapted successfully to AI integration

The best adaptation stories do not treat AI as a software launch. They treat it as work redesign, supported by capability data, pathway design, and management discipline.

One of the strongest empirical examples comes from Brynjolfsson, Li, and Raymond’s work on customer support, later published in the Quarterly Journal of Economics. Studying more than 5,000 agents, the researchers found that access to a generative AI assistant increased productivity materially, with the biggest gains among novice and lower-skilled workers. That finding matters because it suggests AI can work as a capability amplifier when deployed well. It can help narrower or less experienced performers close the gap faster. It does not make human capability irrelevant. It makes the quality of human capability deployment more important.

A second strong example comes from the Unilever and Walmart Future Skills pilot, which used AI-supported analysis to break roles down into component skills, identify overlap between declining and emerging roles, and reveal transition paths that would have been missed in a job-title-only model. The pilot found that employees typically identified far fewer skills in their own roles than the system was able to infer. It also found meaningful overlap between seemingly different jobs, making faster redeployment possible. Unilever’s own summary made the philosophy clear: skilling is smart business, AI can help eliminate bias, and internal mobility should be strengthened rather than left to chance.

DBS shows another useful adaptation model. Rather than waiting for employees to navigate the future on their own, the bank has used AI-enabled tools to guide coaching, learning, and workforce preparation. The pattern is important. Successful adaptation is not passive. It is designed. It requires infrastructure that helps people see where they are, where demand is moving, and what they need to build next.

JPMorgan Chase offers a related lesson. Its public workforce messaging frames AI readiness as a people agenda, not just a technology agenda. The organisation has invested heavily in training because it recognises that the value of AI is not captured only by tool deployment. It is captured by whether people can work with the technology well, responsibly, and at scale.

These examples point to a practical pattern. High-performing organisations break work down into tasks and skills. They infer hidden capability rather than relying on self-reporting alone. They design transitions instead of waiting for people to discover them. They target learning towards the smallest viable capability gaps. They define governance clearly. And they help managers use skills data as part of day-to-day workforce decisions.

How documented skills have been used directly to automate or improve workforce processes

This is where the discussion becomes especially useful for leaders. A skills matrix is often imagined as a manual spreadsheet exercise. In reality, once skills are documented consistently and linked to roles, tasks, or outcomes, they can begin driving automation and decision support.

Workday’s MSD customer story is a clear example. MSD used Workday to automate recruiting, compensation planning, and talent management workflows, while enabling Skills Cloud to apply machine learning and an enterprise-wide ontology to infer skills and improve workforce visibility. The important point is not just that the system stored skills. It used skills data to improve matching and workflow decisions.

Schneider Electric’s work with Gloat shows how a skills-led talent marketplace can surface projects, gigs, and mentoring opportunities across the organisation. The reported gains in unlocked hours and reduced recruitment costs highlight the commercial value of turning documented skills into a matching engine rather than leaving capability buried in org charts and local knowledge.

Standard Chartered’s Skills Passport offers a similar example. Employees can use a structured skills profile to match themselves to internal gigs and opportunities, improving mobility and making better use of capability already inside the business. That is not simply a learning tool. It is a workforce deployment mechanism.

SAP’s Talent Intelligence Hub shows the same logic at platform level. A centralised skills framework creates a single source of truth that can support recruiting, learning, career development, and planning more coherently. Again, the strategic idea is not that skills data should sit in a file. It should power decisions.

In practical terms, documented skills can support at least six forms of automation or semi-automation. They can drive role and project matching. They can improve learning recommendations. They can support hiring prioritisation by identifying adjacent skill profiles. They can improve succession and bench-strength analysis. They can support workforce planning. And in controlled environments, they can help determine who is qualified to perform, review, approve, or escalate critical tasks.
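The matching mechanism behind most of those six forms of automation is simple to state: compare a documented skills profile against the requirements of a role or task, and rank people by the size of the remaining gap. The sketch below shows that idea under stated assumptions; the requirement levels, names, and profiles are hypothetical, not the logic of any specific platform named above.

```python
# Hedged sketch: ranking candidates for a role by capability gap.
# Requirements and profiles are illustrative assumptions.

ROLE_REQUIREMENTS = {"prompt design": 3, "output verification": 4, "escalation judgement": 3}

PEOPLE = {
    "Dev": {"prompt design": 4, "output verification": 3, "escalation judgement": 3},
    "Eve": {"prompt design": 1, "output verification": 2, "escalation judgement": 4},
}

def gap(profile, requirements):
    """Total shortfall against requirements; 0 means fully qualified."""
    return sum(max(0, need - profile.get(skill, 0))
               for skill, need in requirements.items())

def rank_candidates(people, requirements):
    """Order candidates by smallest gap, i.e. adjacent-skill matching."""
    return sorted(people, key=lambda name: gap(people[name], requirements))

print(rank_candidates(PEOPLE, ROLE_REQUIREMENTS))  # prints ['Dev', 'Eve']
```

The point is not the arithmetic. It is that none of this is possible until skills are documented consistently enough to compare.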

This is also where the progression from a spreadsheet to a platform matters. A free skills matrix or advanced Excel matrix is often the right place to define the framework, prove the method, and start building capability discipline. But once an organisation wants cleaner reporting, broader visibility, faster updates, and AI-enabled support, it needs something more dynamic. That is exactly the gap PulseAI is designed to fill.

Case studies of organisations that retained or expanded workforce value despite AI

Some organisations respond to AI by treating labour mostly as a cost line. Others respond by treating capability as a renewable asset that can be redeployed, strengthened, and recombined. The second group tends to create more durable value.

Unilever is one of the clearest examples. The guiding principle behind its Future Skills work was effectively that jobs may change or disappear, but people should be given a fair chance to move where their underlying capability still has value. This is an important distinction. A role-based organisation sees decline and concludes that labour must go. A skills-based organisation sees decline and asks what adjacent paths become visible if the work is decomposed properly.

Schneider Electric’s internal talent marketplace followed a similar path. By giving employees better visibility of opportunities and matching them to work more intelligently, the business improved retention and reduced external hiring pressure. The commercial benefit came not from cutting capability, but from finding and redeploying it faster.

Standard Chartered’s Skills Passport makes the same strategic point in financial services. The bank’s emphasis on employability, gigs, and internal opportunity reflects a more mature understanding of workforce resilience. Retention does not have to mean holding people in static roles. It can mean keeping people valuable by helping them move.

There is a wider business lesson here. AI can improve productivity, but the value of that productivity depends on what the organisation does next. Freed capacity can be turned into growth, service improvement, better coverage, stronger control, deeper innovation, or faster time to value. Or it can be wasted. The firms that retain or expand workforce value despite AI are usually the ones that have enough capability visibility to redeploy sensibly, enough management discipline to redesign work well, and enough learning infrastructure to close skill gaps quickly.

Why this matters to HR, operations, and leadership teams in practice

For HR teams, the significance is clear. Recruitment, workforce planning, internal mobility, succession, performance, learning, and compliance all improve when capability is visible. A job-title-led organisation often misses hidden talent and over-relies on formal hierarchy. A skills-led organisation can see potential, adjacent capability, and better development pathways earlier. That is why your Human Resources skills matrix page and EU AI Act playbook are relevant supporting assets for this article.

For operations leaders, the argument is equally strong. Operations rarely fails because effort is low. It fails because capability is invisible, coverage is uneven, training is inconsistent, and too much work relies on the same few people. Your operations skills development playbook already speaks directly to that problem. AI adds a further layer by changing the task mix and increasing the importance of verification, exception handling, and workflow design.

For technology and transformation teams, the point is that AI adoption without capability visibility is risky. Tool deployment can be fast. Competent deployment is slower. The ability to evidence who can oversee a process, who can validate outputs, and who can improve the workflow matters as much as the model itself. That is why sector-specific resources such as the AI and ML skills matrix and the IT skills matrix are commercially relevant pages to connect into this discussion.

What a strong AI-aware skills matrix should include

If a leadership team wants to use a skills matrix well in the age of AI, it should avoid building a vague list of generic competencies. It should build a working model that connects capability to real work.

That usually means starting with a limited set of high-value skills tied to real workflows. Some will be technical. Some will be behavioural. Some will be governance-oriented. Common examples include process understanding, data handling, prompt design, output verification, escalation judgement, stakeholder communication, documentation discipline, training capability, vendor awareness, and basic AI literacy.

It also means defining rating levels properly. This is where the 0 to 5 capability framework becomes useful. A score should not simply represent confidence. It should represent the standard of performance that can be evidenced in practice. Can the person explain the work only? Can they complete it with guidance? Can they do it independently? Can they coach others? Can they improve the system and handle edge cases?
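One way to make that distinction between confidence and evidenced standard concrete is to give every score an explicit definition, so two managers reading a "4" mean the same thing. The level wording below is a sketch inspired by the questions in the paragraph above; it is not the official Upleashed framework text.

```python
# Illustrative 0-5 level definitions. The wording is an assumption,
# not the official Upleashed capability framework text.

LEVELS = {
    0: "No exposure to the skill",
    1: "Can explain the work but not perform it",
    2: "Can complete the work with guidance",
    3: "Performs the work independently to standard",
    4: "Can coach others and review their work",
    5: "Improves the system and handles edge cases",
}

def describe(score: int) -> str:
    """Translate a numeric rating into its shared, evidenceable meaning."""
    if score not in LEVELS:
        raise ValueError(f"Rating must be 0-5, got {score}")
    return LEVELS[score]

print(describe(4))  # prints "Can coach others and review their work"
```

Writing the definitions down, in whatever wording the organisation chooses, is what turns a score from an opinion into a calibratable claim.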

Evidence matters as well. In lower-risk settings, simple manager observation may be enough. In higher-risk or regulated settings, organisations may need documented examples, observed performance, calibration, or sign-off. This is especially relevant where AI touches people decisions, compliance work, health data, regulated processes, or customer-facing outputs with legal or reputational consequences.

Finally, the matrix should sit inside a management rhythm. It should not be created once and left untouched. Capability shifts when processes change, when tools change, when people move, and when the external environment changes. AI accelerates all four.

A practical model for getting started without overcomplicating it

The best starting point is rarely enterprise transformation. It is a pilot. Pick one team where the work is changing already, or where risk and inefficiency are easy to see. Map the core tasks. Tag them as human-led, AI-assisted, or automation candidates. Define the handful of skills that genuinely matter. Apply a clear rating model. Baseline current state. Define target state. Then act on the gap.
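The pilot steps above, tag the tasks, baseline current capability, define a target, then act on the gap, can be sketched in a few lines. Everything here is an illustrative assumption: the task names, tags, and scores are hypothetical, and a spreadsheet serves the same purpose in practice.

```python
# Sketch of the pilot steps: tag tasks, then baseline vs target capability.
# Task names, tags, and scores are illustrative assumptions.

TASKS = {
    "draft customer replies": "ai_assisted",
    "approve refunds":        "human_led",
    "classify tickets":       "automation_candidate",
}

BASELINE = {"output verification": 2, "escalation judgement": 3}
TARGET   = {"output verification": 4, "escalation judgement": 3}

def by_tag(tasks, tag):
    """List tasks carrying a given tag, e.g. candidates for automation."""
    return sorted(t for t, v in tasks.items() if v == tag)

def skill_gaps(baseline, target):
    """Skills where current capability falls short of the target state."""
    return {s: target[s] - baseline.get(s, 0)
            for s in target if baseline.get(s, 0) < target[s]}

print(by_tag(TASKS, "automation_candidate"))  # prints ['classify tickets']
print(skill_gaps(BASELINE, TARGET))           # prints {'output verification': 2}
```

The output is the point of the exercise: a short, specific list of gaps that development activity can actually close.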

A sensible first move for many organisations is to start with the free Upleashed skills matrix template. This lowers friction and gives managers something practical. Where more sophistication is needed, the advanced Excel skills matrix gives richer structure. Where the organisation wants cross-team visibility, AI-supported suggestions, heat maps, reporting, development roadmaps, and easier ongoing administration, the next step is PulseAI.

If leaders want employees to start with a personal capability baseline before they ever enter a team matrix, Insynode is a relevant bridge. It turns a job title into a practical skills baseline and creates a clear starting point for development conversations.

This sequence matters because it is realistic. Many teams do not fail because the framework is bad. They fail because the first step feels too heavy. The right operating model is light enough to start, rigorous enough to trust, and scalable enough to grow.

The mistakes leaders make when they skip capability visibility

Many organisations make the same five mistakes when they approach AI without a robust skills view. The first is assuming that enthusiasm equals readiness. A team may be excited about a tool, but still lack the judgement, controls, or process understanding needed to use it well. The second is treating AI literacy as a one-off awareness session rather than a role-based capability requirement. The third is focusing on tool features instead of workflow redesign. The fourth is assuming job titles tell you enough about who can be redeployed. The fifth is failing to document who is actually competent to review, approve, or challenge AI-supported work.

These mistakes are expensive because they create invisible risk. A manager may believe the business is moving quickly, while the reality is that outputs are being accepted without enough review, critical process knowledge is still concentrated in one or two people, and development activity is not aligned to the work that is changing fastest. This is one reason so many AI conversations become noisy. Leaders sense both opportunity and risk, but they do not yet have a clear operating instrument for separating one from the other.

A skills matrix gives them that instrument. It does not solve every problem on its own, but it gives a leadership team a practical place to start. It makes capability discussable. It turns opinion into a rating conversation. It converts vague training ambition into a defined gap. It also makes it far easier to see where a person is closer to a new role than their current title suggests.

A 90-day plan for building an AI-aware skills matrix that people will actually use

Days 1 to 15: choose one function, not the whole organisation. Pick a team where AI is already touching the workflow, or where capability visibility is currently weak. List the tasks that matter most to output, quality, safety, and decision-making. Mark each task as human-led, AI-assisted, or a candidate for automation. Keep the list practical and specific.

Days 16 to 30: define the skills that sit underneath those tasks. Keep the list tight. Many matrices fail because they become bloated and generic. It is better to track 12 highly relevant skills well than 60 loosely. Apply a clear rating scale. The Upleashed 0 to 5 framework is useful here because it links rating to capability maturity rather than confidence alone.

Days 31 to 45: baseline current capability and calibrate the scoring. This is essential. One manager’s “good” is another manager’s “developing”. Calibration conversations improve consistency and build trust in the matrix. They also reveal where evidence is weak or where hidden capability has been under-recognised.

Days 46 to 60: connect the matrix to action. Identify the smallest set of skill improvements needed to reduce risk, unlock redeployment, or support safer AI use. Build short development actions, not vague annual plans. That might mean shadowing, supervised use, peer review, real-world exercises, or focused learning pathways.

Days 61 to 75: link the matrix to workforce decisions. Use it to guide task allocation, pilot selection, development priorities, internal mobility discussions, and where relevant, hiring requirements. This is the point where the matrix stops being a document and starts becoming an operating tool.

Days 76 to 90: decide whether the team can continue in a spreadsheet or whether it now needs a more scalable system. If leaders want better visibility, heat maps, cleaner reporting, individual roadmaps, and AI-supported suggestions without losing the underlying framework, this is where PulseAI becomes a logical next step.

The commercial and operational lesson is simple. Start small, score consistently, act on the gaps, and only then scale. This is how organisations avoid turning skills management into bureaucracy while still building something robust enough to support AI-era workforce decisions.

Why this article should connect readers to the wider Upleashed ecosystem

This topic is not a one-page conversation. Readers arriving on a long-form article about skills matrix and AI are likely to sit at different levels of maturity. Some will still need a basic explanation of how a matrix works. Some will want a free starting point. Some will want a more advanced Excel solution. Some will care most about HR and governance. Some will care most about operations, sector-specific use, or how to get a personal baseline before they touch a team-wide tool. That is why the strongest version of this page should guide readers naturally into adjacent Upleashed resources rather than leaving the article isolated.

For readers who need a broad foundation, the Ultimate Skills Matrix Guide is the most obvious internal destination. For those who want to act straight away, the free skills matrix template and the advanced Excel matrix make sense. For HR and governance-led readers, the EU AI Act playbook and Human Resources skills matrix page are strong next steps. For operational leaders, the operations capability playbook is a natural continuation. For technology-focused readers, the AI and ML skills matrix and IT skills matrix add depth. For individuals who want to start with self-assessment, Insynode provides a practical on-ramp.

From a reader-value perspective, this matters because it respects intent. From an SEO, AEO, GEO, and LLMO perspective, it matters because it strengthens topical clustering and makes the site easier for both humans and machines to understand. A useful article should help the reader move forward, not just finish reading.

Where PulseAI fits in the commercial and practical story

PulseAI should not be positioned as “AI for the sake of AI”. It makes more sense as the platform route once an organisation already accepts the management case for capability visibility. That is a stronger story, and a more credible one.

In that story, the value of PulseAI is that it helps organisations run a disciplined capability system with less friction and more visibility. It can help surface skill gaps, support structured assessment, provide cleaner reporting, generate development roadmaps, and enable leaders to see where capability is strengthening or decaying over time. The key point is that it builds on a framework, rather than replacing one.

That is commercially important because buyers are becoming more sceptical of AI claims that float free from operational method. They do not need more noise. They need a clearer line from framework to evidence to action. PulseAI makes the most sense when presented as the operating layer above a sound capability model, exactly as your current PulseAI page and capability framework page already imply.

What this means for search, answer engines, and long-term authority

From an SEO, AEO, GEO, and LLMO perspective, this topic is valuable because it sits at the intersection of several high-intent searches and high-value organisational questions. People are searching for terms such as skills matrix and AI, AI workforce planning, workforce capability visibility, skills-based organisation, internal talent marketplace, AI oversight, skills inventory, and reskilling strategy. They are also increasingly asking answer engines and language models more nuanced versions of the same question, such as whether AI is replacing jobs, how to build an AI-ready workforce, and how to use a skills matrix to support redeployment.

That means the strongest version of this article should do four things well. First, it should define the concept clearly and early. Second, it should answer the key questions directly, not bury them. Third, it should connect the argument to practical actions and relevant internal resources. Fourth, it should cite trustworthy sources and case studies that strengthen retrieval and trust.

This version is designed to do that. It covers the concept, addresses the workforce impact question directly, explains how adaptation works, shows how documented skills can support automation, and links naturally to relevant Upleashed destinations that deepen the reader journey rather than interrupt it.

Final thoughts

The organisations that will handle AI best are not necessarily the ones with the biggest budgets, the loudest claims, or the fastest pilots. They are likely to be the ones that understand their own capability well enough to redesign work deliberately.

A skills matrix helps make that possible. It turns workforce capability into something visible, discussable, and manageable. It helps leaders see gaps sooner, redeploy talent more intelligently, reduce dependence on guesswork, improve internal mobility, and govern AI use more responsibly. It also creates a bridge between today’s performance pressures and tomorrow’s strategic workforce choices.

That is why capability visibility is now one of the most important advantages an organisation can build. In a world where tools can be bought quickly, visibility, judgement, and disciplined execution become harder to copy. A well-designed skills matrix, supported by a clear framework and the right operating system, helps turn those qualities into something repeatable.

Practical next step: start with the free skills matrix template, review the research-backed capability framework, and then explore PulseAI if you want a more scalable, AI-enabled way to manage workforce capability.

FAQ: What is the main purpose of a skills matrix in an AI context?

Its main purpose is to make capability visible at the level where AI changes work. A strong skills matrix shows who can do what, to what standard, with what evidence, and where the organisation can safely automate, augment, redeploy, or develop work next.

FAQ: Can AI replace whole jobs, or only tasks?

In most current cases, the first effect is task replacement, task compression, or reduced hiring demand rather than instant whole-job elimination. However, once enough tasks inside a role change, the role itself can be redesigned, narrowed, or reduced.

FAQ: Why is a skills matrix better than job descriptions for AI-era planning?

Job descriptions are broad and often static. A skills matrix is granular and current. It helps leaders see adjacent capability, hidden potential, coverage risk, and practical redeployment opportunities.

FAQ: How do documented skills help automate business processes?

Once skills are structured consistently, they can support automated or semi-automated matching to roles, projects, gigs, learning pathways, succession plans, and workforce planning decisions.

FAQ: Where does PulseAI add value?

PulseAI adds value when an organisation wants to scale a capability framework beyond a spreadsheet and turn skills data into clearer reporting, faster insight, development support, and AI-enabled assistance while keeping human oversight in place.

References

  1. Accenture. (2021). Future skills pilot: Reimagining talent mobility. https://www.accenture.com/content/dam/accenture/final/a-com-migration/r3-3/pdf/pdf-149/accenture-future-skills-case-study.pdf
  2. Brookings Institution. (2025, July 8). Is generative AI a job killer? Evidence from the freelance market. https://www.brookings.edu/articles/is-generative-ai-a-job-killer-evidence-from-the-freelance-market/
  3. Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://www.nber.org/papers/w31161
  4. Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889-942.