How to Build a Weekly AI Skills Report for Your Team That Shows Exactly Who Needs Training and Why

Published 2026-05-12

An AI skills gap assessment report tracks which team members are using AI tools as expected and who is falling behind. Build it weekly using Microsoft Viva Insights for data, Claude for analysis, and Notion or Google Sheets for delivery.

We built a weekly AI skills report for a 12-person corporate team using three tools and one structured prompt. It took 90 minutes to set up and now runs in under 20 minutes every Friday. This guide covers what the report includes, which tools to use, and how to build it step by step.

What Is an AI Skills Gap Assessment and Why Does It Matter?

An AI skills gap assessment is a structured review of what your team can actually do with AI versus what each role requires. It is not a quiz. It is a weekly snapshot that shows you who is using AI tools, who is not, and where the gaps are costing you output.

Most companies skip this. They buy tool licenses, run one training session, and assume adoption happens. It does not. According to McKinsey, only 1 in 5 employees who receive AI training apply it consistently within 90 days. The gap is not awareness. It is accountability.

This report gives managers a one-page view every week. It names names. It shows usage data. It tells you exactly who needs help and what kind. If you want to become the person who brings AI into your organization, this is one of the most visible things you can build. We cover that positioning in more depth in How to Sell an AI Process Audit Service to Your Company and Become the Person Who Brings AI In.

Which Tools Should You Use?

You need three things: a data source, an analysis layer, and a reporting layer. Here is what we tested.

| Tool | Role | Cost | Best For |
|------|------|------|----------|
| Microsoft Viva Insights | Usage data collection | Included in M365 E3/E5 | Teams already on Microsoft 365 |
| Claude (Anthropic) | Pattern analysis and report drafting | $20/month per user (Pro) | Summarizing data into readable insights |
| Notion or Google Sheets | Report delivery and tracking | Free to $10/month | Storing weekly snapshots and trends |
| ChatGPT (OpenAI) | Alternative analysis layer | $20/month (Plus) | Works if your org blocks Anthropic |

We use Claude for the analysis step. You paste in the raw usage data and a prompt that defines your team's AI expectations by role. Claude returns a structured summary with names, gaps, and recommended next steps. ChatGPT and Gemini work too, but Claude handles longer context better when you are pasting in data for 10 or more people at once.

If your team is already doing workflow audits, pair this with the tools covered in Best AI Tools for Auditing Your Team's Workflows and Spotting 20 Hours of Hidden Automation Opportunities.

How to Get Started Step by Step

  1. Define expectations by role. Open a Google Sheet. List each role on your team. Next to each role, write 2 to 3 specific AI tasks that person should be doing weekly. Example: "Account Manager: use AI to draft client recaps, summarize meeting notes, prep status emails."
  2. Pull usage data. In Microsoft Viva Insights, go to Advanced Insights, then Analyst Workbench, then create a custom query for Copilot activity by person. Export as CSV. If you do not have Viva, use a simple self-report form in Google Forms sent every Friday at 4pm.
  3. Build your analysis prompt. Open Claude at claude.ai. Paste this structure: "Here is my team's AI usage data for the week: [paste CSV or form responses]. Here are the expectations by role: [paste your role list]. Identify who is meeting expectations, who is falling short, and what specific training each person needs. Format as a one-page report with a summary table."
  4. Review and edit the output. Claude will return a draft report in under 60 seconds. Read it. Adjust any names or context it got wrong. This takes 5 minutes.
  5. Paste into Notion or Google Docs. Create a weekly template. Each Friday, paste the new report into a new page. Tag the relevant manager. Done.
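If you want the Friday run to be mostly copy-paste, the prompt assembly can be scripted. A minimal sketch, assuming the Viva export is saved as `usage.csv` with `Name` and `CopilotActions` columns and the role expectations live in `expectations.csv` with `Role` and `ExpectedTasks` columns (all file names and column names here are placeholders, not anything Viva produces by default):

```python
import csv

def build_weekly_prompt(usage_path, expectations_path):
    """Assemble the Claude analysis prompt from two CSV exports."""
    with open(usage_path, newline="") as f:
        usage_rows = list(csv.DictReader(f))
    with open(expectations_path, newline="") as f:
        expectation_rows = list(csv.DictReader(f))

    # One line per person: name and their logged Copilot activity
    usage_block = "\n".join(
        f"- {row['Name']}: {row['CopilotActions']} Copilot actions this week"
        for row in usage_rows
    )
    # One line per role: the 2-3 weekly AI tasks you defined in step 1
    expectations_block = "\n".join(
        f"- {row['Role']}: {row['ExpectedTasks']}"
        for row in expectation_rows
    )
    return (
        "Here is my team's AI usage data for the week:\n"
        f"{usage_block}\n\n"
        "Here are the expectations by role:\n"
        f"{expectations_block}\n\n"
        "Identify who is meeting expectations, who is falling short, "
        "and what specific training each person needs. "
        "Format as a one-page report with a summary table."
    )
```

Paste the returned string straight into Claude, or feed it to the API if your org has access. Either way the prompt stays identical week to week, so the reports stay comparable.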

This connects directly to spotting where AI can replace manual work. If you want to go deeper on that, How to Analyze Your Company's Work and Spot Where AI Can Replace Manual Tasks in One Day walks through the full process.

What to Watch Out For

Self-reported data is unreliable. If you use a Google Form instead of Viva Insights, people will overreport. They want to look good. We saw this in our own test. Two team members reported daily AI use. Their actual output showed no signs of it. Whenever possible, pull data from the tools directly rather than asking people to self-assess.
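One way to catch overreporting is to compare self-reported counts against the tool's own logs before the numbers go into your prompt. A rough sketch, assuming both sources have been reduced to dicts keyed by name (the threshold of 5 is an arbitrary illustration, tune it to your team's volume):

```python
def flag_overreporting(self_reported, tool_logged, gap_threshold=5):
    """Return (name, claimed, logged) tuples where the self-reported
    weekly count exceeds the tool-logged count by more than gap_threshold."""
    flagged = []
    for name, claimed in self_reported.items():
        logged = tool_logged.get(name, 0)  # absent from logs counts as zero
        if claimed - logged > gap_threshold:
            flagged.append((name, claimed, logged))
    return flagged
```

Anyone this flags gets a conversation, not a callout in the report; the point is to find where the self-report form is drifting from reality, not to catch people out.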

Also, this report can feel punitive if you frame it wrong. Do not position it as surveillance. Position it as a coaching tool. The goal is to find who needs support, not who to blame. If your manager reads it as a performance review, you will get pushback fast.

What to Do Right Now

Open a Google Sheet right now. List your team's roles. Write two AI tasks each role should be doing this week. That list is the foundation of your first report. You can have a working version by end of day.
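If you would rather start from a file than a blank sheet, a sketch that writes a starter expectations CSV you can import into Google Sheets (the roles and tasks here are placeholders, swap in your own):

```python
import csv

# Placeholder roles and weekly AI tasks -- replace with your team's
STARTER_ROLES = [
    ("Account Manager", "Draft client recaps; summarize meeting notes"),
    ("Analyst", "Summarize reports; draft first-pass commentary"),
]

with open("expectations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Role", "ExpectedTasks"])
    writer.writerows(STARTER_ROLES)
```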

Someone on your leadership team is already asking about AI adoption. They do not have data yet. You could be the one who shows up next week with a report that answers the question. Every week you wait, that opportunity goes to someone else.

Zero Day AI gives you mission files that tell your AI exactly what to build. You paste. It builds. You walk away with a working system in under an hour. Try it for $1. Two weeks. Full access. If it is not for you, cancel. But if you do nothing, the gap does not close itself.

Someone in your industry is already building faster, charging less, and winning the clients you are still chasing manually.

Get started for $1

Step by step mission files that build real AI systems for you. Cancel anytime.