3 Practical Ways to Bring Your AI Policy to Life
This article was written in collaboration with Odile de Jong, AI Adoption Lead at Digital Bricks.
I spend my days helping organizations move from “we switched AI on” to “people actually use it safely and get value from it.” A lot of that work is adoption: onboarding, skills, habits, and the human side of getting teams comfortable with tools like Copilot and internal AI solutions.
I also support AI adoption efforts across the Novature Group. Digital Bricks joined Novature as its Center of Excellence for AI. This includes Copilot and AI agents, with adoption and education programs embedded into Novature’s international portfolio.
In my experience, many organizations have invested serious time in building a solid AI policy. But having a policy is one thing; getting people to know it, understand it, and use it day to day is something else. Most AI policies don’t fail because the content is “wrong.” They fail because the policy stays abstract, invisible, or disconnected from daily work. If the main experience employees have is “a PDF somewhere on the intranet,” they won’t build the habits you actually want. This is a known challenge in policy compliance more broadly, which is why mature governance approaches emphasize awareness, training, and reinforcement, not just writing the policy.
To make this practical and easy to follow, I’m sharing the three methods I use most often.
I start with learning. If you want people to use AI responsibly, you need to teach them in the same moment you teach them how to use the tool.
There is also a regulatory reason to get serious about literacy. In the EU, Article 4 of the AI Act introduces an AI literacy obligation: providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy for staff and others operating AI systems on their behalf. In practice, this means AI literacy training is no longer optional; it is part of compliance.
If your employees only encounter the AI policy as a document, adoption will stay low. Instead, make the policy a core part of your L&D offering. This mirrors what well-established governance and compliance programs do: build structured awareness and training cycles, not one-off communications.
Here’s what I include when I embed AI policy into training: the policy’s key rules inside tool onboarding, worked examples of allowed and not-allowed use, and a clear path for questions and escalation.
If your organization is using Microsoft 365 Copilot, this is the perfect place to connect policy to reality. Microsoft’s documentation is very clear that Copilot honors existing security and data protection controls and only accesses data a user is authorized to access. That’s not just a technical fact; it’s something employees can understand and use when making decisions.

Microsoft also states that prompts, responses, and data accessed through Microsoft Graph aren’t used to train the foundation models used by Microsoft 365 Copilot, and interactions remain within the Microsoft 365 service boundary under its privacy, security, and compliance commitments. These are exactly the kinds of facts that reduce fear and help people make better choices in line with the AI Policy.
Next, I run a campaign. Policies are not meant to be read once. They need repetition and reinforcement.
In change management terms, this is the Awareness and Desire problem. People need a reason to care, and they need the message more than once. Prosci’s ADKAR model explicitly separates Awareness (understanding the need for change) and Desire (willingness to participate and support it).
Prosci also recommends repeating key messages multiple times (often 5 to 7) for communications to land.
This is why I like an internal “AI policy awareness campaign” format, ideally inside a place people already use for community and conversation.
For many Microsoft-centric organizations, that’s Viva Engage. Microsoft positions Viva Engage as the social layer for communities and conversation at work, including leadership communication and broad engagement. Microsoft also provides built-in publishing options for posts and announcements, including scheduling for planned comms.

What does the campaign look like? You highlight one part of the policy each week. You keep it short. You use normal language. You focus on scenarios people recognize from their own work. This is policy storytelling: small, digestible pieces that build familiarity without overwhelming anyone.
Microsoft even has a “campaigns” concept in Viva Engage, with guidance on setup and best practices. A realistic four-week starter campaign could cover, for example: week one, why we have an AI policy; week two, data and privacy basics; week three, allowed and restricted use cases; week four, where to get help and how to raise concerns.
The point is not perfection. The point is repetition, familiarity, and giving people the confidence to engage with the policy instead of ignoring it. One more practical detail: measure whether it’s working. Viva Engage provides analytics to track engagement on posts and within communities, which helps you refine the campaign instead of guessing.
The real test of an AI policy is not “do people remember it.” The real test is “can they apply it when it matters.”
This idea is strongly supported by usability and performance-support research: people are far more likely to use guidance when it shows up in context, at the time they need it. So I translate policy principles into practical scaffolding employees can use instantly:
Keep it to a single page. Put it where people work (repeatedly). Questions that work well include: Does this task involve personal or confidential data? Am I allowed to use this tool for this purpose? Do I know how to check the output before I share it?
These questions are simple, but they do the job: they turn abstract principles into a decision moment.
Instead of describing the policy as “principles,” write it as scenarios people recognize: drafting a client email with Copilot, summarizing an internal report, asking a chatbot about customer data.
Then state: allowed/not allowed, conditions, and the escalation path. If your employees use Microsoft 365 Copilot, you can align these scenarios with real safeguards and real limits. For example, Microsoft states Copilot only surfaces organizational data users already have at least view permissions for, and it respects existing access controls. That’s a practical anchor point for “what data can the AI see?” questions.
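To show how compact that structure can be, here is a minimal sketch of a scenario table with the three elements above: allowed/not allowed, conditions, and the escalation path. The scenarios, conditions, and escalation contacts are made-up illustrations, not real policy content.

```python
# Illustrative scenario table. Each entry states whether the action is
# allowed, under what conditions, and where to escalate.
# All entries are hypothetical examples, not an actual organization's policy.
POLICY_SCENARIOS = {
    "summarize internal meeting notes with Copilot": {
        "allowed": True,
        "conditions": "only documents you already have access to",
        "escalation": None,
    },
    "paste customer personal data into a public chatbot": {
        "allowed": False,
        "conditions": None,
        "escalation": "privacy officer",
    },
}

def look_up(scenario: str) -> str:
    """Turn a scenario entry into a one-line answer an employee can act on."""
    rule = POLICY_SCENARIOS.get(scenario)
    if rule is None:
        return "Not covered: ask your AI champion before proceeding."
    if rule["allowed"]:
        return f"Allowed ({rule['conditions']})."
    return f"Not allowed. Escalate to: {rule['escalation']}."
```

The point of the table shape is that every scenario forces the author to fill in all three fields, so no “it depends” answer ships without a named escalation path.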
Just-in-time materials work best when leaders and AI champions regularly refer to them in real conversations: “What does the policy say here?” “Which checklist question applies?” Champions are a proven adoption technique in Microsoft rollouts because they help peers day to day and keep change alive after launch.
If you only do the three steps above once, you’ll get a short-term bump. If you want this to stick, you need reinforcement and feedback loops. Prosci describes reinforcement as the final ADKAR building block, focusing on activities that help the change “stick” and prevent people from drifting back to old habits.
I look for three signals. First, are people engaging with the policy content and campaign materials? Viva Engage analytics can help you see whether your messages are landing. Second, are people using the just-in-time materials? You can track downloads, page views in your internal hub, and the number of questions in your community channel that reference the checklist. Third, is leadership reinforcing the behavior? A champions network helps a lot here, because it creates a repeatable way for teams to get support and share good practices.
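As a rough illustration of the first signal, a few lines of Python can turn raw weekly numbers into an engagement trend. The field names and figures below are hypothetical, not a real Viva Engage export format; in practice you would pull these numbers from your analytics tool.

```python
# Hypothetical weekly numbers for a four-week policy awareness campaign.
# Column names are made up for illustration only.
weeks = [
    {"week": 1, "post_views": 420, "reactions": 35, "comments": 12},
    {"week": 2, "post_views": 510, "reactions": 48, "comments": 19},
    {"week": 3, "post_views": 470, "reactions": 52, "comments": 23},
    {"week": 4, "post_views": 560, "reactions": 61, "comments": 30},
]

def engagement_rate(week: dict) -> float:
    """Share of viewers who reacted or commented on that week's post."""
    return (week["reactions"] + week["comments"]) / week["post_views"]

for w in weeks:
    print(f"Week {w['week']}: {engagement_rate(w):.1%} engagement")
```

A rising rate suggests the campaign is building familiarity; a flat or falling one tells you to change the format before blaming the audience.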
Finally, keep the “why” visible. Governance should connect to existing organizational controls and risk processes, not sit separately as a standalone artifact. If your AI policy is part of how decisions get made, it stays alive.
If your AI policy is already written, you have done the hard governance work. Now you need adoption work. Embed it in learning so people understand it. Run an awareness campaign so people remember it. Put practical guidance in the flow of work so people can apply it when it counts. And reinforce it through measurement, leadership, and champions. That’s how you turn an AI policy from a document into a shared habit.