Heuristic

Start AI governance imperfect; iterate rather than wait

AI governance should follow the same experimental posture as AI adoption — start imperfect, gather evidence, iterate — because waiting for clarity guarantees the technology gets ahead of the policy.

Last updated: 24 April 2026 · First captured: 24 April 2026

ai-governance · strategic-framing

Most organisations apply very different standards to AI adoption and AI governance. Adoption proceeds experimentally — tools are trialled, workflows are tweaked, practices evolve in contact with real use. Governance is expected to arrive fully formed: a policy that is correct, stable and approved before it is issued. The asymmetry is the problem. By the time a governance framework meets that standard, the technology has moved, the use cases have multiplied, and the policy is already out of date.

The working rule is that AI governance should follow the same posture as AI adoption. Start imperfect. Publish provisional policies that take a view on the questions that matter now, even if the answers feel premature. Gather feedback. Iterate. Treat governance as a live capability being built, not a document being finalised.

Why deferral is the default, and why it fails

Most organisations defer AI governance because the ground keeps shifting: tools change, use cases proliferate, the regulatory picture is unclear, and it feels reasonable to wait until things settle before committing to a position. The mistake is that the picture does not settle. By the time deferral feels safe, patterns of use have hardened, staff have developed expectations, vendors have collected data under defaults the organisation never contested, and pulling any of it back is dramatically more expensive than starting with a provisional rule would have been. See Passive AI adoption is an implicit policy choice.

What an imperfect-first policy looks like

A provisional policy does not need to address everything; it needs to address the small number of decisions that are being made by default right now. For most organisations, those are:

Access. Who can see AI-generated outputs such as transcripts, summaries and analyses. The default is often “anyone with a licence”; the considered position is usually narrower.

Retention. How long AI-related records are kept. The default is often “indefinitely”; the considered position usually has a shorter horizon with explicit preservation for defined purposes.

Purpose limitation. What AI outputs can be used for. The default is often “any organisational purpose”; the considered position usually excludes performance evaluation, disciplinary processes, and training of third-party models.

Transparency. Whether staff know what is being captured and analysed. The default is usually opaque; the considered position is usually explicit.

Consent. Whether staff agree to the analytical tools, and whether consent can be withdrawn. The default is usually implicit; the considered position depends on the purpose but is often explicit and revocable.

A policy that takes a position on these five is already significantly more governance than most organisations have. It does not need to be the final answer; it needs to exist as the starting point from which the iteration happens.
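The five decisions above can be sketched as a structured policy stub: one entry per decision, recording the default that applies today and the provisional position that replaces it. This is a minimal illustration, not a template from the note; every field name and value here (including the retention horizon) is invented for the sketch.

```python
# A provisional AI policy as data: for each of the five default decisions,
# record the current default and the considered position that overrides it.
# All names and values are illustrative, not prescribed by the note.
PROVISIONAL_POLICY = {
    "access": {
        "default": "anyone with a licence",
        "position": "named roles with a defined need to see outputs",
    },
    "retention": {
        "default": "indefinitely",
        "position": "90 days, with explicit preservation for defined purposes",
    },
    "purpose_limitation": {
        "default": "any organisational purpose",
        "position": "excludes performance evaluation, disciplinary processes, "
                    "and training of third-party models",
    },
    "transparency": {
        "default": "opaque",
        "position": "staff are told what is captured and analysed",
    },
    "consent": {
        "default": "implicit",
        "position": "explicit and revocable, depending on purpose",
    },
}


def uncontested_decisions(policy):
    """Return the decisions still running on their default, i.e. the ones
    the next iteration of the policy should take a view on."""
    return [name for name, entry in policy.items()
            if entry.get("position") is None]
```

The point of the data shape is the iteration loop: a decision with no `position` is an uncontested default, and `uncontested_decisions` surfaces exactly what the next policy revision needs to address.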