A leading Abu Dhabi-based holding group is building a first-of-its-kind AI Governance function, and we're hiring an AI Security & Governance Lead to translate governance policy into enforceable technical controls across the Microsoft security stack. You will be the governance authority, not a solution designer, ensuring every AI initiative meets the bar for security, privacy, and Responsible AI before reaching production.

What you'll own:

- Translate AI governance policies into technical controls, playbooks, and automated checks; implement the enterprise AI Governance Framework across all AI solutions.
- Onboard datasets and applications to Microsoft Purview; define classifications, sensitivity labels, DLP, and access policies.
- Implement tenant, application, and data security baselines across Entra ID, PIM, Conditional Access, and Defender.
- Establish AI risk assessment, threat modeling, red-teaming, jailbreak testing, and prompt/content safety controls.
- Operate auditability: Purview Audit, retention, investigation runbooks, and evidence management for reviews.
- Run DSPM for AI posture management; track risks, drive remediation, and report to governance councils.
- Work with Legal and Compliance on data residency, IP, and regulatory requirements; support vendor due diligence.
- Own the AI Governance Stage Gates (client-owned):
  - Pre-development approval (data usage, risk classification, control baseline).
  - Pre-production approval (security/privacy/model-risk evidence pack, release criteria).
  - Post-deployment assurance (monitoring, drift and abuse checks, incident readiness, auditability).
- Act as the governance interface to the delivery partner's solution architects and delivery leads, without designing or implementing AI solutions yourself.
- Define and enforce governance requirements for all external vendors, covering transparency, data usage boundaries, audit rights, and assurance artefacts.