You're invited to share your insights on assurance practices across the AI supply chain by participating in Artificial Intelligence Quality Infrastructure (AIQI) Consortium's short survey.
The consortium is seeking feedback from organizations involved in data provision, telecommunications, hardware manufacturing and AI model development. The input gathered will help inform the design of AIQI’s upcoming AI Assurance tool, which is intended to support trustworthy and transparent AI systems.
Participants may also be contacted for follow-up interviews once the tool’s prototype is ready.
The Government of Canada signed a memorandum of understanding with Cohere Inc., a Canadian multinational AI technology company, to enhance public service operations and strengthen Canada’s AI ecosystem. The partnership aims to deploy sovereign AI solutions across government, promote responsible AI development, and support Canadian innovation. Cohere will help Canada build domestic capabilities and global competitiveness. Through this initiative, the government emphasizes digital sovereignty, talent development, and international leadership in ethical AI. Read the full news release.
The initiative aligns with Canada’s broader AI strategy, which includes over $4.4 billion in investments since 2016, the Pan-Canadian AI Strategy, the Canadian AI Safety Institute, and voluntary codes of conduct for generative AI.
Sovereign AI solutions refer to technologies developed and managed within Canada, ensuring data privacy, security, and alignment with Canadian values. For the public service, this means increased confidence that AI systems are transparent and accountable to Canadians. As new initiatives roll out, the Hub will continue to share updates on partnerships and opportunities for engagement. Stay connected to SCC and the AIDG Hub for the latest developments in Canada’s AI leadership and how they benefit public and private sector organizations alike.
At the 2025 AI for Good Summit, global leaders and standards bodies highlighted how international collaboration can help translate AI governance principles into real-world action. Sessions emphasized the role of standards in enabling trustworthy, inclusive AI systems aligned with human values and societal needs.
SCC joined partners from Canada and abroad to reflect on the importance of cross-sector collaboration in shaping responsible AI systems. The Summit underscored how Canadian expertise contributes to global efforts to build standards that promote human-centered innovation.
Global standards build trust, promote interoperability, and strengthen Canada’s role in shaping the future of AI governance. By contributing to international dialogue, SCC helps align technical standards with public policy priorities and societal needs.
Canadian experts continue to play an active role in shaping future discussions and contributing to the evolution of international standards. Participation in global events like the AI for Good Summit ensures Canadian perspectives and priorities are well-represented. Ongoing collaboration will be key as new challenges and opportunities emerge in AI governance. To learn more about Canada’s involvement and stay up to date on upcoming initiatives, visit the AIDG Hub for the latest resources and updates.
A KPMG International and University of Melbourne survey of more than 48,000 people across 47 countries places Canada 44th in AI training and literacy and 42nd in public trust. Only 24 percent of Canadians report any formal AI training, compared with 39 percent globally, and just 34 percent say they are willing to trust AI-generated information versus 46 percent worldwide. Nearly half of those surveyed in Canada believe the risks of AI outweigh its benefits, and three-quarters want stronger regulation, a sentiment researchers link to low levels of knowledge, skills and hands-on experience.
The data reveal a skills and confidence gap that could blunt the economic impact of recent federal and private-sector AI investments. SCC’s Data Literacy Project aims to address precisely this deficit by developing a National Workshop Agreement that defines core competencies and learning pathways for Canadians.
Data literacy means having the knowledge and skills to understand, use, and critically assess data and AI technologies in daily life and work. As Canada invests in AI across sectors, boosting public confidence will require accessible training, hands-on experience, and clear communication about the benefits and risks of new tools. As the Data Literacy Project grows, Canadians will be able to participate in upcoming workshops and help shape national standards for responsible and informed use of AI. By building these core competencies, Canada can bridge the gap in trust and ensure everyone has a stake in the country’s digital future.
SCC and the Inclusive Design Research Centre at OCAD University (formerly the Ontario College of Art & Design) are developing a Trust Meter technical specification (TS) to address statistical discrimination against minorities and data outliers in mechanized statistical reasoning in AI decision tools.
The TS is intended to alert AI operators when the scenario, group or individual about whom a decision is made is out-of-distribution relative to the data the model was trained on, meaning the AI should not be trusted to make decisions in that context.
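To make the idea of an out-of-distribution alert concrete, the sketch below flags inputs whose Mahalanobis distance from the training data's mean and covariance exceeds a threshold. This is an illustrative assumption on our part, not the method specified in the draft TS; the function names and the threshold value are hypothetical.

```python
import numpy as np

def fit_reference(train_X):
    """Summarize the training distribution: per-feature mean and
    inverse covariance (regularized so the inverse always exists)."""
    mean = train_X.mean(axis=0)
    cov = np.cov(train_X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mean, cov_inv

def ood_score(x, mean, cov_inv):
    """Mahalanobis distance of one input from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def trust_flag(x, mean, cov_inv, threshold=3.0):
    """True when the input lies far enough outside the training data
    that the model's decision should not be trusted for this case.
    The threshold of 3.0 is an illustrative choice, not from the TS."""
    return ood_score(x, mean, cov_inv) > threshold
```

In practice a Trust Meter would operate on the model's feature representation rather than raw inputs, and would need careful calibration so that minority groups are flagged for human review rather than silently mis-scored.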
The Trust Meter project seeks to create an inclusive and balanced data environment, beginning with people with disabilities and extending to other marginalized groups. Through this tool, the project aims to address barriers across the data ecosystem, promoting fair treatment and mitigating risks associated with statistical discrimination.
The Trust Meter project is being advanced under our AI and Data Governance program and is expected to be completed by May 2026.
A new report from the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto charts a roadmap for AI regulatory markets. It proposes an AI-governance model in which governments license private regulatory-service providers (RSPs) to audit AI developers.
The report draws on insights from leading experts across sectors brought together by SRI to co-design a regulatory market model. Workshop participants flagged three prerequisites: stronger government technical capacity, market infrastructure (licensing and insurance), and an implementation strategy with real-world pilots.
For enterprises deploying AI, the report signals that competitive, third-party oversight may soon shift from nice-to-have to regulatory expectation—creating both compliance pressure and a market for best-in-class evaluators.
The report highlights the capabilities Canada needs in order to carry out functions essential to a well-functioning AI regulatory market. It provides practical steps to address the identified challenges and emphasizes the need for ongoing collaboration across the AI ecosystem.
Learn more on the SRI page and read the report.
The Standards Council of Canada (SCC) is pleased to announce that the AIDG Standardization Hub is now live. This website offers a database of standards related to AI and data governance, as well as a collection of resources to support micro, small and medium enterprises as they navigate the world of standardization.
To ensure that the Hub remains a useful resource, we are looking for your input and feedback on content to include. There is additional work to be done to maximize the offerings of the Hub, and we appreciate your collaboration in this effort.
As the Hub grows, visitors will find regular updates highlighting new standards, case studies, and learning opportunities. We encourage users to explore the database, subscribe to our newsletter for the latest developments, and share their feedback to help shape future resources. Together, we can ensure the AIDG Standardization Hub remains a trusted destination for navigating the evolving landscape of AI and data governance in Canada.
Last year, the Treasury Board Secretariat launched public consultations on Canada’s first AI strategy for the federal public service. Participants’ comments centred on four areas of focus: procurement, sustainable AI practices, talent and training, and ethical use.
The ISO/IEC workshop on AI, held on December 11, 2024, featured key presentations on AI safety. Surdas Mohit, Director of AI Safety Policy and International Engagement at ISED, introduced the key initiatives of the Canadian AI Safety Institute.
The Canadian government is investing up to $2 billion to enhance AI compute infrastructure. As part of this effort, the AI Compute Challenge aims to build and expand AI data centres, ensuring data sovereignty and security.
The European Commission has published the first draft of the General-Purpose AI Code of Practice.
Approximately 1,000 stakeholders, including EU Member States representatives and international observers, participated in dedicated working group meetings to discuss the draft.
On November 12, the Government of Canada announced the creation of the Canadian AI Safety Institute (CAISI), with an initial budget of $50 million over five years. CAISI will collaborate globally to address AI risks, with a particular focus on cybersecurity and national security.
On October 22, Deputy Prime Minister Chrystia Freeland announced the launch of two new programs to grow Canada’s AI ecosystem: the $200 million Regional AI Initiative to accelerate AI adoption and the $100 million AI Assist Program for SMEs developing generative AI solutions.
The Government of Canada has introduced new guidelines for employees using generative AI tools like ChatGPT or Copilot on the job to ensure the technology is being used responsibly.
The High-level Advisory Body on AI of the United Nations issued the Governing AI for Humanity report. The report proposes creating an AI Standards Exchange that unites representatives from national and international standard-development organizations, technology companies, civil society, and the International Scientific Panel.
Artificial intelligence (AI) and other digital technologies have the potential to deliver significant benefits for Canada and the world, but only if they are developed and used responsibly. Standards and conformity assessment (or assurance) are vital to ensuring that happens. That’s why SCC has been at the forefront of setting those standards and making sure they’re applied appropriately.
ARIA (Assessing Risks and Impacts of AI) aims to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, and fair once deployed. ARIA will consider AI beyond the model and assess systems in context, including what happens when people interact with AI technology in realistic settings.
SCC visited the offices of our European partner CEN-CENELEC this month to discuss ongoing cooperation on AI and other digital sectors. CEN-CENELEC is the European standardization body that coordinates standards for the EU and is responsible for delivering on the standardization requests in support of the EU AI Act.
On May 21, 2024, the Council of the European Union approved the AI Act, the world's first comprehensive AI legislation. This law takes a risk-based approach, applying stricter regulations to higher-risk AI systems to ensure their safety and protect fundamental rights.
The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
ISO/IEC 42001 is the world’s first AI management system standard, providing guidance for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.