How has the government sidelined public interest?
Since Canada welcomed its first-ever AI Minister in June 2025, the government has focused heavily on scaling up the AI industry and steering away from regulation in the name of economic growth.1,2 This “innovation vs. regulation” mindset has pushed Canada toward an industry-centred approach and sidelined public interest at every step of figuring out what to do about AI.
From choosing a 26-member AI Task Force dominated by industry voices, to running a rushed 30-Day Sprint “public” consultation built around a survey that only asked industry-focused questions,3,4 the process ignored the many concerns Canadians—including our community—have been raising about this new technology.5 With so little room for meaningful public participation,6 Canada is on track to end up with AI policy shaped by Big Tech, not by the people it impacts most.
What is the government not asking about?
Canada’s AI consultation has major blind spots, leaving out a long list of issues people are most worried about. The government’s survey barely touched on environmental sustainability, even though today’s AI systems consume massive amounts of energy, water, and resources.7 It ignored copyright and artists’ rights,8 despite AI training practices threatening creative ownership and livelihoods.9 It overlooked the growing impact of automation on employment and economic stability.10 It skipped over education and digital literacy, even as young people increasingly rely on AI tools without understanding their risks.11 And it downplayed misinformation and democratic integrity at a time when AI-generated content is eroding trust in public institutions.
What AI issues do Canadians care about that are being ignored?
In August 2025, OpenMedia asked our national community what kind of AI future Canada should build. From more than 3,000 responses, the message was unmistakable: Canadians are deeply worried about the harms and risks the government keeps sidelining.
People raised major concerns about threats to rights and privacy, along with the growing loss of control over personal data. Many highlighted AI’s environmental footprint, including massive energy use, water consumption and new data-centre emissions. Others pointed to the rise of misinformation and deepfakes, and how AI is reshaping the way people learn, think, and access trustworthy information. And across the board, Canadians stressed the risks to jobs, workers, and creators who still lack fair compensation and basic protections.
Why speaking up now matters
Too often, people who raise real concerns about AI are treated as if they’re standing in the way of “innovation.” Canadians want innovation and regulation,12 because responsible rules are what make long-term progress possible. Yet the government continues to act as if these goals are in conflict, and concerns are better off buried than addressed. That mindset puts our rights, our data, and our future at risk.
If Canada keeps reacting instead of leading, we risk losing control over our data, our digital infrastructure, and our economic future. True digital sovereignty will come from listening to Canadians, protecting our rights, and building an accountable AI ecosystem, not from letting industry set the agenda and telling people to simply trust it.
Why your message makes a difference
Your voice truly matters. MPs pay close attention when constituents speak up. Industry lobbying on this billion-dollar set of issues is deafening and constant, which means public voices need to be even louder to rebalance the conversation.
Canada is actively shaping its national AI approach, and expects to release its renewed AI strategy in 2026.13 By raising your concerns, you’re helping create the political will needed to protect our rights, democracy, workers, and digital future. Every message sent reminds our leaders that Canadians expect an AI strategy built for people, not just industry.
Take action now!