Why do we have two monumental pieces of legislation baked into one bill? Good question, one without a clear answer from our government.
The sneaky, secret reason? Ever since ChatGPT, DALL-E, and the other 'generative' AI tools started rolling out, AI industry stakeholders in Canada have been demanding a loose bill with a light touch. The goal? Not so much regulating AI well; instead, they want plenty of legally permitted room to experiment on Canadians, our data, and our rights.
What’s wrong with the AI rules in C-27?
The AI rules in C-27 simply aren't doing the job. Vague definitions need to be clarified and loopholes closed if these rules are ACTUALLY going to protect us from AI surveillance and manipulation in the years ahead. Ideally, our legislators would pause and give these rules a thorough public hearing BEFORE passing them into law with comprehensive protections in place. At minimum, they need to do their best to clean up the AI rules in C-27 before it passes: make sure they're as strong and specific as possible, and that they can be rapidly improved by an independent regulator as we learn more about where they work – and where they don't.
AI is going to keep developing, for good and for bad. Our laws can help nudge developers towards socially beneficial, user-centered AI – AI that serves us, respects our choices, and makes our lives better. But unless our laws are seriously updated with ironclad, unbreakable protections in place, a LOT could go wrong. Flimsy legislation will not protect us against the potential harms AI can cause. The government needs to go big or go home. Email your MP and tell them to give AI regulation the full study it deserves!