AI’s Double-Edged Sword: Balancing Innovation and Privacy of Information
Canada enacted its first federal privacy protections in 1977 as Part IV of the Canadian Human Rights Act. The right to privacy was further supported by the enactment of the Canadian Charter of Rights and Freedoms in 1982 and the proclamation of the federal Privacy Act and Access to Information Act in 1983. The first forms of Artificial Intelligence (AI) have been around for many decades; however, AI as we know it now only began to emerge more recently. As AI continues to develop, it is natural that people’s concerns about how their privacy will be affected have had to evolve as well. As technology continues to advance, so do the risks of improperly collecting, using and disclosing individuals’ personal information and/or personal health information (pi/phi).
What is AI?
Bill C-27 (not passed) – Subsection 39(2) defines AI as a “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”
The Department of National Defence and Canadian Armed Forces (DND/CAF) recognizes that there is no single accepted definition of AI; however, it defines AI as “the capability of a computer to do things that are normally associated with human cognition, such as reasoning, learning, and self-improvement.”
AI and Privacy
As AI continues to transform industries and workflows worldwide, and with some formal investigations now underway, we are learning more about AI and its potential negative impacts on privacy. For instance, AI software may “scrape” pi/phi from websites without the requisite authority. The Privacy Commissioner of Canada (PCC) has launched a joint investigation with three provincial Commissioners into OpenAI, the company behind ChatGPT, to determine whether its practices comply with Canadian privacy laws.
New Legislation
The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, died when Parliament was prorogued; Bill C-27, or AIDA on its own, would have to be reintroduced in the House of Commons. If it were to pass, AIDA would be one of the first national frameworks specific to the creation and use of artificial intelligence in Canada.
The PCC notes that, while privacy laws require modernization, the current laws still apply to the misuse of pi/phi in the AI space. The PCC also recommends that any organization or public body considering AI tools for its work complete a Privacy Impact Assessment (PIA) to determine whether privacy rights will be respected when implementing the new tools.
Even without legislation here in Saskatchewan specifically governing AI, if a public body or trustee bound by FOIP, LA FOIP or HIPA uses AI in a way that creates a privacy breach, we could review or investigate the matter. More information about who falls under our oversight can be found in the Acts or in my office’s blog posts: “When We Cannot Help You | IPC” and “Why some reviews and investigations cannot pass go (updated) | IPC.”
Moving Forward
The risks of AI misuse and the corresponding privacy implications have been raised by the PCC and several provincial privacy commissioners in Canada, including the Saskatchewan Information and Privacy Commissioner.
As a result, the Federal, Provincial and Territorial Information and Privacy Commissioners proposed nine principles for the “development, provision, and use of generative AI systems,” set out in the Principles for responsible, trustworthy and privacy-protective generative AI technologies document:
- Legal authority and consent: ensure there is legal authority for the collection, use or disclosure of pi/phi and that any consent relied upon is as specific as possible.
- Appropriate purposes: collection, use and disclosure of pi/phi should only be for appropriate purposes.
- Necessity and proportionality: only use personal information where it is necessary and proportionate to achieve the intended purposes.
- Openness: be open and transparent about the collection, use and disclosure of personal information and the potential privacy risks.
- Accountability: establish accountability for compliance with privacy legislation.
- Individual access: individuals have the right to access their personal information collected during the use of AI software.
- Limiting collection, use, and disclosure: limit collection, use and disclosure to only what is needed to fulfill the explicitly specified, appropriate purposes.
- Accuracy: ensure personal information is as accurate, complete, and up to date as necessary for the purposes for which it is used.
- Develop safeguards: to protect personal information and mitigate potential privacy risks.
Recommendations:
- Avoid using confidential data in AI software, including pi/phi.
- Implement data masking techniques, such as replacing names or redacting identifiers, to reduce privacy risk (a simple sketch follows this list).
- Balance transparency about how AI is used with the confidentiality of the data involved, and ensure controlled disclosure of information.
- Review and re-evaluate AI data privacy policies as AI standards are updated.
- Educate staff on the importance of data protection.
- Monitor and audit AI systems for potential vulnerabilities.
- Complete a PIA: My office has published a PIA Guidance Document which can support organizations in determining if AI has an impact on privacy.
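To illustrate the data masking recommendation above, the sketch below shows one possible pattern-based redaction step that could run before any text is pasted into an external AI tool. It is a minimal illustration only, written in Python with regular expressions; the patterns, placeholder labels and the mask_text helper are hypothetical examples rather than a prescribed or endorsed method, and they would need review and expansion before being relied on to protect pi/phi.

```python
import re

# Illustrative patterns for a few common identifiers. Real pi/phi masking
# would need broader coverage (names, addresses, health services numbers)
# and human review; simple regexes will miss many identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ID_NUMBER": re.compile(r"\b\d{9}\b"),  # placeholder 9-digit format; adjust as needed
}

def mask_text(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before the
    text is submitted to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Contact J. Doe at 306-555-0199 or j.doe@example.com, client no. 123456789."
    print(mask_text(note))
    # Prints: "Contact J. Doe at [PHONE REDACTED] or [EMAIL REDACTED], client no. [ID_NUMBER REDACTED]."
    # Note the name is untouched: catching names reliably requires more than
    # regexes (e.g., named-entity recognition or manual redaction).
```

Even with a step like this, masked output should still be reviewed before it leaves the organization, consistent with the recommendation above to avoid putting confidential data into AI software in the first place.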
AI can be a helpful tool for automating the work that organizations and individuals do, but it does not come without risks. Anyone who plans to use AI tools in their work should review the recommendations from my office and, when in doubt, contact us.
Further Resources
The Artificial Intelligence and Data Act: Video
The Artificial Intelligence and Data Act (AIDA) – Companion document
The Law Society Issues “Guidelines for the Use of Generative AI in the Practice of Law” | IPC
References
Exploring privacy issues in the age of AI | IBM
Statement on Generative AI – Office of the Privacy Commissioner of Canada
Protecting privacy in a digital age – Office of the Privacy Commissioner of Canada
A regulatory roadmap to AI and privacy | IAPP