A framework for AI

  • November 28, 2023

Canada is taking part in international negotiations in the Council of Europe for a treaty on artificial intelligence, human rights and the rule of law. The Canadian Bar Association’s Privacy and Access Law Section, the Immigration Law Section and the Ethics and Professional Responsibility Subcommittee, in a comprehensive submission, express support for the proposed approach and offer a useful perspective on the survey questions contained in the government’s consultation document.

AI, the Sections say from the get-go, is rapidly altering our concept of reality. The proposed treaty is a critical effort in bringing forward universal legislation to govern it, with a view to protecting human rights, democracy and the rule of law.

The proposed treaty is compatible with Canadian interests and values, and comprehensive legislation of this kind would serve the country well.

Because citizens can’t opt out of dealing with their governments, it is imperative that the treaty apply to the public sector. “In contrast to dealings with private sector businesses, a citizen unsatisfied with the use of AI by their government is not able to select an alternative. This creates a heightened duty of care and diligence.”

Of course, the treaty must also apply to the private sector. “The law must play a pivotal role to bridge the growing divide between the ethical and the legal and breathe new life into our fundamental legal rights, freedoms and protections as we reorient our existence in a digital world,” the submission reads.

One criticism of the proposed treaty is that it fails to highlight migrants as a vulnerable segment of the global population who are significantly affected by AI deployment, considering how it contributes to a worsening of racial, socio-economic and political divides and of discrimination. They should be singled out as a vulnerable group in Article 17.

“As AI increasingly influences meaningful immigration decisions, prioritizing individual well-being and rights becomes important,” the Sections write. The definition of AI in Article 3 of the proposed treaty is comprehensive and captures the technology’s nuances.

As the Sections note, everyone should have equal access to tools like reliable phones, computers and the internet if we are to respect Article 20 of the proposed treaty on digital literacy skills. “Access is crucial to uphold the fundamental principles of fairness and equality under the law and prevent a digital divide” when assessing how AI applies to immigration cases. In addition, procedures should be open and transparent as to the reasons and steps taken to evaluate each case.

Article 7 of the proposed treaty, which speaks to transparency and oversight, should also address explicability. “Under the transparency mechanism, algorithms should be designed to allow for explanations of their process and decisions,” the CBA Sections write. “Through this, users can see the reasoning and challenge decisions if a step or procedure is contrary to law.”
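To illustrate what such explicability could look like in practice, here is a minimal sketch assuming a hypothetical rule-based eligibility check; the rule names, applicant identifier and values are invented for illustration and are not drawn from the submission. Each rule evaluation is logged so that the reasoning behind a decision can later be reviewed or challenged.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Hypothetical record of an automated decision and the steps behind it."""
    applicant_id: str
    steps: list = field(default_factory=list)   # each step: rule name, input value, outcome
    outcome: str = "undecided"

    def apply_rule(self, rule_name: str, value, passed: bool):
        # Log every rule evaluation so the process is transparent after the fact.
        self.steps.append({"rule": rule_name, "value": value, "passed": passed})

    def explain(self) -> str:
        # Produce a human-readable trace of how the outcome was reached.
        lines = [f"Decision for {self.applicant_id}: {self.outcome}"]
        for step in self.steps:
            status = "met" if step["passed"] else "not met"
            lines.append(f"  - {step['rule']} (input: {step['value']}): {status}")
        return "\n".join(lines)

# Example with invented data: a simplified eligibility check
record = DecisionRecord(applicant_id="A-1234")
record.apply_rule("minimum_language_score", 6.5, passed=True)
record.apply_rule("valid_medical_exam", False, passed=False)
record.outcome = "refused"
print(record.explain())
```

A record of this kind is what would let an applicant see which step produced the refusal and challenge it if that step is contrary to law.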

Facial recognition, biometrics and human rights

The CBA submission notes that on this subject, the European Union’s Artificial Intelligence Act overlaps to some extent with the proposed treaty, by seeking to establish a global standard and rules on facial recognition, biometric surveillance and other AI applications. This framework has four tiers of risk: unacceptable, high, limited and minimal.

Deep fakes and digital disinformation are significant threats, the Sections say. “As deep fake technology improves and authenticity becomes harder to discern, it could erode the basis of law as well as society.” That’s why Article 4 of the proposed treaty deserves more attention, and why a moratorium on this type of technology should be considered.

Civil and criminal liability

“Digital harm and discrimination cannot be restricted to civil sanctions,” the CBA submission says. “Penal consequences must be developed and applied where needed.” One example to follow is the powers given to a tribunal by the Immigration and Refugee Protection Act to pursue criminal liability, which could help deter AI malfeasance and send a clear message that AI and digital mismanagement have serious consequences.

Other issues

The Sections believe the growing use of digital profiling requires immediate attention, as it, along with machine learning, or ML, ushers in a new era of scientific racism. “The fundamentals of ML for image understanding detail how computers analyze physical features using precise calculations based on images, with a focus on the supervised learning approach involving labeled examples,” they write. “Technical nuances such as parameter tuning, overfitting, and the intricate relationship between the number of parameters and the required training data are also some considerations.”
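For readers unfamiliar with the supervised learning approach the Sections describe, here is a minimal sketch using scikit-learn’s bundled handwritten-digit images; the dataset, model and parameter choices are illustrative assumptions, not anything referenced in the submission. The gap between training accuracy and held-out accuracy is one simple way to see the overfitting the Sections mention.

```python
# Supervised learning on labelled images: 8x8 pixel digits with their labels.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # labelled examples: pixel arrays plus the digit each one shows
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

# Parameter tuning: C sets the regularization strength; a flexible model trained on
# too few labelled examples is one common route to overfitting.
model = LogisticRegression(max_iter=5000, C=1.0)
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))
print("held-out accuracy:", model.score(X_test, y_test))  # the gap hints at overfitting
```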

And finally, the Sections suggest tightening some of the definitions in the proposed treaty, such as “rule of law” and “interferences” with human rights and fundamental freedoms.

The CBA submission concludes by expressing the hope that a comprehensive regulatory framework like the proposed treaty “will help to harness AI’s potential while minimizing its risks.”