Reference no: EM133424275
Question: Why does Artificial Intelligence not make neutral immigration decisions?
Case Study: A new University of Toronto study raises concerns about the human-rights implications of using artificial intelligence in immigration decisions. The report's authors are deeply concerned about the use of algorithms to make these decisions.
Petra Molnar, a co-author, says that "algorithms are by no means neutral. It's a set of instructions based on previous data analyses that you use to teach the machine to make a decision. (The machine) doesn't think or understand the decision it makes."
Immigration decisions made by artificial intelligence could harm applicants' human rights.
AI decision-makers rely on stereotypical factors - such as appearance, religion or travel patterns - while often ignoring more relevant data. This embeds bias into the automated decision-maker.
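The mechanism described here can be illustrated with a toy sketch. This is a hypothetical, deliberately simplified example (the rule-learning scheme, feature names such as "region_X", and the skewed history are all invented for illustration, not taken from any real system): a "model" trained on historical decisions that were biased against one travel pattern simply reproduces that bias, even though the machine itself neither thinks about nor understands the outcome.

```python
# Hypothetical illustration: a toy "model" that learns approval rules
# from past decisions. If officials historically denied applicants with
# a given travel pattern, the learned rule reproduces that pattern --
# the bias comes from the training data, not from reasoning.

from collections import defaultdict

def train(history):
    """For each feature value, learn the majority historical outcome."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [denials, approvals]
    for features, approved in history:
        for f in features:
            counts[f][approved] += 1
    return {f: c[1] >= c[0] for f, c in counts.items()}

def predict(model, features):
    """Approve only if every feature's learned rule favors approval."""
    return all(model.get(f, True) for f in features)

# Skewed history: identical qualifications, but applicants with travel
# pattern "region_X" were mostly denied in the past.
history = [
    (("qualified", "region_X"), 0),
    (("qualified", "region_X"), 0),
    (("qualified", "region_X"), 1),
    (("qualified", "region_Y"), 1),
    (("qualified", "region_Y"), 1),
]
model = train(history)
print(predict(model, ("qualified", "region_Y")))  # True
print(predict(model, ("qualified", "region_X")))  # False: past bias reproduced
```

Equally qualified applicants receive different outcomes purely because of a stereotypical feature in the training data, which is the sense in which bias is "embedded" in the automated decision-maker.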
Migrants, residents without citizenship status and other vulnerable people may have their fates decided without a human official ever seeing their case. They are unlikely to be able to appeal the AI's decision, as they have little access to legal assistance to protect their rights to privacy, due process, and freedom from discrimination.