Reference no: EM132816005
Overview
The proliferation of AI systems and algorithms is accelerating across the public and private sectors. They are increasingly used in automated decision-making systems that affect our daily lives, such as credit scoring, education (admission and success prediction, educator performance, etc.), emergency response, medical diagnosis, job candidate selection, and the social media posts that appear on your feed. However, these algorithms and systems typically operate as black boxes and offer little explanation of how a decision was generated. With AI being used in more and more decision-making scenarios where people's lives or livelihoods are at stake, responsible data science practice and FATE (Fairness, Accountability, Transparency, and Ethics) in AI are becoming paramount. It is important for AI/data science practitioners to better understand and unpack these issues. Furthermore, answering the question of who is responsible for a decision that a machine makes also becomes more challenging. In this assessment, you will have the opportunity to work as a group and study the FATE and responsible data science issues of a chosen AI system/application.
Discuss your choice of AI system/application with your teacher.
Assessment details
Report and Presentation
For this assignment you are required, in your group, to explore, discuss, and provide a critical analysis, in the form of a group presentation, of the transparency, explainability, and responsibility issues of a chosen AI/data-driven system of the group's interest.
You are required to deliver a short written report, about 5-6 pages including references (in 12pt font), and a recorded presentation video of 5-6 minutes. In the report, you should cover at least (but not limited to) the following:
• Introduction: provide a description of the chosen AI/data-driven system, e.g., what it is about, its scope, who the relevant stakeholders are, etc.
• Background: provide information on the context of the studied AI/data-driven application/system, e.g., what was the original purpose/aim of developing such a system? Why is there a need for it? What are its benefits?
• Fairness, transparency, explainability, and/or responsibility issues, ethical and/or discrimination risks, and concerns raised by the system/algorithm, with example scenarios and a critical analysis of their impact. (Pick at least two concerns to discuss in relation to the application/system.)
• What may constitute best practices or legal codes of practice, in the studied sectors experiencing widespread deployment of algorithmic systems, for preventing biased, discriminatory, unintended, or socially undesirable outcomes.
• What processes, information, or evaluations may be desirable to ensure the fair, explainable, accountable, or trustworthy operation of such a system (pick at least two concerns to discuss/focus on)?
• Conclusion: a brief summary of your main findings and insights about the study.
• Bibliography: a list of all references.
• The content should go into greater depth, providing a more thorough understanding than what is covered in our course materials.
In the presentation, you should comply with the following requirements:
• Present the main ideas and content of the report without trying to pack too much into the talk. If there are details that cannot be covered in the talk, you can refer to them in the report (e.g., by section or page number).
• You must use presentation slides during your talk.
• The video cannot be longer than 6 minutes.
• Provide a transcript of the presentation along with the PowerPoint slides.