The Golden Circle: Creating Socio-technical Alignment in Content Moderation

Published in arXiv, 2022

This paper outlines a conceptual framework titled The Golden Circle that describes the roles of actors at individual, organizational, and societal levels, and their dynamics in the content moderation ecosystem. Centering harm reduction and context moderation, it argues that the ML community must attend to multimodal content moderation solutions, align their work with their organizations’ goals and values, and pay attention to the ever-changing social contexts in which their sociotechnical systems are embedded. This is done by accounting for the why, how, and what of content moderation from a sociological and technical lens.

Recommended citation: Gupta, A., Kozlowska, I., & Than, N. (2022). The Golden Circle: Creating Socio-technical Alignment in Content Moderation. arXiv preprint arXiv:2202.13500. https://arxiv.org/abs/2202.13500

The State of AI Ethics Report (Volume 6, February 2022)

Published in Montreal AI Ethics Institute, 2022

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an “Analysis of the AI Ecosystem”, “Privacy”, “Bias”, “Social Media and Problematic Information”, “AI Design and Governance”, “Laws and Regulations”, “Trends”, and other areas covered in the “Outside the Boxes” section. The two AI spotlights feature application pieces on “Constructing and Deconstructing Gender with AI-Generated Art” as well as “Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?”. Given the mission of MAIEI to democratize AI, submissions from external collaborators have been featured, such as pieces on the “Challenges of AI Development in Vietnam: Funding, Talent and Ethics” and using “Representation and Imagination for Preventing AI Harms”. The report is a comprehensive overview of what the key issues in the field of AI ethics were in 2021, what trends are emergent, what gaps exist, and a peek into what to expect from the field of AI ethics in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas to make contributions to the field of AI ethics.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 6, February 2022)." arXiv preprint arXiv:2202.07435 (2022). https://montrealethics.ai/volume6/

Assisting a More Accessible Home

Published in Ethical Intelligence Equation Issue 2, 2022

From controlling lighting to what music gets played and whether there is enough milk in the fridge, smart technologies have permeated into all facets of our homes.

Recommended citation: Gupta, Abhishek. “Assisting a More Accessible Home.” Equation Issue 2, Ethical Intelligence, 26 Jan. 2022, https://www.ethicalintelligence.co/equation-issue-two

Beyond Single Dimensional Metrics for Digital Sustainability

Published in Branch Magazine, Green Software Foundation, and Data Center Dynamics, 2022

In measuring the energy consumption of software, a move towards multi-dimensional, rich metadata-supplemented metrics offers better opportunities to implement actions that actually make software greener.

Recommended citation: Gupta, Abhishek. “The Need to Move beyond Single-Dimensional Metrics to Guide Digital Greening.” Branch Magazine, 7 Dec. 2021, https://branch.climateaction.tech/issues/issue-3/beyond-single-dimensional-metrics-for-digital-sustainability/

The Co-Designed Post-Pandemic University: A Participatory And Continual Learning Approach For The Future Of Work

Published in Post Pandemic University 2020 Conference, 2021

The pandemic has shattered the traditional enclosures of learning. The post-pandemic university (PPU) will no longer be contained within the four walls of a lecture theatre, nor will learning finish once students have left the premises. The use of online services has now blended home and university life, and the PPU needs to reflect this. Our proposal of a continuous learning model will take advantage of the newfound omnipresence of learning, while being dynamic enough to continually adapt to the ever-evolving virus situation. Universities that restrict themselves to fixed subject themes that are then forgotten once completed will miss out on the ‘fresh start’ presented by the virus.

Recommended citation: Gupta, Abhishek, and Connor Wright. "The Co-Designed Post-Pandemic University: A Participatory and Continual Learning Approach for the Future of Work." arXiv preprint arXiv:2112.05751 (2021). https://postpandemicuniversity.net/2020/09/06/the-co-designed-post-pandemic-university-a-participatory-and-continual-learning-approach-for-the-future-of-work/

Critical Analysis of “Responsible AI #AIforAll: Approach Document for India”

Published in Proceedings of the 17th Annual Social Informatics Research Symposium and the 3rd Annual Information Ethics and Policy Workshop, 2021

Governments across the world have increasingly focused on creating national policy frameworks to take advantage of AI developments for their strategic national interests, as well as to adapt and adjust AI technologies that operate within their socio-cultural and political constraints (Schiff et al 2020). However, most empirical research has mainly utilized AI-related ethics documents produced by governments located in the Global North. In this study, we present a critical analysis of Responsible AI #AIforAll: Approach Document for India (hereafter, the Approach Paper), a national AI strategy document published by NITI Aayog, a premier public policy think-tank of the Government of India. This document is one of the first of its kind in the Global South. Not only would it serve as an important public policy reference for creating and discussing responsible AI in India, but it also has the potential to serve as an exemplary policy document for other developing countries. We identify and discuss key missing elements in the document, such as a lack of Indian context, deterministic framing, epistemic incompleteness, and inaccuracies. We conclude with a list of recommendations for improving the process of generating a national strategy document on responsible AI.

Recommended citation: Than, N., Gupta, A., & Jauhar, A. (2021, October). Critical Analysis of “Responsible AI #AIforAll: Approach Document for India”. In Proceedings of the 17th Annual Social Informatics Research Symposium and the 3rd Annual Information Ethics and Policy Workshop. https://www.ideals.illinois.edu/handle/2142/111789

Why Should Sustainability Be A First-Class Consideration For AI Systems?

Published in Green Software Foundation, 2021

Should sustainability be a first-class consideration for AI systems? Yes, because AI systems have environmental and societal implications. What can you do to make green AI a reality?

Recommended citation: Gupta, Abhishek. “Sustainability Should Be a Key Consideration for AI Systems.” Green Software Foundation, 27 Oct. 2021, https://greensoftware.foundation/articles/why-should-sustainability-be-a-first-class-consideration-for-ai-systems

Software Carbon Intensity: Crafting a Standard

Published in Green Software Foundation, 2021

The Software Carbon Intensity (SCI) standard gives an actionable approach to software designers, developers and deployers to measure the carbon impacts of their systems.

Recommended citation: Gupta, Abhishek. “Software Carbon Intensity: Crafting a Standard.” Green Software Foundation, 27 Oct. 2021, https://greensoftware.foundation/articles/software-carbon-intensity-crafting-a-standard

What Do We Need to Build More Sustainable AI Systems?

Published in Green Software Foundation, 2021

AI systems can have significant environmental impact. We are risking severe environmental and social harm if we fail to make greener AI systems.

Recommended citation: Gupta, Abhishek. “What Do We Need to Build More Sustainable AI Systems?” Green Software Foundation, 26 Oct. 2021, https://greensoftware.foundation/articles/what-do-we-need-to-build-more-sustainable-ai-systems

The Imperative for Sustainable AI Systems

Published in The Gradient, 2021

AI systems are compute-intensive: the AI lifecycle often requires long-running training jobs, hyperparameter searches, inference jobs, and other costly computations. They also require massive amounts of data that might be moved over the wire, and require specialized hardware to operate effectively, especially large-scale AI systems. All of these activities require electricity — which has a carbon cost. There are also carbon emissions in ancillary needs like hardware and datacenter cooling. Thus, AI systems have a massive carbon footprint. This carbon footprint also has consequences in terms of social justice as we will explore in this article. Here, we use sustainability to talk about not just environmental impact, but also social justice implications and impacts on society. Though an important area, we don’t use the term sustainable AI here to mean applying AI to solve environmental issues. Instead, a critical examination of the impacts of AI on the physical and social environment is the focus of our discussion.

Recommended citation: Gupta, Abhishek. “The Imperative for Sustainable AI Systems.” The Gradient, 6 Dec. 2021, https://thegradient.pub/sustainable-ai/

The State of AI Ethics Report (Volume 5)

Published in Montreal AI Ethics Institute, 2021

This report from the Montreal AI Ethics Institute covers the most salient progress in research and reporting over the second quarter of 2021 in the field of AI ethics with a special emphasis on “Environment and AI”, “Creativity and AI”, and “Geopolitics and AI.” The report also features an exclusive piece titled “Critical Race Quantum Computer” that applies ideas from quantum physics to explain the complexities of human characteristics and how they can and should shape our interactions with each other. The report also features special contributions on the subject of pedagogy in AI ethics, sociology and AI ethics, and organizational challenges to implementing AI ethics in practice. Given the mission of MAIEI to highlight scholars from around the world working on AI ethics issues, the report also features two spotlights sharing the work of scholars operating in Singapore and Mexico helping to shape policy measures as they relate to the responsible use of technology. The report also has an extensive section covering the gamut of issues when it comes to the societal impacts of AI covering areas of bias, privacy, transparency, accountability, fairness, interpretability, disinformation, policymaking, law, regulations, and moral philosophy.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 5)." arXiv preprint arXiv:2108.03929 (2021). https://montrealethics.ai/volume5/

A Social and Environmental Certificate for AI Systems

Published in Branch Magazine, 2021

AI systems are not without their flaws. There are many ethical issues to consider when thinking about deploying AI systems into society—particularly environmental impacts.

Recommended citation: Gupta, Abhishek. “What Does Ecologically Responsible AI Look like?” Branch, 21 July 2021, https://branch.climateaction.tech/issues/issue-2/secure-framework/

How data governance technologies can democratize data sharing for community well-being

Published in Data & Policy Journal, Cambridge University Press, 2021

Data sharing efforts allow underserved groups and organizations to overcome the concentration of power in our data landscape. A few special organizations, due to their data monopolies and resources, are able to decide which problems to solve and how to solve them. But even though data sharing creates a counterbalancing, democratizing force, it must nevertheless be approached cautiously. Underserved organizations and groups must navigate difficult barriers related to technological complexity and legal risk. To examine what those common barriers are, one type of data sharing effort (data trusts) is examined, specifically through the reports commenting on that effort. To address these practical issues, data governance technologies have a large role to play in democratizing data trusts safely and in a trustworthy manner. Yet technology is far from a silver bullet, and it is dangerous to rely upon it alone. Technology that is no-code, flexible, and secure can, however, help operate data trusts more responsibly. This type of technology helps innovators put relationships at the center of their efforts.

Recommended citation: Wu, D., Verhulst, S., Pentland, A., Avila, T., Finch, K., & Gupta, A. (2021). How data governance technologies can democratize data sharing for community well-being. Data & Policy, 3. https://www.cambridge.org/core/journals/data-and-policy/article/how-data-governance-technologies-can-democratize-data-sharing-for-community-wellbeing/2BFB848644589873C00E22ADEA6E8AB3

How to build an AI Ethics team at your organization?

Published in Towards Data Science, 2021

This article addresses the common challenges that someone building an AI ethics team at an organization is likely to face and what they can do to overcome those challenges.

Recommended citation: Gupta, Abhishek. “How to Build an AI Ethics Team at Your Organization?” Medium, Towards Data Science, 10 July 2021, https://towardsdatascience.com/how-to-build-an-ai-ethics-team-at-your-organization-373823b03293

The current state of affairs and a roadmap for effective carbon-accounting tooling in AI

Published in Microsoft Developer Blogs, 2021

Digital services consume a lot of energy, and it goes without saying that in a world of accelerating climate change, we must be conscious of our carbon footprint in all parts of life. In the case of the software that we write, specifically the AI systems we build, these considerations become even more important because of the large upfront computational resources that training some large AI models consumes, and the subsequent carbon emissions resulting from it. Thus, effective carbon accounting for artificial intelligence systems is critical!

Recommended citation: Gupta, Abhishek. “The Current State of Affairs and a Roadmap for Effective Carbon-Accounting Tooling in AI.” Sustainable Software, 17 June 2021, https://devblogs.microsoft.com/sustainable-software/the-current-state-of-affairs-and-a-roadmap-for-effective-carbon-accounting-tooling-in-ai/

Building for resiliency in AI systems

Published in Towards Data Science, 2021

Resiliency is a key idea if we want to build sustainable, ethical, safe, and inclusive AI systems that don’t succumb to failures over their lifespan.

Recommended citation: Gupta, Abhishek. “Building for Resiliency in AI Systems.” Medium, Towards Data Science, 29 May 2021, https://towardsdatascience.com/building-for-resiliency-in-ai-systems-24eed076d3d6

The State of AI Ethics Report (Volume 4)

Published in Montreal AI Ethics Institute, 2021

The 4th edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled ‘AI and the Face: A Historian’s View.’ In it, Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 4)." arXiv preprint arXiv:2105.09060 (2021). https://montrealethics.ai/volume4/

The importance of systems adaptability for meaningful Responsible AI deployment

Published in Towards Data Science, 2021

Given that the sociotechnical environment within which AI systems are deployed is inherently dynamic and complex, these systems need to be adaptable to mitigate the negative consequences that arise from their deployment.

Recommended citation: Gupta, Abhishek. “The Importance of Systems Adaptability for Meaningful Responsible AI Deployment.” Medium, Towards Data Science, 26 Apr. 2021, https://towardsdatascience.com/the-importance-of-systems-adaptability-for-meaningful-responsible-ai-deployment-a14e6ccd0f35

Systems Design Thinking for Responsible AI

Published in Towards Data Science, 2021

When thinking about building an AI system and the impact that it might have on society, it is important to take a systems design thinking approach to be as comprehensive as possible in assessing the impacts and proposing redressal mechanisms.

Recommended citation: Gupta, Abhishek. “Systems Design Thinking for Responsible AI.” Medium, Towards Data Science, 19 Apr. 2021, https://towardsdatascience.com/systems-design-thinking-for-responsible-ai-a0e51a9a2f97

Tradeoff determination for ethics, safety, and inclusivity in AI systems

Published in Towards Data Science, 2021

Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, as each has consequences for the system’s final outcome. This article provides details on how to make tradeoff determinations effectively.

Recommended citation: Gupta, Abhishek. “Tradeoff Determination for Ethics, Safety, and Inclusivity in AI Systems.” Medium, Towards Data Science, 4 Apr. 2021, https://towardsdatascience.com/tradeoff-determination-for-ethics-safety-and-inclusivity-in-ai-systems-60f20a3d0d0c

Why free and open source software (FOSS) should be the future of Responsible AI

Published in Towards Data Science, 2021

In a highly fragmented software ecosystem, this article explores the role that FOSS can play in holding the field more accountable for the tools that are built for Responsible AI.

Recommended citation: Gupta, Abhishek. “Why Free and Open Source Software (FOSS) Should Be the Future of Responsible AI.” Medium, Towards Data Science, 29 Mar. 2021, https://towardsdatascience.com/why-free-and-open-source-software-foss-should-be-the-future-of-responsible-ai-a3691b47fd79

The importance of goal setting in product development to achieve Responsible AI

Published in Towards Data Science, 2021

The exercise of goal setting helps us foreground the reasons for doing a certain project. This is the first step in making sure that we can centre responsible AI principles in the project and embed them in its foundations.

Recommended citation: Gupta, Abhishek. “The Importance of Goal Setting in Product Development to Achieve Responsible AI.” Medium, Towards Data Science, 23 Mar. 2021, https://towardsdatascience.com/the-importance-of-goal-setting-in-product-development-to-achieve-responsible-ai-eda040809292

AI Governance In 2020 - A Year In Review: Observations From 52 Global Experts

Published in Shanghai Institute for Science of Science, 2021

The report features contributions from 52 experts across 47 institutions, including AI scientists, academic researchers, industry representatives, and policy experts. This group of experts covers a wide range of regional developments and perspectives, including those in the United States, Europe, and Asia.

Recommended citation: Shanghai Institute for Science of Science. “AI Governance in 2020 - A Year in Review: Observations from 52 Global Experts.” https://www.aigovernancereview.com/

Survey of EU ethical guidelines for commercial AI: case studies in financial services

Published in Springer AI & Ethics Journal, 2021

A macro perspective examining the general nature of AI implementations, and how enforcement should be structured on the new frontier of AI technologies, is sorely needed. The paper critically analyzes the real and potential ethical impacts of AI-enabled systems, as well as the standard process that regulators, researchers, and firms use to assess the risks of these technologies.

Recommended citation: Huang, Jimmy Yicheng, Abhishek Gupta, and Monica Youn. "Survey of EU ethical guidelines for commercial AI: case studies in financial services." AI and Ethics 1.4 (2021): 569-577. https://link.springer.com/article/10.1007/s43681-021-00048-1

Simple prompts to make the right decisions in AI ethics

Published in Towards Data Science, 2021

An organizational and design methodology for making AI ethics easier to implement in the existing workflows of your employees.

Recommended citation: Gupta, Abhishek. “Simple Prompts to Make the Right Decisions in AI Ethics.” Medium, Towards Data Science, 18 Mar. 2021, https://towardsdatascience.com/simple-prompts-to-make-the-right-decisions-in-ai-ethics-e0475bda8f41

Get transparent about your AI ethics methodology

Published in Towards Data Science, 2021

Building more trust with your customers and showcasing the maturity of your organization’s AI ethics methodology.

Recommended citation: Gupta, Abhishek. “Get Transparent about Your AI Ethics Methodology.” Medium, Towards Data Science, 8 Mar. 2021, https://towardsdatascience.com/get-transparent-about-your-ai-ethics-methodology-ec88103aa28

AI, Ethics, and Your Business

Published in Springer Nature AI & Ethics Journal Blog, 2021

A discussion of the business considerations and importance of AI ethics, summarizing thoughts from the Springer Nature AI & Ethics Journal panel.

Recommended citation: Gupta, Abhishek. “AI, Ethics, & Your Business.” Springer Nature, 3 Mar. 2021, https://www.springernature.com/gp/librarians/landing/aiandethics

AI for AI - Developing Artificial Intelligence for an Atmanirbhar India

Published in Vidhi Center for Legal Policy, 2021

India needs more indigenous research on ethical and governance frameworks for the application of AI.

Recommended citation: Jauhar, Ameen, and Abhishek Gupta. “AI for AI - Developing Artificial Intelligence for an Atmanirbhar India.” Vidhi Centre for Legal Policy, 23 Feb. 2021, https://vidhilegalpolicy.in/blog/ai-for-ai-developing-artificial-intelligence-for-an-atmanirbhar-india/

Small steps to actually achieving Responsible AI

Published in Towards Data Science, 2021

A list of actionable steps that one can take to move towards practicing Responsible AI rather than just talking about it.

Recommended citation: Gupta, Abhishek. “Small Steps to *Actually* Achieving Responsible AI.” Medium, Towards Data Science, 7 Feb. 2021, https://towardsdatascience.com/small-steps-to-actually-achieving-responsible-ai-69f998f9eefb

Making Responsible AI the Norm rather than the Exception

Published in Submitted to the National Security Commission on AI, 2021

This report prepared by the Montreal AI Ethics Institute provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the Norm rather than an Exception. It does so by utilizing the guiding principles of: (1) alleviating friction in existing workflows, (2) empowering stakeholders to get buy-in, and (3) conducting an effective translation of abstract standards into actionable engineering practices. After providing some overarching comments on the document from the NSCAI, the report dives into the primary contribution of an actionable framework to help operationalize the ideas presented in the document from the NSCAI. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically-driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce each other to move from principles to practice in service of making Responsible AI the norm rather than the exception.

Recommended citation: Gupta, Abhishek. "Making Responsible AI the Norm rather than the Exception." arXiv preprint arXiv:2101.11832 (2021). https://arxiv.org/abs/2101.11832

The State of AI Ethics Report (January 2021)

Published in Montreal AI Ethics Institute, 2021

The 3rd edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field’s ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is ‘The Abuse and Misogynoir Playbook,’ written by Dr. Katlyn Tuner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D’Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep-dive into the historical and systematic silencing, erasure, and revision of Black women’s contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (January 2021)." arXiv preprint arXiv:2105.09059 (2021). https://arxiv.org/abs/2105.09059

To achieve Responsible AI, close the ‘believability’ gap

Published in Towards Data Science, 2021

To make discussions in AI more inclusive and effective, we need to move beyond haughty credentials and instead focus on lived experiences, multidisciplinary backgrounds, and the body of work.

Recommended citation: Gupta, Abhishek. “To Achieve Responsible AI, Close the Believability Gap.” Medium, Towards Data Science, 23 Jan. 2021, https://towardsdatascience.com/to-achieve-responsible-ai-close-the-believability-gap-cf809dc81c1e

Why civic competence in AI ethics is needed in 2021

Published in Towards Data Science, 2021

Outlining how civic competence is one of the more effective instruments for addressing ethics, safety, and inclusivity concerns in AI.

Recommended citation: Gupta, Abhishek. “Why Civic Competence in AI Ethics Is Needed in 2021.” Medium, Towards Data Science, 18 Jan. 2021, https://towardsdatascience.com/why-civic-competence-in-ai-ethics-is-needed-in-2021-697ca4bed688

Prudent Public Sector Procurement of AI Products

Published in Towards Data Science, 2021

Tips on enhancing the procurement process for AI products to improve Responsible AI outcomes in the public sector’s use of AI systems.

Recommended citation: Gupta, Abhishek. “Prudent Public-Sector Procurement of AI Products.” Medium, Towards Data Science, 17 Jan. 2021, https://towardsdatascience.com/prudent-public-sector-procurement-of-ai-products-779316d513a8

Decoded Reality

Published in NeurIPS 2020 Resistance AI Workshop, 2020

Decoded Reality is a creative exploration of the power dynamics that shape the design, development, and deployment of machine learning and data-driven systems.

Recommended citation: Khan, Falaah Arif, and Abhishek Gupta. “Decoded Reality.” Resistance AI Workshop @ NeurIPS 2020, 11 Dec. 2020, https://sites.google.com/view/resistance-ai-neurips-20/accepted-papers-and-media?authuser=0

The Gray Rhino of Pandemic Preparedness: Proactive digital, data, and organizational infrastructure to help humanity build resilience in the face of pandemics

Published in Future of Privacy Forum 2020 - Privacy & Pandemics: Responsible Uses of Technology & Health Data, 2020

COVID-19 has exposed glaring holes in our existing digital, data, and organizational practices. Researchers ensconced in epidemiological and human health work have repeatedly pointed out how urban encroachment, climate change, and other human-triggered activities and patterns are going to make zoonotic pandemics more frequent and commonplace. The Gray Rhino mindset provides a useful reframing (as opposed to viewing pandemics such as the current one as a Black Swan event) that can help us recover faster from these increasingly frequent occurrences and build resiliency in our digital, data, and organizational infrastructure. The social and economic impacts of pandemics can be mitigated by building infrastructure that elucidates leading indicators via passive intelligence gathering, so that responses to containing the spread of pandemics are not blanket measures; instead, they can be fine-grained, allowing for more efficient utilization of scarce resources and minimizing disruption to our way of life.

Recommended citation: Gupta, Abhishek. "The Gray Rhino of Pandemic Preparedness: Proactive digital, data, and organizational infrastructure to help humanity build resilience in the face of pandemics." arXiv preprint arXiv:2011.02773 (2020). https://fpf.org/wp-content/uploads/2020/10/5-The-Gray-Rhino-of-Pandemic-Preparedness-pages-Main-Article.pdf

State of AI Ethics Report October 2020

Published in Montreal AI Ethics Institute, 2020

The 2nd edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU’s AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (October 2020)." arXiv preprint arXiv:2011.02787 (2020). https://montrealethics.ai/oct2020

Report prepared by the Montreal AI Ethics Institute (MAIEI) for Publication Norms for Responsible AI by Partnership on AI

Submitted to the Partnership on AI, 2020

Work done with the team at the Montreal AI Ethics Institute in response to the Partnership on AI's publication norms initiative, offering recommendations to help improve their process.

Recommended citation: Gupta, Abhishek, Camylle Lanteigne, and Victoria Heath. "Report prepared by the Montreal AI Ethics Institute (MAIEI) on Publication Norms for Responsible AI." arXiv preprint arXiv:2009.07262 (2020). https://arxiv.org/abs/2009.07262

AI ethics groups are repeating one of society’s classic mistakes

Published in MIT Technology Review, 2020

Too many councils and advisory boards still consist mostly of people based in Europe or the United States.

Recommended citation: Gupta, Abhishek, and Victoria Heath. "AI Ethics Groups Are Repeating One of Society's Classic Mistakes." MIT Technology Review, 14 Sept. 2020. https://www.technologyreview.com/2020/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion/

Report prepared by the Montreal AI Ethics Institute In Response to Mila’s Proposal for a Contact Tracing App

Published in Montreal AI Ethics Institute, 2020

Joint work to analyze the deficiencies and public risks in the contact-tracing solution from Mila and recommendations for improvements.

Recommended citation: Cohen, Allison, and Abhishek Gupta. "Report prepared by the Montreal AI Ethics Institute In Response to Mila's Proposal for a Contact Tracing App." arXiv preprint arXiv:2008.04530 (2020). https://arxiv.org/abs/2008.04530

Montreal AI Ethics Institute’s (MAIEI) Submission to the World Intellectual Property Organization (WIPO) Conversation on Intellectual Property (IP) and Artificial Intelligence

Submitted to the World Intellectual Property Organization, 2020

This document posits that, at best, a tenuous case can be made for providing AI exclusive IP over their ‘inventions’. Furthermore, IP protections for AI are unlikely to confer the benefit of ensuring regulatory compliance. Rather, IP protections for AI ‘inventors’ present a host of negative externalities and obscure the fact that the genuine inventor, deserving of IP, is the human agent. The document concludes by recommending strategies for WIPO to bring IP law into the 21st century, enabling it to productively account for AI ‘inventions’. Theme: IP Protection for AI-Generated and AI-Assisted Works. Based on insights from the Montreal AI Ethics Institute (MAIEI) staff, supplemented by workshop contributions from the AI ethics community convened by MAIEI on July 5, 2020.

Recommended citation: Cohen, Allison, and Abhishek Gupta. "Montreal AI Ethics Institute's (MAIEI) Submission to the World Intellectual Property Organization (WIPO) Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) Second Session." arXiv preprint arXiv:2008.04520 (2020). https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/conversation_ip_ai/pdf/ngo_maiei.pdf

Comprehensiveness of Archives: A Modern AI-enabled Approach to Build Comprehensive Shared Cultural Heritage

Published in Datafication + Cultural Heritage ECSCW 2020, 2020

Archives play a crucial role in the construction and advancement of society. Humans place a great deal of trust in archives and depend on them to craft public policies and to preserve languages, cultures, self-identity, views, and values. Yet, certain voices and viewpoints remain elusive in the current processes deployed in the classification and discoverability of records and archives. In this paper, we explore the ramifications and effects of centralized, due-process archival systems on marginalized communities. There is strong evidence of the need for progressive design and technological innovation in the pursuit of comprehensiveness, equity, and justice. Intentionality and comprehensiveness are our greatest opportunities for improving archival practices and for the advancement and thrive-ability of societies at large, and they are achievable with the support of the technology of the Information Age we live in today. Reopening, questioning, and/or purposefully including others' voices in archival processes is the intention we present in our paper. We provide examples of marginalized communities who continue to lead ‘community archive’ movements in efforts to reclaim and protect their cultural identity, knowledge, views, and futures. In conclusion, we offer design and AI-dominant technological considerations worth further investigation in efforts to bridge systemic gaps and build robust archival processes.

Recommended citation: Gupta, Abhishek, and Nikitasha Kapoor. "Comprehensiveness of archives: A modern AI-enabled approach to build comprehensive shared cultural heritage." arXiv preprint arXiv:2008.04541 (2020). https://dataficationandculturalheritage.blogs.dsv.su.se/program/

Canada Protocol: an ethical checklist for the use of Artificial Intelligence in Suicide Prevention and Mental Health

Published in National Library of Medicine, 2020

Joint work to provide a concrete checklist format and items for the ethical use of AI in mental health applications

Recommended citation: Mörch, Carl-Maria, Abhishek Gupta, and Brian L. Mishara. "Canada protocol: An ethical checklist for the use of artificial intelligence in suicide prevention and mental health." Artificial intelligence in medicine 108 (2020): 101934. https://pubmed.ncbi.nlm.nih.gov/32972663/

Trust me!: How to use trust-by-design to build resilient tech in times of crisis

Published in 38 No. 04 Westlaw Journal Computer & Internet 02, 2020

In this article, we make the argument that social trust is critical to crisis management, and that by putting trust at the center of their decision-making framework, public and private organizations can develop more efficient crisis-management reflexes. Part I defines social trust and describes how it can be leveraged to build social norms that reinforce cohesiveness, thus allowing for more efficient responses to crisis management within a society. Part II argues that organizations that fail to understand the importance of trust generate responses to crises that are more likely to divide rather than reinforce cohesiveness. We apply this criticism to the present 2020 SARS-COV-2 pandemic and demonstrate how structures which have been built without trust-by-design principles are less likely to be resilient when stress-tested by a crisis. Finally, Part III discusses how trust can be included by design in data governance initiatives, and how organizations can build more resilient systems, applications, products and social groups by actively leveraging trust. Examples are drawn from technical and practical cases, and discuss current initiatives that are on-going and which should be of interest for organizations seeking to effectively maintain social cohesiveness during crises such as the 2020 SARS-COV-2 pandemic, providing an effective development and management strategy for addressing future crises.

Recommended citation: Gagnon, Gabrielle Paris, et al. “Trust Me!: How to Use Trust-by-Design to Build Resilient Tech in Times of Crisis.” 38 No. 04 Westlaw Journal Computer & Internet 02, Thomson Reuters Westlaw, 24 July 2020. https://content.next.westlaw.com/Document/I83ead3cccb1811eabea4f0dc9fb69570/View/FullText.html?contextData=(sc.Default)&transitionType=Default&firstPage=true

Green Lighting ML: Confidentiality, Integrity, and Availability of Machine Learning Systems in Deployment

Published in arXiv preprint, 2020

Security and ethics are both core to ensuring that a machine learning system can be trusted. In production machine learning, there is generally a hand-off from those who build a model to those who deploy it. In this hand-off, the engineers responsible for model deployment are often not privy to the details of the model and thus to the potential vulnerabilities associated with its usage, exposure, or compromise. Techniques such as model theft, model inversion, or model misuse may not be considered in model deployment, so it is incumbent upon data scientists and machine learning engineers to understand these potential risks and communicate them to the engineers deploying and hosting their models. This is an open problem in the machine learning community; to help alleviate it, automated systems for validating the privacy and security of models need to be developed, which will lower the burden of implementing these hand-offs and increase the ubiquity of their adoption.

Recommended citation: Gupta, Abhishek, and Erick Galinkin. "Green lighting ML: confidentiality, integrity, and availability of machine learning systems in deployment." arXiv preprint arXiv:2007.04693 (2020). https://arxiv.org/abs/2007.04693

Response by the Montreal AI Ethics Institute to the Santa Clara Principles on Transparency and Accountability in Online Content Moderation

Submitted to the Santa Clara Principles revision process on online content moderation, 2020

Joint work with colleagues at the Montreal AI Ethics Institute to provide recommendations to improve the existing Santa Clara Principles on online content moderation.

Recommended citation: Ganapini, Marianna Bergamaschi, Camylle Lanteigne, and Abhishek Gupta. "Response by the Montreal AI Ethics Institute to the Santa Clara Principles on Transparency and Accountability in Online Content Moderation." arXiv preprint arXiv:2007.00700 (2020). https://arxiv.org/abs/2007.00700

The State of AI Ethics Report (June 2020)

Published in Montreal AI Ethics Institute, 2020

These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly larger amounts of time online. It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.

Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (June 2020)." arXiv preprint arXiv:2006.14662 (2020). https://arxiv.org/abs/2006.14662

Response by the Montreal AI Ethics Institute to the European Commission’s Whitepaper on AI

Submitted to the European Commission, 2020

Joint work with colleagues at the Montreal AI Ethics Institute to provide recommendations to the European Commission on their Trustworthy AI whitepaper

Recommended citation: Gupta, Abhishek, and Camylle Lanteigne. "Response by the Montreal AI Ethics Institute to the European Commission's Whitepaper on AI." arXiv preprint arXiv:2006.09428 (2020). https://arxiv.org/abs/2006.09428

The Social Contract for AI

Presented at the IJCAI 2019 AI for Social Good workshop, 2020

Exploration of the notion of the social contract in relation to AI systems

Recommended citation: Caron, Mirka Snyder, and Abhishek Gupta. "The Social Contract for AI." arXiv preprint arXiv:2006.08140 (2020). https://arxiv.org/abs/2006.08140

Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence

Submitted to the Office of the Privacy Commissioner of Canada, 2020

A blend of legal and technical recommendations provided to the Office of the Privacy Commissioner of Canada in their amendments to the Canadian Privacy Law relative to AI

Recommended citation: Caron, Mirka Snyder, and Abhishek Gupta. "Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence." arXiv preprint arXiv:2006.07025 (2020). https://arxiv.org/abs/2006.07025

SECure: A Social and Environmental Certificate for AI Systems

Published in Canadian Society for Ecological Economics 2020, 2020

In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate. Compute-efficient machine learning is the use of compressed network architectures that show marginal decreases in accuracy. Federated learning augments the first pillar’s impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchase with their values.

Recommended citation: Gupta, Abhishek, Camylle Lanteigne, and Sara Kingsley. "SECure: A social and environmental certificate for AI systems." arXiv preprint arXiv:2006.06217 (2020). http://www.cansee.ca/cansee2020-student-symposium/

Response to the AHRC and WEF regarding Responsible Innovation in AI

Published in Australian Human Rights Commission, 2019

Joint work to recommend organizational changes to spark responsible innovation in the use of AI

Recommended citation: Gupta, Abhishek, and Mirka Snyder Caron. “Response to the AHRC and WEF Regarding Responsible Innovation in AI.” Australian Human Rights Commission and World Economic Forum, 18 Mar. 2019. https://tech.humanrights.gov.au/sites/default/files/inline-files/48%20-%20Montreal%20AI%20Ethics%20Institute.pdf

Montreal Declaration for Responsible AI

Published in University of Montreal, 2018

Public consultation on the ethical development of AI

Recommended citation: Dilhac, Marc-Antoine. “Montreal Declaration for Responsible AI.” Montreal Declaration for a Responsible Development of Artificial Intelligence - La Recherche - Université De Montréal, Université De Montréal, https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/. https://www.montrealdeclaration-responsibleai.com/

AI Ethics: Inclusivity in Smart Cities

Published in Montreal AI Ethics Institute, 2018

A topic that has so many implications and requires such wide-ranging expertise is best tackled by bringing together an eclectic group of people and providing a loose framework within which they can debate and discuss their ideas. There are certainly many lessons to be learned in terms of what to watch out for and how best to integrate informed and competent policy-making when it comes to the use of AI-enabled solutions in a smart city context.

Recommended citation: Gupta, Abhishek. “AI Ethics: Inclusivity in Smart Cities.” Medium, Montreal AI Ethics Institute, 27 Aug. 2018. https://medium.com/montreal-ai-ethics-institute/ai-ethics-inclusivity-in-smart-cities-6b8faebf7ce3

Inclusive Design - Methods to ensure a high degree of participation in Artificial Intelligence (AI) systems

Published in Oxford Internet Institute, 2018

Oxford University - Connected Life 2018

Recommended citation: Gupta, Abhishek, and Shirley Ogolla. “Inclusive Design - Methods to Ensure a High Degree of Participation in Artificial Intelligence (AI) Systems.” Connected Life Conference, Oxford Internet Institute, 20 July 2018, https://connectedlife.oii.ox.ac.uk/past-conferences/conference-life-2018/. http://connectedlife.oii.ox.ac.uk/conference-proceedings-2018/

Artificial Intelligence as a Force For Good

Published in Stanford Social Innovation Review, 2018

Recent breakthroughs in artificial intelligence offer enormous benefits for mission-driven organizations and could eventually revolutionize how they work.

Recommended citation: Rosenblatt, Gideon, and Abhishek Gupta. “Artificial Intelligence as a Force for Good.” Stanford Social Innovation Review, 11 June 2018. https://ssir.org/articles/entry/artificial_intelligence_as_a_force_for_good

AI in Smart Cities: Privacy, Trust and Ethics

Published in NewCities, 2018

When we think about the use of Artificial Intelligence (AI) in a smart city context, one of the primary issues that comes up is the privacy of citizens. But quite a number of other issues can arise, especially around fairness, safety, bias, and transparency. In this article I explore some tangible possibilities where AI-enabled solutions can trigger adverse outcomes, and the lack of due process and awareness needed to address them.

Recommended citation: Gupta, Abhishek. “AI in Smart Cities: Privacy, Trust and Ethics.” NewCities, 7 May 2018. https://newcities.org/the-big-picture-ai-smart-cities-privacy-trust-ethics/

The Finance and AI Ecosystem

Published in Medium, 2018

In this article we’ll explore the finance and AI ecosystem through the lens of the different functional areas that people are working on, the geographical distribution of companies applying machine learning to finance, and a peek into the Canadian ecosystem. We’ll conclude with some predictions for where this is going in the near future.

Recommended citation: Gupta, Abhishek, and Sydney Swaine-Simon. “The Finance and AI Ecosystem.” Medium, District 3, 29 Mar. 2018. https://medium.com/district3/the-finance-and-ai-ecosystem-45d614e0a478

Introduction to the Impact of AI-enabled Automation in the Financial Services Industry

Published in Medium, 2018

District 3 Innovation Center, Concordia University

Recommended citation: Gupta, Abhishek, and Sydney Swaine-Simon. “Introduction to the Impact of AI-Enabled Automation in the Financial Services Industry.” Medium, District 3, 22 Feb. 2018. https://medium.com/district3/introduction-to-the-impact-of-ai-enabled-automation-in-the-financial-services-industry-b5cd4cd0d392

Legal and ethical implications of data accessibility for public welfare and AI research advancement

Published in Towards Data Science, 2018

Ethics and Law of AI series

Recommended citation: Gupta, Abhishek, and Gabrielle Paris Gagnon. “Legal and Ethical Implications of Data Accessibility for Public Welfare and AI Research Advancement.” Medium, Towards Data Science, 12 Feb. 2018. https://towardsdatascience.com/legal-and-ethical-implications-of-data-accessibility-for-public-welfare-and-ai-research-advancement-9fbc0e75ea26

The Evolution of Fraud: Ethical Implications in the Age of Large-scale Data Breaches and Widespread Artificial Intelligence Solutions Deployment

Published in International Telecommunications Union - United Nations, 2018

Artificial intelligence is being rapidly deployed in all contexts of our lives, often in subtle yet behavior-nudging ways. At the same time, the pace of development of new techniques and research advancements is only quickening, as research and industry labs across the world leverage the emerging talent and interest of communities around the globe. With the inevitable digitization of our lives and the increasingly sophisticated, ever-larger data security breaches of the past few years, we are in an era where privacy and identity ownership are becoming a relic of the past. In this paper, we explore how large-scale data breaches, coupled with sophisticated deep learning techniques, will create a new class of fraud mechanisms allowing perpetrators to deploy “Identity Theft 2.0”.

Recommended citation: Gupta, Abhishek. "The evolution of fraud: Ethical implications in the age of large-scale data breaches and widespread artificial intelligence solutions deployment." International Telecommunication Union Journal 1.7 (2018). https://www.itu.int/en/journal/001/Pages/12.aspx