Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
AI Ethics Researcher & Machine Learning Engineer
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
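For reference, a minimal sketch of what that setting looks like in a Jekyll site's config.yml (the exact surrounding keys will vary from site to site):

# config.yml (excerpt)
# When future is false, posts dated in the future are skipped when the site is built.
future: false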
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
Montreal’s Booming Tech Industry Juggles Ethics With Innovation - McGill University News
Media Link
Published:
Dan Delmar: Quebec Is Well Placed To Become A Leader In AI Ethics - Montreal Gazette
Media Link
Published:
Montreal Turns Attention To Responsible Artificial Intelligence Research - BNN Bloomberg
Media Link
Published:
Artificial Intelligence Ethics - Year In Review 2017 - CJAD 800 Montreal Radio Interview With Dan Delmar
Media Link
Published:
As Our AI Systems Become More Capable, Should Ethics Be An Integral Component To Your Business Strategy? - Rework
Media Link
Published:
Mapping The Future Of Artificial Intelligence - The Concordian
Media Link
Published:
Ethics Of Using AI In Autonomous Weapons Systems - CJAD 800 Montreal Radio Interview With Dan Delmar
Link unavailable
Published:
Militarisation De L’Intelligence Artificielle : « L’Ampleur Des Dégâts Pourrait Être Sans Limites » [Militarization Of Artificial Intelligence: “The Scale Of The Damage Could Be Limitless”] - CBC News And Radio Canada
Media Link
Published:
Leslie Roberts Show: How AI Will Affect The Future Of Work - CJAD 800 Montreal - Radio Interview
Link unavailable
Published:
AI Vs. Ethics: Where Does HR’s Loyalty Lie? - HRD HRTech News
Media Link
Published:
Aaron Rand Show: This Insta-Star With Over A Million Followers Is… A Robot? - CJAD 800 Montreal - Radio Interview
Link unavailable
Published:
Should You Tell An Employee If They’re Talking To A Robot? - HRTech News
Media Link
Published:
Montreal’s Startup Community Honors Its Best - Montreal In Technology
Media Link
Published:
Q&A With Abhishek Gupta, Founder Of The Montreal AI Ethics Institute - McGill Dobson Chronicles
Media Link
Published:
Don’t Outsource Killing To AI, And Other Principles In The Montreal Declaration For Ethical Tech - CBC
Media Link
Published:
Quand L’Intelligence Artificielle Dérape [When Artificial Intelligence Goes Off The Rails] - L’Actualité
Media Link
Published:
AI And The Human Touch - Human Resources Director Magazine (Pg. 10-11)
Media Link
Published:
5 Ways Students Can Graduate Fully Qualified For The Fourth Industrial Revolution - World Economic Forum
Media Link
Published:
Artificial Intelligence: Will It Be Fair, Unbiased And Non-Discriminatory? - Queertech Montreal
Media Link
Published:
Exploring The Ethical Implications Of AI - McGill Alumni Magazine
Media Link
Published:
E Is For Ethics In AI — And Montreal’s Playing A Leading Role - Montreal Gazette
Media Link
Published:
Tech Giants And Civil Society Seek To Institutionalize AI Ethics - Biometric Update
Media Link
Published:
Ethics And Artificial Intelligence: These Researchers Say Tech Has To Have A Moral Backbone - CBC
Media Link
Published:
A.I. Engineers Should Spend Time Training Not Just Algorithms, But Also The Humans Who Use Them - Fortune
Media Link
Published:
Researchers Propose Framework To Measure AI’s Social And Environmental Impact - VentureBeat
Media Link
Published:
Could Machine Learning Help Bring Marginalized Voices Into Historical Archives? - VentureBeat
Media Link
Published:
Experts Pick Their Dream AI Panel - REWORK Blog
Media Link
Published:
AI Experts Discuss The Possibility Of Another AI Winter - Re-Work Blog
Media Link
Published:
20+ Pieces Of Advice From AI Experts To Those Starting Out In The Field - Re-Work Blog
Media Link
Published:
Ethics, Technology And Innovation With Abhishek Gupta - Season 2 Episode 2 - Centre Stage Podcast From The Project Management Institute
Media Link
Published:
Ethics Experts Gives 2021 Predictions - REWORK Blog
Media Link
Published:
‘There’s A Chilling Effect’: Google’s Firing Of Leading AI Ethicist Spurs Industry Outrage - Protocol
Media Link
Published:
AI, Machine Learning & Deep Learning Expert Predictions For 2021 - Re-Work Blog
Media Link
Published:
‘At Discussions On AI Ethics, You’d Be Hard-Pressed To Find Anyone With A Background In Anthropology Or Sociology’ - Times Of India
Media Link
Published:
Why Companies Are Thinking Twice About Using Artificial Intelligence - Fortune
Media Link
Published:
How A Quest To Build Ethical AI Detonated A Battle Inside Google - Business Insider
Media Link
Published:
Montreal, Centre Of The A.I. World | #4 The City Of Ethics - Radio Canada International
Media Link
Published:
Council Of Canadian Academies (CCA) Appoints Expert Panel On AI For Science And Engineering - Council Of Canadian Academies
Media Link
Press Release
Published:
Putting AI Ethics Into Practice - Abhishek Gupta - Alteryx Data Science Mixer Podcast
Media Link
Published:
AI Weekly: The Road To Ethical Adoption Of AI - VentureBeat
Media Link
Published:
8 Leading Activists That Are Promoting Ethical AI - Analytics India Magazine
Media Link
Published:
New York City looks to regulate AI hiring: City to mandate bias audits, allow candidates to choose in-person recruitment - Canadian HR Reporter
Media Link
Published:
Seven AI ethics experts predict 2022’s opportunities and challenges for the field - Morning Brew
Media Link
Published:
The State of AI Ethics in 2022: From principles to tools via regulation. Featuring Montreal AI Ethics Institute Founder / Principal Researcher Abhishek Gupta - Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis
Media Link
Published:
The state of AI ethics: The principles, the tools, the regulations - VentureBeat
Media Link
Published:
Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems - US Senator Wyden’s Office Press Release
Media Link
Published:
We need to design and implement standardized AI ethics regulations across everything AI touches, so, everything, while also asking questions like: what is “ethical”? And who gets to decide? And why do they get to decide? And how are they incentivized to decide, in today’s society? And who provides those incentives? Who gets to regulate all of this? Who elects the regulators? And how do we make sure companies actually implement all of this? - Important Not Important Podcast
Media Link
Published in Treasury Board Secretariat of Canada, 2018
Early Contributor - Digital Disruption White Paper Series
Recommended citation: https://docs.google.com/document/d/1Sn-qBZUXEUG4dVk909eSg5qvfbpNlRhzIefWPtBwbxY/edit
Published in International Telecommunications Union - United Nations, 2018
Artificial intelligence is being rapidly deployed in all contexts of our lives, often in subtle yet behavior-nudging ways. At the same time, the pace of development of new techniques and research advancements is only quickening as research and industry labs across the world leverage the emerging talent and interest of communities across the globe. With the inevitable digitization of our lives and the increasingly sophisticated, ever-larger data security breaches of the past few years, we are in an era where privacy and identity ownership are becoming relics of the past. In this paper, we will explore how large-scale data breaches, coupled with sophisticated deep learning techniques, will create a new class of fraud mechanisms allowing perpetrators to deploy “Identity Theft 2.0”.
Recommended citation: Gupta, Abhishek. "The evolution of fraud: Ethical implications in the age of large-scale data breaches and widespread artificial intelligence solutions deployment." International Telecommunication Union Journal 1.7 (2018). https://www.itu.int/en/journal/001/Pages/12.aspx
Published in Towards Data Science, 2018
Ethics and Law of AI series
Recommended citation: Gupta, Abhishek, and Gabrielle Paris Gagnon. “Legal and Ethical Implications of Data Accessibility for Public Welfare and Ai Research Advancement.” Medium, Towards Data Science, 12 Feb. 2018, https://towardsdatascience.com/legal-and-ethical-implications-of-data-accessibility-for-public-welfare-and-ai-research-advancement-9fbc0e75ea26. https://towardsdatascience.com/legal-and-ethical-implications-of-data-accessibility-for-public-welfare-and-ai-research-advancement-9fbc0e75ea26
Published in Medium, 2018
District 3 Innovation Center, Concordia University
Recommended citation: Gupta, Abhishek, and Sydney Swaine-Simon. “Introduction to the Impact of AI-Enabled Automation in the Financial Services Industry.” Medium, District 3, 22 Feb. 2018, https://medium.com/district3/introduction-to-the-impact-of-ai-enabled-automation-in-the-financial-services-industry-b5cd4cd0d392. https://medium.com/district3/introduction-to-the-impact-of-ai-enabled-automation-in-the-financial-services-industry-b5cd4cd0d392
Published in Medium, 2018
District 3 Innovation Center, Concordia University
Recommended citation: Swaine-Simon, Sydney, and Abhishek Gupta. “The History of AI in Finance.” Medium, District 3, 8 Mar. 2018, https://medium.com/district3/the-history-of-ai-in-finance-7a03fcb4a498. https://medium.com/district3/the-history-of-ai-in-finance-7a03fcb4a498
Published in Medium, 2018
In this article, we’ll explore the finance and AI ecosystem through the lens of the different functional areas that people are working on and the geographical distribution of companies applying machine learning to finance, along with a peek into the Canadian ecosystem. We’ll conclude with some predictions for where this is going in the near future.
Recommended citation: Gupta, Abhishek, and Sydney Swaine-Simon. “The Finance and AI Ecosystem.” Medium, District 3, 29 Mar. 2018, https://medium.com/district3/the-finance-and-ai-ecosystem-45d614e0a478. https://medium.com/district3/the-finance-and-ai-ecosystem-45d614e0a478
Published in NewCities, 2018
When we think about the use of Artificial Intelligence (AI) in a smart city context, one of the primary issues that comes up is the privacy of citizens. But quite a number of other issues can arise, especially around fairness, safety, bias, and transparency. In this article, I explore some tangible scenarios where AI-enabled solutions can trigger adverse outcomes, and where a lack of due process and awareness stands in the way of addressing them.
Recommended citation: Gupta, Abhishek. “AI in Smart Cities: Privacy, Trust and Ethics.” NewCities, NewCities, 7 May 2018, https://newcities.org/the-big-picture-ai-smart-cities-privacy-trust-ethics/. https://newcities.org/the-big-picture-ai-smart-cities-privacy-trust-ethics/
Published in Stanford Social Innovation Review, 2018
Recent breakthroughs in artificial intelligence offer enormous benefits for mission-driven organizations and could eventually revolutionize how they work.
Recommended citation: Rosenblatt, Gideon, and Abhishek Gupta. “Artificial Intelligence as a Force for Good (SSIR).” Stanford Social Innovation Review: Informing and Inspiring Leaders of Social Change, Stanford Social Innovation Review, 11 June 2018, https://ssir.org/articles/entry/artificial_intelligence_as_a_force_for_good. https://ssir.org/articles/entry/artificial_intelligence_as_a_force_for_good
Published in Oxford Internet Institute, 2018
Oxford University - Connected Life 2018
Recommended citation: Gupta, Abhishek, and Shirley Ogolla. “Inclusive Design - Methods to Ensure a High Degree of Participation in Artificial Intelligence (AI) Systems.” Connected Life Conference, Oxford Internet Institute, 20 July 2018, https://connectedlife.oii.ox.ac.uk/past-conferences/conference-life-2018/. http://connectedlife.oii.ox.ac.uk/conference-proceedings-2018/
Published in Montreal AI Ethics Institute, 2018
A topic that has so many implications and requires wide-ranging expertise can best be tackled by bringing together an eclectic group of people and providing a loose framework within which they can debate and discuss their ideas. There are certainly many lessons to be learned in terms of what to watch out for and how best to integrate informed and competent policy making when it comes to the use of AI-enabled solutions in a smart city context.
Recommended citation: Gupta, Abhishek. “AI Ethics: Inclusivity in Smart Cities.” Medium, Montreal AI Ethics Institute, 27 Aug. 2018, https://medium.com/montreal-ai-ethics-institute/ai-ethics-inclusivity-in-smart-cities-6b8faebf7ce3. https://medium.com/montreal-ai-ethics-institute/ai-ethics-inclusivity-in-smart-cities-6b8faebf7ce3
Published in University of Montreal, 2018
Public consultation on the ethical development of AI
Recommended citation: Dilhac, Marc-Antoine. “Montreal Declaration for Responsible AI.” Montreal Declaration for a Responsible Development of Artificial Intelligence - La Recherche - Université De Montréal, Université De Montréal, https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/. https://www.montrealdeclaration-responsibleai.com/
Published in Australian Human Rights Commission, 2019
Joint work to recommend organizational changes to spark responsible innovation in the use of AI
Recommended citation: Gupta, Abhishek, and Mirka Snyder Caron. “Response to the AHRC and WEF Regarding Responsible Innovation in AI.” Australian Human Rights Commission, Australian Human Rights Commission and World Economic Forum, 18 Mar. 2019, https://tech.humanrights.gov.au/sites/default/files/inline-files/48%20-%20Montreal%20AI%20Ethics%20Institute.pdf. https://tech.humanrights.gov.au/sites/default/files/inline-files/48%20-%20Montreal%20AI%20Ethics%20Institute.pdf
Published in arXiv preprint, 2020
Work done in response to the opaque development practices of contact- and proximity-tracing apps developed in Canada and their subsequent low rates of adoption
Recommended citation: Gupta, Abhishek, and Tania De Gasperis. "Participatory Design to build better contact-and proximity-tracing apps." arXiv preprint arXiv:2006.00432 (2020). https://arxiv.org/abs/2006.00432
Published in Submitted to the Scottish National Government, 2020
Compilation of organizational and policy recommendations in response to Scotland’s National AI Strategy
Recommended citation: Gupta, Abhishek. "Montreal AI Ethics Institute's Response to Scotland's AI Strategy." arXiv preprint arXiv:2006.06300 (2020). https://arxiv.org/abs/2006.06300
Published in Canadian Society for Ecological Economics 2020, 2020
In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate. Compute-efficient machine learning is the use of compressed network architectures that show marginal decreases in accuracy. Federated learning augments the first pillar’s impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchase with their values.
Recommended citation: Gupta, Abhishek, Camylle Lanteigne, and Sara Kingsley. "SECure: A social and environmental certificate for AI systems." arXiv preprint arXiv:2006.06217 (2020). http://www.cansee.ca/cansee2020-student-symposium/
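The data-sovereignty pillar above name-checks differential privacy; purely as a hypothetical illustration of that underlying idea (this is not code from the paper), a minimal sketch of the classic Laplace mechanism for a counting query might look like the following, where the epsilon value and the predicate are assumptions chosen just for the example:

import math
import random

def laplace_sample(scale: float) -> float:
    # Draw one sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1, so Laplace noise with scale 1/epsilon
    # yields epsilon-differential privacy; smaller epsilon means more noise.
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical usage: a noisy count of users over 30 with a modest privacy budget.
users = [{"age": a} for a in (21, 34, 45, 29, 62)]
print(private_count(users, lambda u: u["age"] > 30, epsilon=0.5))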
Published in Submitted to the Office of the Privacy Commissioner of Canada, 2020
A blend of legal and technical recommendations provided to the Office of the Privacy Commissioner of Canada regarding its proposed amendments to Canadian privacy law (PIPEDA) relative to AI
Recommended citation: Caron, Mirka Snyder, and Abhishek Gupta. "Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence." arXiv preprint arXiv:2006.07025 (2020). https://arxiv.org/abs/2006.07025
Published in Presented at the IJCAI 2019 AI for Social Good workshop, 2020
Exploration of the notion of the social contract in relation to AI systems
Recommended citation: Caron, Mirka Snyder, and Abhishek Gupta. "The Social Contract for AI." arXiv preprint arXiv:2006.08140 (2020). https://arxiv.org/abs/2006.08140
Published in Submitted to the European Commission, 2020
Joint work with colleagues at the Montreal AI Ethics Institute to provide recommendations to the European Commission on their Trustworthy AI whitepaper
Recommended citation: Gupta, Abhishek, and Camylle Lanteigne. "Response by the Montreal AI Ethics Institute to the European Commission's Whitepaper on AI." arXiv preprint arXiv:2006.09428 (2020). https://arxiv.org/abs/2006.09428
Published in Montreal AI Ethics Institute, 2020
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly large amounts of time online. It has never been more important that we keep a sharp eye on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (June 2020)." arXiv preprint arXiv:2006.14662 (2020). https://arxiv.org/abs/2006.14662
Published in Submitted to the Santa Clara Principles for revision on Online Content Moderation approaches, 2020
Joint work with colleagues at the Montreal AI Ethics Institute to provide recommendations to improve the existing Santa Clara Principles on online content moderation.
Recommended citation: Ganapini, Marianna Bergamaschi, Camylle Lanteigne, and Abhishek Gupta. "Response by the Montreal AI Ethics Institute to the Santa Clara Principles on Transparency and Accountability in Online Content Moderation." arXiv preprint arXiv:2007.00700 (2020). https://arxiv.org/abs/2007.00700
Published in arXiv preprint, 2020
Security and ethics are both core to ensuring that a machine learning system can be trusted. In production machine learning, there is generally a hand-off from those who build a model to those who deploy a model. In this hand-off, the engineers responsible for model deployment are often not privy to the details of the model and thus the potential vulnerabilities associated with its usage, exposure, or compromise. Techniques such as model theft, model inversion, or model misuse may not be considered in model deployment, and so it is incumbent upon data scientists and machine learning engineers to understand these potential risks so they can communicate them to the engineers deploying and hosting their models. This is an open problem in the machine learning community, and in order to help alleviate this issue, automated systems for validating the privacy and security of models need to be developed, which will help to lower the burden of implementing these hand-offs and increase the ubiquity of their adoption.
Recommended citation: Gupta, Abhishek, and Erick Galinkin. "Green lighting ML: confidentiality, integrity, and availability of machine learning systems in deployment." arXiv preprint arXiv:2007.04693 (2020). https://arxiv.org/abs/2007.04693
Published in 38 No. 04 Westlaw Journal Computer & Internet 02, 2020
In this article, we make the argument that social trust is critical to crisis management, and that by putting trust at the center of their decision-making framework, public and private organizations can develop more efficient crisis-management reflexes. Part I defines social trust and describes how it can be leveraged to build social norms that reinforce cohesiveness, thus allowing for more efficient responses to crisis management within a society. Part II argues that organizations that fail to understand the importance of trust generate responses to crises that are more likely to divide rather than reinforce cohesiveness. We apply this criticism to the present 2020 SARS-CoV-2 pandemic and demonstrate how structures which have been built without trust-by-design principles are less likely to be resilient when stress-tested by a crisis. Finally, Part III discusses how trust can be included by design in data governance initiatives, and how organizations can build more resilient systems, applications, products and social groups by actively leveraging trust. Examples are drawn from technical and practical cases, and we discuss current, ongoing initiatives that should be of interest to organizations seeking to effectively maintain social cohesiveness during crises such as the 2020 SARS-CoV-2 pandemic, providing an effective development and management strategy for addressing future crises.
Recommended citation: Gagnon, Gabrielle Paris, et al. “Trust Me!: How to Use Trust-by-Design to Build Resilient Tech in Times of Crisis.” 38 No. 04 Westlaw Journal Computer & Internet 02, Thomson Reuters WestLaw, 24 July 2020, https://content.next.westlaw.com/Document/I83ead3cccb1811eabea4f0dc9fb69570/View/FullText.html?contextData=(sc.Default)&transitionType=Default&firstPage=true. https://content.next.westlaw.com/Document/I83ead3cccb1811eabea4f0dc9fb69570/View/FullText.html?contextData=(sc.Default)&transitionType=Default&firstPage=true
Published in National Library of Medicine, 2020
Joint work to provide a concrete checklist format and items for the ethical use of AI in mental health applications
Recommended citation: Mörch, Carl-Maria, Abhishek Gupta, and Brian L. Mishara. "Canada protocol: An ethical checklist for the use of artificial Intelligence in suicide prevention and mental health." Artificial intelligence in medicine 108 (2020): 101934. https://pubmed.ncbi.nlm.nih.gov/32972663/
Published in Datafication + Cultural Heritage ECSCW 2020, 2020
Archives play a crucial role in the construction and advancement of society. Humans place a great deal of trust in archives and depend on them to craft public policies and to preserve languages, cultures, self-identity, views and values. Yet, there are certain voices and viewpoints that remain elusive in the current processes deployed in the classification and discoverability of records and archives. In this paper, we explore the ramifications and effects of centralized, due process archival systems on marginalized communities. There is strong evidence to prove the need for progressive design and technological innovation while in the pursuit of comprehensiveness, equity and justice. Intentionality and comprehensiveness are our greatest opportunity when it comes to improving archival practices and for the advancement and thrive-ability of societies at large today. Intentionality and comprehensiveness are achievable with the support of technology and the Information Age we live in today. Reopening, questioning and/or purposefully including others’ voices in archival processes is the intention we present in our paper. We provide examples of marginalized communities who continue to lead ‘community archive’ movements in efforts to reclaim and protect their cultural identity, knowledge, views and futures. In conclusion, we offer design and AI-dominant technological considerations worth further investigation in efforts to bridge systemic gaps and build robust archival processes.
Recommended citation: Gupta, Abhishek, and Nikitasha Kapoor. "Comprehensiveness of archives: A modern AI-enabled approach to build comprehensive shared cultural heritage." arXiv preprint arXiv:2008.04541 (2020). https://dataficationandculturalheritage.blogs.dsv.su.se/program/
Published in Submitted to the World Intellectual Property Organization, 2020
This document posits that, at best, a tenuous case can be made for providing AI exclusive IP over their ‘inventions’. Furthermore, IP protections for AI are unlikely to confer the benefit of ensuring regulatory compliance. Rather, IP protections for AI ‘inventors’ present a host of negative externalities and obscure the fact that the genuine inventor, deserving of IP, is the human agent. This document will conclude by recommending strategies for WIPO to bring IP law into the 21st century, enabling it to productively account for AI ‘inventions’. Theme: IP Protection for AI-Generated and AI-Assisted Works. Based on insights from the Montreal AI Ethics Institute (MAIEI) staff and supplemented by workshop contributions from the AI Ethics community convened by MAIEI on July 5, 2020.
Recommended citation: Cohen, Allison, and Abhishek Gupta. "Montreal AI Ethics Institute's (MAIEI) Submission to the World Intellectual Property Organization (WIPO) Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) Second Session." arXiv preprint arXiv:2008.04520 (2020). https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/conversation_ip_ai/pdf/ngo_maiei.pdf
Published in Montreal AI Ethics Institute, 2020
Joint work to analyze the deficiencies and public risks in the contact-tracing solution from Mila and recommendations for improvements.
Recommended citation: Cohen, Allison, and Abhishek Gupta. "Report prepared by the Montreal AI Ethics Institute In Response to Mila's Proposal for a Contact Tracing App." arXiv preprint arXiv:2008.04530 (2020). https://arxiv.org/abs/2008.04530
Published in MIT Technology Review, 2020
Too many councils and advisory boards still consist mostly of people based in Europe or the United States.
Recommended citation: Gupta, Abhishek, and Victoria Heath. "AI Ethics Groups Are Repeating One of Society's Classic Mistakes." MIT Technology Review, MIT Technology Review, 14 Sept. 2020, https://www.technologyreview.com/2020/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion/. https://www.technologyreview.com/2020/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion/
Published in Submitted to the Partnership on AI, 2020
Work done with the team at the Montreal AI Ethics Institute in response to the publication norms work done by the Partnership on AI to help improve their process.
Recommended citation: Gupta, Abhishek, Camylle Lanteigne, and Victoria Heath. "Report prepared by the Montreal AI Ethics Institute (MAIEI) on Publication Norms for Responsible AI." arXiv preprint arXiv:2009.07262 (2020). https://arxiv.org/abs/2009.07262
Published in Montreal AI Ethics Institute, 2020
The 2nd edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU’s AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (October 2020)." arXiv preprint arXiv:2011.02787 (2020). https://montrealethics.ai/oct2020
Published in Future of Privacy Forum 2020 - Privacy & Pandemics: Responsible Uses of Technology & Health Data, 2020
COVID-19 has exposed glaring holes in our existing digital, data, and organizational practices. Researchers ensconced in epidemiological and human health work have repeatedly pointed out how urban encroachment, climate change, and other human-triggered activities and patterns are going to make zoonotic pandemics more frequent and commonplace. The Gray Rhino mindset provides a useful reframing (as opposed to viewing pandemics such as the current one as a Black Swan event) that can help us recover faster from these (increasingly) frequent occurrences and build resiliency in our digital, data, and organizational infrastructure. The social and economic impacts of pandemics can be eased by building infrastructure that elucidates leading indicators via passive intelligence gathering, so that responses to contain the spread of pandemics are not blanket measures; instead, they can be fine-grained, allowing for more efficient utilization of scarce resources and minimizing disruption to our way of life.
Recommended citation: Gupta, Abhishek. "The Gray Rhino of Pandemic Preparedness: Proactive digital, data, and organizational infrastructure to help humanity build resilience in the face of pandemics." arXiv preprint arXiv:2011.02773 (2020). https://fpf.org/wp-content/uploads/2020/10/5-The-Gray-Rhino-of-Pandemic-Preparedness-pages-Main-Article.pdf
Published in RE-WORK Blog, 2020
An outlook for things to expect in the domain of AI ethics in 2021
Recommended citation: Gupta, Abhishek. “Ethics Experts Gives 2021 Predictions.” RE, RE•WORK Blog - AI & Deep Learning News, 3 Dec. 2020, https://blog.re-work.co/what-can-we-look-forward-to-in-ai-ethics-in-2021/. https://blog.re-work.co/what-can-we-look-forward-to-in-ai-ethics-in-2021/
Published in NeurIPS 2020 Resistance AI Workshop, 2020
Decoded Reality is a creative exploration of the power dynamics that shape the design, development, and deployment of machine learning and data-driven systems.
Recommended citation: Khan, Falaah Arif, and Abhishek Gupta. “Decoded Reality.” Resistance AI Workshop @ NeurIPS 2020, NeurIPS 2020, 11 Dec. 2020, https://sites.google.com/view/resistance-ai-neurips-20/home?authuser=0. https://sites.google.com/view/resistance-ai-neurips-20/accepted-papers-and-media?authuser=0
Published in Towards Data Science, 2021
Tips on enhancing the procurement process for AI products to improve Responsible AI outcomes in practice in the use of AI systems in the public sector
Recommended citation: Gupta, Abhishek. “Prudent Public-Sector Procurement of AI Products.” Medium, Towards Data Science, 17 Jan. 2021, https://towardsdatascience.com/prudent-public-sector-procurement-of-ai-products-779316d513a8. https://towardsdatascience.com/prudent-public-sector-procurement-of-ai-products-779316d513a8
Published in Towards Data Science, 2021
Outlining how civic competence is one of the more effective instruments for addressing ethics, safety, and inclusivity concerns in AI
Recommended citation: Gupta, Abhishek. “Why Civic Competence in AI Ethics Is Needed in 2021?” Medium, Towards Data Science, 18 Jan. 2021, https://towardsdatascience.com/why-civic-competence-in-ai-ethics-is-needed-in-2021-697ca4bed688. https://towardsdatascience.com/why-civic-competence-in-ai-ethics-is-needed-in-2021-697ca4bed688
Published in Towards Data Science, 2021
To make discussions in AI more inclusive and effective, we need to move beyond haughty credentials and instead focus on lived experiences, multidisciplinary backgrounds, and a person’s body of work.
Recommended citation: Gupta, Abhishek. "To Achieve Responsible AI, Close the Believability Gap." Medium, Towards Data Science, 23 Jan. 2021, https://towardsdatascience.com/to-achieve-responsible-ai-close-the-believability-gap-cf809dc81c1e. https://towardsdatascience.com/to-achieve-responsible-ai-close-the-believability-gap-cf809dc81c1e
Published in Montreal AI Ethics Institute, 2021
The 3rd edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field’s ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is ‘The Abuse and Misogynoir Playbook,’ written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D’Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women’s contributions to knowledge and scholarship in the United States and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (January 2021)." arXiv preprint arXiv:2105.09059 (2021). https://arxiv.org/abs/2105.09059
Published in Submitted to the National Security Commission on AI, 2021
This report prepared by the Montreal AI Ethics Institute provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the Norm rather than an Exception. It does so by utilizing the guiding principles of: (1) alleviating friction in existing workflows, (2) empowering stakeholders to get buy-in, and (3) conducting an effective translation of abstract standards into actionable engineering practices. After providing some overarching comments on the document from the NSCAI, the report dives into the primary contribution of an actionable framework to help operationalize the ideas presented in the document from the NSCAI. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically-driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce each other to move from principles to practice in service of making Responsible AI the norm rather than the exception.
Recommended citation: Gupta, Abhishek. "Making Responsible AI the Norm rather than the Exception." arXiv preprint arXiv:2101.11832 (2021). https://arxiv.org/abs/2101.11832
Published in Towards Data Science, 2021
A list of actionable steps that one can take to move towards practicing Responsible AI rather than just talking about it.
Recommended citation: Gupta, Abhishek. “Small Steps to *Actually* Achieving Responsible AI.” Medium, Towards Data Science, 7 Feb. 2021, https://towardsdatascience.com/small-steps-to-actually-achieving-responsible-ai-69f998f9eefb. https://towardsdatascience.com/small-steps-to-actually-achieving-responsible-ai-69f998f9eefb
Published in Vidhi Center for Legal Policy, 2021
India needs more indigenous research on ethical & governance frameworks for the application of AI.
Recommended citation: Jauhar, Ameen, and Abhishek Gupta. “AI for AI - Developing Artificial Intelligence for an ATMANIRBHAR India.” Vidhi Centre for Legal Policy, Vidhi Centre for Legal Policy, 23 Feb. 2021, https://vidhilegalpolicy.in/blog/ai-for-ai-developing-artificial-intelligence-for-an-atmanirbhar-india/. https://vidhilegalpolicy.in/blog/ai-for-ai-developing-artificial-intelligence-for-an-atmanirbhar-india/
Published in Towards Data Science, 2021
A quick tour through the ethical and governance issues
Recommended citation: Gupta, Abhishek. “Introduction to Ethics in the Use of AI in War.” Medium, Towards Data Science, 24 Feb. 2021, https://towardsdatascience.com/introduction-to-ethics-in-the-use-of-ai-in-war-9e9bf8ba71ba. https://towardsdatascience.com/introduction-to-ethics-in-the-use-of-ai-in-war-9e9bf8ba71ba
Published in Towards Data Science, 2021
How to play a more active role in the change that happens in the field of AI ethics
Recommended citation: Gupta, Abhishek. “Becoming an Upstander in AI Ethics.” Medium, Towards Data Science, 2 Mar. 2021, https://towardsdatascience.com/becoming-an-upstander-in-ai-ethics-577a38b23e45. https://towardsdatascience.com/becoming-an-upstander-in-ai-ethics-577a38b23e45
Published in Springer Nature AI & Ethics Journal Blog, 2021
A discussion on the business considerations and importance of AI Ethics; summarized thoughts from the Springer Nature AI & Ethics Journal Panel
Recommended citation: Gupta, Abhishek. “AI, Ethics, & Your Business.” AI, Ethics, & Your Business | For Librarians | Springer Nature, Springer Nature, 3 Mar. 2021, https://www.springernature.com/gp/librarians/landing/aiandethics. https://www.springernature.com/gp/librarians/landing/aiandethics
Published in Towards Data Science, 2021
Building more trust with your customers and showcasing maturity in the AI ethics methodology of an organization
Recommended citation: Gupta, Abhishek. “Get Transparent about Your AI Ethics Methodology.” Medium, Towards Data Science, 8 Mar. 2021, https://towardsdatascience.com/get-transparent-about-your-ai-ethics-methodology-ec88103aa28. https://towardsdatascience.com/get-transparent-about-your-ai-ethics-methodology-ec88103aa28
Published in Towards Data Science, 2021
An organizational and design methodology for making AI ethics easier to implement in the existing workflows of your employees
Recommended citation: Gupta, Abhishek. “Simple Prompts to Make the Right Decisions in AI Ethics.” Medium, Towards Data Science, 18 Mar. 2021, https://towardsdatascience.com/simple-prompts-to-make-the-right-decisions-in-ai-ethics-e0475bda8f41. https://towardsdatascience.com/simple-prompts-to-make-the-right-decisions-in-ai-ethics-e0475bda8f41
Published in Springer AI & Ethics Journal, 2021
A macro perspective examining the general nature of AI implementations, and how enforcement should be structured for this new frontier of AI technologies, is sorely needed. The paper critically analyzes real and potential ethical impacts of AI-enabled systems as well as the standard process regulators, researchers, and firms use to assess the risks of these technologies.
Recommended citation: Huang, Jimmy Yicheng, Abhishek Gupta, and Monica Youn. "Survey of EU ethical guidelines for commercial AI: case studies in financial services." AI and Ethics 1.4 (2021): 569-577. https://link.springer.com/article/10.1007/s43681-021-00048-1
Published in Shanghai Institute for Science of Science, 2021
The report was contributed by 52 experts from 47 institutions, including AI scientists, academic researchers, industry representatives, policy experts, and others. This group of experts covers a wide range of regional developments and perspectives, including those in the United States, Europe and Asia.
Recommended citation: Shanghai Institute for Science of Science. “AI Governance in 2020 - A Year in Review: Observations from 52 Global Experts.” https://www.aigovernancereview.com/. https://www.aigovernancereview.com/
Published in Towards Data Science, 2021
The exercise of goal setting helps us foreground the reasons for doing a certain project. This is the first step in making sure that we can centre responsible AI principles in the project and inject that in the foundations.
Recommended citation: Gupta, Abhishek. “The Importance of Goal Setting in Product Development to Achieve Responsible AI.” Medium, Towards Data Science, 23 Mar. 2021, https://towardsdatascience.com/the-importance-of-goal-setting-in-product-development-to-achieve-responsible-ai-eda040809292. https://towardsdatascience.com/the-importance-of-goal-setting-in-product-development-to-achieve-responsible-ai-eda040809292
Published in Towards Data Science, 2021
In a highly fragmented software ecosystem, this article explores the role that FOSS can play in holding the field more accountable in the tools that are built for Responsible AI
Recommended citation: Gupta, Abhishek. “Why Free and Open Source Software (FOSS) Should Be the Future of Responsible AI.” Medium, Towards Data Science, 29 Mar. 2021, https://towardsdatascience.com/why-free-and-open-source-software-foss-should-be-the-future-of-responsible-ai-a3691b47fd79. https://towardsdatascience.com/why-free-and-open-source-software-foss-should-be-the-future-of-responsible-ai-a3691b47fd79
Published in Towards Data Science, 2021
Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy, others relate to business metrics. But each requires careful consideration, as each has consequences for the final outcome of the system. This article provides details on how to make tradeoff determinations in an effective manner.
Recommended citation: Gupta, Abhishek. “Tradeoff Determination for Ethics, Safety, and Inclusivity in AI Systems.” Medium, Towards Data Science, 4 Apr. 2021, https://towardsdatascience.com/tradeoff-determination-for-ethics-safety-and-inclusivity-in-ai-systems-60f20a3d0d0c. https://towardsdatascience.com/tradeoff-determination-for-ethics-safety-and-inclusivity-in-ai-systems-60f20a3d0d0c
Published in Towards Data Science, 2021
When thinking about building an AI system and the impact that it might have on society, it is important to take a systems design thinking approach to be as comprehensive as possible in assessing the impacts and proposing redressal mechanisms.
Recommended citation: Gupta, Abhishek. “Systems Design Thinking for Responsible AI.” Medium, Towards Data Science, 19 Apr. 2021, https://towardsdatascience.com/systems-design-thinking-for-responsible-ai-a0e51a9a2f97. https://towardsdatascience.com/systems-design-thinking-for-responsible-ai-a0e51a9a2f97
Published in Towards Data Science, 2021
Given that the sociotechnical environments within which AI systems are deployed are inherently dynamic and complex, these systems need to be adaptable so that the negative consequences arising from their deployment can be mitigated.
Recommended citation: Gupta, Abhishek. “The Importance of Systems Adaptability for Meaningful Responsible AI Deployment.” Medium, Towards Data Science, 26 Apr. 2021, https://towardsdatascience.com/the-importance-of-systems-adaptability-for-meaningful-responsible-ai-deployment-a14e6ccd0f35. https://towardsdatascience.com/the-importance-of-systems-adaptability-for-meaningful-responsible-ai-deployment-a14e6ccd0f35
Published in Montreal AI Ethics Institute, 2021
The 4th edition of the Montreal AI Ethics Institute’s The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled ‘AI and the Face: A Historian’s View.’ In it, Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 4)." arXiv preprint arXiv:2105.09060 (2021). https://montrealethics.ai/volume4/
Published in Towards Data Science, 2021
Resiliency is a key idea if we want to build sustainable, ethical, safe, and inclusive AI systems that don’t succumb to failures over their lifespan.
Recommended citation: Gupta, Abhishek. “Building for Resiliency in AI Systems.” Medium, Towards Data Science, 29 May 2021, https://towardsdatascience.com/building-for-resiliency-in-ai-systems-24eed076d3d6. https://towardsdatascience.com/building-for-resiliency-in-ai-systems-24eed076d3d6
Published in Microsoft Developer Blogs, 2021
Digital services consume a lot of energy, and in a world of accelerating climate change, we must be conscious of our carbon footprint in all parts of life. In the case of the software that we write, and specifically the AI systems we build, these considerations become even more important because of the large upfront computational resources that training some large AI models consumes, and the subsequent carbon emissions. Thus, effective carbon accounting for artificial intelligence systems is critical!
Recommended citation: Gupta, Abhishek. “The Current State of Affairs and a Roadmap for Effective Carbon-Accounting Tooling in AI.” Sustainable Software, 17 June 2021, https://devblogs.microsoft.com/sustainable-software/the-current-state-of-affairs-and-a-roadmap-for-effective-carbon-accounting-tooling-in-ai/. https://devblogs.microsoft.com/sustainable-software/the-current-state-of-affairs-and-a-roadmap-for-effective-carbon-accounting-tooling-in-ai/
Published in Towards Data Science, 2021
This article addresses the common challenges that someone building an AI ethics team at an organization is likely to face and what they can do to overcome those challenges.
Recommended citation: Gupta, Abhishek. “How to Build an AI Ethics Team at Your Organization?” Medium, Towards Data Science, 10 July 2021, https://towardsdatascience.com/how-to-build-an-ai-ethics-team-at-your-organization-373823b03293. https://towardsdatascience.com/how-to-build-an-ai-ethics-team-at-your-organization-373823b03293
Published in Data & Policy Journal, Cambridge University Press, 2021
Data sharing efforts allow underserved groups and organizations to overcome the concentration of power in our data landscape. A few special organizations, due to their data monopolies and resources, are able to decide which problems to solve and how to solve them. But even though data sharing creates a counterbalancing democratizing force, it must nevertheless be approached cautiously. Underserved organizations and groups must navigate difficult barriers related to technological complexity and legal risk. To examine what those common barriers are, one type of data sharing effort, data trusts, is examined, specifically through the reports commenting on that effort. To address these practical issues, data governance technologies have a large role to play in democratizing data trusts safely and in a trustworthy manner. Yet technology is far from a silver bullet, and it is dangerous to rely upon it alone. Technology that is no-code, flexible, and secure can help operate data trusts more responsibly. This type of technology helps innovators put relationships at the center of their efforts.
Recommended citation: Wu, D., Verhulst, S., Pentland, A., Avila, T., Finch, K., & Gupta, A. (2021). How data governance technologies can democratize data sharing for community well-being. Data & Policy, 3. https://www.cambridge.org/core/journals/data-and-policy/article/how-data-governance-technologies-can-democratize-data-sharing-for-community-wellbeing/2BFB848644589873C00E22ADEA6E8AB3
Published in Branch Magazine, 2021
AI systems are not without their flaws. There are many ethical issues to consider when thinking about deploying AI systems into society—particularly environmental impacts.
Recommended citation: Gupta, Abhishek. “What Does Ecologically Responsible AI Look like?” Branch, 21 July 2021, https://branch.climateaction.tech/issues/issue-2/secure-framework/. https://branch.climateaction.tech/issues/issue-2/secure-framework/
Published in Montreal AI Ethics Institute, 2021
This report from the Montreal AI Ethics Institute covers the most salient progress in research and reporting over the second quarter of 2021 in the field of AI ethics with a special emphasis on “Environment and AI”, “Creativity and AI”, and “Geopolitics and AI.” The report also features an exclusive piece titled “Critical Race Quantum Computer” that applies ideas from quantum physics to explain the complexities of human characteristics and how they can and should shape our interactions with each other. The report also features special contributions on the subject of pedagogy in AI ethics, sociology and AI ethics, and organizational challenges to implementing AI ethics in practice. Given the mission of MAIEI to highlight scholars from around the world working on AI ethics issues, the report also features two spotlights sharing the work of scholars operating in Singapore and Mexico helping to shape policy measures as they relate to the responsible use of technology. The report also has an extensive section covering the gamut of issues when it comes to the societal impacts of AI covering areas of bias, privacy, transparency, accountability, fairness, interpretability, disinformation, policymaking, law, regulations, and moral philosophy.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 5)." arXiv preprint arXiv:2108.03929 (2021). https://montrealethics.ai/volume5/
Published in The Gradient, 2021
AI systems are compute-intensive: the AI lifecycle often requires long-running training jobs, hyperparameter searches, inference jobs, and other costly computations. They also require massive amounts of data that might be moved over the wire, and require specialized hardware to operate effectively, especially large-scale AI systems. All of these activities require electricity — which has a carbon cost. There are also carbon emissions in ancillary needs like hardware and datacenter cooling. Thus, AI systems have a massive carbon footprint. This carbon footprint also has consequences in terms of social justice, as we will explore in this article. Here, we use sustainability to talk about not just environmental impact, but also social justice implications and impacts on society. Though applying AI to solve environmental issues is an important area, that is not what we mean by sustainable AI here. Instead, a critical examination of the impacts of AI on the physical and social environment is the focus of our discussion.
Recommended citation: Gupta, Abhishek. “The Imperative for Sustainable AI Systems.” The Gradient, The Gradient, 6 Dec. 2021, https://thegradient.pub/sustainable-ai/. https://thegradient.pub/sustainable-ai/
Published in Green Software Foundation, 2021
AI systems can have significant environmental impact. We are risking severe environmental and social harm if we fail to make greener AI systems.
Recommended citation: Gupta, Abhishek. “What Do We Need to Build More Sustainable AI Systems?” Green Software Foundation, 26 Oct. 2021, https://greensoftware.foundation/articles/what-do-we-need-to-build-more-sustainable-ai-systems. https://greensoftware.foundation/articles/what-do-we-need-to-build-more-sustainable-ai-systems
Published in Green Software Foundation, 2021
The Software Carbon Intensity (SCI) standard gives an actionable approach to software designers, developers and deployers to measure the carbon impacts of their systems.
Recommended citation: Gupta, Abhishek. “Software Carbon Intensity: Crafting a Standard.” Green Software Foundation, 27 Oct. 2021, https://greensoftware.foundation/articles/software-carbon-intensity-crafting-a-standard. https://greensoftware.foundation/articles/software-carbon-intensity-crafting-a-standard
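For orientation only (this summary is not quoted from the article), the Green Software Foundation's SCI specification is commonly expressed as SCI = ((E x I) + M) per R: operational energy times grid carbon intensity, plus embodied emissions, normalized by a functional unit. A minimal sketch of that arithmetic, with entirely hypothetical numbers:

def software_carbon_intensity(energy_kwh: float,
                              grid_intensity_gco2_per_kwh: float,
                              embodied_gco2: float,
                              functional_units: float) -> float:
    # SCI = ((E * I) + M) per R, reported in gCO2eq per functional unit.
    # E: energy consumed within the software boundary (kWh)
    # I: location-based marginal carbon intensity of the grid (gCO2eq/kWh)
    # M: embodied hardware emissions amortized to this software (gCO2eq)
    # R: functional unit, e.g. number of API calls served
    return (energy_kwh * grid_intensity_gco2_per_kwh + embodied_gco2) / functional_units

# Hypothetical numbers: 12 kWh of energy on a 400 gCO2eq/kWh grid,
# 800 gCO2eq of amortized embodied emissions, across 10,000 API calls.
print(software_carbon_intensity(12.0, 400.0, 800.0, 10_000))  # 0.56 gCO2eq per call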
Published in Expert Speak, The Observer Research Foundation, 2021
How AI systems impact the environment and how we can be more sustainable in their design, development, and deployment.
Recommended citation: Gupta, Abhishek. “The Machines Rage against the Planet.” ORF, 20 Oct. 2021, https://www.orfonline.org/expert-speak/the-machines-rage-against-the-planet/. https://www.orfonline.org/expert-speak/the-machines-rage-against-the-planet/
Published in Green Software Foundation, 2021
Should sustainability be a first-class consideration for AI systems? Yes, because AI systems have environmental and societal implications. What can you do to make green AI a reality?
Recommended citation: Gupta, Abhishek. “Sustainability Should Be a Key Consideration for AI Systems.” Green Software Foundation, 27 Oct. 2021, https://greensoftware.foundation/articles/why-should-sustainability-be-a-first-class-consideration-for-ai-systems. https://greensoftware.foundation/articles/why-should-sustainability-be-a-first-class-consideration-for-ai-systems
Published in Proceedings of the 17th Annual Social Informatics Research Symposium and the 3rd Annual Information Ethics and Policy Workshop, 2021
Governments across the world have increasingly focused on creating national policy frameworks to take advantage of AI developments for their strategic national interests, as well as to adapt and adjust AI technologies that operate within their socio-cultural and political constraints (Schiff et al., 2020). However, empirical research to date has mainly utilized AI-related ethics documents produced by governments in the Global North. In this study, we present a critical analysis of Responsible AI #AIforAll: Approach Document for India (hereafter, the Approach Paper), a national AI strategy document published by NITI Aayog, a premier public policy think-tank of the Government of India. This document is one of the first of its kind in the Global South. Not only would it serve as an important public policy reference for creating and discussing responsible AI in India, but it also has the potential to serve as an exemplary policy document for other developing countries. We identify and discuss key shortcomings of the document, such as a lack of Indian context, deterministic framing, epistemic incompleteness, and inaccuracies. We conclude with a list of recommendations for improving the process of generating a national strategy document on responsible AI.
Recommended citation: Than, N., Gupta, A., & Jauhar, A. (2021, October). Critical Analysis of “Responsible AI# AIforAll: Approach Document for India”. In Proceedings of the 17th Annual Social Informatics Research Symposium and the 3rd Annual Information Ethics and Policy Workshop. https://www.ideals.illinois.edu/handle/2142/111789
Published in Post Pandemic University 2020 Conference, 2021
The pandemic has shattered the traditional enclosures of learning. The post-pandemic university (PPU) will no longer be contained within the four walls of a lecture theatre, nor end once students have left the premises. The use of online services has now blended home and university life, and the PPU needs to reflect this. Our proposed continuous learning model takes advantage of the newfound omnipresence of learning while remaining dynamic enough to continually adapt to the ever-evolving virus situation. Universities that restrict themselves to fixed subject themes, forgotten once completed, will miss out on the ‘fresh start’ presented by the virus.
Recommended citation: Gupta, Abhishek, and Connor Wright. "The Co-Designed Post-Pandemic University: A Participatory and Continual Learning Approach for the Future of Work." arXiv preprint arXiv:2112.05751 (2021). https://postpandemicuniversity.net/2020/09/06/the-co-designed-post-pandemic-university-a-participatory-and-continual-learning-approach-for-the-future-of-work/
Published in Branch Magazine, Green Software Foundation, and Data Center Dynamics, 2022
In measuring the energy consumption of software, a move towards multi-dimensional metrics supplemented with rich metadata offers better opportunities to implement actions that actually make software greener (a hypothetical example of such a metric record follows below).
Recommended citation: Gupta, Abhishek. “The Need to Move beyond Single-Dimensional Metrics to Guide Digital Greening.” Branch Magazine, Branch, 7 Dec. 2021, https://branch.climateaction.tech/issues/issue-3/beyond-single-dimensional-metrics-for-digital-sustainability/. https://branch.climateaction.tech/issues/issue-3/beyond-single-dimensional-metrics-for-digital-sustainability/
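As a concrete illustration of what "metadata-supplemented" could mean in practice, the hypothetical record below pairs a raw energy figure with the contextual dimensions (location, grid intensity, time window, hardware, workload) that make the number actionable. The fields are assumptions for illustration, not a schema from the article.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EnergyMeasurement:
    """A single energy reading enriched with the context needed to act on it.
    Field choices are illustrative, not a published schema."""
    kwh: float                        # raw energy figure (the single-dimensional metric)
    region: str                       # where the workload ran
    grid_intensity_kg_per_kwh: float  # carbon intensity of that grid at the time
    window_start: datetime
    window_end: datetime
    hardware: str                     # e.g., accelerator type
    workload: str                     # e.g., "training" or "inference"

    @property
    def co2e_kg(self) -> float:
        return self.kwh * self.grid_intensity_kg_per_kwh

m = EnergyMeasurement(
    kwh=120.0,
    region="ca-east",
    grid_intensity_kg_per_kwh=0.03,   # hypothetical low-carbon grid
    window_start=datetime(2021, 12, 1, 2, 0),
    window_end=datetime(2021, 12, 1, 8, 0),
    hardware="8x GPU node",
    workload="training",
)
print(f"{m.co2e_kg:.2f} kg CO2e in {m.region}")
```

The same 120 kWh reading can translate into very different emissions depending on region and time of day, which is exactly the kind of action-guiding signal a single-dimensional metric hides.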
Published in Ethical Intelligence Equation Issue 2, 2022
From controlling lighting to choosing what music gets played and checking whether there is enough milk in the fridge, smart technologies have permeated all facets of our homes.
Recommended citation: Gupta, Abhishek. “Assisting a More Accessible Home.” Equation Issue 2, Ethical Intelligence, 26 Jan. 2022, https://www.ethicalintelligence.co/equation-issue-two. https://www.ethicalintelligence.co/equation-issue-two
Published in Montreal AI Ethics Institute, 2022
This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an “Analysis of the AI Ecosystem”, “Privacy”, “Bias”, “Social Media and Problematic Information”, “AI Design and Governance”, “Laws and Regulations”, “Trends”, and other areas covered in the “Outside the Boxes” section. The two AI spotlights feature application pieces on “Constructing and Deconstructing Gender with AI-Generated Art” as well as “Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?”. Given the mission of MAIEI to democratize AI, submissions from external collaborators are featured, such as pieces on the “Challenges of AI Development in Vietnam: Funding, Talent and Ethics” and using “Representation and Imagination for Preventing AI Harms”. The report is a comprehensive overview of the key issues in the field of AI ethics in 2021, the trends that are emerging, the gaps that remain, and a peek into what to expect from the field in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas and make contributions to the field of AI ethics.
Recommended citation: Gupta, Abhishek, et al. "The State of AI Ethics Report (Volume 6, February 2022)." arXiv preprint arXiv:2202.07435 (2022). https://montrealethics.ai/volume6/
Published in arXiv, 2022
This paper outlines a conceptual framework titled The Golden Circle that describes the roles of actors at individual, organizational, and societal levels, and their dynamics in the content moderation ecosystem. Centering harm reduction and context moderation, it argues that the ML community must attend to multimodal content moderation solutions, align their work with their organizations’ goals and values, and pay attention to the ever-changing social contexts in which their sociotechnical systems are embedded. This is done by accounting for the why, how, and what of content moderation through a sociological and technical lens.
Recommended citation: Gupta, A., Kozlowska, I., & Than, N. (2022). The Golden Circle: Creating Socio-technical Alignment in Content Moderation. arXiv preprint arXiv:2202.13500. https://arxiv.org/abs/2202.13500
Published:
Montreal Delegation Organizer - District 3 : Discussion of AI Ethics work being done in Montreal
Published:
Panelist : Table-ronde : Les algorithmes dans l’économie numérique
Published:
Speaker : Exploring AI Ethics and NLP using Chatbots
Published:
Speaker : Meticulous Transparency — A Necessary Practice for Ethical AI
Published:
Facilitator And Co-Organizer : Government and AI
Published:
Organizer And Facilitator : Ethical Development of AI - A Practical Approach - Workshop 1
Published:
Panelist : Artificial Intelligence and Social Inclusion
Published:
Speaker : Ethical Implications of using AI in Smart Cities
Published:
Panelist : As Our AI Systems Become More Capable, Should Ethics be an Integral Component to your Business Strategy?
Published:
Panelist : What are the Main Practical Safety Issues with AI Products?
Published:
Speaker : Session #1: The New Intelligence
Published:
Panelist : Artificial Intelligence and Ethics – the requirement for transparency
Published:
Speaker : Lightning Talk on the ethical development of AI
Published:
Panelist : Ethics in AI: Meet the Experts
Published:
Canada Delegation Lead : Discussion of AI Ethics work being done in Canada
Published:
Panelist : Workers Data Rights - Making sure the human remains in human resources
Published:
Speaker : Ethical Development of AI: A Practical Approach
Published:
Contributor : Practical Industry and Academia experiences: AI ethics
Published:
Speaker : Session on understanding the impacts of AI in the field of finance
Published:
Speaker : Inclusive design - How do we ensure a high degree of participation in Artificial Intelligence (AI) systems?
Published:
Program Committee : Special Track: Artificial Intelligence, Law and Justice
Published:
Panelist : Global Policy Surrounding AI and Autonomous Systems
Published:
Panelist : Humanising AI and the Ethical Implications of Technology
Published:
Panelist : Panel on regulations and legal frameworks on data privacy
Published:
Panelist : How can we use AI ethically, transparently, and safely?
Published:
Speaker : Privacy and Security in AI
Published:
Panelist : Addressing Tech’s Ethical Dark Side: How Can We Ensure that AI is Used for the Good
Published:
Panelist : Social Debates and Issues in Public Policy Analysis - Artificial Intelligence Panel
Published:
Speaker : An introduction to the ethical development of AI
Published:
Speaker : The Future of AI: Utopia or Dystopia?
Published:
Speaker : Ethical Development of AI : A Practical Approach
Published:
Speaker : How might an ethically and morally-informed AI be conceived in a culturally diverse global context?
Published:
Contributor - AI Expert : Guiding discussions from a technical and policy perspective on the impact that AI will have on wellbeing
Published:
Panelist : AI for Law Enforcement and Fraud
Published:
Speaker : AI Ethics: Current Challenges
Published:
Speaker : A roadmap for addressing organizational barriers in implementing Responsible AI practices.
Published:
Panelist : AI Innovation: Where do we fit in?
Published:
Panelist : Impact of AI and Big Data on Society
Published:
Panelist : Ethics and Responsible Development of AI: Necessity or Opportunity
Published:
Invited Speaker : Overview of AI Ethics
Published:
Panelist : Discussion on the ethical impacts of AI
Published:
Moderator : AI automation and employee replacement – What precautions can be taken to avoid sector-specific unemployment?
Published:
Panelist : Development of AI-enabled technology that is effective while preserving user privacy.
Published:
Speaker : Applied AI Ethics: Building ethical, safe, and inclusive AI apps to fight COVID-19
Published:
Speaker : Introduction to Artificial Intelligence and the ethical development of AI
Published:
Guest Lecturer : Implementing AI ethics in research projects
Published:
Area Chair : Problems and Demos Track
Published:
Moderator : Ethics, Fairness, and Bias in AI
Published:
Invited Expert : Session 1: Accessibility of SARS-CoV-2 Data
Published:
Panelist : Webinar series on the technological, military and legal aspects of lethal autonomous weapon systems
Published:
Speaker : AI ethics groups are repeating one of society’s classic mistakes
Published:
Panelist : AI, Ethics, and Your Business
Published:
Speaker : Ethics in the use of AI in War
Published:
Speaker : Technology and Social Justice
Published:
Speaker : Presentation on bias and AI ethics, and on the organizational changes required, from a business and management perspective, to put these ideas into practice
Published:
AI Expert : Joint workshop hosted by UNODA and Nanyang Technological University, National University of Singapore, and Singapore University of Technology and Design
Published:
Panelist : What can we learn from the Physics community to advance the building of ethical AI systems?
Published:
Speaker : Designing for Humans - Responsible AI - Getting Responsible AI in Practice
Published:
Speaker : Presentation on the subject of navigating the jungle of policymaking as technologists
Published:
Speaker : Presentation on the subject of mutualism as a way to build trustworthy AI at a global level
Published:
Panelist : Discussion with the authors of the Abuse and Misogynoir Playbook that was featured as a part of the Montreal AI Ethics Institute’s State of AI Ethics Report January 2021
Published:
Co-Organizer : The goal of this event is to share the evolution of research ideas through specific examples of negative results, retrospectives, and project post-mortems.
Published:
Speaker : Presentation on operationalizing AI ethics and how to move from principles to practice.
Published:
Panelist : Unpacking the policy lens and considerations of digital transformation, and exploring what these considerations might mean for employees and senior leaders within the public service.
Published:
Guest Lecturer : Presentation to senior undergraduate and graduate students in the Computer Science programme at McGill University on how to operationalize AI ethics as it relates to industry and research work
Published:
Technical Expert : Table-top exercises in partnership with military, legal, and technical experts to assess meaningful human control in the context of autonomous weapons systems.
Published:
Guest Lecturer : Guest lecture for MBA students on key lessons for organizations in responsible AI.
Published:
Speaker : Talk on The Lab Notebook: Bringing Science Back to Data Science
Published:
Invited Speaker : Using lab notebooks to bring back rigorous science to data science
Published:
Moderator : Panel discussing the research and reporting in Q2 2021 based on The State of AI Ethics Report Volume 4 published by the Montreal AI Ethics Institute
Published:
Speaker : Ethics in the use of AI in war
Published:
Speaker : A roadmap to more sustainable AI systems
Published:
Poster Presentation : Carbon accounting as a way to build more sustainable AI systems: An analysis and roadmap for the community
Published:
Panelist : Ethical and Transparent Artificial Intelligence
Published:
Moderator : To achieve the promise of AI for societal impact, black-box models must not only be ‘accurate’ but also satisfy trustworthiness properties that facilitate open collaboration and ensure ethical outcomes. The purpose of this un-symposium is to discuss the interdisciplinary topics of robustness, fairness, privacy, and ethics of AI tools. In particular, we want to highlight the significant gap in deploying these AI models in practice when the stakes are high for commercial applications of AI where millions of human lives are at risk.
Published:
Moderator : A conversation series with experts on Green AI covering an overview, hardware, and tooling to build more sustainable and greener AI systems.
Published:
Moderator : A panel featuring Kathleen Siminyu, Priya L. Donti, and Jason Edward Lewis exploring what alternative AI futures can look like and how we can get there.
Published:
Keynote Speaker : Turning the Gears: Organizational and Technical Challenges in the operationalization of AI Ethics
Published:
This workshop on AI Ethics is being hosted by the Montreal AI Ethics Institute (MAIEI) in partnership with Goethe-Institut to make ideas about the societal impacts of AI more accessible, with the goal of equipping and empowering us all to reshape this powerful technology so that we can achieve a more fair, just, and well-functioning society. The workshop will be centred on the films screened by Goethe-Institut during the week prior to the workshop, along with some other materials provided by MAIEI as suggested readings and viewings. Drawing from those as inspiration, this will be a collaborative workshop bringing together people from all walks of life to discuss the subjects of human-machine interaction, machine-mediated conversations, behavior shaping and nudging, the future of work, and alternative futures in an AI-infused world. The participants of this workshop will walk away with a more nuanced understanding of the impacts that this technology has on our lives and how we can engage our own communities, colleagues, and families in more critically informed discussions to move towards a positive future.
Published:
Panelist : This event addressed the significance of gender issues in the development and deployment of military artificial intelligence (AI) systems, providing an opportunity to present findings from UNIDIR research about gender bias in data collection, algorithms, and computer processing and their implications for AI military systems.
Published:
Panelist : Machine learning technology is advancing at an unprecedented speed. Many major industries are now starting to rely heavily on A.I. to do everything from the most basic tasks to the most complex processes. As technology evolves and our co-dependence on A.I. becomes even stronger, there are still many questions related to the impact of A.I.
Published:
Speaker : Guest Lecture - AI Ethics
Published:
Invited Speaker : This month, as part of Ethics Week, the Ethics Office is pleased to welcome a special guest at the forefront of research in responsible artificial intelligence (AI). Abhishek Gupta, Founder of the Montreal AI Ethics Institute, will speak on ‘Ethics and Artificial Intelligence’. He will present on matters related to applied technical and policy methods to address ethical concerns in using AI.
Published:
Invited Speaker : Our member Abhishek Gupta is convening a group interested to discuss AI and ethics together on a regular basis. In this meeting, the group will discuss the latest State of AI Ethics Report published by the Montreal AI Ethics Institute which Abhishek founded. The State of AI Ethics Report (Volume 6) published in February 2022, captures the latest in research and reporting in the domain of AI ethics.
Published:
Invited Speaker : The workshop will explore the challenges and responsibilities that arise from the development and (commercial) use of AI in a context of socio-economic inequalities as well as differences in cultural values, political systems, regulation, scientific capacities, and other factors.
Published:
Presentation at the Montreal AI Symposium on a framework for ethical development of AI systems
Published:
Featured interview on the ethics of AI, community work and the importance of public competence in building responsible AI systems.
Published:
Presentation at the Brookfield Institute for Entrepreneurship and Innovation on ethics in AI and the moral attributes of intelligent systems
Published:
Overview of my work and how AI can be used to make a positive impact, highlighting the importance of interdisciplinary discussions and public competence for achieving good governance of AI systems
Published:
Interview with BorealisAI that dove into the threat of job loss posed by automation based on the current science, whether bias is the biggest problem we face in responsible AI, and what we should consider reasonable trade-offs for improving fairness.
Published:
In this wide-ranging conversation, the future of a world in which AI becomes important was discussed from a variety of viewpoints, exploring how it might lead to either a utopia or a dystopia.
Published:
An interactive documentary series about the transformative power of artificial intelligence in the field of arts.
Published:
Presentation for the Teens in AI hackathon to guide the participants on applying responsible AI principles to the apps that they were building as a part of the hackathon
Published:
On Machine Learning Street Talk, Dr. Tim Scarfe, Dr. Keith Duggar, Alex Stenlake and Yannic Kilcher have a conversation with the Founder and Principal Researcher at the Montreal AI Ethics Institute – Abhishek Gupta. We cover several topics from the Social Dilemma film and AI Ethics in general.
Published:
This is the first in a three-part Webinar series on the technological, military, and legal aspects of Lethal Autonomous Weapon Systems (LAWS).
Published:
In conversation with Rumman Chowdhury, Danit Gal, Amba Kak, Katya Klinova, and Victoria Heath
Published:
In this webinar, panelists discuss how stakeholders can be more ethical in their decision-making using AI. Topics covered include risk management, regulations, the implementation of ethical AI and the barriers to putting it into practice, AI in Fintech, and a discussion of how the cost of ‘getting it wrong’ outweighs the challenges of application.
Published:
In this interview, the capabilities and limitations of AI systems are discussed as they relate to applications aimed at reducing inequality in the world. This is supplemented by insights on the accompanying societal changes required to bolster the efficacy of these deployments.
Published:
All of us deal with data. A lot of us do data science. And yet only some of us get a chance to really infuse science into that data science work. Ever visit one of your old experiments and find that you want to pull out your hair because you are not sure how you arrived at some of the models that you ended up selecting, why you transformed your data the way you did, and other choices that now seem arbitrary but were perhaps perfectly reasonable then?
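One lightweight way to get at this, offered here purely as an illustrative sketch rather than a prescription from the talk, is to keep an append-only record of each experiment's choices and the rationale behind them alongside the results:

```python
import json
from datetime import datetime, timezone

def log_experiment(path: str, **entry) -> None:
    """Append one experiment record (choices, rationale, results) as a JSON line."""
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: record *why* a choice was made, not just what was run.
log_experiment(
    "lab_notebook.jsonl",
    model="gradient_boosting",
    features_dropped=["zip_code"],
    rationale="zip_code leaked the target via branch location",
    transform="log1p on income to reduce skew",
    validation_auc=0.87,
)
```

Revisiting an old experiment then means reading back the reasoning that was captured at the time, rather than reverse-engineering it from the artifacts.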
Published:
In this TEDx talk, I walk through the notion of civic competence as a way of creating broad-based awareness of the harms that can arise from AI systems and equipping everyday people with the knowledge to challenge these systems, so that we can create a more just, fair, and well-functioning society.
Published:
Workshop: Addressing Bias in Machine Learning Models on Candidate Selection
Published:
AI has a sizeable carbon footprint, both during training and deployment phases. How do we build AI systems that are greener? The first thing we need to understand is how to account for and calculate the carbon impact of all the resources that go into the AI lifecycle. So what is the current state of carbon accounting in AI? How effective has it been? And can we do better? This conversation will answer these questions and dive into what the future of carbon accounting in AI looks like and what role standards can play in this, especially if we want to utilize actionable insights to trigger meaningful behavior change.
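As a starting point for this kind of accounting, a rough estimate for a single training run can be sketched from average power draw, runtime, datacenter overhead, and the local grid's carbon intensity. All numbers below are hypothetical placeholders, not values from the conversation.

```python
# Back-of-the-envelope carbon accounting for one training run.
# All inputs are hypothetical placeholders.

avg_power_kw = 2.4        # measured average draw of the training node(s)
runtime_hours = 72.0      # wall-clock duration of the run
pue = 1.3                 # datacenter power usage effectiveness (overhead)
grid_kg_per_kwh = 0.45    # location-based grid carbon intensity

energy_kwh = avg_power_kw * runtime_hours * pue
co2e_kg = energy_kwh * grid_kg_per_kwh
print(f"{energy_kwh:.0f} kWh -> {co2e_kg:.0f} kg CO2e for this run")
```

Standards matter here because two teams making different choices for overhead factors or grid intensity can report very different numbers for the same workload.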
Published:
The field of AI Ethics has reached a stasis in terms of the applicability of its ideas to real-life scenarios. This talk dives into some of the areas of concern and how to address the challenges so that we can move towards a future with more meaningful and applicable solutions.
Published:
Conversation on the practical challenges in the domain of AI Ethics
Published:
Turning the Gears: Organizational and Technical Challenges in the operationalization of AI Ethics