IVOW AI - CULTURAL INTELLIGENCE FOR AI

Toward a Declaration of Citizen, Machine, and Culture

4/25/2018

IVOW’s Kee Malesky at the 2018 #AICulture Symposium at Morgan State University.
by Davar Ardalan and Kee Malesky

We are the storytelling species. It’s natural then to want to discuss the future of storytelling in artificial intelligence and the cultural implications that come with it.

To consider “Artificial Intelligence, Culture, and Storytelling,” more than 50 thought leaders from a variety of disciplines gathered for a symposium co-sponsored by IVOW and Morgan State University in Baltimore on April 23. Representatives from the United Nations, the Australian government’s innovationXchange program, and Management Systems International, as well as renowned machine learning experts, educators, journalists, technologists, and business leaders from across the US and Mexico, engaged in conversations about AI as a tool for culturally rich and socially conscious storytelling.

By early afternoon, our focus turned to the need for a future “Declaration of Machine, Citizen, and Culture” that could guide engineers, designers, machine learning experts, and users to understand and protect human rights, and to help ensure inclusivity and diversity in the digital world.

We considered the fact that we’ve been modern civilized human beings for about 10,000 years, with evolving levels of self-awareness that have allowed us to ask essential questions, experience individual consciousness and share it with others.

So we asked ourselves: How do we bring the Machine into this discussion of human rights? What issues and concerns are specific to culture-related AI applications? What does human-centered AI look like? What are the rights and privileges of human beings in the digital universe?

As with any new technology, it’s important to create guidelines on the proper ways to utilize these new tools. Do we need to create machines to hold other machines accountable and accurate, or a responsible third party to review new products before launch?

One participant pointed out early on that we need to identify specific issues and current inadequacies; the problem isn’t in the algorithms, it comes from people and society. Data is agnostic and amoral, and diverse datasets do exist, but people have biases. A multidisciplinary approach is essential to finding balanced solutions. Systems will need to be trained to be aware of cultural context. Dominant biases have considerable power to negatively impact the lives of others, so we have to keep humans accountable too. AI expert Mark Riedl of the Georgia Tech School of Interactive Computing suggested that we look to Europe and new laws around AI accountability.

AI expert Mark Finlayson of Florida International University urged us to pause and consider what the problem is first before making any declarations.

Lisha Bell, from Pipeline Angels, brought up the point that some biases have more power than others, and we need to hold humans accountable for making AI balanced and diverse. An AI system must be interrogatable; we should be able to understand why a system made a decision.

Louise Stoddard, who works with the United Nations Office of Least Developed Countries, asked “What are the barriers to access and how can we remove them? Who owns this process?” She stressed the need to listen to stories “from the ground.”

Ellen Yount, Vice President of Management Systems International (MSI), liked the idea of having a charter that encourages us to “Do no harm.” AI developers should consider the social implications of telling stories, as well as any unintended consequences.

A Lesson From History
Ahead of our conversation, we looked at some highlights in the history of human rights — and the granting or asserting of those rights, which have evolved over time and been extended to women, minorities, workers, children, people with disabilities, immigrants, and refugees:

  • Cyrus Cylinder, 6th century BCE (“I permitted all to dwell in peace”)
  • Magna Carta, 1215 (“No Freeman shall be taken or imprisoned but by lawful judgment of his Peers”)
  • US Declaration of Independence, 1776 (all men are “endowed by their Creator with certain unalienable rights”)
  • French Declaration of the Rights of Man and of the Citizen, 1789 (“the natural and imprescriptible rights of man…are liberty, property, safety and resistance against oppression”)
  • US Bill of Rights, 1791 (freedom of religion and the press, and to petition and assemble peaceably)
  • UN Declaration of Human Rights, 1948 (“Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind”)
  • Beijing Declaration, 1995 (“women’s rights are human rights”)
And one idea from 1950s fiction that resonates today: Isaac Asimov’s Laws of Robotics, the first of which says “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

We didn’t intend to produce an actual Declaration of Citizen, Machine, and Culture at the symposium; we wanted to begin a conversation between our IVOW team and a wide variety of experts. We are well aware that others are actively debating how to balance human needs and rights with the challenges of the digital universe, including these current endeavors:

  • Microsoft released a book in January called The Future Computed, on the effects of artificial intelligence on society, which argued that perhaps coders should take a “Hippocratic oath” as physicians do: “First, do no harm.”
  • Al Jazeera held a Future Media Leaders’ Summit last month, to discuss how to frame the ethics behind AI in a human-centric field like journalism. Robots are more efficient, but can they empathize and make human judgments?
  • At MIT Technology Review’s EmTech Digital conference, the Partnership on AI presented its guiding principles: “working to protect the privacy and security of individuals; striving to respect the interests of all parties that may be affected by AI advances; helping keep AI researchers socially responsible; ensuring that AI research and technology is robust and safe; and creating a culture of cooperation, trust, and openness among AI scientists to help achieve these goals.” The conclusion of the conference: For better AI, diversify the people building it.
  • AI-4-ALL is a nonprofit that runs summer programs teaching AI to students from underrepresented groups. “AI will change the world; who will change AI?” is their tagline. “Our vision is for AI to be developed by a broad group of thinkers and doers advancing AI for humanity’s benefit.”
  • From The Future Computed: “Artificial Intelligence can serve as a catalyst for progress in almost every area of human endeavor. But, as with any innovation that pushes us beyond current knowledge and experience, the advent of AI raises important questions about the relationship between people and technology, and the impact of new technology-driven capabilities on individuals and communities. We are the first generation to live in a world where AI will play an expansive role in our daily lives.”

As the declaration workshop came to a close, IVOW’s product designer Nisa McCoy pointed out that design isn’t perfect: we need to look at the initial intention of any technology and then review it, and to know more about a product and its impact before it’s launched. Apps and products must be safe, reliable, private, secure, inclusive, transparent, and accountable. We need a cross-cultural understanding of the audience and users.
After the session, one of the symposium attendees, Paris Adkins-Jackson, founder and CEO of DataStories by Seshat, a research, evaluation, and data analysis company, compiled this summary of the highlights of our discussion, putting it in the form of a declaration:

We declare that citizen, machine, and culture are inherently and essentially connected and in communication with each other; and
We declare that those connections produce inequities and amplify biases;
Thus, we declare that when engaging research, product development, or other actions related to Artificial Intelligence and Machine Learning, there must be a thorough examination prior to development of the potential impact of any product on people including access, appropriation, and amplification of injustice; and
We declare that such information will be a large component of the criteria in the decision to develop the product;
Also, we declare that a system of accountability be developed to mitigate any unforeseen challenges that arise which indicate there has been an adverse impact on people;
Lastly, we declare we will make all efforts possible to work with diverse and non-traditional disciplines to investigate impact, and develop and implement accountability platforms as well as to assist with product development.
