IVOW AI - CULTURAL INTELLIGENCE FOR AI

What We're Up To

LET'S TEACH MACHINES TO BE CULTURED

10/8/2018

 
(Photo courtesy of Miguel Gandert)
By Davar Ardalan and Kee Malesky
A research assistant at MIT Media Lab was working on facial recognition in social robots, and discovered that the software was unable to detect her face. Why? Because she’s black and the code libraries that software developers draw from are not diverse or inclusive, so her dark skin tone didn’t fit the “normal” model. She had to wear a white mask to get the camera and software to recognize her. That researcher, Joy Buolamwini, went on to found the Algorithmic Justice League to “increase awareness, report bias, and develop practices for accountability in design, development, and deployment of coded systems.”


At IVOW, we’re working toward Deeply Inclusive AI. We believe that an effective fusion of AI, culture, and storytelling can help diminish bias in algorithmic identification and train AI software to be far more inclusive. Earlier this year, we experimented with a culturally sensitive storytelling prototype that could recognize diverse elements of photographs. In one test with AWS Rekognition, the service identified a ceremonial male dancer wearing a large headdress at a parade as female.
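We haven’t published the exact test script, but a minimal sketch of that kind of check against Amazon Rekognition, using the boto3 SDK, could look like the following. The image file name, label limit, and confidence threshold are illustrative assumptions, not the settings from our test.

```python
# A minimal sketch of a Rekognition check like the one described above.
# Assumes AWS credentials are configured and a local file "dancer.jpg" exists;
# the file name and thresholds are illustrative, not from our experiment.
import boto3

client = boto3.client("rekognition")

with open("dancer.jpg", "rb") as f:
    image_bytes = f.read()

# Scene-level labels (e.g. "Parade", "Costume", "Festival").
labels = client.detect_labels(
    Image={"Bytes": image_bytes}, MaxLabels=10, MinConfidence=70
)
for label in labels["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')

# Face attributes, including the predicted gender that, in our test,
# was wrong for the male ceremonial dancer.
faces = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
for face in faces["FaceDetails"]:
    gender = face["Gender"]
    print(f'Predicted gender: {gender["Value"]} ({gender["Confidence"]:.1f}%)')
```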

Much has been written about social and cultural biases in AI; it’s an acknowledged issue, with an accepted explanation: AI is biased because the humans behind it have opinions and prejudices that — intentionally or not — affect their work. To create more diverse AI, we need a more diverse pool of engineers, academics, coders, and administrators, at every level of design and development.

Boyang Albert Li, a senior research scientist at Baidu Research, participated in IVOW’s spring symposium on AI & Storytelling and will join us at an upcoming seminar hosted by mediaX at Stanford University. We asked for his thoughts on bias in artificial intelligence:

"We must make our best effort to eliminate social bias in machine learning systems, which is like eliminating software bugs. Despite substantial research and rigorous testing, software bugs have caused significant damage and loss of human lives. The black-box nature of ML aggravates the problem, since we don't understand what caused biased predictions.

Fundamentally, social bias in machine learning is derived from our inability to describe intentions precisely and error-free. That's why machines need to learn from data instead of following the rules humans designed. But the machines tend to learn everything indiscriminately, including human bias. Unfortunately, the datasets are already so big that thoroughly examining them is difficult. Yet they are too small because they capture very limited possibilities. For example, a model trained on today’s news data would not have seen a female or Hispanic US president, even though it is entirely conceivable to a human observer.

Two emerging research directions have the potential to help eliminate social bias. The first is the interpretability of ML models, which can help identify which data points lead to biased predictions. The second is to make it easier for ML to work with human-designed rules. Just like software bugs, the fight against social bias in ML is a long one, but we can't give up."


The Global Network of Internet & Society Centers held a summit last November to consider how “the uneven access to and impact of AI and related technologies on often marginalized populations, which include urban and rural poor communities, women, youth, LGBTQ, ethnic and racial groups, people with disabilities – and particularly those at the intersection of these marginalized groups – contribute to the disturbing risk of amplifying digital inequalities across the world.”

How do we deal with and correct these cultural biases? In “Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation,” Anna Kasunic and Geoff Kaufman of Carnegie Mellon University “argue that there is a need for alternative design directions to complement existing AI efforts in narrative and character generation and algorithm development…. In our vision, AI collaborates with humans during creative processes and narrative generation, helps amplify voices and perspectives that are currently marginalized or misrepresented, and engenders experiences of narrative that support spectatorship and listening roles.”

An article in Forbes magazine describes IBM’s efforts to detect and mitigate racial bias: “The software giant has developed a rating system that can rank the relative fairness of an AI platform and explains how decisions are made. IBM is going to launch its AI Fairness 360 toolkit and make the new software easily accessible by the open source community, as a way to combat the current state of homogeneous developers.” According to IBM, “The fully automated software service explains decision-making and detects bias in AI models at runtime – as decisions are being made – capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected.” 
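The toolkit is distributed as an open-source Python package, aif360. As a rough sketch, one of its basic group-fairness checks on a toy dataset might look like the code below; the column names, group encodings, and toy outcomes are our own illustrative choices, not anything from IBM’s announcement.

```python
# A minimal sketch of a group-fairness check with IBM's open-source
# AI Fairness 360 (aif360) package. The toy data, column names, and
# group encodings below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy outcomes: label=1 is the favorable decision, group=1 the privileged group.
df = pd.DataFrame({
    "group": [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Ratio of favorable-outcome rates (1.0 means parity between groups).
print("Disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates (0.0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Disparate impact is the ratio of favorable-outcome rates between the unprivileged and privileged groups; values far below 1.0 flag the kind of skew a toolkit like this is meant to surface.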

But as the Forbes reporter points out, “A little food for thought here that still has us a bit puzzled, who decides how fair the so called ‘fairness algorithms’ are?”
Human error, human frailties, and human biases limit the ability of all people to benefit from and enjoy the advancements that AI offers. We need comprehensive and inclusive databases of world cultures and a multidisciplinary approach to find balanced solutions. Only with greatly increased diversity of the talent pool in all areas of computer science can we train AI to be deeply inclusive.

We look forward to discussing these critical concepts on October 17 at the Forum on AI for Culturally Relevant Interactions, sponsored by mediaX at Stanford University in conjunction with IVOW, Baidu, and Flybits.

About IVOW: We are a team of journalists, educators, technologists, and app developers with extensive experience in combining timeless principles of storytelling with emerging technologies. We are affiliates of mediaX at Stanford University.

OUR PROMISE: A vow to design the next generation of intelligent machines to be deeply inclusive, to promote the dignity, health, and wellbeing of all life in ways that respect and celebrate cultural heritage and identity.



