Some 50 thought leaders, representatives from the United Nations, AI experts, journalists, ethnographers, educators, and students joined an AI, Culture and Storytelling Symposium at Morgan State University on April 23, 2018.
by Davar Ardalan and Robert Malesky

Can machines understand cultural contexts? That’s what AI researcher Wolfgang Victor Yarlott asked himself several years ago as a graduate student at MIT. A member of the Crow tribe, Yarlott conducted his pioneering research while working with the Genesis Story Understanding System at MIT’s Computer Science and Artificial Intelligence Laboratory. In collaboration with his professor, Patrick Winston, Yarlott wanted to determine whether the system could understand stories from Crow folklore as well as it understood the works of Shakespeare. At first his “audience” was the program itself; he wanted to instill in it an understanding of the stories.

Over the course of his work, he analyzed three collections of Crow literature, created a list of cultural features present in the stories, identified four as particularly important (unknowable events, medicine, differences as strengths, and uniform treatment of entities), and developed a set of five Genesis-readable stories in which those features were prominent. This led to several new elements in the story understanding model, and with them he was able to demonstrate that Genesis is indeed capable of understanding stories from the Crow culture, bringing it one step closer to being a universal story understanding system.
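Genesis reads stories that have been rewritten in simplified English and builds internal representations it can reason over, and Yarlott’s feature analysis was far more involved than any keyword search. Purely as a toy illustration of the general idea, flagging culturally significant elements in a story so that a system can treat them specially, a minimal sketch might look like the following; the cue phrases and sample passage are invented for illustration and are not drawn from Yarlott’s work or from Crow literature.

```python
# Illustrative only: a toy "cultural feature" tagger, NOT the Genesis system or
# Yarlott's actual method. It merely shows the idea of flagging culturally
# significant elements so that downstream reasoning could treat them specially.

from dataclasses import dataclass, field

# Hypothetical surface cues for the four features Yarlott highlighted.
# Real work relies on prepared story representations and expert knowledge,
# not keyword lists like this one.
FEATURE_CUES = {
    "unknowable_events": ["no one knows how", "somehow", "it simply happened"],
    "medicine": ["medicine", "sacred power", "blessing"],
    "differences_as_strengths": ["because he was different", "her difference"],
    "uniform_treatment_of_entities": ["the buffalo spoke", "the river answered"],
}

@dataclass
class StoryAnalysis:
    title: str
    features: dict[str, list[str]] = field(default_factory=dict)

def tag_cultural_features(title: str, text: str) -> StoryAnalysis:
    """Return which (toy) features appear in a story, with the sentences that triggered them."""
    analysis = StoryAnalysis(title)
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    for feature, cues in FEATURE_CUES.items():
        hits = [s for s in sentences if any(cue in s.lower() for cue in cues)]
        if hits:
            analysis.features[feature] = hits
    return analysis

if __name__ == "__main__":
    # An invented passage, not an actual Crow story.
    sample = ("Old Man Coyote was different from the others, and because he was different "
              "he saw what they could not. The river answered him. No one knows how the "
              "world was made before that day.")
    result = tag_cultural_features("Toy example", sample)
    for feature, sentences in result.features.items():
        print(feature, "->", sentences)
```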
Yarlott was one of 50 thought leaders from a variety of disciplines who gathered to consider “Artificial Intelligence, Culture, and Storytelling” at a symposium co-sponsored by IVOW and Morgan State University in Baltimore on April 23. Renowned machine learning experts, educators, journalists, technologists, and business leaders from across the U.S. and Mexico joined representatives from the United Nations, the Australian government’s innovationXchange program, and Management Systems International to engage in conversations about AI as a tool for culturally rich and socially conscious storytelling.

Throughout our daylong symposium, the centrality of storytelling to human existence was ever-present. We are the storytelling species; it is what makes us human. What, then, might the future of storytelling look like when it is combined with emerging AI and machine-learning technologies? That was the focus of our final workshop of the day, “AI and the Future of Intelligent Narrative.”

Jason Farman, an associate professor of American Studies at the University of Maryland, has been writing about mobile media and the value of locative technologies. He is interested in how mobile can enable polyvocality, and in how mobile devices can tell many stories about a single location and override the dominant narrative. Farman believes we must pay close attention to the platforms we use. “We write as if stories can transfer from medium to medium and that is not how it works,” he said. We should consider the unique requirements and vagaries of each platform, Farman said, and design for the medium in order to achieve narrative flow. “With mobile, space is a part of the story itself. How we represent a space changes the way that we live in that space. It’s not just the stories that we tell, it is the media through which our stories are told.”

Also speaking at the symposium was world-renowned ethnographer and photographer Miguel Gandert, emeritus professor of Journalism at the University of New Mexico. For the last twenty years, Gandert has photographed the people, social rituals, and landscape of his native New Mexico, capturing the sacred and secular practices of Indian and Hispanic people living along the upper Rio Grande Valley. Gandert discussed the need for a thorough understanding of history. Historically, academics in anthropology departments were often not allowed to study their own communities; now scholars are encouraged to study their own stories, which brings a very different lens to the field. In Mexico, Gandert noted, journalists in the past were entirely white, but this is starting to change, at least in the social sciences and, more slowly, in journalism. “We need to further encourage and allow people to tell their own stories.”

Jerome James, Jr. is the chief strategist and founder of Impact Immersive, a collective of businesses, consultants, industry experts, and educational partners applying immersive technologies in diverse ways to explore their possibilities across industries and everyday life. His goal is to level the playing field in terms of who comes to the VR/AR table. His early background in journalism informs his passion for both sides of the story and for facilitating communication between experts in order to look at things holistically. Using AI at places like The Washington Post frees journalists to go in depth on stories that are meaningful and allows them to fill in the places where AI misses out, James explained. It increases both efficiency and depth. To design meaningful consumer cultural storytelling robots or other apps, we will need to consider carefully and thoughtfully which elements are critical to ensuring the public can access and engage with these new tools.

The session ended as it began, with the group discussing the necessity of inclusion without bias and the crucial role of maintaining the journalistic precepts of accuracy, honesty, objectivity, and integrity. The outlook was optimistic, and the audience was enthusiastic about the possibilities emerging in the new field of AI storytelling.

IVOW’s Kee Malesky at the 2018 #AICulture Symposium at Morgan State University.

by Davar Ardalan and Kee Malesky

We are the storytelling species. It’s natural, then, to want to discuss the future of storytelling in artificial intelligence and the cultural implications that come with it. To consider “Artificial Intelligence, Culture, and Storytelling,” more than 50 thought leaders from a variety of disciplines gathered for a symposium co-sponsored by IVOW and Morgan State University in Baltimore on April 23. Representatives from the United Nations, the Australian government’s innovationXchange program, and Management Systems International, as well as renowned machine learning experts, educators, journalists, technologists, and business leaders from across the US and Mexico, engaged in conversations about AI as a tool for culturally rich and socially conscious storytelling.

By early afternoon, our focus turned to the need for a future “Declaration of Machine, Citizen, and Culture” that could guide engineers, designers, machine learning experts, and users to understand and protect human rights, and to help ensure inclusivity and diversity in the digital world. We considered the fact that we’ve been modern, civilized human beings for about 10,000 years, with evolving levels of self-awareness that have allowed us to ask essential questions, experience individual consciousness, and share it with others.
So we asked ourselves: How do we bring the Machine into this discussion of human rights? What issues and concerns are specific to culture-related AI applications? What does human-centered AI look like? What are the rights and privileges of human beings in the digital universe? As with any new technology, it’s important to create guidelines on the proper ways to use these new tools. Do we need to create machines to hold other machines accountable and accurate, or a responsible third party to review new products before launch?

One participant pointed out early on that we need to identify specific issues and current inadequacies; the problem isn’t in the algorithms, it comes from people and society. Data is agnostic and amoral, and diverse datasets do exist, but people have biases. A multidisciplinary approach is essential to finding balanced solutions. Systems will need to be trained to be aware of cultural context. Dominant biases have considerable power to negatively impact the lives of others; we have to keep humans accountable too.

AI expert Mark Riedl of the Georgia Tech School of Interactive Computing suggested that we look to Europe and its new laws around AI accountability. AI expert Mark Finlayson of Florida International University urged us to pause and consider what the problem is before making any declarations. Lisha Bell, of Pipeline Angels, brought up the point that some biases have more power than others, and that we need to hold humans accountable for making AI balanced and diverse. An AI system must be interrogatable; we should be able to understand why a system made a decision. Louise Stoddard, who works with the United Nations Office of Least Developed Countries, asked, “What are the barriers to access and how can we remove them? Who owns this process?” She stressed the need to listen to stories “from the ground.” Ellen Yount, Vice President of Management Systems International (MSI), liked the idea of having a charter that encourages us to “Do no harm.” AI developers should consider the social implications of telling stories, as well as any unintended consequences.

A Lesson From History

Ahead of our conversation, we looked at some highlights in the history of human rights, and at the granting or asserting of those rights, which have evolved over time and been extended to women, minorities, workers, children, the disabled, immigrants, and refugees.
We didn’t intend to produce an actual Declaration of Citizen, Machine, and Culture at the symposium; we wanted to begin a conversation between our IVOW team and a wide variety of experts. We are well aware that others are actively debating how to balance human needs and rights with the challenges of the digital universe, including several efforts already underway.
As the declaration workshop came to a close, IVOW’s product designer Nisa McCoy pointed out that design isn’t perfect: we need to look at the initial intention of any technology and then review it. We need to know more about a product and its impact before it is launched. Apps and products must be safe, reliable, private, secure, inclusive, transparent, and accountable. We need a cross-cultural understanding of the audience and users.

After the session, one of the symposium attendees, Paris Adkins-Jackson, founder and CEO of DataStories by Seshat, a research, evaluation, and data analysis company, compiled this summary of the highlights of our discussion, putting it in the form of a declaration:

We declare that citizen, machine, and culture are inherently and essentially connected and in communication with each other; and
We declare that those connections produce inequities and amplify biases;

Thus, we declare that when engaging in research, product development, or other actions related to artificial intelligence and machine learning, there must be a thorough examination, prior to development, of the potential impact of any product on people, including access, appropriation, and amplification of injustice; and

We declare that such information will be a large component of the criteria in the decision to develop the product;

Also, we declare that a system of accountability be developed to mitigate any unforeseen challenges that arise which indicate there has been an adverse impact on people;

Lastly, we declare that we will make all efforts possible to work with diverse and non-traditional disciplines to investigate impact, and to develop and implement accountability platforms as well as to assist with product development.

by Davar Ardalan

On Monday, April 23, some 50 thought leaders joined a day-long Artificial Intelligence (AI) and Culture Symposium at Morgan State University, together with another 300,000 people who were reached through the #AICulture hashtag on Twitter and Instagram. Representatives from the United Nations, the Australian Aid program’s innovationXchange, and Management Systems International, as well as renowned machine learning experts, educators, journalists, technologists, and business leaders from across the US and Mexico, participated in the symposium to engage with the latest research on AI as a tool for culturally rich storytelling.

AI and culture expert Rafael Pérez y Pérez was one of the thought leaders attending the symposium. A professor at Universidad Autónoma Metropolitana at Cuajimalpa, Mexico City, he specializes in artificial intelligence and computational creativity, particularly in automatic narrative generation. Pérez y Pérez is the author of MEXICA 20, a collection of short narratives developed by the computer program MEXICA. The plots describe fictional situations related to the Mexicas (also known as Aztecs), ancient inhabitants of what today is Mexico City.
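MEXICA itself is built on an engagement-reflection account of writing and draws on detailed knowledge structures covering characters, emotional links, and dramatic tensions; none of that is reproduced here. Purely as a toy illustration of the bare idea behind automatic plot generation, chaining story actions that raise and then resolve tension, a minimal sketch might look like this (the characters, actions, and tension values are invented for illustration and are not MEXICA’s):

```python
# Illustrative only: a toy plot generator loosely inspired by systems like MEXICA,
# not the actual program. It chains invented story actions that raise or lower a
# single "dramatic tension" score until the tension resolves.

import random

CHARACTERS = ["the jaguar knight", "the princess", "the priest"]

# Each action template: (sentence pattern, change in dramatic tension).
ACTIONS = [
    ("{a} rescued {b} from the flooded canal", -2),
    ("{a} wounded {b} in a quarrel", +3),
    ("{a} cured {b} with healing herbs", -2),
    ("{a} betrayed {b} to the enemy", +4),
    ("{a} forgave {b} at the great temple", -3),
]

def generate_plot(max_actions: int = 8, seed: int | None = None) -> list[str]:
    """Chain actions between characters until tension returns to zero or a length cap is hit."""
    rng = random.Random(seed)
    tension = 0
    plot = []
    last = None
    for _ in range(max_actions):
        # Crude stand-in for a "reflection" step: once tension is high,
        # prefer actions that lower it.
        candidates = [act for act in ACTIONS if act != last]
        if tension >= 4:
            candidates = [act for act in candidates if act[1] < 0]
        text, delta = rng.choice(candidates)
        a, b = rng.sample(CHARACTERS, 2)
        plot.append(text.format(a=a, b=b).capitalize())
        tension = max(0, tension + delta)
        last = (text, delta)
        if tension == 0 and len(plot) > 2:
            break
    return plot

if __name__ == "__main__":
    for sentence in generate_plot(seed=7):
        print(sentence + ".")
```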
Ellen Yount of Management Systems International (far left) along with Nisa McCoy of IVOW, Louise Stoddard of the United Nations, and AI expert Rafael Pérez y Pérez at the Baltimore #AICulture symposium.

“To tell compelling stories about those living in the developing world — including lack of water, the impact of climate change, the need for girls’ education, and the dire consequences of conflict — it’s very important that we make sure that those least advantaged and often ‘unheard’ are able to tell their stories using their eyes and voice. Looking at the role that AI plays in that journey is critical,” said Ellen Yount, Vice President, Management Systems International (MSI).

IVOW’s team of journalists and data scientists is collaborating with renowned ethnographer and photographer Miguel Gandert on the Mexican cultural traditions of the American Southwest. Gandert, who also attended the symposium, was born in Española in 1956 and raised in Santa Fe, where he developed an interest in photography. Since then he has captured in pictures not only the people and landscape of New Mexico but also their history, culture, and social norms. Gandert is a Distinguished Professor Emeritus at the University of New Mexico.

There was also a group conversation about the need for some kind of “Declaration of Citizen, Machine, and Culture” that would consider the rights and responsibilities of human beings in the digital universe.

The final workshop looked at the future of journalism as intelligent narrative that is truthful and inclusive and that gives voice to the underrepresented. The event was co-hosted by Morgan State University and IVOW, our AI-powered cultural storytelling platform. Symposium sponsors included Management Systems International, Florida International University, and Southwestern Indian Polytechnic Institute. We’ll be reporting on the #AICulture symposium with a multimedia series in the coming week.

IVOW team members Robert Malesky, Ben Kreimer, Dr. Mahmudur Rahman (advisor, Morgan State), Joy Elias, Cindy Guijosa, Davar Ardalan, Nisa McCoy, and Simin Kargar at Morgan State University.