Artificial Intelligence in Asia: What's Similar, What's Different? Findings from our AI Workshops

20 December 2017 - A Workshop in Geneva, Switzerland

Agenda

Proposer's Name: Ms. Malavika Jayaram
Proposer's Organization: Digital Asia Hub
Co-Proposer's Name: Ms. Julianne Chan
Co-Proposer's Organization: Digital Asia Hub
Co-Organizers:
Mr. Kyung Sin Park, Civil Society, Open Net Korea
Ms. Vidushi Marda, Civil Society, Centre for Internet and Society


Session Format: Round Table - 90 Min

Proposer:
Country: China
Stakeholder Group: Civil Society

Co-Proposer:
Country: China
Stakeholder Group: Civil Society

Speaker: Vidushi Marda
Speaker: KS Park
Speaker: Malavika Jayaram

Content of the Session: (updated Dec 20th with refined session description and complete speaker list)
Ideas about the future and about what progress means are heavily contested, and context-specific. Digital Asia Hub set out to investigate whether the future of artificial intelligence - heralded as a game-changing technology - was constructed and implemented differently in Asia, and to explore whether the problems that AI was deployed in service of signalled different socioeconomic aspirations and fears. Was the focus on health, ageing and augmentation uniquely Asian? Was the lack of a “creep factor” about machine intelligence unremarkable in cultures accustomed to mythical creatures and legendary spirits? Was the lack of legal safeguards a competitive advantage that spurred innovation in this field, or a regulatory gap that needed attention? We also wanted to kick-start a deeper conversation about ethics and governance, before policies and regulations baked in the business case for AI without factoring in the potential human costs and collateral effects. We felt this was particularly crucial in this region, where commerce can trump dignity, autonomy and inalienable human rights by stealth.

We conducted a 3-city "AI in Asia" conference series in Hong Kong, Seoul and Tokyo between November 2016 and March 2017. The 3 events covered themes such as ethics, security, privacy, innovation, healthcare, urban planning, automation and the future of labour, legal implications, authorship and creativity, and AI for social good. This series unearthed critical lessons in a region that many AI researchers are only now setting their sights on. We will share insights from that multistakeholder, interdisciplinary conference series. We will also share some insights from an event on AI and Trust, which we convened during the 39th International Conference of Data Protection and Privacy Commissioners, which took place in Hong Kong in September.

Building on the lessons from our AI series, we will be convening deep dives into the research questions that have particular salience in the developing world when it comes to AI. We will be collaborating with partners in Asia, such as the Centre for Internet and Society, India, on researching issues such as autonomy, discrimination, privacy, and the replication of existing societal disparities and bias. We will also examine how to optimize the positive benefits of AI for societal gain without harming individuals (especially marginalised and digitally unsophisticated users in the Global South). In this session, Elonnai Hickok from CIS will share their new work in this area, particularly focused on healthcare as a use case.

Our agenda is to present a synthesis of the key findings from these 2 projects, especially of themes that are distinct from the (so far largely western-focused) narrative about the promise and perils of AI. We will then open up the discussion to include insights from others working with AI in Asia:

Vidushi Marda, Article 19

KS Park, OpenNet Korea and Korea University Law School

Jac sm Kee, lead, Women's Rights Programme at APC

Jake Lucchi, Head of Content and AI, Public Policy, Google Asia Pacific, Hong Kong

Danit Gal, Yenching Scholar, Peking University, China, & Chair, Outreach Committee, The IEEE Global AI Ethics Initiative


Relevance of the Session:
Relevance of the Issue:

Our session has particular relevance for the following Internet Governance (IG) issues:
- the governance of infrastructure: The technologies and platforms that AI is built on, and in turn shapes (through machine learning and deep learning), have huge implications for how search, browsing, tracking, surveillance, advertising and other activities are carried out.
- the question of inclusion and multistakeholder governance: Many technologies are developed in Silicon Valley or within a western technological paradigm, then "exported" fully formed to the rest of the world, leaving little room for other interests and perspectives. It has taken a while for the Internet to be governed in a more inclusive and global way, thanks to efforts like the IGF that promote diversity and multistakeholder problem-solving. We do not want AI to be another game-changing technology deployed without the input of global perspectives and diverse, lived experiences.
- the issue of transparency and scrutability: AI poses particular risks to the idea of understanding and controlling the systems we create, given that, by design, it is not coded upfront and learns on the go from real-life datasets. The transparency and accountability of systems that seem opaque and inscrutable are particularly important to the governance of AI.

Our session is also extremely relevant for the construction of a Digital Future: AI will shape, and is itself shaped by, human behaviour, and has implications for everything from the future of work to informational self-determination to the costs of inclusion and exclusion. AI is already “under the hood” in many of the world’s most popular technologies, including browsers, mobile phones, apps, telephone communication with banks and service providers, decision-making about credit and benefits, policing and law enforcement, and other aspects affecting citizenship and participation. If we don’t get this right, the social contract between individuals on the one hand, and governments and companies on the other, will be severely imbalanced. If our Digital Future is to be an inclusive, just, transparent and equitable one, a discussion of AI in Asia, not just in the western tech hubs, is hugely important.

Tag 1: Artificial Intelligence
Tag 2: Inclusive Digital Futures
Tag 3: Emerging Tech

[Original proposal follows, slightly updated as above]
Interventions:
Three of us (myself, Prof. Park and Ms. Marda) will present a synthesis on AI in Asia based on the findings from our 2 reports, as described. We will prepare a joint presentation, with each of us highlighting aspects of the findings based on our individual expertise: I will offer my perspective as the Executive Director of a regional hub that organized these events to build capacity around AI; Prof. Park will discuss how the more developed Asian economies see and implement AI, and what legal or regulatory safeguards might be needed; and Ms. Marda will share her thoughts on how developing economies approach AI and where their angles differ from the prevailing western narrative.

Diversity:
Our session has 3 primary presenters, two of whom are women. I am based in Hong Kong, of Indian ethnicity and British citizenship, and run an organization with a regional mandate for Asia, incubated by the Berkman Klein Center for Internet and Society at Harvard University. Ms. Marda is also female, is based in India, and is the youngest member of our group and the newest to the field. She represents the Centre for Internet and Society in Bangalore, which has done great work in the field of internet governance and digital rights; she is part of the next generation of policy advocates in this 10-year-old organization. Prof. Park is male, based in Seoul, and is a reputed law professor and advocate, as a co-founder of Open Net Korea. He is well known to the IGF world.

I am a first-time IGF session organizer, Ms. Marda has never attended an IGF before this one, and Prof. Park has significant experience with the IGF system and the internet governance space. We therefore offer a mix of developing- and developed-economy perspectives within Asia, geographic diversity, and a range of policy perspectives.

We will invite people from the technical, governmental and private sector worlds to participate in our roundtable, especially those who participated in our AI in Asia series and our Indian capacity-building workshop. Given the roundtable format, we have not named them here, as we wish to treat all prospective attendees' voices as equally valuable. However, we will make considerable efforts to bring a multistakeholder mix into the room, to better engage with the issues. We will especially reach out to young persons, whose experience of AI might be very different, to persons with disabilities, and to different geographies.

Onsite Moderator: Malavika Jayaram
Online Moderator: Julianne Chan
Rapporteur: Julianne Chan

Online Participation:
We would be very happy to support online participation. We have use of the Berkman Klein Center and Harvard Law School's "Question Tool" platform, and will set up an instance in advance of the event to solicit questions during our presentations. Our online moderator will raise them during the interactive part of our 90-minute session (we will present for 30 minutes among the three of us, and leave 60 minutes for open discussion). If there is an IGF platform that we should be using instead, we would be happy to use it.