Artificial Intelligence in Asia: What's Similar, What's Different? Findings from our AI Workshops

20 December 2017 - A Workshop in Geneva, Switzerland

Full Session Transcript

>> MALAVIKA JAYARAM:  Hi, everyone.  We'll just get going.  We are waiting for one speaker, but we assume with the distance between rooms, he might show up in a second. 

So we have a really great panel.  This sort of started out of our fear that all of the narratives were (?), and we were getting really bored of seeing really old science fiction dystopias inform the way we discuss AI.  We felt a lot of it didn't make sense in much of Asia, where the interaction and relationship with that kind of pop culture is one level removed from local pop culture.  Given that a lot of western culture actually derives from Japanese anime and all kinds of other imaginations of robots and futures and utopias, we thought it didn't make a lot of sense to just keep on beating the same narratives everywhere else. 

So we started working on this last year, and we were really interested in making it a very interdisciplinary conversation, because as you can imagine, in Asia, technologists drive a lot of the conversation around new technology.  It's the techies who actually have a stranglehold on the conversation.  Policymakers come second, whether or not they understand the technology.  And everyone else, especially the humanities, comes way, way behind.  So we really set out to change that. 

So we've got a great bunch of people who are going to help us unpack those kinds of broad strokes in storytelling. 

So I'm going to start from the other end.  We have Jack, who runs the women's rights programme at APC, based out of Malaysia. 

We have Benetel, based out of Beijing, who is also with an AI ethics initiative, where she's the outreach chair, doing a lot of work in China, Japan, and Korea.  We have Vidushi, who is with Article 19, has also been with CIS, and has done a lot of work in India. 

We have Ellen, who is based at CIS Bangalore, where they've started a new project on AI, so she is going to present the new work in that field. 

And we have KS Park from Seoul, who works with Open Net Korea and Korea University in Seoul ‑‑ I know, a different university.  Definitely not as good.  And he was a co‑host of one of our events. 

So, without further ado ‑‑ we are waiting for one more person, because we thought industry needed a say while we kept slagging them off, but we don't have Google in the room yet.  Jake Lucci, the head of AI public policy based out of Hong Kong, will join us. 

This was a series ‑‑ I don't know how well you can see it.  The Hong Kong event threw out all the big picture questions for the region, realizing we didn't even all have the same vocabulary, which, of course, is common to AI conversations everywhere. 

In Seoul, we did a deep dive into ethics, safety, and societal impact, so a lot of conversations around the privacy and security of AI, which are also central to building trust. 

And the last one was held in Tokyo, and it was on AI for social good.  I don't like the term beneficial AI, because it implies there's a whole tranche of evil AI.  I feel it's unnecessary.  We don't do that with any other technology ‑‑ we don't say copyright for good.  Of course, we know there's copyright for evil, but we don't feel the need to keep saying it. 

So what is it about AI that makes us feel the need to keep telling people, no, no, it's really good?  We thought Tokyo was a great place to talk about social good, given adoption, and given a lot of very particular problems endemic to Japanese society that actually help with uptake. 

So those were the three events that we did.  We've got a few images and posters from the events.  We also had a pop‑up art show in Hong Kong, partly because I really believe you can't have conversations about anything new without looking at how artists and writers are viewing the world ‑‑ what they've seen way before law and policy catch up.  So we had a pop‑up art show on works created using artificial intelligence, as well as works critiquing artificial intelligence: things looking at the trolley problem, playing with the moral machine game, and looking at how Hong Kong actually used DNA samples from cups and other litter to regenerate, using AI, the faces of the people who dropped them.  It was called the Face of Litter campaign ‑‑ a very powerful way of saying these are the awful people we need to ostracize because they littered. 

So we had a lot of really controversial examples of how it's used, and it generated a fantastic conversation.  One of the computer science professors kept looking at the agenda, saying, is this part of our conference?  Why is there an art show?  But at the end of the evening, after a couple of glasses of wine, he said, you know what?  Everything else I knew.  This totally blew my mind.  I've never really thought about art as a computer scientist. 

So these are some images from our event.  We created reading lists from each of them.  We also translated some of the learning ‑‑ the IEEE document ‑‑ into Korean and into Japanese, because we thought unless you bridge that language barrier, you're not actually even talking about the same things. 

And then these are some of the reasons why we wanted to talk about what's different and what's new. 

So when you look at this image ‑‑ anyone know what this is?  Guesses?  Ingredients of this image?  Just shout out suggestions.  Who is the guy there?  Priest.  Yeah.  And what is the thing he's doing?  Any guesses?  He's conducting a funeral for the little robot dogs.  This was an image from our Tokyo conference, and if I had to pick one image to sum up our series, this was it.  We always talk about artificial intelligence as something external, not embodied, not something that has a lot of emotional elements.  And here ‑‑ when Sony retired the Aibo and stopped supporting it in 2014, people were so distraught that they were getting counseling, they were having therapy, and they were actually conducting funerals in very serious temples for all of these dead dogs. 

So this really spoke to something very, very clear: the idea of affect and emotion and the way we form relationships with technology.  I think we started this with Tamagotchi pets a long time ago, for those of you old enough to remember them, right down to the Aibo, and forms of virtual girlfriends and other types of things that we play with, like avatars online.  It's not just technology as a slave or an object to lift heavy things.  It's also about companionship, and I think that's something that came out very strongly for us in Asia. 

And in terms of use cases, these were a lot of the images that kept coming up.  Farming, and the ways AI could be used in it.  Disaster management, especially with typhoons.  Microinsurance schemes that look at how people can be insured differently against natural disasters in Asia.  The prevalence of rote learning in places like China and a lot of Asia, and how AI could disrupt education and provide more humane ways for people to learn at their own pace, in ways that are personalised and tailored. 

And this is one of my favorite images.  This woman is a hero.  I mean, she's such a rock star.  Aging is a very, very big problem in a lot of Asia.  People in Hong Kong, in Japan are living longer and longer, but they are not having children, largely because of companion robots removing the need to have sex at all in a lot of places.  People finding relationships complicated.  People finding work more and more all‑consuming. 

So we have aging populations that don't have children and grandchildren to look after them.  One term we kept hearing in our conversations was the idea of AI as a solution to missing workforces.  And someone at our Tokyo conference said something deeply politically incorrect, which any outsider would have felt too embarrassed to say, but he was Japanese and he said it.  He said, you know why we feel comfortable with the idea of AI and domestic sensors and robots?  We really don't want immigrants.  We don't want foreigners.  We would rather trust something that, even through machine learning and translation, can speak Japanese and embody our culture and values than have immigrants come and perform jobs that we no longer have people to do.  It was a really polarising comment.  People said, I can't believe you actually said that out loud.  He said, it's true.  We're a very homogenous society.  We don't like outsiders.  It's easier to imagine a machine at home than people we cannot understand or who we don't feel resonate with our culture. 

That was a real Eureka moment for a lot of us. 

In terms of responses to some of these problems of aging and disaster, these were some of the examples that came up of how, with a lot of crop disease, AI can actually help diagnose what ails our plants in remote areas.  You take an image, you send it, and it analyzes what blight or crop damage is occurring with rice plants. 

This was a great start‑up in Myanmar, called Bendez, which wants to provide more local content in the Burmese language and be a solution where Google search might fail.  It's not meant as a replacement, because they'll never be as big as Google, or as ubiquitous, or as good, but they were saying, maybe we'll fill a local niche in local content.  They have now partnered with one of the local news providers to translate news into the Burmese language and make it hyperlocal in a lot of ways for what people need. 

This was the sort of robot-goes-to-school story ‑‑ but the robot actually goes to school to get educated, as a student.  You can see all of the older people in school are thrilled to have this robot to engage with.  The robot said something like, I really hope to be a good student; I'm going to try my best. 

So there are lots of these ways in which they're invading the classroom and the farming spaces, and really looking at disasters.  There were a lot of positive use cases that came up.  In a lot of cases we almost had to remind people to be worried and say, what about privacy, what about security?  There was this sort of real ‑‑ it wasn't so much hubris that the technology would work; it was just a genuine desire for it to work, but also the sense that somebody else will fix that.  We're just going to keep producing the tech.  There will be a lawyer somewhere.  There will be a philosopher.  And actually, at the Korea conference, there was this wonderful keynote about living with other intelligences, where the speaker said, well, we've actually lived with all kinds of other intelligences in this region. 

With spirits.  Spirit animals.  With gods and goddesses that have personalities.  So the idea of AI having a personality ‑‑ machine learning, or Siri, or Her, as we've seen in the movies ‑‑ that's not really unusual.  It's just along that whole trajectory.  We don't find them alien. 

So these are just some of the learnings.  We are going to produce a report that summarises a lot of the really rich content.  I can't really read it here, so I don't expect you can.  I'll just tell you very quickly some of them.  Things like the effects on labour might be very different ‑‑ making some jobs and workers redundant while creating and modifying other jobs ‑‑ which might play out very differently in Asia given the nature of labour markets. 

The informal gig economy has completely different structures in Asia.  There's the idea of mobility and the different cultural contexts of policy and trust.  There's a lot of great work in Japan on policies like the tentative AI R&D Guidelines, and that was a great example of how Japan said, instead of formulating national policy, why don't we take it to an international forum, so that whatever we do in the region is already embedded in international processes and structures ‑‑ and it was a very detailed policy framework.  In terms of education and learning, we talked about how we increasingly need to rely on humans to provide skills different from the easily automatable ‑‑ creative, subjective reasoning, imagination, storytelling ‑‑ all of which are not as strong in a lot of Asian countries that rely on rote learning, and how we need to develop different areas of cognitive functioning. 

Then the idea of capability augmentation came up.  I think there's less fear about displacement and a lot of openness to working with AI, as opposed to AI working instead of us, and all the different ways that can help.  There are tensions between innovation and safety in countries that look to technology like AI to leapfrog various stages of development and see it as a way they can excel. 

I'm going to call on Daneet to talk about the way the media frames it as an arms race between the U.S. and China, and how China has overtaken the U.S. in terms of filing patents in AI. 

We also talked about when ethics get encoded and what cultural values are embedded in the technology.  There was a lot of actual hope about law not just as a constraint, but as an enabler ‑‑ can governments and policymakers set good incentives for the law to make AI realize its potential for good and minimize harm in many ways? 

A very strong undercurrent through all of our conferences was the idea of engaging the underrepresented.  And there will be others who talk about how the training data sets on which a lot of machine learning applications are developed are not necessarily representative, and there are many, many reasons for that, including the fact that a lot of Asians are not actively engaging in the activities of data production.  They are not actually producing data themselves, and for me this is a particular paradox ‑‑ damned if you do, damned if you don't.  You are excluded if you don't participate in the data.  You are also excluded if you participate and it reinforces and amplifies existing social prejudices.  You know, which way do you go? 

So I'm just going to stop here.  I've got a couple more slides, but I'll just go to the last one to say what we're doing next in this space, in case you want to engage.  I mean, those are pictures of black boxes and flight recorders, which are very common also in the way we talk about it in Asia. 

These are the sort of keyword takeaways that came up: the idea of context and culture; which actors are involved; what are the processes and means by which we regulate or don't regulate AI; the concept of ownership and stewardship; and especially the idea of control and consent.  I think we're seeing that consent doesn't work generally, but it particularly doesn't work when a lot of this is ambient intelligence being gathered without the opportunity to say, don't put me in this data gathering exercise. 

The idea of collaboration and community came up, because the problems that AI is being used in service of seem so intractable without algorithms that can work at scale, and I think that does shift the debate in a lot of developing, emerging economies.  The idea of sustainability also came up in our conferences a lot. 

And finally, diversity. 

So, on the last slide, some next steps.  We are going to be doing a set of AI case studies, specifically looking at use cases in Asia, and we're going to do a call for people who would like to contribute to this collection.  A companion volume is a set of essays on AI futures, specifically coming up with non‑western ways of imagining the future with AI ‑‑ building on a lot of local storytelling and dreams and myths and gods and goddesses and all the things I mentioned before from Asian pop culture. 

We are doing a new series of events.  We did Hong Kong, Seoul, and Tokyo last year.  This year, we're going to do India in February.  We're doing New Zealand or Singapore, to be decided this week.  And we're doing Indonesia, where we'll spend a lot more time looking at issues of content and the ways in which automation can make problems like hate speech and radicalisation better or worse. 

So open questions there. 

And then we're doing an AI cookbook ‑‑ recipes for what good AI policy looks like.  Again, it's one way of demystifying the vocabulary and using more grounded ways of talking about it, like a cookbook, with recipes for people to think through: when you want to use AI, should you use AI?  What can it solve?  What is it good at?  Where is the technology not there yet?  And there are some collaborations and co‑convening activities with the Partnership on AI, which many of you may know. 

It started out with six companies ‑‑ all the main players in this race ‑‑ and has now expanded to include a lot of not‑for‑profits and Civil Society, CIS too from Asia.  There's the Ethics and Governance of AI Fund and the Berkman Klein Center ‑‑ we have a grant from them.  And we'll be collaborating with Chatham House in the UK on the policy impact in this space. 

We also did an issue on AI and trust at the CIDCCP, which came up with lots of really interesting ways that people are thinking about trust. 

So I'll just end here and open it up to our panel.  I asked them each to come up with either their idea of a utopia or dystopia using AI from their context, or to share a favorite story in this space. 

Jake, do you want to start? 

Actually, before we do that, we asked people to think of three key words.  When they think of AI in Asia, what do they think of?  I'm actually going to throw it out to all of you.  When you think of AI in Asia, throw out some key words to see if our group matches what you were going to say.  One‑word key words. 

Did someone say big? 

[ Laughter ]

It's true.  Yeah.  Anything?  Okay.  We'll just hand it over.  Jake, do you want to start from there? 

>> PANELIST:  I don't even know where I come from anymore.  So I was confused do I answer United States?  Do I answer Hong Kong where I live now, or Thailand where I spent most of the last decade?  I decided to go with Thailand.  I've been very heartened by a lot of interesting healthcare research.  My utopia, to answer your question, would be where sort of geographic origin or where you live in a place like Thailand doesn't determine your access to healthcare or education services.  That's my utopian vision. 

>> MALAVIKA JAYARAM:  Regardless of where you are?  Portability?  Okay.  Jack? 

>> PANELIST:  Initially I had a story about robots.  But I guess, if it's about healthcare regardless of where you are, I think it's also about healthcare regardless of what kind of body you have ‑‑ in particular if you have a gender diverse body, or different kinds of abilities ‑‑ then, yeah, access to that.  And a utopia would also be data sets with an expiry date, so that they just erase at some point. 

>> MALAVIKA JAYARAM:  Self‑destructing data sets. 

>> PANELIST:  So in China, you have what you might call babysitting robots.  They are toys that look like very famous anime characters, but they have the ability to converse with your child, and they are part caretaker, part friend, part educator, part even sibling for kids who are lonely.  So this is something that is getting a lot of traction in China and really helping parents leave their kids alone or with grandparents or with caretakers.  I don't know if you've noticed this, but there are a lot of stories in China about caretakers or kindergartens abusing kids, and parents are very much afraid of leaving their kids alone, even with strangers they think they trust. 

So this is a kind of solution for parents who are able to go out and work but still have access to what the kid is doing, and it can even teach them English for a lower price, which is something the parents are very passionate about. 

Is it utopian?  Is it dystopian?  I feel like that depends on your perspective.  From a local perspective, I think it's very utopian, but from a westerner's perspective, people are like, oh, my God, this is completely dystopian.  So I guess that is left up to you to decide. 

>> MALAVIKA JAYARAM:  Thank you.  Vidushi. 

>> VIDUSHI MARDA:  I think my idea of an AI future would be deliberate.  Right now we're in this phase where it's fashionable to be worried about the race ‑‑ to be ahead of the curve and to be leading the race, so to speak ‑‑ and I think it would be really nice if it was fashionable to be slow and deliberate and think about what we are running with, where we are running to, and why we are doing this.  That would be great. 

>> MALAVIKA JAYARAM:  As a lazy person, I can totally buy that.  Don't run at all. 

>> PANELIST:  I don't know if this is dystopian or utopian, but I'm interested in concepts of virtual reality and what AI is doing to our social interactions with each other.  I guess in some ways that's very context specific, because, as you say, your vision of a robot could be utopian, but it can also be dystopian depending on what perspective you're coming from. 

>> MALAVIKA JAYARAM:  Okay.  KS. 

>> KYUNG SIN PARK:  When you asked for a specific word, I almost said data.  Because AI is a programme.  I mean, an OS is a programme.  It can be copied; it can be made available on the Cloud. 

So the fact that a programme has gotten more intelligent than before should not be a game changer.  What's going to change the game is the data that's needed to train AI.  The brain is just ‑‑ you know, the brain is just a bunch of molecules.  What makes the brain intelligent is the memory of the experiences and the identities that constitute the human brain. 

So a programme without data really does not mean anything.  So for me, utopia is data socialism, where people have more or less equal access to the data that's going to enable the great futures of AI, so that it's not controlled by a small number of players. 

That data socialism, I'm hoping, will abate some of the economic impacts that we are concerned about ‑‑ displacing workers and worsening inequality.  So, you know, what I just said includes my vision of dystopia as well: data sitting in silos controlled by a small number of players, not being made accessible to a great number of people. 

Where did I get this idea? 

>> MALAVIKA JAYARAM:  Science fiction? 

>> KYUNG SIN PARK:  Did you watch the movie "Ex Machina"?  If you saw it through to the ending, the guru reveals where he got the data to train the robots.  Does anyone remember what the source was? 

Yes, the Internet.  So it was from us.  We all together own the data.  That's why I talked about data socialism. 

>> MALAVIKA JAYARAM:  Thank you.  So much to unpack there.  The idea of data and memory brings up something from one of our conferences, where someone showed us demos of AI psychologists or therapists made in Japan.  There was this idea that people would feel really weird talking about their deepest, darkest secrets to something that wasn't real.  They found it was exactly the opposite, because people felt a lack of judgment ‑‑ which again is really interesting, given that we don't want to talk to other people about our problems because we feel judged.  Humans have an ability to forget things, whereas this is something that listens and stores every single horrible thing you told it, almost permanently.  Yet people feel they are not being judged and decisions aren't being made about them.  So that was a very interesting paradox as well. 

>> PANELIST:  There's actually an app called Replika, if people are familiar with it, that creates a replica of you in the Cloud.  It creates a kind of virtual identity, and the more you speak to it, the more it knows you.  It's basically almost as if you were having a conversation with someone who is you, who understands you and knows what you're thinking and what you're doing, and people love it.  The original idea behind it came from a woman whose husband passed away.  She really missed him, and she missed having conversations with someone who really knew her and was her best friend, so she created the app.  Now it's really taking off, and people have testimonials saying it's the most meaningful and non‑judging conversation they've had, and it's been helping people a lot.  It really connects to that idea. 

>> MALAVIKA JAYARAM:  I'm going to turn now to Ellen to tell us about the work that CIS has been doing on AI in India, and then we'll open it up to the others. 

>> PANELIST:  Thanks, Malavika.  So CIS began research on AI just a couple of months ago in India, and we are doing a sectoral analysis.  We're looking at four different sectors ‑‑ finance, healthcare, IT, and governance ‑‑ and the adoption of AI in those sectors.  I think we're furthest along in our healthcare study, so I was going to share some of the learnings that have come out of that research, because I think they give context to some of the questions that Malavika posed, at least on the thread of what is unique about Asia, and perhaps India, as a context when you're talking about AI. 

So, as part of our research, we had a round table with healthcare start‑ups in Bangalore that are using AI and developing AI products, and we also brought in practitioners ‑‑ surgeons and doctors who might be using AI.  I'll just go through some of those learnings, perhaps starting with the data. 

So, start‑ups were saying that getting access to initial data to create a prototype was difficult in the Indian context, because India does not have really good open data, particularly open medical data.  So they were actually taking open data from contexts like the U.S. or the UK, building their prototype with it, and then going to hospitals with that prototype. 

Once the hospitals saw the prototype, they were more open to sharing their data.  The challenge is that the initial prototype is built off of U.S. data with U.S. demographics, so there's a need to retrain on Indian demographics, which is incredibly important when you're talking about healthcare. 

Then they talked about how, once they had access to data from the hospital, there was a large amount of data ‑‑ India doesn't have very strict regulations around sharing data ‑‑ but that data came in many different shapes and sizes.  It could be a picture of doctors' notes, and they had to unpack what exactly that picture was saying and then make it usable.  So there's still a lot of work in terms of having access to standardised and usable data. 

Then it was interesting when they were talking about design ‑‑ the design of these products and platforms.  They talked about the need for standard design guidelines, and many of the companies are making their own guidelines as they design these platforms.  Some of the key principles guiding the design include incorporating the principle of do no harm, which is a traditional principle in the healthcare sector.  A strong emphasis on choice and consent for the user.  Regulatory compliance always resting with the data controller.  The service always augmenting, and never taking over, the position of the practitioner.  So they were not aiming at, and were not presently creating, systems that were autonomous and making autonomous decisions for doctors. 

And one way they're addressing the privacy question is to just not use identifying information.  So they might use personal data, but it's de‑identified, or you can't link it to a person. 

Last of all, promoting human‑to‑human interaction through their services.  One example was an app ‑‑ a chat bot ‑‑ that worked with individuals with a mental health condition, perhaps depression, and the idea was that the chat bot would help guide the person to a practitioner who could then assist them. 

We also talked to them about the challenges they're facing in the adoption of their services.  One big one was that when they go to hospitals or doctors with their product, they're often asked for proof of a clinical trial, which is traditionally something you find around drugs and drug testing and is not an appropriate standard for mobile health or medical devices.  They identified a regulatory gap on medical devices, but I think India is considering filling this, as there are some draft rules coming out in 2018. 

And then they also found a knowledge gap with doctors and their ability to use the systems that they were designing. 

When we discussed implementation, it was interesting, specifically with the example of the chat bot that worked with patients with mental health conditions.  They actually found the chat bot was very empathetic.  When they were taking user feedback, a lot of the feedback was that it's a very empathetic service and that users prefer using it to trying to talk to a family member about what they're going through.  It's also interesting around questions of liability.  The doctors were saying that at this point, at least, the systems are making them more liable ‑‑ if AI is used for monitoring a patient and they're getting updates on the phone, they can no longer say the nurse forgot to tell me, or I somehow missed this.  There's very clear accountability around whether they had the information and whether they reacted to it. 

Generally, I think there was consensus that the tools and services that are coming out are assisting doctors, and that's aligned with the research that we did around news items and different reports on the use of AI in the health sector in India ‑‑ those reports are very positive, and suggest it's going to help reach a number of patients who are not reached right now. 

And it's interesting because it's a different narrative than we see for the IT sector in India, where a lot of the news items are about how AI is going to displace jobs.  In the health sector, it's a very positive narrative.  So those are some of our key learnings from the research. 

>> MALAVIKA JAYARAM:  Did you find that doctors saw this as competition in some way, that they didn't want to engage because they thought it was going to do their job better? 

>> PANELIST:  At least not among the participants at the round table.  We've seen news items that might interview a doctor who initially thinks it's competition, but after they've adopted the service, they realize that it augments their job rather than replacing it. 

>> MALAVIKA JAYARAM:  Thank you.  KS, if you could go next. 

>> KYUNG SIN PARK:  This is (?).  I am going to talk from the experience of having a symposium on AI about the same time a year ago. 

I am kind of split, because the reason we have this event, the IGF, is that the Internet is a great equaliser, a great liberator, giving people the same voice and the same power of information as governments and companies. 

 

 

Now, AI also, I think, has the potential of being a great equaliser and liberator.  But at the same time, when you go to Asia, you will see that just as many of the Asian countries saw more of the economic significance of the Internet than its social significance, I see a similar trend with AI: it is being appreciated more for its economic potential than its social potential.  And I'm hoping that is not the reason why many Asian commentators are more pragmatist, and why the way people receive AI is more pragmatist than what we see in the west. 

 

 

So, I mean, one title from the (?) was safety of AI or AI for safety.  The point the presenter was trying to make is that there are too many people talking about whether AI is safe, when AI can be used to make other things safer.  I don't want that to be part of this trend where Asian countries focus too much on the economic potential of AI as opposed to the social potential. 

 

 

And it is really important for Asian countries because the social welfare system is not as well‑developed as in, for instance, Europe.  I mean, we think that welfare in the U.S. is not that great, but the U.S. taxation rate ‑‑ the total amount of tax versus GDP ‑‑ is like 35%.  In Korea, it is 25%.  Japan, like 27%.  The social welfare safety net provided by the government is not that great compared to the European standard, so if AI is really sought after as a tool for national economic development, we're not going to be able to do much on abating the inequality problems that are caused. 

 

 

So, yeah, that's why I'm split. 

 

 

>> MALAVIKA JAYARAM:  I think that split also came up in our workshops: people couldn't decide if it was utopia or dystopia. 

 

 

Vidushi has been doing a lot of work on this and I think she has particular critiques about this whole trend of the move to accountability, that whole category of research and work.  So, over to you. 

 

 

>> VIDUSHI MARDA:  Hi, everyone.  I'm Vidushi, like Malavika said.  At Article 19, we look at ethical, legal, and regulatory approaches to AI.  So we're a part of the Ethically Aligned Design initiative, but also part of the Partnership on AI. 

 

 

At the same time, I'm based in Bangalore, India, where I read the newspaper sometimes and listen to the discourse in this area at home.  And there's this kind of distinct difference between how AI is spoken about and how it's engaged with. 

 

 

So what I wanted to share with you is just the reflections that I've had, in my attempt to articulate my angst about why isn't everyone seeing the same thing, and what are the problems that underlie this difference. 

 

 

So I think the first thing is that we have dominant narratives that have kind of led the discourse in this area, which became an interesting area of study only a few years back.  And what happens particularly in India is that we're trying to fit western ideas and western narratives into our cultural and national situation.  So, for example, let's look at data protection.  When we talk about data protection at the IEEE or the Partnership on AI, or at any such event that is usually dominated by western narratives, what we find is there's a lot of focus on data protection and a lot of focus on how do I know how my data is being used. 

 

 

So let's take that and see how it translates to something like the Aadhaar scheme in India, where people are like, please add me into your data set, and please make sure I'm counted, because in India, being a data subject is not a cause of concern.  It's actually a sign of power.  It signals to your government: please, look at me, I'm visible, I'm a person, and I deserve some sort of, you know, help from you.  This is one thing I find problematic.  We're taking western ideas and stamping them onto national realities, but there's a deep ‑‑ what's the word?  Like, disharmony?  Is that a word?  Yeah.  There's a deep divide in how we think of it and also how we engage with it. 

 

 

The second thing is that we often ignore the cultural realities of the situation, of the place in which AI is implemented.  The first is jurisdictional and legal realities, and ideas about how the government lets you interact with it. 

 

 

The second thing is, when we're talking about building data sets, for example, we know that, okay, it's a problem that we don't always have representative data, and that data sets aren't always perfect.  But in India, what happens is that this promise of efficiency over the large population is really seductive.  Everyone's like, we have a system that's going to make things efficient for 1.3 billion of us, let's just go for it. 

 

 

But what we don't think about are more intangible concerns that happen in the medium term, which is the implications for freedom of expression and privacy.  Because we don't see them, we don't always immediately engage with them.  And this is especially true when we talk about policy and legal discourse, not so much in sectoral regulation.  (?) spoke about her sector, where there's a little more careful regulation because we're talking about particular (?). 

 

 

But as legal and policy communities, I feel like we often talk in really broad terms, which ends up being (?) to what we actually want to do. 

 

 

So what often happens is that we have really accurate answers, but we've asked the wrong questions.  Let me think of an example.  In the U.S., you have critical sectors, so you have actual regulation, and they're very strict about what you can or can't do with respect to data; but in India, we don't have a data protection law. 

 

 

On top of that, we have a situation where it's being deployed in a way that's treated as less critical than in the west.  That's another really uncomfortable situation that we find ourselves in often, especially at the legal level ‑‑ I just want to clarify, I'm not saying that we're not thinking about it at all.  Maybe we are in the sectoral sense, but not so much in the broader legal sense. 

 

 

And the last thing is, I'm really critical of this idea of transparency and fairness as the thing that's going to save us all and make all our AI futures utopian, because I feel like it's a very complex and loaded term.  When we talk about transparency, it usually indicates a vertical relationship between a state and a citizen: you owe me an explanation of how you're governing me, and there are constitutional legal frameworks that apply to that particular relationship.  And when we try to translate that vertical relationship onto horizontal relationships, what we find is that transparency doesn't always work with companies, and I'm sure Jake can tell us more about the challenges that you face on a day‑to‑day basis, because we can't hold private companies to the same standard as we hold democratically elected governments. 

 

 

But that level of granularity is really missing, especially in India, but I also think around the world.  I feel it more because I live in India, but definitely around the world. 

 

 

And also, when it comes to machine learning, transparency can give us very little, because by definition, machine learning derives its rules from data over time, and so even if we have the algorithm, we need the data; if we don't have the data, transparency means very little.  It doesn't help us achieve fairness and definitely doesn't help us achieve accountability. 
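That point can be made concrete with a toy sketch.  Everything here is invented for illustration: the "learner" is a deliberately trivial classifier whose only rule is a threshold derived from its training data, so publishing the same, fully transparent code still tells you almost nothing about the decisions it will make without the data.

```python
# A toy illustration: the algorithm is fully "transparent", but the
# decision rule it applies comes entirely from the training data.
# The loan scenario, names, and numbers are hypothetical.

def train_threshold_model(training_incomes):
    """Learn a decision rule (a threshold) from data: the mean income seen."""
    return sum(training_incomes) / len(training_incomes)

def decide(threshold, income):
    """Apply the learned rule to one applicant."""
    return "approve" if income >= threshold else "reject"

# Identical algorithm; two different training sets.
model_a = train_threshold_model([20, 30, 40])   # learned threshold: 30.0
model_b = train_threshold_model([60, 80, 100])  # learned threshold: 80.0

applicant_income = 50
print(decide(model_a, applicant_income))  # approve
print(decide(model_b, applicant_income))  # reject
```

The same published code produces opposite decisions for the same applicant, which is one way of seeing why access to the algorithm without the data achieves so little.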

 

 

I know this is a very real and very popular area of study within academic circles, but what I find really problematic is we're not thinking of it in as deep and as critical terms when we're implementing AI, and definitely not when we're thinking of developing AI.  And so these are a few insights that I think we need to think about going forward, because it's easy to talk about: you know, we shouldn't deploy AI unless it's fair.  But we don't know what fair means.  We don't know how to articulate fair in legal terms, and we definitely don't know how to use the word fair in a way that makes sense to the person developing AI.  So I think these are some challenges we should think about going forward. 

 

 

>> MALAVIKA JAYARAM:  Thank you.  Some of you may have seen this great tweet storm from a professor (?) at Princeton's CITP, where he was saying lawyers are used to dealing with ideas of fair and equitable; computer scientists are not.  And if you want them to programme fairness, there needs to be a shared common standard or understanding.  He said when they actually looked at it at a conference, there were 21 definitions of fairness, which legal systems are completely comfortable working with ‑‑ we're used to different jurisdictions, we're used to different systems of law, civil law, common law, we're used to this. 

 

 

But he said, as a computer scientist, if you want me to programme and actually action something, I can't work with 21 definitions of fairness ‑‑ what does that even look like?  And I think a lot of humanities people push back saying, well, it's sort of like saying there is one Internet.  There are many Internets.  There are many ways of looking at this. 

 

 

So it isn't as intractable a problem, but I think it kind of gets to what you're saying as well ‑‑ this sort of definitional and implementation challenge when it comes to very abstract ideas of fairness and equity. 
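To show why those competing definitions cannot simply be merged into one programmable standard, here is a small sketch with invented numbers, in which the same set of decisions satisfies one common formalisation of fairness (demographic parity: equal approval rates across groups) while violating another (equal opportunity: equal approval rates among the qualified).

```python
# Toy data, invented for illustration.
# Each record is (group, qualified, approved).
records = [
    # group A: 4 applicants, 3 qualified, 2 approved (both qualified)
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    # group B: 4 applicants, 1 qualified, 2 approved (one unqualified)
    ("B", True, True), ("B", False, True), ("B", False, False), ("B", False, False),
]

def approval_rate(group):
    """Fraction of the group that was approved (demographic parity checks this)."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    """Fraction of *qualified* members approved (equal opportunity checks this)."""
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

# Demographic parity: satisfied, both groups are approved at 0.5.
print(approval_rate("A"), approval_rate("B"))
# Equal opportunity: violated, group A's qualified members fare worse
# (about 0.67 for A versus 1.0 for B).
print(true_positive_rate("A"), true_positive_rate("B"))
```

One data set, one decision rule, and two reasonable definitions of "fair" that disagree ‑‑ which is exactly the kind of choice a programmer cannot make alone.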

 

 

I'm going to ask Daneet to talk about China ‑‑ I mean, China is a big success story.  It raises the utopian/dystopian paradox that we're coming across.  So tell us about the recent trends or what you're working on in China. 

 

 

>> PANELIST:  So obviously, I don't know if you can tell, but I'm not Chinese.  I'm based in China, and China is obviously a very, very big country, and they have a pretty fragmented market in the sense that there are so many developments in so many areas, but then again, they do have a common thread.  And those of you who have read the New Generation AI Development Plan that came out from China in July, I believe, have a pretty good sense of where this is headed, at least from the government side.  And I think that one thing that has always been a point of interest when speaking about this in the west is the differences in how we talk about it ‑‑ and I think that ties into what Vidushi said.  We speak in the same code, but we are saying completely different things. 

 

 

So even at the level of the applications that we build, they may be based on the same language, but their uses, their databases, their demands are completely different.  And what I kind of sense is that we're getting to a point where we are going to see a pretty serious divergence in technology, where certain applications of machine learning and other AI technology are going to be used for certain things.  And if there's one thing that I think we all know, it's that there's a cycle in the sense that humans create technology, but technology also creates the way that we behave as humans.  And this kind of reinforcing cycle is going to lead us to a point where we're going to start seeing technology that is completely different, even though if you open the code, you might be speaking in the same language. 

 

 

So, I mean, I think that there are very strong themes in China in terms of healthcare, in terms of infrastructure.  Some pretty great work in smart cities.  Attention is going into helping make services more accessible.  (?) is going into autonomous vehicles.  I could go on forever, because there are so many companies doing incredible things in China with incredible amounts of data and computing power that this is really a market that for me has no end in that sense.  I think that what is interesting to talk about in this kind of context is the idea of an arms race, which has been catching on pretty strongly, and I think it's a scary terminology that kind of creates a zero‑sum‑game mentality, which I hope that we can avoid.  And that sense of winner takes all, where there are only two very powerful countries that kind of broadcast.  Because if we look at the U.S., we say, okay, the U.S. is home to all the really big multi‑national corporations that have all these amazing technologies and are basically going around the world and providing a lot of services. 

 

 

And then in China, we're looking at one belt, one road that is building a lot of infrastructure and has a lot of geopolitical influence, so we say through that they will take their technology and build it into the infrastructure, which I think is true, by the way. 

 

 

So we're kind of tempted to see this in a very binary sense, and I think that is also the problem with programming, right?  Because it's like true or false, zero or one, China or the U.S.  It's very tempting to think about it that way, because it really simplifies the way we look at the application of AI, but I would strongly urge you against it.  I think that a panel called AI in Asia is perhaps the best way to do that, to really show the difference in the countries and in the applications that they want and they need and they have, and how they treat data and how they think about the development of AI. 

 

 

There is so much diversity to be discussed, and I really think reducing it to kind of China versus the U.S. is in a way efficient ‑‑ it gives us very accurate answers, but are we really asking the right questions?  So I feel like it's very tempting to go in and say, you know what?  The U.S. does this and China does this, and this is exactly what AI is going to look like.  But I really don't think that is the case, because even with the same kind of services ‑‑ and I think Jake could really speak about that ‑‑ Google provides the same structure, or the same services that are based on the same code, but they are used in completely different ways and are applied in completely different ways within countries.  So if there's one takeaway that I would like to give you, it's that the technology may be the same, but the people using it are different, and this kind of difference is going to continue translating into technology that's going to be different even at the level of design. 

 

 

>> MALAVIKA JAYARAM:  Thanks, Daneet.  I think Google is interesting in all kinds of ways, because I think it does represent one fear that people have, that this is a game that only the really big companies can play just because of the amount ‑‑ the volume of data that they have.  So it's very hard for a new start‑up to get into this space and make impact. 

 

 

But on the other hand, Google has things like TensorFlow, which allows people to use it in an open source manner, and say here's a tool, all of you can play at this AI game, too.  So I think that's another paradox, where you're also making a lot of your tools, and increasingly your research on safety and machine learning, available. 

 

 

So over to you if you want to share what you're working on in Asia and where you see the challenges and opportunities. 

 

 

>> PANELIST:  Thank you.  I've been very struck by what so many of you have said and it resonates a lot.  I think in Asia we are in a situation where we have tons and tons of great talent, and we also have lots of countries who are willing to take a global leadership role in this.  China is one.  Japan is another very important one that's really taking a leadership role in thinking through how the international community can come together to look at how to promote this technology and make sure that it is beneficial to as large a number of people as possible, while also addressing some of the concerns around safety and ethics and so on and so forth. 

 

 

So... I mean, at Google, we think about this a few ways.  There are products that we build where we try to make sure that they are bringing new accessibility to new populations.  One of the examples I often talk about is Translate, and the fact that machine learning makes Translate that much more accurate and allows people who don't speak English to access the Internet in ways that were never possible before, and to communicate with one another and to do lots of new things that were never done before. 

 

 

But then beyond the product perspective, at Google, we're also trying to do more and more to collaborate with researchers in this region.  So, for example, we often talk about this case of us collaborating with Indian researchers to be able to diagnose diabetic retinopathy, which is a very treatable disease which results in blindness if it's not caught early.  It's relatively easy to diagnose.  But if you live in a country like India, where many people don't have access to health services, it's very difficult to get that diagnosis.  So we've been able to work with Indian researchers to use machine learning as a way of diagnosing this problem at scale, at very low cost. 

 

 

But then beyond those direct partnerships, a lot of the future here, I think, is in how we can make sure that people in Asia can really continue to play a leadership role, by making sure that they have access to the tools to develop home‑grown solutions.  Malavika mentioned TensorFlow, our open source software library that allows people to do machine learning: take any data that they have and use TensorFlow as a way of applying machine learning to problems. 

 

 

We've also been opening up more and more data sets, quite large data sets.  So, for example, video and image data sets based on YouTube, and natural language and voice data sets that cover many, many languages around the world.  While these are not equally representative of everyone around the world, they do come from a store of data that is being developed from all around the world, and they allow researchers to take these data sets, use TensorFlow, and devise interesting solutions to problems, whether image‑based or voice‑based.  So this is another way that we're trying to make sure that people from diverse backgrounds around the world can really participate in this technology in new ways. 

 

 

I wanted to also touch quickly on some of these fairness points that were raised earlier, because at Google, this is one of the things we spend the most time talking and thinking about internally: how do we make sure that machine learning is a source for actually combatting discrimination and bias, and is helping to promote, you know, positive values and not amplifying bias.  But it's a really, really thorny and complicated issue, because as Malavika mentioned, fairness is not a very easy concept to define.  What do we mean when we talk about fair?  Do we want these things to be neutral, meaning that anyone basically has an equal chance of having a particular outcome, or do we want it to actively promote some notion of equality, where if there's bias that exists in the offline world, we want machine learning to actually proactively combat that bias rather than being neutral toward it?  These are open questions, and people have different views on them.  And whose role is it to make these decisions?  This is very challenging. 

 

 

So one of the ways that we have been trying to get at that within Google is the People + AI Research initiative we developed, which basically brings together researchers from different parts of the company to produce tools that are useful for thinking through these challenges.  One that we recently released is a tool called Facets, which allows people to look at which features within a data set were relevant for particular outcomes, and to visualise those.  So if you're using machine learning for identifying shoes, and you give the classifier a bunch of images of shoes and ask it to then identify shoes, this tool allows you to look at features from a data set that you put in, see which ones are the relevant ones that it's looking at, and then visualise those. 

 

 

So then you can see, well, is it paying attention to the right features that we care about?  And so you can imagine from the perspective of bias and discrimination, this is really important, because if you're looking at a data set that involves people and you want to know is it looking at the right features or not, and once you can visualise that, you can make adjustments to reduce that bias and reduce that discrimination. 
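A rough, purely illustrative sketch of that kind of check, reduced to a few lines: compute how strongly each feature in a data set moves with the outcome, so that a suspicious proxy feature stands out.  The feature names and numbers here are invented, and this is not how Facets itself works ‑‑ it only shows the underlying idea of inspecting features against outcomes.

```python
# Invented hiring data: each row is (years_experience, postcode_flag, hired),
# where postcode_flag is a hypothetical proxy for a sensitive attribute.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rows = [
    (1, 1, 0), (2, 1, 0), (3, 1, 0), (4, 1, 1),
    (1, 0, 0), (2, 0, 1), (3, 0, 1), (4, 0, 1),
]
hired = [r[2] for r in rows]

# Experience correlates positively with hiring (~0.67), as you'd hope;
# the postcode proxy carries a notable negative association (-0.5),
# which is exactly the kind of feature you'd want flagged and removed.
for name, idx in [("years_experience", 0), ("postcode_flag", 1)]:
    xs = [r[idx] for r in rows]
    print(name, round(correlation(xs, hired), 2))
```

In a real pipeline the inspection is richer (distributions, slices, visualisations), but the design idea is the same: make the features a model leans on visible before trusting its outputs.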

 

 

So across the company, we're working on tons and tons of initiatives like that.  That's just one example. 

 

 

But then even beyond that ‑‑ this is sort of a question about research, but I think there's also an interesting question about how we can take machine learning as a tool to look at bias in the offline space.  And so at Google.org, we funded a project with the Geena Davis Institute that basically used an automated way of identifying the percentage of speaking roles that men and women have in movies.  So the 200 top‑grossing box office movies of two years ago were basically plugged in, and it was able to find that men have twice the speaking time that women have ‑‑ not a surprise to any of us. 

 

 

But another useful result from the study is that the myth that movies with men as leading actors make more money was actually proven false: movies where women were in the lead roles actually made more money than the ones where men were in the leads.  So actually, maybe it's in Hollywood's interest to cast more women in higher‑profile roles and give them more speaking time. 

 

 

And this kind of solution wouldn't have really been possible without this technology.  So how can we take technology and apply it to challenging offline environments and make sure that we're helping to reduce discrimination where we can. 

 

 

Just a few things at the top of mind for me.  Thanks. 

 

 

>> MALAVIKA JAYARAM:  Thank you.  I know this bias issue is very, very fraught, and I know a lot of platforms have had really bad pushback when they've tried to use automated ways, like when Facebook took down the famous napalm girl picture, because it violated their guideline around pornography.  Can a computer tell what is art?  We can barely tell what is art.  So I think that was a huge backlash. 

 

 

And there have been images of black people tagged as gorillas.  There are so many ways in which these methods of tagging and filtering and automation are posing big issues.  But I'm glad they're coming out in the research community, because I think what it really does is hold up a mirror to how biased people are and how humanity offline is actually very biased.  So it's a terrible way to surface it, but I'm glad it's at least surfaced a conversation around bias.  I think that's a really good point to bring in Jack, who does a lot of good work around women's rights and gender and sexuality around Asia.  Especially responding to Jake, or even from your own work in the field, what does AI in Asia mean for you? 

 

 

>> PANELIST:  Thanks, Malavika. 

 

 

As I'm listening to the whole panel ‑‑ I just came from a session around gender and access, and I feel like I am existing in parallel universes, no?  It's kind of a weird situation where, like, ten minutes before, we were talking about trying to deploy affordable, accessible, as cheap as possible technology to enable access for disconnected areas, which is still 50% of the world.  And then ten minutes later, we're talking about robotics and machine learning and funerals for dogs and so on and so forth.  And it's in some way part of the same conversation, right?  Because it's about access to technology that has the potential to quite fundamentally change power structures and the way in which we relate to each other, and it's the same question about asking who has access to this technology, who's making decisions around it, what is the technology, and what change will access to the technology, or control over the technology, make in terms of how we are currently living our lives and understanding how our society has been shaped and formed, and addressing some of the issues around discrimination, exclusion, and disparity that we are still contending with. 

 

 

And sometimes I find that in conversations around AI and machine learning and so on, there is ‑‑ it's almost like a three‑part question.  So I work a lot with people ‑‑ well, I work a lot with women, and women being a very diverse group, and I also work a lot with queer people ‑‑ trans, lesbian, bisexual, and so on.  And when it comes to data, we have three problems.  One is you're not being counted, the one that Vidushi talks about ‑‑ you're not part of the data set because you do not exist, whether because you're a refugee, or because I just don't accept trans people as a category that exists.  Or you are deviant and outside of the norm, and therefore you become very big and not quite, like ‑‑ you know, your shadow is bigger than who you are. 

 

 

Or you are weirdly, oddly, wrongly represented.  And this is something we know, right?  Like, these three problems exist currently in our engagement with data anyway, and we are now very enthusiastic to try and see what we can do with these data sets, which are partial and wrong ‑‑ which is great.  I love technology, I'm enthusiastic about it.  But I find this question about utopia/dystopia also a very difficult one, because, like a Facebook status, it's complicated.  It's like existing in simultaneous realities.  We're kind of at the precipice of it.  The reality of, on the one hand, I'm sitting in a session that talks about trying to get more diversity, to make sure that the research we're doing around understanding the human body is actually reflective, so the medicine we create is reflective of the diversity in human bodies.  And in another session, we are still talking about access and not having access to technology, where decisions are being made about you anyway, regardless of whether or not you have access. 

 

 

So for me, there are a couple of things, I guess.  One is that connecting the conversation on AI and its applications with the conversation on connectivity and access to the Internet is an interesting one, and one that helps to pin down where the AI conversation sits, into a kind of materiality. 

 

 

Second is the three components when you talk about technology and access: the technology itself and its development and application; then the structures of power around who owns the technology, what is the framework and narrative that is driving the development and application of the technology; and everything else in between in terms of continuity, power, decision making. 

 

 

And another corner of the triangle is the body: what actual body is going to be affected, whose actual bodies are going to be affected, and this relationship. 

 

 

So often we find the conversation happening at this level, which is the technology development and application, or at this level, which is about structures of power, but more about policy and so on.  But rarely at kind of a relational level ‑‑ like how do we create a relationship that speaks to all three levels. 

 

 

And the thing ‑‑ and then it got me thinking about the context of Asia.  I'm from Malaysia, and what can I think of in terms of this context that is Malaysia, that is Southeast Asia, for example, Alibaba data.  I mean, China is not even Asia ‑‑ let's not even go there.  I'm like, oh, white Asia. 

 

 

[ Laughter ]

 

 

It's kind of a different ball game, no? 

 

 

So what do we have in a context such as Malaysia?  We have an acute lack of checks and balances in terms of norms and policies to begin with, and around privacy to begin with.  And we have the very enthusiastic deployment of economic development as a framework to talk about anything, which is basically a shorthand for let's not talk about human rights.  Let's just remove it. 

 

 

And then we have specific ways in which sexuality, gender, nationality, and other markers of identity intersect.  And we are very early adopters of biometric technology.  I wrote about biometrics and identity cards in 2003, and these identity cards carry your name, your race, your gender, which you cannot actually change without killing yourself. 

 

 

And then we have really bad inaccuracy in data collection, often to justify a very specific position.  So, for example, in India, there was research recently looking at the categorisation of data and expression online, and you have this randomly made‑up category of the sexual freak next to the category of the heckler, with no clear categories to begin with. 

 

 

And then also, as a region, we are less able to negotiate international policy around technology or the adoption and application of technologies.  So this is the context.  And then you put in AI, and you bring in this whole conversation ‑‑ let's look at this from a big access‑to‑technology framework.  I don't even know where to start. 

 

 

But the good thing is because I'm an optimist and I'm an activist is that there's a whole bunch of ‑‑ I mean, there's also a really, really vibrant activist community that really intersects also with activism around culture, around urban planning, around design, around technology that is very young as well ‑‑ well, maybe not so young.  I'm not young.  Okay.  Youngish. 

 

 

But it's also full of imagination and things which are coming up.  There is a reclaiming of, and a growing awareness around, deterritorialisation and decolonisation that I hope will present a different response to some of these questions.  No answers, just a set of thoughts. 

 

 

>> MALAVIKA JAYARAM:  Thank you.  I'm glad you brought up the ageism point among other good points.  It reminds me of that great statement, where people were saying that old people don't know how to use the Internet, and someone famously said, no, actually, I think we invented it ‑‑ which I always come back to.  I think that was such a great reality check. 

 

 

Lots of really great points there.  Before we go around to do another round of comments from our panel, I was wondering if people had questions, comments, responses to any of what's been talked about?  Yes, gentleman at the back, if you could just say who you are.  Thank you. 

 

 

>> PARTICIPANT:  Thank you to the panelists for sharing valuable information about AI in general and its application in every walk of life.  My name is Kashar, from Pakistan.  We have drafted a Digital Pakistan Policy and included a dedicated chapter on artificial intelligence and its importance, regarding AI R&D and (?). 

 

 

As we know, (?) is set to change the future of the world.  We can say that artificial intelligence and big data will affect about 50% of the world economy, as per international reports, and all human tasks could be performed by artificial intelligence by 2060. 

 

 

So, considering such projections about the human‑versus‑technology race, my suggestion, which we as a developing country would like to share, is for our economy to run a sample pilot project of artificial intelligence with the leading economies of the world.  We hope it will support millions of jobs and (?) will begin, which could open the door to new technology to harness human energy as well as to address displacement (?) in the developing countries. 

 

 

In this regard, the private sector and the government in countries like Pakistan are ready to play a role in running AI pilot programmes regarding (?) to machines by digital technology through AI applications.  Thank you. 

 

 

>> MALAVIKA JAYARAM:  Thank you.  I am all for pilots in anything.  I think a lot of the biometric schemes we've seen, which are evil on so many levels, would be so much better had we actually done pilots about the externalities, so I think the call for any kind of impact assessment or pilot in this field is really warranted.  I don't know if others disagree, but I would urge that before we rush off to do pilots, we also make sure that we have laws and policies that go hand in hand, because pilots are often done in countries that are what I think of as low‑rights environments, where there's a sort of arbitrage: there's no data protection, let's run this here.  So that's something I would like to see ‑‑ that pilots actually trigger some sort of policy reform around those things as well.  Thank you. 

 

 

Any other questions, responses? 

 

 

KS, you had some more points on data that you wanted to come back to. 

 

 

>> KYUNG SIN PARK:  As you've noticed, half the commentators were all grappling with a dilemma, some sort of divide, some sort of split, in looking at how AI is being treated in their respective jurisdictions. 

 

 

One way out of that, I think, is looking at the emerging data governance in Asia.  In Asian countries, data protection laws are either new, underdeveloped, or being adopted right now.  So we have the privilege of reviewing and sharpening data protection law so that it doesn't contribute to the buildup of data silos, but instead contributes to inputting more data into the social reservoir of open data, which can be fed into whatever AI software people have access to.  In that regard, if you look at India ‑‑ you talked about the IT Act and partial ‑‑ okay, I don't know where the noise is coming from. 

 

 

There is a partial data protection role within the IT Act, and if you look at Section 43‑A of the IT Act, it makes an exemption for publicly available data.  What is publicly available, yes, that is a contested idea, but still, if you put it there, people will have a conversation about what's public or not, and at least the discourse will be framed so that people can freely access a certain amount of data and benefit from it.  Singapore has the same exception for data.  Australia, part of the Asia‑Pacific, has a very detailed set of exemptions for publicly available data. 

 

 

I mean, privacy, and where the line between private and public sits, varies across jurisdictions, but by having that exemption, we can open up (?) through which open data can flourish, allowing more people to benefit from AI.  So I think there's an opportunity there where we can make sure that AI is not just viewed as economic development in Asia, but also, you know, we make sure that it delivers on its social potential, as the Internet has done and as the Internet should. 

 

 

>> MALAVIKA JAYARAM:  Thank you.  Any other questions?  Yes, please.  And then the lady in the red at the back. 

 

 

>> PARTICIPANT:  (?) I have two questions.  The first one is for Vidushi.  You talked about how transparency doesn't really work, and most of the time, transparency is a part of the accountability mechanisms of automated decision‑making processes, let's say.  I totally agree with you that transparency doesn't work, by the way, but what would you replace transparency with to, like, build accountability? 

 

 

And the second question is about integrating cultures into AI.  So I would like to share a story.  I think two months ago in Berlin, at a Network of Centers meeting, one of the co‑founders of Magenta, the Google AI and art project, was there, and he was talking about Magenta.  One of the questions was, you know, AI is taking over artistry, is AI going to replace artists, and what's going to happen, kind of talking about the future, let's say. 

 

 

And he said that he defined art as communicating with societies, and he doesn't think that AI is ever going to, like, understand the culture of a society enough to communicate, so he doesn't believe that AI is going to replace artists, because he sees AI as a tool, the way the guitar was for Jimi Hendrix: he had the guitar, he wanted to learn the guitar and so on. 

 

 

So how do you think culture can be integrated to AI?  Thank you. 

 

 

>> VIDUSHI MARDA:  Thank you for your question.  I think transparency doesn't work because these systems are usually very complex.  The other problem I have with it is that it's usually something that comes after the fact.  So we only ask for transparency once something has gone wrong and we want to go back and try to understand what happened. 

 

 

So I think it would be way more useful if we shift the conversation from transparency to scrutability: whether you are able to explain a system's decisions.  And you can have different levels of scrutability.  If I am going to use AI in the judicial system, it has to have a high level of scrutability.  And I think it's really interesting to think about how we can figure out those levels, as opposed to just saying we want transparency. 

 

 

Another thing we could do is recognise that feature engineering, which I think Jac spoke about briefly, is the actual problem with machine learning.  When it's a question of math, I think we're pretty good at it.  We don't have problems with accuracy.  What we do have problems with is what to look for.  So we have all of this data, and we need the computer to optimize on X number of features.  Engineering those features is often the most challenging part of machine learning and AI development.  So I think if we start building norms around what features you shouldn't use, like protected characteristics maybe, that would also be useful. 
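To make the idea concrete, here is a minimal sketch (not from the session) of the kind of norm being described: excluding protected characteristics at the feature-engineering stage, before a model ever sees them.  All names here are hypothetical.

```python
# Characteristics we decide a model must never optimise on.
PROTECTED_FEATURES = {"gender", "religion", "caste", "ethnicity"}

def select_features(record, allowed=None):
    """Return a copy of a data record with protected features removed.

    If `allowed` is given, keep only those keys, and still drop any
    protected feature that slipped into the allow-list.
    """
    keys = set(record) if allowed is None else set(allowed) & set(record)
    return {k: record[k] for k in keys - PROTECTED_FEATURES}

applicant = {
    "income": 52000,
    "years_employed": 6,
    "gender": "F",
    "religion": "none",
}

# The model is trained only on the remaining, non-protected features.
features = select_features(applicant)
print(sorted(features))  # → ['income', 'years_employed']
```

Of course, as a later question in this session points out, dropping columns is only a first step: other features can act as proxies for protected ones, so a norm like this complements, rather than replaces, scrutiny of the whole pipeline.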

 

 

And all of this ties into a much larger point, which is that we need to make AI systems scrutable by design.  So it's built into the process, as opposed to an after-the-fact mechanism, because otherwise we will always fall short. 

 

 

>> MALAVIKA JAYARAM:  We have only two minutes left.  Rather than responding on the art point, I would rather let these two people ask their question, so at least we can take it outside the room.  The lady at the back and the gentleman here in the green.  Sorry, I think in colors. 

 

 

>> PARTICIPANT:  Actually, my question builds upon the question that was just asked.  It was about questions of fairness, accountability, and remedy.  When you talk about scrutability of systems, I think there's a complication in what AI is being used for, for example, countering violent extremism, where categories of people are identified based on things that are very hard to define and are very politically sensitive.  The problem is that when you use these systems for things like CVE online, it's very hard to go back after the fact.  But how do you make it scrutable in the way you say, by not using particular forms of identity or things that should be protected, when that is exactly what they're looking for?  And there's this blurring between the private and the public, where the state is often outsourcing these functions to private companies, who are then not under the same obligations for accountability and remedy as the state is. 

 

 

So I guess it's just a question about the complexity of this question of remedy and going back.  Yeah, some sort of (?) for anything that goes wrong. 

 

 

Also in terms of machine learning, surely it's quite hard to find out what has gone wrong, and often companies don't know how to go back and say what exactly they mean. 

 

 

>> MALAVIKA JAYARAM:  Those are important points.  We didn't spend a lot of time talking about predictive policing, whether the past can dictate the future, and whether you can say people who did something once will do something again.  I think it raises questions about who is policing and what rights we have against them, which might be different, certain rights against the state versus private companies, as Vidushi was also saying. 

 

 

And the last question here, you can have the last word. 

 

 

>> PARTICIPANT:  Okay.  Very short.  I used to work in the field of AI.  At that time, it was much more about design, supporting designers, to make things much smarter, thanks to (?) of things.  Now it seems like, in the vision of the future, this kind of AI is much more the creation of a universe that is not the same (?) but a universe shaped according to a profile that is tailored by big companies, or by advertisements, or by opinion leaders, such that a person willing (?) a created universe, and this is the vision of the future.  It's already working this way: if you consider the use of search engines, because of your behavior and other signals, you receive completely different (?) if you put the same query on different computers. 

 

 

And this concerns me a little bit.  So my optimistic vision is about the opportunity to regain control of the system, to decide which kind of future, which kind of decisions we can take, without those decisions being allotted to us by someone else's AI.  That's all. 

 

 

>> MALAVIKA JAYARAM:  I think one person's personalisation is another person's Big Brother, and that's another dichotomy we keep coming up against. 

 

 

Thank you all for being here, and I hope we've given you a small flavor of what's similar, what's different in the way we talk about these things in Asia, and stay in touch with all of us.  Thanks very much.  Bye. 

 

 

[ Applause ]