Core Internet Values

6 December 2016 - Dynamic Coalition on Core Internet Values session in Guadalajara, Mexico

Full Session Transcript

The following are the outputs of the real-time captioning taken during the Eleventh Annual Meeting of the Internet Governance Forum (IGF) in Jalisco, Mexico, from 5 to 9 December 2016. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> OLIVIER MJ CREPIN-LEBLOND: Good morning, everybody, and welcome to this workshop on the Core Internet Values. I gather you've all had to battle with traffic this morning. It is a big city. I'm glad to see that most of our guests have made it here. We're still waiting for one. I would like to introduce Mr. Vinton Cerf, the Chief Internet Evangelist for Google and the Chair of the Dynamic Coalition on the Internet of Things. He has a meeting in 45 minutes just across the lobby, so he will have to leave early. Time sharing, that's fine. We're glad you're here. We have Lise Fuhr, Director-General of ETNO, the European Telecommunications Network Operators' Association. She is a PIR board member as well, and a PTI board member also. A lot of acronyms already. PIR, the Public Interest Registry. The name changes; it's another one. Okay. And we'll hopefully have another person joining us as well.

The agenda is divided into two parts. The first part will be discussing the issues paper that has been published on the Internet Governance Forum website, and the second part will be our internal issues. Marianne Franklin has also made it into the room; if you can take a seat. We'll see.

And the second part will be the internal issues of the Dynamic Coalition.

Without any further ado, the first thing is to adopt the agenda. Any amendments or additions anybody wishes to make to the agenda? No? Okay. The agenda is adopted as presented. First I wanted to thank the MAG for having provided the Dynamic Coalitions with their own space at the IGF, and also a session taking place later this week with all the Dynamic Coalitions meeting in the main room and presenting their work; there will be an opportunity for feedback on the work and for discussion. We have the moderator of that main session in the room. All eyes will be on you at that point.

Let's move on quickly and let's have a look at the paper itself. I'm not quite sure who is able to beam this. Hopefully that link will work. Technology is always a challenge.

We have a paper that has been published on the IGF website. Maybe I can try and -- I was going to take you briefly through the different component parts of the core values. This Dynamic Coalition is looking at the core Internet values, which are the more technical values, as opposed to the Dynamic Coalition on Internet Rights and Principles, which looks at the societal aspects of the Internet. There is a paper you can access from the main IGF website under Dynamic Coalitions and 2016, under Core Internet Values. Are you able to beam this or not? If not, you can go directly to the website and that's fine.

Welcome Alejandro, welcome, glad you could make it here.

The paper is structured like the previous papers that we have published, which looked at a specific set of core Internet values. On this occasion we looked at the past 12 months and asked whether there had been any changes, any evolution, over that period.

The first value that we were looking at is that the Internet is a global medium open to all, regardless of geography or nationality. What we have seen in the past year is a significant rise in the Internet being blocked or restricted due to local conflicts. Many governments see the Internet as a threat, and restrictions follow as soon as there is turmoil: military coups, elections and so on.

The next value is that the Internet should be interoperable. We've seen gains and challenges. We've seen IPv6 growing and being implemented in more places, but there has also been the new HTML5 standard being rolled out, which has displaced some plug-ins in some browsers. The main challenge has been the expansion of applications, apps on the net, which has turned more of it into walled gardens that aren't the Internet but your own app and your own world. There have also been recent discussions regarding the social networking sites, where people preach to their own crowd or put themselves in their own parallel worlds and so on. Interoperability on a technical level still remains something that is working, and in the past 12 months there hasn't been any significant change to this core value.

On the Internet of Things (IoT) side, we have seen that there is, or appears to be, a strong need for identification. (That was the sign-up sheet; I've already signed up, I'm here.) And on the positive side, yes, as we said, IPv6 has actually made the Internet more reliable, because in the early days it was very loosely meshed and there were network black-outs in some parts of the world. The Internet should remain open as a network of networks. Any service, application or type of data, video, audio, text, etc., is allowed on the Internet, and the core architecture is based on open standards. There has been a shift in some cases where open standards have not been adhered to, and we have seen some proprietary or heavily regulated parts of the Internet with specific standards being imposed there. In the paper we do speak about some countries where we have specific concerns.

Decentralized. The Internet is free of any centralized control, and of course we're looking at the DNS, which is distributed around the world, with 13 root servers. That is still running very well. And although we have seen all sorts of denial of service attacks, I'm not aware of any attack on the DNS servers that has brought the Internet down. Wait for the second part of the sentence: that has brought the Internet down. There are attacks all the time, and I can see you looking at me thinking, cybersecurity, no attacks, what are you talking about? And then the end-to-end principle, which is one big principle that we've had, with application-specific features residing in the communicating end nodes of the network rather than in intermediary nodes, such as the gateways that exist to establish the network.

We've spoken about IPv6 connectivity. Before that there was a rise in carrier-grade network address translation, CGNAT, which broke the end-to-end principle. But as we're seeing IPv6 rising now, and with CGNAT, to some extent -- speaking about the UK -- being seen as an alternative that might no longer be viable due to cost and scalability issues, we might be getting back on the correct track for the end-to-end part of the net. Users maintain full control over the type of application and service they want to share and access. I think that's pretty straightforward. Not much going on here apart from the traffic filtering that you do see in some parts of the world, and an increased amount of traffic filtering, especially for terrorist sites and criminal websites, etc.
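How NAT breaks the end-to-end principle can be illustrated with a toy model (a hypothetical sketch, not any real CGNAT implementation; all names and addresses are invented): outbound connections create translation-table entries, but an unsolicited inbound packet finds no entry and is dropped, so a host behind the NAT cannot be reached directly by its peers.

```python
# Toy model of a NAT translation table, illustrating why unsolicited
# inbound traffic fails (the break in the end-to-end principle).

class ToyNAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}          # (public_ip, public_port) -> (private_ip, private_port)
        self.next_port = 40000   # next public port to hand out

    def outbound(self, private_ip, private_port):
        """A host behind the NAT opens a connection: allocate a mapping."""
        mapping = (self.public_ip, self.next_port)
        self.table[mapping] = (private_ip, private_port)
        self.next_port += 1
        return mapping

    def inbound(self, public_port):
        """A packet arrives from outside: deliver only if a mapping exists."""
        return self.table.get((self.public_ip, public_port))  # None = dropped

nat = ToyNAT("203.0.113.1")
pub_ip, pub_port = nat.outbound("192.168.0.5", 51000)
print(nat.inbound(pub_port))   # reply to our own connection: delivered
print(nat.inbound(9999))       # unsolicited inbound connection: None, i.e. dropped
```

With end-to-end IPv6, by contrast, both hosts have globally reachable addresses and no such translation table sits between them.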

Robust and reliable. The robustness of the Internet is legendary. It was designed to be robust and it has remained robust. I don't know if -- hopefully it will remain so, despite everything, including the exponential rise in cybersecurity attacks that we see around the world. I was on a panel, I think it was in Geneva, and the rise of attacks is quite surprising.

These are the core values that we have here. At the bottom of our paper, in the last paragraph, we mention one final thing: there has been a sustained increase in the capacity to launch attacks that impact the Internet negatively. You might have heard of the denial of service attacks that were recently performed on Dyn. This was not an issue in the early days of Internet development; at least, I will probably ask about this, but it probably wasn't foreseen that you would have these massive attacks, especially with the Internet of Things. Times have changed, and the question really being put here to the floor, to our panelists and everyone, is: should there be a new core value that would drive efforts at standardization and protocol development?

I think that's enough for me speaking. I can turn the floor over to Vint.

>> VINTON CERF: Does that work? Apparently it does. I'll argue for a core value that I'll call freedom from harm. You could also call it safety, I suppose. And I know -- I want to note ahead of time that Alex will challenge me on this. Let me start out by observing that when the Internet was being designed and when it was used in the early stages before commercialization, most of the people who were using it were geeks who really didn't have any interest in attacking anyone. They just wanted the network to work. They wanted to use it to carry out their research. And so this notion of safety wasn't really very visible at all. But in the ensuing commercialization and spread of the Internet, particularly to the general public, it has become a less safe place than it was before. And although you might say well, shouldn't this be off into the social and behavioral part of the debate, there are technical issues associated with achieving a safe Internet and I argue that we should be attentive to that.

I'm going to mention a few hazards that are intended only to demonstrate that there is a lot of technical challenge here. Malware floats around in the network and causes a lot of trouble; detecting it and eliminating it is a technical challenge. Updates to software, particularly for devices like the Internet of Things, which will be discussed in the session later today -- making sure those updates come from a valid source is also an extremely important issue. Just general resistance to hacking is important. And to come back to denial of service attacks, the most recent attack on the system was launched by way of a number of webcams, half a million of them, and they were resistant to nothing: they had default passwords to get control over them, these were known, and the hackers made use of that to create a significant botnet. If you have half a million of those, you're talking about a 500 gigabit per second attack against the target. Of course, the people who designed those devices didn't have any idea that they would be abused in that way. They weren't thinking about that. And that's why we have to say it's important to think about that.
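The weakness Vint describes, factory-default credentials known to attackers, is mechanically simple to screen for. A minimal sketch, with an illustrative credential list and invented device records rather than any real dataset:

```python
# Sketch: flag devices still using factory-default credentials, the weakness
# that allowed the webcam botnet described above to be assembled.
# The credential list and device records are illustrative only.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
}

def vulnerable_devices(devices):
    """Return the ids of devices whose (user, password) pair is a known default."""
    return [d["id"] for d in devices
            if (d["user"], d["password"]) in KNOWN_DEFAULTS]

fleet = [
    {"id": "cam-01", "user": "admin", "password": "admin"},
    {"id": "cam-02", "user": "admin", "password": "x9!kQ2#v"},
    {"id": "cam-03", "user": "root",  "password": "12345"},
]
print(vulnerable_devices(fleet))   # ['cam-01', 'cam-03']
```

The point of the sketch is that the check is trivial; the hard part, as the discussion that follows makes clear, is who performs it and what incentive the manufacturer has to act on it.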

Malfunctioning in general is another big issue. Software that has bugs, or that didn't take into account all possible operating situations, could in fact be quite hazardous. This is especially true as we move into this Internet of Things environment. Think of malfunctioning software that manages stock trading systems, or that handles your financial services, or for that matter medical analysis. Misdiagnosis, misreadings and misinterpretation of medical information are also very hazardous. Identity theft. You can complete the list.

I don't have to take more time on that. I would like to argue that we should have a principle that should drive the technical community towards addressing safety as part of its architectural and implementation thinking. And Mr. Chairman, I don't know how you want to manage this, but having had a forewarning of Alejandro's attitude about this, I would like to invite him to respond immediately if this is all right with you. Because the points that Alejandro, I believe, will make, are quite important to see how this principle could be abused.

So Alejandro, over to you.

>> OLIVIER MJ CREPIN-LEBLOND: If you're ready, go ahead. Alejandro.

>> ALEJANDRO: Thank you, Chair. I'm ready, and will follow the order you had announced. Okay, so -- Alejandro is speaking -- I will give you the bullet points and then maybe we'll have more of a discussion. I have taken part in the online discussions of the Dynamic Coalition on Core Internet Values and have also had some face-to-face conversations with Vint, which I very much appreciated; thank you for your patience and tolerance. So my points are as follows.

First, freedom from harm is traditionally one of the most basic core functions of the State -- the social contract. I'm speaking outside my field because I'm not a political scientist, but the social contract, at least as far as we know its history, has been the bargain between citizens and state whereby citizens relinquish something -- liberty, money, mobility -- in exchange for being protected from harm by the state. So translating this to Internet scale without overly invoking the state, without actually bringing in the state in a way that would be damaging, is a challenge for this idea. Let me say I'm not opposing the idea. I'm throwing challenges at it so that we can have a good landing if it is correct to proceed.

Second, what counts as safety, harm and freedom from harm varies very much culturally, within each country and certainly from one country to another -- individual versus collective safety, and so forth.

Third, I see very serious implementation issues which one would have to work out; I think you have an idea about them. One option is to make freedom from harm, or safety, one more item in the checklist that RFCs and the IETF have to fulfill, so that you have security considerations, privacy considerations and now freedom-from-harm considerations. And that is only the IETF. As we know, the Internet of Things and these devices that you mentioned also have very strong other components that are managed by the radio spectrum management organizations, the GSMA and so forth, who would also have to adopt the values.

Fourth, there are the laws, regulations and certifications. One of the options, in the U.S. model, is to have UL, the Underwriters Laboratories -- a private laboratory that certifies that things fulfill the standards. How do you take that outside a single country?

Number five is scaling beyond borders -- size, countries. What happens if you have zillions of devices that are made in a country that doesn't comply with this at all? Even in the country that actually builds a wall.

And number six, the big question: is there an Internet way to do this? The domain name system gives us a good example. It was originally a centralized numbering and naming resource, with a minimal centralization that was nonetheless very tempting for state actors. They relinquished their function. Can the Internet community build something by itself that fulfills these functions?

>> OLIVIER MJ CREPIN-LEBLOND: Before we ask for your responses, let's go to our next speaker.

>> MARTIN: My technical depth isn't as deep; I come from a different point of view, that of organization. How do you organize society in a way that supports a safer Internet, an Internet with values? What we find is that some kind of transparency is key. So maybe transparency could be a core Internet value as well: it helps people to understand what's going on, and to understand that yes, the Internet is not as secure as we would wish, but that if everybody lived up to best practice, it would be a lot safer than it is today. It would also help to hold the organizations that bring things onto the Internet to a certain level of responsibility in securing their systems beforehand, at least to an adequate level. So that will be my main point.

Then, obviously, accountability: it is important that people who can take responsibility are also seen to take it. And last but not least, in this multi-stakeholder environment, it would be great if there is always choice. Take Google, which does a very responsible job; it would still be good to have choice that keeps Google sharp, that keeps the big players sharp, so that there is always an alternative. Combined with transparency and accountability, I think we're in for a great ride, with some great motors of change and innovation, and the best possible perspective for the Internet values.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, Martin. Lise.

>> LISE FUHR: Actually, I'm very much in line with Alejandro, in that I believe it's quite important to keep to the principles. No one can actually argue against freedom from harm, but if you get into cybersecurity, if you get into the security business, you actually risk ending up with standards, or maybe creating more of these walled gardens that we also heard Olivier talking about. Interoperability is key; if we move into more standardization here, we might create even less of it. For me, it's a matter of: we're seeing attacks, we're seeing things that are not good, but we should also be aware that work is being done right now to prevent this, to create systems that will not allow these attacks. I know the domain name business is doing a great job, and the Telcos are stepping into this work as we speak. I think having openness on this issue is good. It's not that having a principle is bad, but it opens up issues, so I really support what Alejandro is saying: what is the definition of harm? What is the definition of spam?

We were here some years ago talking about the issues they should step into and look into, and I think it's dangerous -- it's a dangerous area. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you. Before we come back to Vint, we have Marianne Franklin in the room, the past Chair of the Internet Rights and Principles Coalition. I wanted her to come down and give us a few of her views on this.

>> MARIANNE FRANKLIN: Thanks for having me. I think it's a fascinating idea. What we're talking about here is actually a societal, political issue. It's not a technical issue; it has technical manifestations, and so I take the point that it brings new challenges. So consider the actual interaction between these two Dynamic Coalitions, which were once merged to form the Internet Rights and Principles Coalition. The need to have an Internet Values Coalition is clear: to implement the technical side of a rights-based understanding of Internet design, access and use. That's the basis on which I'm working.

So we have here a desire to address some very pressing social and political issues. The trouble is, if we think about them in a purely technical way, we are misunderstanding the embeddedness of these technologies in society: the innovations, developed over generations, that comprise the Internet we know today, and the innovations ahead of us with the Internet of Things. Every design decision is a social decision, a political decision. The point about standardization is that it is actually a political choice as to what ends you put technology to work. The trouble with turning that around and saying we can come up with a technical solution to what is a societal issue is to overestimate the power of the technical fix in itself; then we run down a road where we have a divergence.

My point is basically that we can't talk about this issue as a purely technical one, seeing as many of the values that are part of this coalition are enshrined in the Charter of Human Rights and Principles for the Internet -- net neutrality, for instance. Given freedom from harm: where is the accountability of the standards makers and designers? Who are they accountable to? Who can someone go to for legal remedy? This can't be understood as a path parallel to the complex issues we're talking about, the rights-based futures for Internet design, access and use. I would like to see this conversation continue within the human rights and principles domain, because I think technologists have some important ideas -- and you are also a social being, Vint; you have your values, background and emotions. So we can't separate these two. I'm concerned about the technical fix being on the table as if this can all be done technically. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much. Martin has to leave soon, so we'll see you after this. Vint, back to you. You've heard a lot of very different views now; it seems there is a lot you have to say.

>> VINTON CERF: First of all, I think you should not allow technologists to escape responsibility for the quality of the work they do. The bulk of what happens in the Internet is a consequence of software. Software is the reason these risks exist: the software doesn't work right, is malformed, gets attacked and so on. Of course you can't solve all of the safety problems by technical means. But you cannot and should not argue that we should therefore ignore technical opportunities to make the system safer. And I will resist strongly any argument in this forum that says we should have nothing to do with freedom from harm or safety on the technical side. I've said many times, and I will say one more time, there are only three ways to deal with these problems.

One way is to use technical means to inhibit the harm. It doesn't always work, but sometimes it does. If you can't do that, then the next thing you do is to detect that harm has occurred and try to do something about it. In the legal world, that's detecting that a crime has occurred and trying to prosecute the party responsible.

The third mechanism is just plain moral persuasion: don't write bad code, because it's wrong. There should be peer pressure on programmers to do the best they can to protect the people who use their software and the equipment that's animated by it. So I hope that this group will not reject the idea that we have a technical responsibility to protect against harm, even though broader societal tools are also needed in order to protect people who have been harmed by using the Internet.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much. I see quite a few hands. Marianne needs to run, so I'll open the floor after a response from her, and our panelists will continue. Marianne Franklin.

>> MARIANNE FRANKLIN: Point taken. This is by no means to deny the concern here, not at all. My plea is that we have this discussion where it is more than a purely technical discussion. It has to be a legal discussion, a cultural discussion; harm, as we know, can mean different things in different cultures. Please do not misunderstand me: this is not to say we should reject this conversation out of hand, but to bring it into other forums so it is not defined only by technical criteria. The whole point is that this is not just a technical issue.

>> OLIVIER MJ CREPIN-LEBLOND: Let's go to the floor for a couple of people. First Matthew; you have had your hand up for half an hour. You have the floor.

>> MATTHEW: Matthew Shears, with CDT. Thanks. I would actually like to support Vint on this issue, with a slight twist. I haven't been participating in this Dynamic Coalition, and I apologize for that; it is fascinating. I'm looking at these core values, and the expectation of freedom from harm -- I'm not quite sure that quite captures it. What I would prefer to see is "cause no harm". Not an expectation but a directive, if you will, and one that should be applied to each of the core values. Many of the issues that Vint is raising are captured in the user-centric core value, which could be expanded to include more of the concerns. What we should be expecting here is that all of the stakeholders, the technical community and others, should not be causing harm to any of these core values. That's a different way of looking at this, putting a slight twist on it. That way it can be incorporated, and it puts more of an action orientation on the core values as a whole. Thanks.

>> OLIVIER MJ CREPIN-LEBLOND: Tatiana.

>> TATIANA: Thanks a lot. I don't often find myself disagreeing with Matthew, but I will put a further twist on this. I do believe strongly that these problems should be solved on the technical side. But as a lawyer, when we talk about the responsibility and accountability of the technical community, I wonder: creating core values and believing in them is very good, but who is going to enforce them? Think about how different values have historically been enforced. If we want the technical community to create safe software, to write safe code -- there were lots of discussions, for example, about consumer choice. There was a big belief that consumers would choose safer products. It didn't happen; consumers choose fancy products, or cheaper products or services. As for regulation, I would be strongly against it. So while I very much believe this is the responsibility of the technical community, governments and regulators just have no instruments to enforce it. They have no instruments to control it.

But on the other hand, if we're talking about the responsibility of the technical community, how do we enforce this responsibility? What mechanisms of self-enforcement could there be? What kind of pressure could there be? I'm getting lost, because I see the core value but I don't see the real way forward, and I would like the panelists to elaborate on this. I'm interested in it. As a lawyer I don't believe in regulation anymore, but I don't see the mechanisms.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you.

>> VINTON CERF: I will have to leave in a moment as well, to go to the IoT meeting. Let me try to offer some examples of the means by which this problem has been addressed in other contexts. In the United States, electrical appliances are checked by something called Underwriters Laboratories. They try very hard not to be influenced by the people who make the products; they don't take any payments from the people whose products they evaluate. There is a new kind of Underwriters Laboratory under development by a former programmer and Google employee. We can call it a cyber Underwriters Laboratory, using that as a phrase to convey what it does -- I'm not trying to capture or abuse the trademark. His idea is to evaluate the software that goes with products. Increasingly, all products have some software in them, and he has automated the process of analysis. It is a well-known theoretical truth that no program can determine, in general, whether another program is faulty, but you can detect some forms of malformation, or possibly malware. So that's a voluntary thing: the people who make the products can choose to have their products evaluated. I don't recall -- maybe someone else knows -- whether Underwriters Laboratories is free to evaluate any product it wants to. I think it is, but I'm not absolutely certain. But we could use that kind of mechanism not necessarily to enforce, but to entice. If you are looking for outcomes that differ from the ones you don't like, the best thing to do is to look for the incentives that drive the behavior you don't like and figure out whether you can change the incentives to change the behavior. So what I hope is that we will find ways to persuade the manufacturers of software-bearing products to attend to safety in their own best interest. I'll stop there, since I have to run away anyhow, but I thank you very much for letting me participate this morning.
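The distinction Vint draws, that full verification of another program is impossible in general while specific malformations remain detectable, is the basis of all practical static analysis. A deliberately trivial sketch of a pattern-based check (the patterns and the sample source are illustrative, and this resembles nothing like a real certification lab's tooling):

```python
# Sketch: a trivial pattern-based code check. Verifying another program in
# general is undecidable, but specific bad patterns -- such as a hard-coded
# default password -- can still be flagged. The patterns are illustrative.

import re

BAD_PATTERNS = [
    (re.compile(r'password\s*=\s*["\'](admin|12345|password)["\']', re.I),
     "hard-coded default password"),
    (re.compile(r'\beval\s*\('), "use of eval on possibly untrusted input"),
]

def check_source(source):
    """Return (line_number, message) findings for known bad patterns."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in BAD_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'user = "admin"\npassword = "12345"\nresult = eval(data)\n'
print(check_source(sample))
# [(2, 'hard-coded default password'), (3, 'use of eval on possibly untrusted input')]
```

A real evaluation lab would go far beyond regexes, but the voluntary model Vint describes only needs such checks to be useful, not exhaustive.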

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much, Vint. I think I detected something there: are you suggesting a trust mark?

>> VINTON CERF: That's what Underwriters Laboratories does. We evaluated the product, and here are the results of our tests; we rank-ordered them. It is like the Tesla car that was being tested: it broke the testing equipment, it was so strong. But the users get to decide. There is no legal enforcement there. Maybe that's as close as we can get.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much. I understand you have to run. Alejandro, just before you --

>> ALEJANDRO: I think we need to adopt a discussion of this proposal as a work program: collaborating with other groups, investigating what is already done in the UL case and what the more global or scalable issues are, with the boundary condition that we will look at properly Internet, multi-stakeholder solutions as opposed to multilateral ones, which are very tempting.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much, Alejandro. Thank you for joining us, Vint. Did you have any comments on the different points of view that have been now given around the table?

>> LISE FUHR: Actually, I like the idea of saying don't write bad code. But for me, it's the idea of driving the effort towards standardization that I'm more concerned about, because that's always a difficult thing, and it always opens up the possibility of making standards that would actually create a less open Internet. That's my concern. I think it's a balance between having safety and still keeping the Internet open and interoperable.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks very much, Lise, for this. I'll come to you, John, in a second. First, someone participating remotely has sent in a note. Do we have anybody else? No other questions at the moment. The point of the note is that we don't have to start this discussion from the base of national laws in different countries; the base for the discussion on device safety and standards needs to be the Internet architecture and Internet standards. The Internet has dealt with national law before. Safety standards for devices could be evolved, and as the standards evolve we could examine how the new standards interfere with international laws. That's another angle that makes it a bit wider. John Klensin.

>> JOHN KLENSIN: Thank you, Olivier. I have not been participating in this coalition, but I was asked to sit in today. I would like to pick up a little on the point that you just read out. First of all, it is not reasonable to talk about standards versus no standards: we're already neck-deep in standards. If we didn't have them, and if they weren't complied with most of the time, there would be no functioning Internet. That's a very strong statement, but it is also true. I listened to some of the earlier comments and wondered whether we should just stop development and turn things over to a more public and legal process. I don't think people would like the outcome of that, but I'm not completely positive.

In the International Standards Organization community, and most of its member bodies, distinctions are made between what are variously called technical standards and safety standards. The difference is that in the ISO community, technical standards are voluntary; safety standards are different. Safety standards typically get embodied in law somewhere, and compliance is not optional. UL can perfectly well certify or not certify, but the question of whether an uncertified product can be sold is a matter of national law.

So if the question is whether we could write a technical standard which says "thou shalt not write bad code": we tried it, and it hasn't worked very well, and the social sanctions in the community for developing bad code have seriously deteriorated. There is a case to be made that a lot of the thinking at some points in the network's history, as some of the code was being written, was an "if this is wrong, people die" mentality. That tends to focus the attention. Today, in a lot of areas, we're seeing a lot more stuff being pushed out because it is interesting and exciting and experimental, with users debugging it on a social basis. That's a different kind of approach to things.

So this is a third view, I think, between the cut Alex is taking and the one Vint did. I think we need to be careful. On the other hand, saying this is really a social problem that needs to be dealt with in social and legal realms is not only probably impractical; it also has the interesting property of ignoring a century or two of history in international standardization, and the differences between voluntary standards and laws, and between safety standards and other kinds of standards. We probably should look at that material carefully rather than trying to reinvent it or pretending it isn't there.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, John. I do have a question for you, actually. Would a trust mark, like the possible voluntary trust mark Vint was talking about, be a possible avenue? Users and consumers have no idea whether what they are purchasing has good or bad code.

>> JOHN: Certainly, if you decide to set yourself up as the Internet Trust Mark Association and start handing out trust marks, nobody could stop you. Whether anyone would pay attention is another issue. As many of you have probably noticed if you look at the decorative junk around the edges of webpages, we already have a number of trust marks. Until and unless somebody decides to embody the requirements behind those marks into law, which my personal guess is would be a bad idea at this stage, their value depends on a one-by-one customer assessment of whether that particular mark is of any use whatsoever, or is something somebody paid a few dollars for to get a gold stamp on their website which is utterly meaningless. We've seen both.

Now, Vint is talking about a different area from where those things have occurred, in terms of website safety and things of that nature. At least I think he is. But the answer to your question is that we've already seen this, and the nature of the network is that if somebody wants to set themselves up as a certifier, their big problem is not setting themselves up but getting anybody to take them seriously.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, John. Tatiana and then Marianne Franklin.

>> TATIANA: I'm sorry for a legal intervention in a technical discussion, but a few points. First of all, I'm very grateful to the person who pointed out that the safety notion might contradict the openness of the Internet, and I would point to de-centralization. Any trust marks, and anybody who will be evaluating them, will create a certain kind of centralization. While we're talking about technical de-centralization, this will create another central layer. So the first question is whether it contradicts your core values. Secondly, I don't believe this is going to happen, for one simple reason. Regulation, legal processes, compliance: I know many people will talk, though not here in this room, about cars, about electricity certification, fine. But when it comes to the Internet, there are infrastructure layers, services layers, application development layers; you have a myriad of players. And here de-centralization comes in. As for how complicated this is with legal regulation: we cannot control even simple things like crime, like digital investigation, like prosecution. Those are tough things which governments are responsible for. There are some new security obligations which everyone is saying will not be enforceable by governments and other bodies. When I think about this compliance process with standards in a de-centralized environment, I see two things. It will be under no control, and it will be artificial. We say we have standards, but what do we do with those not following them? We say, oh my god, you are bad, we don't like you. Okay, you don't have to like me; I'll still develop my project.

The second scenario is that we have a one-stop shop that will delay any development, anything going to market. It will be a bottleneck for innovation. I'm sorry for using buzzwords, but still, innovation would be harmed in my opinion. And then, you know, any one-stop shop creates room for abuse. It's power. Who will they be accountable to? Who is watching the watchers? So sorry, a bit of a legal intervention, but I have more questions than answers.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you. A quick response from John? No? Okay. Juan Fernandez.

>> JUAN FERNANDEZ: I'm sorry that I have not participated in this coalition on a regular basis. On this particular issue of the safety of the software and devices that are connected to the Internet, I think all the answers I have heard have value in themselves, because there is not one unique solution to this whole problem, not only because of the safety measures themselves, but also because of the potential harm. I am thinking about medical devices and medical software. Now that artificial intelligence is even being used for diagnosis, and this is a line of work being developed very intensively in my country, the government has a high degree of regulation of this kind of software, along the lines of medical equipment itself. You know, medical equipment passes very stringent regulations. I give this example because it is not the same to regulate these kinds of devices as, say, software games. Maybe software games have to be regulated along another dimension, the ethics of the content. But it is not the same.

So I think this has to be a multi-layered approach, and I also believe that industry has to have strong self-regulation measures. It is in industry's own interest, as she said, in order not to delay the roll-out of products, to create a system of self-regulation that increases the trust of the user. So I think that everything can be done simultaneously. Trust stamps for safety are very useful because they create, for the end user, the concept of buying or purchasing things that already carry that certificate. But while I say that, it cannot be an overarching system, and the responsibilities are different. Governments really have to have some responsibility for the things that relate to public safety. For instance, in some countries now they are even thinking of privatizing fire departments and using the Internet to connect some sort of Uber fire department. You can imagine the government has to have some connection and some responsibility there, and in medical matters, as I told you, and maybe some others. There are other areas in which government doesn't need to be directly involved, but where industry should be interested in regulating itself. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you very much, Juan. Marianne Franklin, you've been very patient.

>> MARIANNE FRANKLIN: You asked me, so I just want to respond, if I may, to John Klensin: no one is ignoring 100 or 200 years of history. Quite the opposite, if I may make that note. We're confusing our terms. We're talking here about standards in a number of ways. First, operating standards, as we talked about with the international standards community: simply the universally applied standard. There is another sense of standard in the room here: the standard of quality, whether something is good or bad by whatever measurement, whether it works or does not work. These are two distinct understandings of standards, and they're getting confused; we need to talk through the distinction more. We're talking about safety. We don't want our electric jugs exploding on us. We don't want our cars driving us into trees. This is safety in a very technical sense, where we rely on quality standards so that these things work without killing us. We know the Tesla car case. When we move to the Internet of Things, to the refrigerator that registers our likes and dislikes online, we're in a whole new realm.

For me the problem of safety switches from "something does not explode" to a more ethical, societal, legal point about safety, where all the different moralities come in. What is protection from harm when you talk about children? I think a good working example is toys. Think of the toys being developed which are linked more and more with software applications inside them, and the need for some regulation of those toys technically. But this also brings in the trickier bit about who decides what is good or not for which child, at which age, under what terms. We have to be clear about our terms of reference: safety and standards have several applications. I agree with Tatiana's point that this is not just a technical matter that you can apply a technical standard to from a central point of mission control. At some point standard makers also need to be accountable to the rule of law. But there is also sometimes very bad law, so I'm back to where I began. It's sometimes just about other things than what we think.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, Marianne. We have Joly MacFie remotely. I don't know how the system works for him to speak. It's a gentleman with a beard, not a her. Thanks, Joly. Joly MacFie from New York, hopefully. While we work out the technical difficulties, maybe we can open the floor for more questions.

>> Can you hear me?

>> OLIVIER MJ CREPIN-LEBLOND: Go ahead.

>> That's good. I'm trying to get my mic working. I just wanted to respond to Tatiana's concern about regulation. I think there is a difference here. We're talking about core values. If we look at a good example of core values, like the Ten Commandments, you shall not covet your neighbor's ass or whatever, we don't need regulations for that. It is basically a sort of common level of agreement, like don't write bad code. I would further say the basis of the Internet is essentially voluntary. We agree to use protocols because we all use them. That's how things work. So I'm not sure the values are the same as standards; I think they're a different thing. I don't think we are precluded from enumerating values by the fact that they can't be made into regulations or laws. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks for that contribution. We have a lady at the back of the room. There is a flying mic somewhere; there is one coming to you. If you could please introduce yourself.

>> I'm sorry if we don't have translation. I'm from Ecuador, and I was hearing you talk about freedom from harm and software. We have had some problems in our government with the contracting of Hacking Team, so we want to know where you stand on those kinds of software: software that is used to invade our privacy, malware that can infect our computers. Where do you stand? There are a lot of companies in the world that design this kind of software, and I have heard that even in Mexico they have some kind of contracts with Hacking Team, as in our country. They do exploits, they do a lot of things. So I think we have a very interesting point here about freedom from harm, and maybe an analysis of that kind of software. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you for this. Are there any people from the software industry who can speak about this: malware, adware, spyware, whatever you call it, and ransomware now these days? John, perhaps you have some knowledge of some of these issues?

>> JOHN: I've got some knowledge, but I don't know what I have to add. The difficulty here is in the areas we have been talking about, and I appreciate Joly's earlier comment about the difference between values and requirements. I think it is the broad community, and it is not the technical community doing the nasty things; it's the much broader community, the marketplace and the technical community looping and iterating, creating circumstances which then make us unhappy. If there is pressure to get to market immediately, before things are finished, whether that's an automobile or a piece of software, expectations of high quality become unrealistic.

So I am less optimistic than Vint sometimes is about technical solutions. But there is a complex thing here: without drawing a clear line between what is technology, what is market forces and legal environments, and the difference between things which are voluntary, things which are strongly encouraged, and things which are required, we end up in a kind of fantasyland discussion. So yes, I could talk about some of those specific issues, but I don't think it's worth the time. I will do so if you disagree.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks for this, John.

>> JUAN FERNANDEZ: I'll take the opportunity of this question to raise another issue. First, directly to this question: the intentional misuse of the Internet to cause harm, whether by criminal groups or by governments; we're talking about cyber war. As you know, it has been discussed very heatedly in the Group of Governmental Experts under the First Committee of the United Nations, the GGE. You know it is going on. I think this topic is linked very deeply with Internet governance and with the core values of this Dynamic Coalition. One of the core values that is maybe covered in spirit by the rest, but should perhaps be said explicitly, is that when the Internet was created, they wanted it to be a public good. The Internet was for education, information, whatever. Now it has many commercial things; that's okay. But it has to be written somewhere that the Internet should not be used to cause intentional harm. I hope that someday, in a place like the Internet Governance Forum, somebody can proclaim that the Internet should be a place of peace, that it should not be used for cyber war. That is a tendency now; you have these cyber teams and all those things. As I told you, this United Nations group is working on that. But besides that, we, the users and the stakeholder community of the IGF, need to try to foster an ethical attitude, not to enforce, because then I would get into the legal thing, but to really -- I don't know how to say it in English. In Spanish -- (Speaking Spanish)

>> To have a bad eye on the people doing wrong things.

>> JUAN FERNANDEZ: Yes, as everybody here behaves by certain codes. You go to the bathroom to do something; you don't do it in the open. Maybe we should try to foster that same kind of ethics on the Internet. I can tell you one story, just a little story, about the value of ethics. Our country was once accused, because we have a biotechnology industry, of having the capacity to create biological weapons. And the answer to that is not whether we have the capacity or not; it is that our technicians have the ethical value of never using medical science for harm. I think the best way to confront these things is from the ethical dimension, and that this coalition should focus a little bit more on the ethical dimension for keeping all these values.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks for this, Juan. We have Alejandro and time is going by.

>> A motion to go on to the next point of our agenda, but proposing a way forward to close this part of the session, not to close this issue; I think it has been opened. Let me just--

>> MARIANNE FRANKLIN: Just to close it, I would like to thank you for inviting me. I would also like to put a motion, if that's possible, on exactly this point about ethics: that we look forward to creating, or restoring, a working relationship between this Dynamic Coalition and the Internet Rights and Principles Coalition, because the ethical things we have in common are very clear. We need to start meeting together. So let me suggest that, and put it to the IRP Coalition meeting tomorrow, where we'll talk about the intentions of the Internet. This has been a fascinating conversation. Thanks for having me.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you very much for coming, Marianne. We're also in discussions with the coalition on Internet of Things as well since, of course, this is one of the target reasons why we're in the--

>> ALEJANDRO: Let me grab the microphone before you leave. As you and several other speakers mentioned, this is not only a technical issue. Solutions cannot be only technical, and solutions cannot be non-technical either. This is one of those fields where we really have to bring our minds and hearts together, and have a discussion that also looks at segmented solutions: parts of the problem that you can solve and parts that you cannot, with ethics as the bridge. Given the Dynamic Coalition's program and self-definition, it is very important to be very restrained, to concentrate more on the technical design principles than on the higher-layer rights and values, which are much less well-defined and much more variable. Without trying to draw a hard line between these things, we should be aware that sometimes we may be overstepping into fields where others have the right expertise, and being aware of that makes for a much more collaborative work program. That would be my main response to all these replies. And I would move, not right now, that we put these discussion points onto the mailing list and make a small publication of a summary of what has been spoken about. We have not reached any conclusions, but we have at least had one go through the reasoning, and then we can look at creating a work program for the coalition itself.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, Alejandro, and indeed it seems that we have our work cut out for the next year. I don't think we were expecting to reach consensus here, but certainly it is a good place to plant a seed, and I'm really glad to see things growing, and certainly improved cooperation between Dynamic Coalitions. Now, the next part of our agenda is going to be rather more internal to the coalition itself: how we're going to plan our work going forward and what kind of leadership structure we wish to put together. We have about 15 minutes or so, 10 minutes or so, because the room is being taken by another group after us. The main question is that at the moment we've run on a year-to-year basis; we have done some ad hoc work during the year, but as with all of the Dynamic Coalitions, it is time to ramp up, to have more people involved, and not only a top-down structure but a bottom-up structure, with perhaps a committee or steering committee. It is very much in the air at the moment. And this is where I thought we would have a great discussion and see whether there are any proposals, and whether anyone would be interested in taking part in leading this coalition as a group. I don't know how we want to start this. I was going to ask Alejandro if you had specific views on this and how we could move forward. Certainly on the work that we have to do, I think the discussion we've had today is already a work item, so we probably don't need to look again at the different work we have to do. I would move that next year we continue monitoring the different core values that we were looking at this year and see if there has been any significant change; indeed, the world is moving very fast, so there have even been some changes since we wrote that document, and we will be asked to amend the document before it gets published.
That's probably one of the first things that we'll have to follow. And I see Michael here going okay, I can help, I hope, on this.

>> MICHAEL: My name is Michael Oghia. I want to start out by exploring the purpose of our DC, to then hopefully build on how we're going to go about implementation. I really liked what Joly said about values not being standards, and I understand Tatiana's point of view on enforcement. You just said something about basically using these values as a framework going forward, almost like a clearinghouse for gauging how these values are being implemented across the Internet ecosystem by various parties, various stakeholders, etc. For instance, is openness being eroded or being expanded? I don't know if that's necessarily something we would want to do as a DC, but it just popped into my mind: basically trying to hold the Internet accountable to these values, at least in terms of our reporting.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, Michael. You have used that word, accountable: holding the Internet accountable. I'm not sure how you can do that. We'll get you to deal with the accountability part, and you can probably start a working group on this. But you've certainly touched on a number of points here. We really do have our work cut out on that. On the topic of leadership, I have been the Chair of this coalition for the past two years. We have had different chairs every year. I don't want to remain Chair forever, but I'm ready to continue for another year if people are happy with that. However, I would really like to see a steering group: some more people involved in the leadership, able to steer the work and take on responsibilities within the coalition, because we have such a huge set of topics open in front of us at the moment, and being able to follow the different threads would put less work on the very few individuals we have who are very active in the coalition. Alejandro, did you have any points to make on this?

>> ALEJANDRO: Very briefly, Alejandro speaking. First, we need to get the message from this session that we really have to work every month of the year and not only in two spikes of work, one before preparing the report and one before the meeting. If we don't get that message, the coalition may as well not exist. Second, I think we have to be open to the participation of every member and everyone who wants to come in and contribute to the work. In the work program, we need to put some work into the constrained definition of principles versus the much broader definitions of values. That doesn't mean closing doors again, but just making sure we work within our mission. Anecdotally, one of the reasons this Dynamic Coalition was created was that some of the discourse about rights, obligations, state intervention and so forth was actually going to tear the Internet apart by bringing in specifications that couldn't be fulfilled, or creating borders that go against the design. What makes the Internet the Internet, and what things make it not the Internet if you take them away? And in this case, with freedom from harm, we have a very good guideline for a one-year discussion. We have to do some research on how the W3C and many other organizations manage these things.

There are codes of conduct; there are ethics codes that are prevalent in national engineering bodies and many other organizations. That's a research program. And going to the internal governance issue: I think we need to spell out a very lightweight but very clear and objective management and leadership structure. We have to define roles. Some of them may be permanent, or let's say not subject to term limits or time constraints; others should ensure rotation, at least as a principle. Maybe you have to hold an election and you don't have candidates but you already have a good person; then you stay with them. But we will need greater openness in the functioning. The coalition itself hasn't been transparent enough. Things should be put forward to the membership, and the membership's views taken in, going far beyond basic consultation. As you have mentioned already, a steering committee would be very important. We can have a seed steering committee started in the coming weeks. We should make sure we do things on the email and online lists immediately, because there are very few of us right here right now. But we must make sure we have a seed steering committee that will set up some rules, not strict bylaws but clear rules of procedure, hold an election, and live with its results. And for the steering committee, one final point: we probably would be inclined to invite some people who are not directly involved in the day-to-day work, who can provide a sort of higher moral guidance, making sure that things are equitable and fair, and then move to a more internal steering committee as we get the work done.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you, Alejandro. Please, go ahead.

>> I have to agree with Alejandro. What I mentioned before about this DC was just general. This has to be concentrated on practical ways of implementing these values through the technical protocols and everything else that forms the Internet. All those general things from when this was created have to be really put down into how to measure them. And in this sense, maybe I could suggest organizing the work toward the next IGF in stages. In one stage we should try to identify, for each of these values, the barriers and the challenges: a phase of identification. Then a phase of research into which of the challenges are being addressed, as I mentioned, elsewhere, so as to create liaisons with other bodies and not try to reinvent the wheel. When safety was mentioned, I remembered that I worked in automation for many years, with the ISO standards, and there are a lot of safety standards there, so we could build on that. So I suggest this stage of identifying barriers, then research, then concentrating on those challenges that are not solved elsewhere, bringing together, as he said, the collective stakeholder intelligence, and then preparing a document some months before the next IGF for discussion.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you for this. Indeed, the presence of the Dynamic Coalition on Internet Rights and Principles and also the one on the Internet of Things has established those bridges, which is a good step forward. We do have someone remotely; sorry to have made them wait so long.

>> No problem. I have a comment from SIVA that says standards are voluntary, yes, but if there is a new situation where it becomes necessary to think of a class of standards that need to be more widely adopted, the standards process could think of ways of introducing a new class of standards that require wider commitment.

>> OLIVIER MJ CREPIN-LEBLOND: Thank you for this. That was on the previous section. Alejandro, you were going to respond, but I wanted to give maybe the last word to John before we have to break up pretty soon. John.

>> JOHN: I just wanted to strongly suggest that, since this necessarily operates at the boundary between the technical and the social, however you describe that, you make a serious effort to get enough technical involvement in here to be sure the proposals being made would work in today's Internet reality or some future Internet reality.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks for this, John. Alejandro.

>> ALEJANDRO: Alejandro. I think we need to elaborate, and really work for a while on a good understanding of what standards are: what are standards, what are recommendations for conduct, which standards can be tested objectively, and where creating a new type of standard that becomes mandatory is really creating law, which is only enforceable within each country. We have to get a good understanding and get a grip on these things. Thank you.

>> OLIVIER MJ CREPIN-LEBLOND: Thanks for this, Alejandro. We do have a sign-up sheet which has been going around; I think I can see it at the end of the table over there. We do have the emails of the people who have come in, and I hope we've captured all of them; a lot of people came into the room at some point. Of course, we need more active volunteers. So that's really my last thing before we have to close: get moving on this. There is a lot to do, certainly in coming up with the work products, where we have our work cut out for us. Thanks to all of you for having joined us, to the people who joined us remotely, to our remote participation operator, and to everyone helping to make it happen. We will close. Have a very good IGF. Goodbye.